
IT Security Threats
High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors
Kevin Roebuck

In depth: the real drivers and workings. Reduces the risk of your technology, time and resources investment decisions. Enabling you to compare your understanding with the objectivity of experienced professionals.
In computer security, a threat is a possible danger that might exploit a vulnerability to breach security and thus cause harm.

A threat can be either "intentional" (i.e., intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the possibility of a computer malfunctioning, or an "act of God" such as an earthquake, a fire, or a tornado), or otherwise a circumstance, capability, action, or event.

This book is your ultimate resource for IT Security Threats. Here you will find the most up-to-date information, analysis, background and everything you need to know.

In easy-to-read chapters, with extensive references and links, it covers everything there is to know about IT Security Threats right away, including: Threat (computer), Computer security, Portal:Computer security, 2009 Sidekick data loss, AAFID, Absolute Manage, Accelops, Acceptable use policy, Access token, Advanced Persistent Threat, Air gap (networking), Ambient authority, Anomaly-based intrusion detection system, Application firewall, Application security, Asset (computer security), Attack (computer), AutoRun, Blacklist (computing), Blue Cube Security, BlueHat, Centurion guard, Client honeypot, Cloud computing security, Collaboration-oriented architecture, Committee on National Security Systems, Computer Law and Security Report, Computer security compromised by hardware failure, Computer security incident management, Computer security model, Computer surveillance, Confused deputy problem, Countermeasure (computer), CPU modes, Crackme, Cross-site printing, CryptoRights Foundation, CVSS, Control system security, Cyber security standards, Cyber spying, Cyber Storm Exercise, Cyber Storm II, Cyberheist, Dancing pigs, Data breach, Data loss prevention software, Data validation, Digital self-defense, Dolev-Yao model, DREAD: Risk assessment model, Dynamic SSL, Economics of security, Enterprise information security architecture, Entrust, Evasion (network security), Event data, Federal Desktop Core Configuration, Federal Information Security Management Act of 2002, Flaw hypothesis methodology, Footprinting, Forward anonymity, Four Horsemen of the Infocalypse, Fragmented distribution attack, Higgins project, High Assurance Guard, Host Based Security System, Human–computer interaction (security), Inference attack, Information assurance, Information Assurance Vulnerability Alert, Information security, Information Security Automation Program, Information Security Forum, Information sensitivity, Inter-Control Center Communications Protocol, Inter-protocol communication, Inter-protocol exploitation, International Journal of Critical Computer-Based Systems, Internet leak, Internet Security Awareness Training, Intrusion detection system evasion techniques, Intrusion prevention system, Intrusion tolerance, IT baseline protection, IT Baseline Protection Catalogs, IT risk, IT risk management, ITHC, Joe-E, Kill Pill, LAIM Working Group, Layered security, Likejacking, Linked Timestamping, Lock-Keeper, MAGEN (security), Mandatory Integrity Control, Mayfield's Paradox, National Cyber Security Awareness Month, National Vulnerability Database, Neurosecurity, Nobody (username), Non-repudiation, Novell Cloud Security Service, One-time authorization code, Opal Storage Specification, Open security, Outbound content security, Parasitic computing, Parkerian Hexad, Phoraging, Physical access, Polyinstantiation, Portable Executable Automatic Protection, Pre-boot authentication, Presumed security, Principle of least privilege, Privilege Management Infrastructure, Privileged Identity Management, Proof-carrying code, Public computer...and much more.

This book explains in depth the real drivers and workings of IT Security Threats. It reduces the risk of your technology, time and resources investment decisions by enabling you to compare your understanding of IT Security Threats with the objectivity of experienced professionals.
Topic-relevant selected content from the highest rated entries, typeset, printed and shipped.

Combine the advantages of up-to-date and in-depth knowledge with the convenience of printed books.

A portion of the proceeds of each book will be donated to the Wikimedia Foundation to support their mission: to empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally.

The content within this book was generated collaboratively by volunteers. Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate or reliable information. Some information in this book may be misleading or simply wrong. The publisher does not guarantee the validity of the information found here. If you need specific advice (for example, medical, legal, financial, or risk management), please seek a professional who is licensed or knowledgeable in that area.

Sources, licenses and contributors of the articles and images are listed in the section entitled "References". Parts of the books may be licensed under the GNU Free Documentation License. A copy of this license is included in the section entitled "GNU Free Documentation License".

All used third-party trademarks belong to their respective owners.
Contents
Articles
Threat (computer) 1
Computer security 12
Portal:Computer security 20
2009 Sidekick data loss 23
AAFID 24
Absolute Manage 25
Accelops 28
Acceptable use policy 30
Access token 33
Advanced Persistent Threat 35
Air gap (networking) 36
Ambient authority 37
Anomaly-based intrusion detection system 38
Application firewall 39
Application security 45
Asset (computer security) 49
Attack (computer) 50
AutoRun 53
Blacklist (computing) 68
Blue Cube Security 69
BlueHat 70
Centurion guard 71
Client honeypot 71
Cloud computing security 76
Collaboration-oriented architecture 79
Committee on National Security Systems 81
Computer Law and Security Report 83
Computer security compromised by hardware failure 84
Computer security incident management 97
Computer security model 102
Computer surveillance 103
Confused deputy problem 106
Countermeasure (computer) 108
CPU modes 110
Crackme 111
Cross-site printing 112
CryptoRights Foundation 112
CVSS 114
Control system security 115
Cyber security standards 118
Cyber spying 122
Cyber Storm Exercise 124
Cyber Storm II 125
Cyberheist 125
Dancing pigs 126
Data breach 127
Data loss prevention software 130
Data validation 132
Digital self-defense 134
Dolev-Yao model 136
DREAD: Risk assessment model 137
Dynamic SSL 138
Economics of security 141
Enterprise information security architecture 143
Entrust 148
Evasion (network security) 151
Event data 152
Federal Desktop Core Configuration 153
Federal Information Security Management Act of 2002 154
Flaw hypothesis methodology 159
Footprinting 159
Forward anonymity 160
Four Horsemen of the Infocalypse 160
Fragmented distribution attack 162
Higgins project 163
High Assurance Guard 164
Host Based Security System 165
Human–computer interaction (security) 170
Inference attack 171
Information assurance 172
Information Assurance Vulnerability Alert 177
Information security 178
Information Security Automation Program 193
Information Security Forum 194
Information sensitivity 196
Inter-Control Center Communications Protocol 199
Inter-protocol communication 202
Inter-protocol exploitation 203
International Journal of Critical Computer-Based Systems 204
Internet leak 204
Internet Security Awareness Training 206
Intrusion detection system evasion techniques 207
Intrusion prevention system 209
Intrusion tolerance 211
IT baseline protection 211
IT Baseline Protection Catalogs 215
IT risk 218
IT risk management 232
ITHC 247
Joe-E 248
Kill Pill 249
LAIM Working Group 249
Layered security 250
Likejacking 251
Linked Timestamping 252
Lock-Keeper 256
MAGEN (security) 257
Mandatory Integrity Control 258
Mayfield's Paradox 260
National Cyber Security Awareness Month 260
National Vulnerability Database 261
Neurosecurity 262
nobody (username) 262
Non-repudiation 263
Novell Cloud Security Service 265
One-time authorization code 266
Opal Storage Specification 267
Open security 268
Outbound content security 269
Parasitic computing 269
Parkerian Hexad 270
Phoraging 272
Physical access 272
Polyinstantiation 273
Portable Executable Automatic Protection 274
Pre-boot authentication 281
Presumed security 282
Principle of least privilege 283
Privilege Management Infrastructure 286
Privileged Identity Management 287
Proof-carrying code 289
Public computer 290
Pwnie Awards 291
Real-time adaptive security 294
RED/BLACK concept 295
Reverse engineering 296
RFPolicy 303
Risk factor (computing) 303
Rootkit 305
S/MIME 316
seccomp 318
Secure coding 320
Secure environment 321
Secure state 321
Secure transmission 321
Security architecture 322
Security awareness 323
Security breach notification laws 325
Security bug 326
Security Content Automation Protocol 327
Security event manager 329
Security information and event management 331
Security information management 332
Security log 333
Security operations center (computing) 333
Security principal 336
Security Protocols Open Repository 337
Security risk 337
Security testing 339
SekChek Classic 341
SekChek Local 344
Separation of protection and security 347
Sherwood Applied Business Security Architecture 348
Simple Certificate Enrollment Protocol 350
Site Security Handbook 351
Sourcefire Vulnerability Research Team 351
Standard of Good Practice 352
Stepping stone (computer security) 355
Supply chain attack 355
System Service Dispatch Table 356
Systems assurance 356
Threat model 357
Timeline of computer security hacker history 359
Titan Rain 368
Trademark (computer security) 369
Trust boundary 371
Trusted client 371
Trusted timestamping 372
Typed assembly language 375
Typhoid adware 376
Vanish (computer science) 377
Virus Bulletin 379
Vulnerability Discovery Model 380
Web Access Management 381
Whitelist 383
Windows Security Log 385
Wireless cracking 387
Wireless identity theft 390
WS-Federation 393
WS-SecurityPolicy 394
WS-Trust 396
XSS worm 397
Zardoz (computer security) 398
Zone-H 400
Exploit (computer security) 400
Alphanumeric code 402
Arbitrary code execution 403
Blended threat 404
Buffer overflow 404
Buffer overflow protection 413
Call gate 418
Clear Channel Assessment attack 419
Code injection 420
Common Vulnerabilities and Exposures 427
Copy attack 428
Covert channel 429
CPLINK 432
Cross-application scripting 433
Cross-site scripting 434
Dangling pointer 441
Defensive computing 445
Directory traversal 447
DNS cache poisoning 450
Drive-by download 453
Dynamic linker 454
Uncontrolled format string 456
FTP bounce attack 458
GetAdmin 458
Heap feng shui 459
Heap overflow 459
Heap spraying 460
In-session phishing 462
Computer insecurity 463
Integer overflow 469
IP hijacking 471
JIT spraying 473
Laptop theft 474
Login spoofing 476
lorcon 476
Memory safety 477
Metasploit Project 478
Mixed threat attack 481
NOP slide 481
Null character 482
Off-by-one error 483
Operation: Bot Roast 485
OSVDB 486
Password cracking 488
Payload (software) 490
Pharming 491
Physical information security 493
Port scanner 494
Predictable serial number attack 499
Privilege escalation 500
Race condition 504
Racetrack problem 508
Raw socket 509
Reflection attack 510
Relay attack 511
Remote file inclusion 512
Replay attack 513
Return-oriented programming 514
Return-to-libc attack 515
Session hijacking 516
Shatter attack 518
Shellcode 520
Shoulder surfing (computer security) 525
SMBRelay 526
SMS spoofing 527
Stack buffer overflow 529
STRIDE (security) 534
Improper input validation 534
Swatting 535
Symlink race 536
TCP reset attack 537
The Open Organization Of Lockpickers 539
Time-of-check-to-time-of-use 540
Timeline of computer viruses and worms 543
Twinge attack 551
Computer virus 552
Source code virus 561
Virus hoax 561
Vishing 565
Vulnerability (computing) 566
Vulnerability database 574
War dialing 575
Warchalking 576
Wardriving 577
Warzapping 581
Webattacker 582
Windows Metafile vulnerability 583
XSA 589
Zero-day attack 590
References
Article Sources and Contributors 593
Image Sources, Licenses and Contributors 605
Article Licenses
License 607
Threat (computer)
In computer security, a threat is a possible danger that might exploit a vulnerability to breach security and thus cause harm.

A threat can be either "intentional" (i.e., intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the possibility of a computer malfunctioning, or an "act of God" such as an earthquake, a fire, or a tornado), or otherwise a circumstance, capability, action, or event.[1]
Definitions

ISO 27005 defines a threat as:[2]

A potential cause of an incident, that may result in harm of systems and organization.

A more comprehensive definition, tied to an information assurance point of view, can be found in "Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems" by NIST of the United States of America:[3]

Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, or individuals through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service. Also, the potential for a threat-source to successfully exploit a particular information system vulnerability.

The National Information Assurance Glossary defines threat as:

Any circumstance or event with the potential to adversely impact an IS through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.

ENISA gives a similar definition:[4]

Any circumstance or event with the potential to adversely impact an asset [G.3] through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.

The Open Group defines threat as:[5]

Anything that is capable of acting in a manner resulting in harm to an asset and/or organization; for example, acts of God (weather, geological events, etc.); malicious actors; errors; failures.

Factor Analysis of Information Risk defines threat as:[6]

Threats are anything (e.g., object, substance, human, etc.) that are capable of acting against an asset in a manner that can result in harm. A tornado is a threat, as is a flood, as is a hacker. The key consideration is that threats apply the force (water, wind, exploit code, etc.) against an asset that can cause a loss event to occur.
The National Information Assurance Training and Education Center gives a more articulated definition of threat:[7][8]

1. The means through which the ability or intent of a threat agent to adversely affect an automated system, facility, or operation can be manifest. Categorize and classify threats as follows: Human (classes: Intentional, Unintentional); Environmental (classes: Natural, Fabricated).
2. Any circumstance or event with the potential to cause harm to a system in the form of destruction, disclosure, modification of data, and/or denial of service.
3. Any circumstance or event with the potential to cause harm to the ADP system or activity in the form of destruction, disclosure, and modification of data, or denial of service. A threat is a potential for harm. The presence of a threat does not mean that it will necessarily cause actual harm. Threats exist because of the very existence of the system or activity and not because of any specific weakness. For example, the threat of fire exists at all facilities regardless of the amount of fire protection available.
4. Types of computer system related adverse events (i.e., perils) that may result in losses. Examples are flooding, sabotage and fraud.
5. An assertion primarily concerning entities of the external environment (agents); we say that an agent (or class of agents) poses a threat to one or more assets; we write: T(e;i) where: e is an external entity; i is an internal entity or an empty set.
6. An undesirable occurrence that might be anticipated but is not the result of a conscious act or decision. In threat analysis, a threat is defined as an ordered pair, <peril; asset category>, suggesting the nature of these occurrences but not the details (details are specific to events).
7. A potential violation of security.
8. A set of properties of a specific external entity (which may be either an individual or class of entities) that, in union with a set of properties of a specific internal entity, implies a risk (according to some body of knowledge).
Phenomenology
The term "threat" relates to some other basic security terms as shown in the following diagram:
[1]
+ - - - - - - - - - - - - + + - - - - + + - - - - - - - - - - -+
| An Attack: | |Counter- | | A System Resource: |
| i.e., A Threat Action | | measure | | Target of the Attack |
| +----------+ | | | | +-----------------+ |
| | Attacker |<==================||<========= | |
| | i.e., | Passive | | | | | Vulnerability | |
| | A Threat |<=================>||<========> | |
| | Agent | or Active | | | | +-------|||-------+ |
| +----------+ Attack | | | | VVV |
| | | | | Threat Consequences |
+ - - - - - - - - - - - - + + - - - - + + - - - - - - - - - - -+
A resource (either physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality, integrity or availability properties of resources (potentially different from the vulnerable one) of the organization and other involved parties (customers, suppliers).

The so-called CIA triad is the basis of information security.

The attack can be active, when it attempts to alter system resources or affect their operation, so it compromises integrity or availability. A "passive attack" attempts to learn or make use of information from the system but does not affect system resources, so it compromises confidentiality.[1]
[Figure: OWASP – relationship between threat agent and business impact]

OWASP (see figure) depicts the same phenomenon in slightly different terms: a threat agent, through an attack vector, exploits a weakness (vulnerability) of the system and the related security controls, causing a technical impact on an IT resource (asset) connected to a business impact.
A set of policies concerned with information security management, the Information Security Management System (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to accomplish a security strategy set up following the rules and regulations applicable in a country. Countermeasures are also called security controls; when applied to the transmission of information they are named security services.[9]

The overall picture represents the risk factors of the risk scenario.[10]
The spread of computer dependencies and the consequent rise in the impact of a successful attack have led to a new term: cyberwarfare.
Nowadays many real attacks exploit psychology at least as much as technology. Phishing, pretexting and other methods are called social engineering techniques.[11] Web 2.0 applications, specifically social network services, can be a means to get in touch with people in charge of system administration or even system security, inducing them to reveal sensitive information.[12] One famous case is Robin Sage.[13]
The most widespread documentation on computer insecurity is about technical threats such as computer viruses, trojans and other malware, but a serious study to apply cost-effective countermeasures can only be conducted following a rigorous IT risk analysis in the framework of an ISMS: a purely technical approach will leave out psychological attacks, which are increasing threats.
Threats classification
Threats can be classified according to their type and origin (a minimal machine-readable encoding of this taxonomy is sketched after the list):[2]
• Type
• Physical damage
• fire
• water
• pollution
• natural events
• climatic
• seismic
• volcanic
• loss of essential services
• electrical power
• air conditioning
• telecommunication
• compromise of information
• eavesdropping
• theft of media
• retrieval of discarded materials
• technical failures
• equipment
• software
• capacity saturation
• compromise of functions
• error in use
• abuse of rights
• denial of actions
• Origin
• Deliberate: aiming at information asset
• spying
• illegal processing of data
• accidental
• equipment failure
• software failure
• environmental
Threat (computer)
4
• natural event
• loss of power supply
Note that a threat type can have multiple origins.
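To make this taxonomy actionable, it can be encoded as data that a risk register validates entries against. The following is a minimal Python sketch; the dictionary layout and the validate helper are illustrative assumptions, not part of ISO 27005 itself.

# Illustrative encoding of the threat taxonomy above; the type and origin
# names mirror the list in this section, the validation logic is hypothetical.
THREAT_TYPES = {
    "physical damage": {"fire", "water", "pollution"},
    "natural events": {"climatic", "seismic", "volcanic"},
    "loss of essential services": {"electrical power", "air conditioning",
                                   "telecommunication"},
    "compromise of information": {"eavesdropping", "theft of media",
                                  "retrieval of discarded materials"},
    "technical failures": {"equipment", "software", "capacity saturation"},
    "compromise of functions": {"error in use", "abuse of rights",
                                "denial of actions"},
}

ORIGINS = {"deliberate", "accidental", "environmental"}

def validate(threat_type: str, subtype: str, origins: set[str]) -> None:
    """Reject register entries that fall outside the taxonomy.
    A single threat type may carry multiple origins, as noted above."""
    if subtype not in THREAT_TYPES.get(threat_type, set()):
        raise ValueError(f"unknown threat {threat_type}/{subtype}")
    if not origins <= ORIGINS:
        raise ValueError(f"unknown origins: {origins - ORIGINS}")

# Example: fire can be deliberate (arson), accidental or environmental.
validate("physical damage", "fire", {"deliberate", "accidental", "environmental"})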
Threat model
People can be interested in studying all possible threats that:
• affect an asset,
• affect a software system, or
• are brought by a threat agent.
Threat classification
Microsoft has proposed a threat classification called STRIDE,[14] from the initials of the threat categories (a minimal encoding is sketched after the list):
• Spoofing of user identity
• Tampering
• Repudiation
• Information disclosure (privacy breach or Data leak)
• Denial of Service (D.o.S.)
• Elevation of privilege
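As referenced above, a minimal encoding of the six STRIDE categories might look like the sketch below; the Stride enum and the Threat record are illustrative assumptions, not part of Microsoft's tooling.

from dataclasses import dataclass
from enum import Enum

class Stride(Enum):
    SPOOFING = "Spoofing of user identity"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

@dataclass
class Threat:
    description: str
    category: Stride

# During threat modelling, each identified threat is tagged with a category
# so that mitigations can be chosen per category.
threats = [
    Threat("Forged session cookie accepted by the API", Stride.SPOOFING),
    Threat("Order total modified in transit", Stride.TAMPERING),
]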
Microsoft formerly rated the risk of security threats using five categories in a classification called DREAD: Risk assessment model. The model is now considered obsolete by Microsoft. The categories were:
• Damage - how bad would an attack be?
• Reproducibility - how easy is it to reproduce the attack?
• Exploitability - how much work is it to launch the attack?
• Affected users - how many people will be impacted?
• Discoverability - how easy is it to discover the threat?
The DREAD name comes from the initials of the five categories listed.
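DREAD was commonly applied by rating each category on a small numeric scale and averaging the five ratings; the 0-10 scale and the function below are illustrative assumptions rather than Microsoft's specification.

def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Average of the five DREAD ratings, each assumed to lie on a 0-10 scale."""
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    if not all(0 <= r <= 10 for r in ratings):
        raise ValueError("ratings assumed to lie in [0, 10]")
    return sum(ratings) / len(ratings)

# Example: a highly reproducible, easily discovered flaw with moderate damage.
print(dread_score(5, 9, 6, 7, 9))  # 7.2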
The spread of threats over a network can lead to dangerous situations. In military and civil fields, threat levels have been defined: for example, INFOCON is a threat level used by the USA. Leading antivirus software vendors publish the global threat level on their websites.[15][16]
Associated terms

Threat Agents

Individuals within a threat population; practically anyone and anything can, under the right circumstances, be a threat agent – the well-intentioned, but inept, computer operator who trashes a daily batch job by typing the wrong command, the regulator performing an audit, or the squirrel that chews through a data cable.[6]
Threat agents can take one or more of the following actions against an asset (a minimal encoding is sketched after the list):[6]
• Access – simple unauthorized access
• Misuse – unauthorized use of assets (e.g., identity theft, setting up a porn distribution service on a compromised
server, etc.)
• Disclose – the threat agent illicitly discloses sensitive information
• Modify – unauthorized changes to an asset
• Deny access – includes destruction, theft of a non-data asset, etc.
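These five action types can be modelled as a simple enumeration attached to loss events; a hypothetical sketch:

from enum import Enum, auto

class AgentAction(Enum):
    ACCESS = auto()       # simple unauthorized access
    MISUSE = auto()       # unauthorized use of assets
    DISCLOSE = auto()     # illicit disclosure of sensitive information
    MODIFY = auto()       # unauthorized changes to an asset
    DENY_ACCESS = auto()  # destruction, theft of a non-data asset, etc.

# A single loss event may combine several actions against one asset.
event = {
    "asset": "customer database",
    "actions": {AgentAction.ACCESS, AgentAction.DISCLOSE},
}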
It's important to recognize that each of these actions affects different assets differently, which drives the degree and nature of loss. For example, the potential for productivity loss resulting from a destroyed or stolen asset depends upon how critical that asset is to the organization's productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss. Similarly, the destruction of a highly sensitive asset that doesn't play a critical role in productivity won't directly result in a significant productivity loss. Yet that same asset, if disclosed, can result in significant loss of competitive advantage or reputation, and generate legal costs. The point is that it's the combination of the asset and the type of action against the asset that determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will be driven primarily by that agent's motive (e.g., financial gain, revenge, recreation, etc.) and the nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a critical server than to steal an easily pawned asset like a laptop.[6]

It is important to separate the concept of the event in which a threat agent gets in contact with the asset (even virtually, i.e. through the network) from the event in which a threat agent acts against the asset.[6]
OWASP collects a list of potential threat agents in order to prevent system designers and programmers from inserting vulnerabilities into the software.[17]

The term Threat Agent is used to indicate an individual or group that can manifest a threat. It is fundamental to identify who would want to exploit the assets of a company, and how they might use them against the company.[17]

Threat Agent = Capabilities + Intentions + Past Activities (a record-style encoding is sketched after the classification below)

These individuals and groups can be classified as follows:[17]
• Non-Target Specific: Non-Target Specific Threat Agents are computer viruses, worms, trojans and logic bombs.
• Employees: Staff, contractors, operational/maintenance personnel, or security guards who are annoyed with the
company.
• Organized Crime and Criminals: Criminals target information that is of value to them, such as bank accounts,
credit cards or intellectual property that can be converted into money. Criminals will often make use of insiders to
help them.
• Corporations: Corporations are engaged in offensive information warfare or competitive intelligence. Partners and
competitors come under this category.
• Human, Unintentional: Accidents, carelessness.
• Human, Intentional: Insider, outsider.
• Natural: Flood, fire, lightning, meteor, earthquakes.
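The OWASP formula above reads naturally as a record with three fields; the ThreatAgent dataclass below is a hypothetical sketch, not an OWASP artifact.

from dataclasses import dataclass, field

@dataclass
class ThreatAgent:
    """Illustrative record following Capabilities + Intentions + Past Activities."""
    name: str
    capabilities: set[str] = field(default_factory=set)
    intentions: set[str] = field(default_factory=set)
    past_activities: list[str] = field(default_factory=list)

insider = ThreatAgent(
    name="disgruntled employee",
    capabilities={"valid credentials", "physical access"},
    intentions={"revenge"},
    past_activities=["abnormal after-hours logins"],
)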
Threat Communities

Subsets of the overall threat agent population that share key characteristics. The notion of threat communities is a powerful tool for understanding who and what we're up against as we try to manage risk. For example, the probability that an organization would be subject to an attack from the terrorist threat community would depend in large part on the characteristics of your organization relative to the motives, intents, and capabilities of the terrorists. Is the organization closely affiliated with an ideology that conflicts with known, active terrorist groups? Does the organization represent a high-profile, high-impact target? Is the organization a soft target? How does the organization compare with other potential targets? If the organization were to come under attack, what components of the organization would be likely targets? For example, how likely is it that terrorists would target the company information or systems?[6]
The following threat communities are examples of the human malicious threat landscape many organizations
face:
• Internal
• Employees
• Contractors (and vendors)
• Partners
• External
• Cyber-criminals (professional hackers)
• Spies
• Non-professional hackers
• Activists
• Nation-state intelligence services (e.g., counterparts to the CIA, etc.)
• Malware (virus/worm/etc.) authors
Threat action

Threat action is an assault on system security.

A complete security architecture deals with both intentional acts (i.e. attacks) and accidental events.[18]

Various kinds of threat actions are defined as subentries under "threat consequence".
Threat analysis

Threat analysis is the analysis of the probability of occurrences and consequences of damaging actions to a system.[1] It is the basis of risk analysis.
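One common way to quantify "probability of occurrences and consequences" (a standard risk-analysis convention, not prescribed by the glossary cited here) is the annualized loss expectancy: the expected yearly frequency of a damaging action multiplied by the expected loss per occurrence.

def annualized_loss_expectancy(occurrences_per_year: float,
                               loss_per_occurrence: float) -> float:
    """ALE = ARO x SLE: expected yearly frequency times expected single loss."""
    return occurrences_per_year * loss_per_occurrence

# Example: a phishing compromise expected twice a year at $40,000 per incident.
print(annualized_loss_expectancy(2.0, 40_000))  # 80000.0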
Threat consequence

Threat consequence is a security violation that results from a threat action.[1] It includes disclosure, deception, disruption, and usurpation.

The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence.[1] Threat actions that are accidental events are marked by "*".
"(Unauthorized) Disclosure" (a threat consequence)
A circumstance or event whereby an entity gains access to data for which the entity is not authorized. (See:
data confidentiality.). The following threat actions can cause unauthorized disclosure:
"Exposure"
A threat action whereby sensitive data is directly released to an unauthorized entity. This includes:
"Deliberate Exposure"
Intentional release of sensitive data to an unauthorized entity.
"Scavenging"
Searching through data residue in a system to gain unauthorized knowledge of sensitive data.
* "Human error"
Human action or inaction that unintentionally results in an entity gaining unauthorized knowledge
of sensitive data.
* "Hardware/software error"
System failure that results in an entity gaining unauthorized knowledge of sensitive data.
"Interception"
A threat action whereby an unauthorized entity directly accesses sensitive data travelling between
authorized sources and destinations. This includes:
"Theft"
Gaining access to sensitive data by stealing a shipment of a physical medium, such as a magnetic
tape or disk, that holds the data.
"Wiretapping (passive)"
Monitoring and recording data that is flowing between two points in a communication system.
(See: wiretapping.)
"Emanations analysis"
Gaining direct knowledge of communicated data by monitoring and resolving a signal that is
emitted by a system and that contains the data but is not intended to communicate the data. (See:
emanation.)
"Inference"
A threat action whereby an unauthorized entity indirectly accesses sensitive data (but not necessarily the
data contained in the communication) by reasoning from characteristics or byproducts of
communications. This includes:
"Traffic analysis"
Gaining knowledge of data by observing the characteristics of communications that carry the data.
"Signals analysis"
Gaining indirect knowledge of communicated data by monitoring and analyzing a signal that is
emitted by a system and that contains the data but is not intended to communicate the data. (See:
emanation.)
"Intrusion"
A threat action whereby an unauthorized entity gains access to sensitive data by circumventing a
system's security protections. This includes:
"Trespass"
Gaining unauthorized physical access to sensitive data by circumventing a system's protections.
"Penetration"
Gaining unauthorized logical access to sensitive data by circumventing a system's protections.
"Reverse engineering"
Acquiring sensitive data by disassembling and analyzing the design of a system component.
"Cryptanalysis"
Transforming encrypted data into plain text without having prior knowledge of encryption
parameters or processes.
"Deception" (a threat consequence)
A circumstance or event that may result in an authorized entity receiving false data and believing it to be true.
The following threat actions can cause deception:
"Masquerade"
A threat action whereby an unauthorized entity gains access to a system or performs a malicious act by
posing as an authorized entity.
"Spoof"
Attempt by an unauthorized entity to gain access to a system by posing as an authorized user.
"Malicious logic"
In context of masquerade, any hardware, firmware, or software (e.g., Trojan horse) that appears to
perform a useful or desirable function, but actually gains unauthorized access to system resources
or tricks a user into executing other malicious logic.
"Falsification"
A threat action whereby false data deceives an authorized entity. (See: active wiretapping.)
"Substitution"
Altering or replacing valid data with false data that serves to deceive an authorized entity.
"Insertion"
Introducing false data that serves to deceive an authorized entity.
"Repudiation"
A threat action whereby an entity deceives another by falsely denying responsibility for an act.
"False denial of origin"
Action whereby the originator of data denies responsibility for its generation.
. "False denial of receipt"
Action whereby the recipient of data denies receiving and possessing the data.
"Disruption" (a threat consequence)
A circumstance or event that interrupts or prevents the correct operation of system services and functions.
(See: denial of service.) The following threat actions can cause disruption:
"Incapacitation"
A threat action that prevents or interrupts system operation by disabling a system component.
"Malicious logic"
In context of incapacitation, any hardware, firmware, or software (e.g., logic bomb) intentionally
introduced into a system to destroy system functions or resources.
"Physical destruction"
Deliberate destruction of a system component to interrupt or prevent system operation.
* "Human error"
Action or inaction that unintentionally disables a system component.
* "Hardware or software error"
Error that causes failure of a system component and leads to disruption of system operation.
* "Natural disaster"
Any "act of God" (e.g., fire, flood, earthquake, lightning, or wind) that disables a system
component.
[18]
"Corruption"
A threat action that undesirably alters system operation by adversely modifying system functions or
data.
"Tamper"
In context of corruption, deliberate alteration of a system's logic, data, or control information to
interrupt or prevent correct operation of system functions.
"Malicious logic"
In context of corruption, any hardware, firmware, or software (e.g., a computer virus)
intentionally introduced into a system to modify system functions or data.
* "Human error"
Human action or inaction that unintentionally results in the alteration of system functions or data.
* "Hardware or software error"
Error that results in the alteration of system functions or data.
* "Natural disaster"
Any "act of God" (e.g., power surge caused by lightning) that alters system functions or data.
[18]
"Obstruction"
A threat action that interrupts delivery of system services by hindering system operations.
"Interference"
Disruption of system operations by blocking communications or user data or control information.
"Overload"
Hindrance of system operation by placing excess burden on the performance capabilities of a
system component. (See: flooding.)
"Usurpation" (a threat consequence)
A circumstance or event that results in control of system services or functions by an unauthorized entity. The
following threat actions can cause usurpation:
"Misappropriation"
A threat action whereby an entity assumes unauthorized logical or physical control of a system resource.
"Theft of service"
Unauthorized use of service by an entity.
"Theft of functionality"
Unauthorized acquisition of actual hardware, software, or firmware of a system component.
"Theft of data"
Unauthorized acquisition and use of data.
"Misuse"
A threat action that causes a system component to perform a function or service that is detrimental to
system security.
"Tamper"
In context of misuse, deliberate alteration of a system's logic, data, or control information to cause
the system to perform unauthorized functions or services.
"Malicious logic"
In context of misuse, any hardware, software, or firmware intentionally introduced into a system
to perform or control execution of an unauthorized function or service.
"Violation of permissions"
Action by an entity that exceeds the entity's system privileges by executing an unauthorized
function.
Threat management

Threats should be managed by operating an ISMS, performing all the IT risk management activities foreseen by laws, standards and methodologies.

Very large organizations tend to adopt business continuity management plans in order to protect, maintain and recover business-critical processes and systems. Some of these plans provide for setting up a computer security incident response team (CSIRT) or computer emergency response team (CERT).

There are several kinds of verification of the threat management process:
• Information security audit
• Penetration test

Most organizations perform a subset of these steps, adopting countermeasures based on a non-systematic approach: computer insecurity studies the battlefield of computer security exploits and defences that results.

Information security awareness generates quite a large business (see the category:Computer security companies).

Countermeasures may include tools such as firewalls, intrusion detection systems and anti-virus software; physical security measures; policies and procedures such as regular backups and configuration hardening; and training such as security awareness education.
A lot of software has been developed to deal with IT threats:
• open source software: see the category:free security software
• proprietary software: see the category:computer security software companies for a partial list
Threat literature
Well-respected authors have published books on threats and computer security (see category:computer security books); Hacking: The Art of Exploitation, Second Edition is a good example.
References

[1] Internet Engineering Task Force RFC 2828 Internet Security Glossary
[2] ISO/IEC, "Information technology -- Security techniques -- Information security risk management" ISO/IEC FIDIS 27005:2008
[3] Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems (http://csrc.nist.gov/publications/fips/fips200/FIPS-200-final-march.pdf)
[4] ENISA Glossary: threat (http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/glossary#G51)
[5] Technical Standard Risk Taxonomy, ISBN 1-931624-77-1, Document Number: C081, Published by The Open Group, January 2009.
[6] "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006 (http://www.riskmanagementinsight.com/media/docs/FAIR_introduction.pdf)
[7] Schou, Corey (1996). Handbook of INFOSEC Terms, Version 2.0. CD-ROM (Idaho State University & Information Systems Security Organization)
[8] NIATEC Glossary (http://niatec.info/Glossary.aspx?term=5652&alpha=T)
[9] Wright, Joe; Jim Harmening (2009). "15", Computer and Information Security Handbook, Morgan Kaufmann Publications, Elsevier Inc., p. 257. ISBN 978-0-12-374354-1
[10] ISACA, The Risk IT Framework (registration required) (http://www.isaca.org/Knowledge-Center/Research/Documents/RiskIT-FW-18Nov09-Research.pdf)
[11] Ross Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, second edition, Wiley, 2008, 1040 pages. ISBN 978-0-470-06852-6. Chapter 2, page 17
[12] eWeek, Using Facebook to Social Engineer Your Way Around Security (http://www.eweek.com/c/a/Security/Social-Engineering-Your-Way-Around-Security-With-Facebook-277803/)
[13] Networkworld, Social engineering via social networking (http://www.networkworld.com/newsletters/sec/2010/100410sec1.html)
[14] Uncover Security Design Flaws Using The STRIDE Approach (http://msdn.microsoft.com/en-us/magazine/cc163519.aspx)
[15] McAfee Labs page (http://www.mcafee.com/us/mcafee_labs/gti.html)
[16] Symantec ThreatCon (http://www.symantec.com/security_response/threatconlearn.jsp)
[17] OWASP Threat agents categorization (http://www.owasp.org/index.php/Category:Threat_Agent)
[18] FIPS PUB 31, Federal Information Processing Standards Publication, June 1974 (http://www.tricare.mil/tmis_new/Policy\Federal\fips31.pdf)
External links
• Term in FISMApedia (http://fismapedia.org/index.php?title=Term:Threat)
Computer security
[Sidebar: Computer security: Secure operating systems, Security architecture, Security by design, Secure coding; Computer insecurity: Vulnerability, Social engineering, Eavesdropping; Exploits: Trojans, Viruses and worms, Denial of service; Payloads: Backdoors, Rootkits, Keyloggers]
Computer security is a branch of computer technology known as information security as applied to computers and networks. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. The term computer system security means the collective processes and mechanisms by which sensitive and valuable information and services are protected from publication, tampering or collapse by unauthorized activities or untrustworthy individuals and unplanned events respectively. The strategies and methodologies of computer security often differ from those of most other computer technologies because of the somewhat elusive objective of preventing unwanted computer behavior instead of enabling wanted computer behavior.
Security by design
The technologies of computer security are based on logic. As security is not necessarily the primary goal of most
computer applications, designing a program with security in mind often imposes restrictions on that program's
behavior.
There are four approaches to security in computing, and sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer
insecurity).
2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch
and path analysis for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer
insecurity).
4. Trust no software but enforce a security policy with trustworthy hardware mechanisms.
Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and
non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach number four is
often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more
practical. Combinations of approaches two and four are often used in a layered architecture with thin layers of two
and thick layers of four.
There are various strategies and techniques used to design security systems. However, there are few, if any, effective strategies to enhance security after design. One technique enforces the principle of least privilege to a great extent, where an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest. Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to make modules secure.
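A concrete instance of the principle of least privilege is a daemon that performs its one privileged operation and then permanently drops to an unprivileged account. A minimal POSIX sketch in Python; the uid/gid values are placeholders.

import os

def drop_privileges(uid: int = 1000, gid: int = 1000) -> None:
    """Permanently drop root privileges (POSIX).
    Order matters: groups must be changed while we are still root."""
    os.setgroups([])  # drop supplementary groups
    os.setgid(gid)    # drop the group id first
    os.setuid(uid)    # then the user id; this cannot be undone

# Typical pattern: bind a privileged port as root, then drop.
# sock.bind(("0.0.0.0", 443))
# drop_privileges()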
The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle. So cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than
"fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a
deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it
insecure.
In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that
security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach
occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only
be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs
are found the "window of vulnerability" is kept as short as possible.
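In the spirit of the append-only remote audit trail described above, one way to make a stored trail tamper-evident is to hash-chain each record to its predecessor, so altering or deleting an earlier entry invalidates every later hash. A minimal sketch; the file format is an assumption.

import hashlib
import json

def append_audit(log_path: str, record: dict, prev_hash: str) -> str:
    """Append one hash-chained entry and return the new chain head."""
    entry = {"record": record, "prev": prev_hash}
    line = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(log_path, "a") as f:  # append-only usage pattern
        f.write(line + "\n")
    return digest

head = "0" * 64  # genesis value
head = append_audit("audit.log", {"event": "login", "user": "alice"}, head)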
Security architecture
Security Architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned and how they relate to the overall information technology architecture. These controls serve the purpose of maintaining the system's quality attributes, among them confidentiality, integrity, availability, accountability and assurance.[1]
Hardware mechanisms that protect computers and data
Hardware based or assisted computer security offers an alternative to software-only computer security. Devices such
as dongles may be considered more secure due to the physical access required in order to be compromised.
Secure operating systems
One use of the term computer security refers to technology to implement a secure operating system. Much of this
technology is based on science developed in the 1980s and used to produce what may be some of the most
impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it
imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure
operating systems are based on operating system kernel technology that can guarantee that certain security policies
are absolutely enforced in an operating environment. An example of such a Computer security policy is the
Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often
involving the memory management unit, to a special correctly implemented operating system kernel. This forms the
foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can
ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the
configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary
operating systems, on the other hand, lack the features that assure this maximal level of security. The design
methodology to produce such secure systems is precise, deterministic and logical.
Systems designed with such methodology represent the state of the art of computer security although products using
such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with
verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this
way are used primarily to protect national security information, military secrets, and the data of international
financial institutions. These are very powerful security tools and very few secure operating systems have been
certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified"
(including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security
depends not only on the soundness of the design strategy, but also on the assurance of correctness of the
implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria
quantifies security strength of products in terms of two components, security functionality and assurance level (such
as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product
descriptions. None of these ultra-high assurance secure general purpose operating systems have been produced for
decades or certified under Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security functions that are
implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can
protect less valuable information, such as income tax information. Secure operating systems designed to meet
medium robustness levels of security functionality and assurance have seen wider use within both government and
commercial markets. Medium robust systems may provide the same security functions as high assurance secure
operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean there is less certainty that the security functions are implemented flawlessly, and therefore that they are less
dependable. These systems are found in use on web servers, guards, database servers, and management hosts and are
used not only to protect the data stored on these systems but also to provide a high level of protection for network
connections and routing services.
Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a domain for its own
execution, and capable of protecting application code from malicious subversion, and capable of protecting the
system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall into a 'low security' category because they rely on features (such as portability) that secure operating systems do not support. In
low security operating environments, applications must be relied on to participate in their own protection. There are
'best effort' secure coding practices that can be followed to make an application more resistant to malicious
subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of
coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow,
and code/command injection. All of the foregoing are specific instances of a general class of attacks that exploit situations in which putative "data" actually contains implicit or explicit executable instructions.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"[2]). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.
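Of the defect classes just listed, code/command injection is the easiest to illustrate outside C. The hedged Python sketch below uses a hypothetical hostile filename to show how "data" handed to a shell becomes code, and the argument-vector form that keeps it data:

    import subprocess

    user_file = "report.txt; rm -rf /tmp/x"  # hostile "data" carrying a command

    # Vulnerable pattern (left commented out): with shell=True the string is
    # parsed by the shell, so the embedded command after ";" would execute.
    # subprocess.run("cat " + user_file, shell=True)

    # Safer pattern: pass an argument vector. The whole string is treated as
    # a single filename argument and is never interpreted as shell code.
    subprocess.run(["cat", user_file], check=False)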
Recently another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.[3]
Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically achievable, insofar as the variety of mechanisms is too wide and the manners in which they can be exploited are too variegated. It is
interesting to note, however, that such vulnerabilities often arise from archaic philosophies in which computers were
assumed to be narrowly disseminated entities used by a chosen few, all of whom were likely highly educated, solidly
trained academics with naught but the goodness of mankind in mind. Thus, it was considered quite harmless if, for
(fictitious) example, a FORMAT string in a FORTRAN program could contain the J format specifier to mean "shut
down system after printing." After all, who would use such a feature but a well-intentioned system programmer? It
was simply beyond conception that software could be deployed in a destructive fashion.
It is worth noting that, in some languages, the distinction between code (ideally, read-only) and data (generally
read/write) is blurred. In LISP, particularly, there is no distinction whatsoever between code and data, both taking the
same form: an S-expression can be code, or data, or both, and the "user" of a LISP program who manages to insert
an executable LAMBDA segment into putative "data" can achieve arbitrarily general and dangerous functionality.
Even something as "modern" as Perl offers the eval() function, which enables one to generate Perl code and submit it
to the interpreter, disguised as string data.
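The same hazard can be shown in a few lines of Python, whose eval() plays the role of Perl's; the sketch below (with an illustrative hostile string) contrasts it with ast.literal_eval, which accepts only literal data and rejects code:

    import ast

    payload = "[1, 2, 3]"                    # putative "data" from a user
    hostile = "__import__('os').getcwd()"    # code disguised as string data

    # eval(hostile) would execute the attacker's expression; by contrast,
    # ast.literal_eval parses only literal structures and raises otherwise.
    print(ast.literal_eval(payload))         # -> [1, 2, 3]
    try:
        ast.literal_eval(hostile)
    except ValueError:
        print("rejected: not a literal")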
Capabilities and access control lists
Within computer systems, two security models capable of enforcing privilege separation are access control lists
(ACLs) and capability-based security. The semantics of ACLs have been proven to be insecure in many situations, for example, the confused deputy problem. It has also been shown that the promise of ACLs of giving access to an
object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities.
This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities
must take responsibility to ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems, while commercial OSs still use ACLs.
Capabilities can, however, also be implemented at the language level, leading to a style of programming that is
essentially a refinement of standard object-oriented design. An open source project in the area is the E language.
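As a rough illustration of the difference (a toy Python sketch with invented names, not the E language itself): in the ACL style every access consults a global rights table under some identity, while in the capability style possession of an object reference is itself the authority.

    class File:
        def __init__(self, text):
            self._text = text
        def read(self):
            return self._text

    # ACL style: a global table maps (user, resource) to rights; a "deputy"
    # performing this check under its own identity can be confused.
    ACL = {("alice", "diary"): {"read"}}

    def acl_read(user, name, files):
        if "read" not in ACL.get((user, name), set()):
            raise PermissionError(name)
        return files[name].read()

    # Capability style: a program can only read files it was explicitly
    # handed; there is no ambient table to consult or to fool.
    def cap_read(file_cap):
        return file_cap.read()

    files = {"diary": File("secret entry")}
    print(acl_read("alice", "diary", files))  # allowed by the table
    print(cap_read(files["diary"]))           # allowed by possession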
First the Plessey System 250 and then the Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s. A reason for the lack of adoption of capabilities may be that ACLs appeared to
offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the greatest security comes from operating systems where security is not an add-on.
Applications
Computer security is critical in almost any technology-driven industry which operates on computer systems.
Computer security can also be referred to as computer safety. The issues of computer-based systems and the addressing of their countless vulnerabilities are an integral part of maintaining an operational industry.[4]
Cloud computing security
Security in the cloud is challenging, due to the varied degrees of security features and management schemes within cloud entities. In this connection, one logical protocol base needs to evolve so that the entire gamut of components operates synchronously and securely.
Aviation
The aviation industry is especially important when analyzing computer security because the involved risks include
human life, expensive equipment, cargo, and transportation infrastructure. Security can be compromised by hardware
and software malpractice, human error, and faulty operating environments. Threats that exploit computer
vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction,
and human error.[5]
The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry
range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data theft or loss, and network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems that control munitions can pose an even greater risk.
An attack does not need to be very high-tech or well funded: a power outage at an airport alone can cause repercussions worldwide.[6] One of the easiest to mount and, arguably, most difficult to trace attacks involves transmitting unauthorized communications over specific radio frequencies. These transmissions may
spoof air traffic controllers or simply disrupt communications altogether. These incidents are very common, having
altered flight courses of commercial aircraft and caused panic and confusion in the past. Controlling aircraft over
oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. Beyond the radar's sight, controllers must rely on periodic radio communications with a third party.
Lightning, power fluctuations, surges, brown-outs, blown fuses, and various other power outages instantly disable all
computer systems, since they are dependent on an electrical source. Other accidental and intentional faults have
caused significant disruption of safety critical systems throughout the last few decades and dependence on reliable
communication and electrical power only jeopardizes computer safety.
Notable system accidents
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, the crackers were able to obtain unrestricted access to
Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files,
such as air tasking order systems data and furthermore able to penetrate connected networks of National Aeronautics
and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense
contractors, and other private sector organizations, by posing as a trusted Rome center user.[7]
Computer security policy
United States
Cybersecurity Act of 2010
On April 1, 2009, Senator Jay Rockefeller (D-WV) introduced the "Cybersecurity Act of 2009 - S. 773" (full text[8]) in the Senate; the bill, co-written with Senators Evan Bayh (D-IN), Barbara Mikulski (D-MD), Bill Nelson (D-FL), and Olympia Snowe (R-ME), was referred to the Committee on Commerce, Science, and Transportation, which approved a revised version of the same bill (the "Cybersecurity Act of 2010") on March 24, 2010.[9]
The bill seeks to increase collaboration between the public and the private sector on cybersecurity issues, especially those private entities that own infrastructures that are critical to national security interests (the bill quotes John Brennan, the Assistant to the President for Homeland Security and Counterterrorism: "our nation's security and economic prosperity depend on the security, stability, and integrity of communications and information infrastructure that are largely privately-owned and globally-operated" and talks about the country's response to a "cyber-Katrina"[10]), increase public awareness on cybersecurity issues, and foster and fund cybersecurity research. Some of the most controversial parts of the bill include Paragraph 315, which grants the President the right to "order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network."[10]
The Electronic Frontier Foundation, an international non-profit digital rights advocacy and legal organization based in the United States, characterized the bill as promoting a "potentially dangerous approach that favors the dramatic over the sober response".[11]
International Cybercrime Reporting and Cooperation Act
On March 25, 2010, Representative Yvette Clarke (D-NY) introduced the "International Cybercrime Reporting and Cooperation Act - H.R.4962" (full text[12]) in the House of Representatives; the bill, co-sponsored by seven other representatives (among whom only one Republican), was referred to three House committees.[13] The bill seeks to ensure that the administration keeps Congress informed on information infrastructure, cybercrime, and end-user protection worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and enforcement capabilities with respect to cybercrime to countries with low information and communications technology levels of development or utilization in their critical infrastructure, telecommunications systems, and financial industries"[13] as well as to develop an action plan and an annual compliance assessment for countries of "cyber concern".[13]
Protecting Cyberspace as a National Asset Act of 2010
On June 19, 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called "Protecting Cyberspace as a National Asset Act of 2010 - S.3480" (full text in pdf[14]), which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the "Kill switch bill", would grant the President emergency powers over the Internet. However, all three co-authors of the bill issued a statement claiming that instead, the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks".[15]
White House proposes cybersecurity legislation
On May 12, 2011, the White House sent Congress a proposed cybersecurity law designed to force companies to do more to fend off cyberattacks, a threat that has been reinforced by recent reports about vulnerabilities in systems used in power and water utilities.[16]
Terminology
The following terms are used in engineering secure systems.
• Authentication techniques can be used to ensure that communication end-points are who they say they are.
• Automated theorem proving and other verification tools can enable critical algorithms and code used in secure
systems to be mathematically proven to meet their specifications.
• Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. Their use is discussed in the Capabilities and access control lists section above.
• Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic
by the system's designers.
• Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data
exchanged between systems can be intercepted or modified.
• Firewalls can provide some protection from online intrusion.
• A microkernel is a carefully crafted, deliberately small corpus of software that underlies the operating system per
se and is used solely to provide very low-level, very precisely defined primitives upon which an operating system
can be developed. A simple example with considerable didactic value is the early '90s GEMSOS (Gemini
Computers), which provided extremely low-level primitives, such as "segment" management, atop which an
operating system could be built. The theory (in the case of "segments") was that—rather than have the operating
system itself worry about mandatory access separation by means of military-style labeling—it is safer if a
low-level, independently scrutinized module can be charged solely with the management of individually labeled
segments, be they memory "segments" or file system "segments" or executable text "segments." If software below
the visibility of the operating system is (as in this case) charged with labeling, there is no theoretically viable
means for a clever hacker to subvert the labeling scheme, since the operating system per se does not provide
mechanisms for interfering with labeling: the operating system is, essentially, a client (an "application," arguably)
atop the microkernel and, as such, subject to its restrictions (a toy sketch of this labeling idea follows this list).
• Endpoint Security software helps networks to prevent data theft and virus infection through portable storage
devices, such as USB drives.
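The toy Python sketch below (invented names, not the real GEMSOS primitives) captures the labeling idea from the microkernel item above: only the small "kernel" module touches labels, and the operating system above it is merely a client of that interface.

    RANKS = {"unclassified": 0, "secret": 2}

    class SegmentKernel:
        """Tiny stand-in for a microkernel that alone manages labels."""
        def __init__(self):
            self._labels = {}   # segment id -> label; private to the kernel
            self._data = {}

        def create(self, seg_id, label, payload):
            self._labels[seg_id] = label    # labeling happens only here
            self._data[seg_id] = payload

        def read(self, seg_id, clearance):
            # The kernel, not the client OS, enforces the label check.
            if RANKS[self._labels[seg_id]] > RANKS[clearance]:
                raise PermissionError("label dominates clearance")
            return self._data[seg_id]

    kernel = SegmentKernel()
    kernel.create("seg1", "secret", b"payload")
    print(kernel.read("seg1", clearance="secret"))   # b'payload'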
The following measures are also commonly used to protect computer systems:
• Access authorization restricts access to a computer to a group of users through the use of authentication systems.
These systems can protect either the whole computer – such as through an interactive logon screen – or individual
services, such as an FTP server. There are many methods for identifying and authenticating users, such as
passwords, identification cards, and, more recently, smart cards and biometric systems.
• Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses
and other malicious software (malware).
• Applications with known security flaws should not be run. Either leave the application turned off until it can be patched or otherwise fixed, or delete it and replace it with some other application. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security
website Secunia provides a search tool for unpatched known flaws in popular products.
• Backups are a way of securing information; they are another copy of all the important computer files kept in
another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups
are a fireproof, waterproof, and heatproof safe, or a separate, offsite location from that in which the original files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that back up files over the Internet for both businesses and individuals.
• Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes,
or tornadoes, may strike the building where the computer is located. The building can be on fire, or an
explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of
disaster. Further, it is recommended that the alternate location be placed where the same disaster would not
affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster
that affected the primary site include having had a primary site in World Trade Center I and the recovery site
in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and
recovery site in the same coastal region, which leads to both being vulnerable to hurricane damage (for
example, primary site in New Orleans and recovery site in Jefferson Parish, both of which were hit by
Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure
manner, in order to prevent them from being stolen.
Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
• Encryption is used to protect the message from the eyes of others. Cryptographically secure ciphers are designed to make any practical attempt at breaking them infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance (a short sketch of the symmetric case follows this list).
• Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion
by restricting the network traffic which can pass through them, based on a set of system administrator defined
rules.
• Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers.
They can be used to catch crackers or fix vulnerabilities.
• Intrusion-detection systems can scan a network for people that are on the network but who should not be there or
are doing things that they should not be doing, for example trying a lot of passwords to gain access to the
network.
• Pinging. The ping application can be used by potential crackers to find if an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.
• Social engineering awareness: keeping employees aware of the dangers of social engineering, and/or having a policy in place to prevent social engineering, can reduce successful breaches of the network and servers.
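Following up the encryption item above, here is a brief sketch of the symmetric, shared-key case, using the third-party Python "cryptography" package (an assumption: it must be installed, e.g. pip install cryptography):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # the shared secret both ends must hold
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at dawn")   # unreadable in transit
    print(cipher.decrypt(token))              # b'meet at dawn' for key holders

Public-key encryption solves the remaining problem: distributing that shared key when none exists in advance.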
Notes
[1] Definitions: IT Security Architecture (http://opensecurityarchitecture.com). SecurityArchitecture.org, January 2006.
[2] http://www.cert.org/books/secure-coding
[3] New hacking technique exploits common programming error (http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1265116,00.html). SearchSecurity.com, July 2007.
[4] J. C. Willemssen, "FAA Computer Security". GAO/T-AIMD-00-330. Presented at Committee on Science, House of Representatives, 2000.
[5] P. G. Neumann, "Computer Security in Aviation," presented at International Conference on Aviation Safety and Security in the 21st Century, White House Commission on Safety and Security, 1997.
[6] J. Zellan, Aviation Security. Hauppauge, NY: Nova Science, 2003, pp. 65–70.
[7] Information Security (http://www.fas.org/irp/gao/aim96084.htm). United States General Accounting Office, 1996.
[8] http://www.opencongress.org/bill/111-s773/text
[9] Cybersecurity bill passes first hurdle (http://www.computerworld.com/s/article/9174065/Cybersecurity_bill_passes_first_hurdle), Computer World, March 24, 2010. Retrieved June 26, 2010.
[10] Cybersecurity Act of 2009 (http://www.opencongress.org/bill/111-s773/text), OpenCongress.org, April 1, 2009. Retrieved June 26, 2010.
[11] Federal Authority Over the Internet? The Cybersecurity Act of 2009 (http://www.eff.org/deeplinks/2009/04/cybersecurity-act), eff.org, April 10, 2009. Retrieved June 26, 2010.
[12] http://www.opencongress.org/bill/111-h4962/text
[13] H.R.4962 - International Cybercrime Reporting and Cooperation Act (http://www.opencongress.org/bill/111-h4962/show), OpenCongress.org. Retrieved June 26, 2010.
[14] http://hsgac.senate.gov/public/index.cfm?FuseAction=Files.View&FileStore_id=4ee63497-ca5b-4a4b-9bba-04b7f4cb0123
[15] Senators Say Cybersecurity Bill Has No 'Kill Switch' (http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=225701368&subSection=News), informationweek.com, June 24, 2010. Retrieved June 25, 2010.
[16] Declan McCullagh, CNET. "White House proposes cybersecurity legislation" (http://news.cnet.com/8301-31921_3-20062277-281.html?part=rss&subj=news&tag=2547-1_3-0-20). May 12, 2011. Retrieved May 12, 2011.
References
• Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems (http://www.cl.cam.ac.uk/~rja14/book.html), ISBN 0-471-38922-6
• Morrie Gasser: Building a Secure Computer System (http://cs.unomaha.edu/~stanw/gasserbook.pdf), ISBN 0-442-23022-2, 1988
• Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault, Richard Donovan: Management Information Systems for the Information Age, ISBN 0-07-091120-7
• E. Stewart Lee: Essays about Computer Security (http://www.cl.cam.ac.uk/~mgk25/lee-essays.pdf), Cambridge, 1999
• Peter G. Neumann: Principled Assuredly Trustworthy Composable Architectures (http://www.csl.sri.com/neumann/chats4.pdf), 2004
• Paul A. Karger, Roger R. Schell: Thirty Years Later: Lessons from the Multics Security Evaluation (http://www.acsac.org/2002/papers/classic-multics.pdf), IBM white paper
• Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0-471-25311-1
• Robert C. Seacord: Secure Coding in C and C++. Addison Wesley, September 2005. ISBN 0-321-33572-4
• Clifford Stoll: The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, ISBN 0-7434-1146-3
• Angus Wong and Alan Yeung: Network Infrastructure Security (http://www.springer.com/computer/communications/book/978-1-4419-0165-1), Springer, 2009
External links
• Security advisories links (http://www.dmoz.org/Computers/Security/Advisories_and_Patches/) from the Open Directory Project
• Top 5 Security No Brainers for Businesses (http://www.networkworld.com/community/node/59971) from Network World
• The Repository of Industrial Security Incidents (http://www.securityincidents.org/)
Portal:Computer security
Computer Security is anything that has to do with protecting Computer Systems such as
smartphones, desktop computers, company servers, IP phones, set-top boxes, etc. from spam,
viruses, worms, trojan horses, malware and intrusion. It is defined as the methods and technologies for deterrence, protection, detection, response, recovery and extended functionality in information systems.
The jdbgmgr.exe virus hoax involved an e-mail spam in 2002 that advised computer users to delete a file named jdbgmgr.exe because, it claimed, the file was a computer virus. jdbgmgr.exe, which had a little teddy bear-like icon (The Microsoft
Bear), was actually a valid Microsoft Windows file, the Debugger Registrar for Java (also known as Java Debug
Manager, hence jdbgmgr).
Featuring so odd an icon among normally dull system icons had an unexpected counterpoint: an email hoax warning users that the file was a virus that had somehow gotten onto their computer and should be deleted. This hoax has taken many forms and has always been popular among non-expert users who find the icon suspicious.
The email has taken many forms, including saying its purpose was to warn Hotmail users of a virus spreading via MSN Messenger, or even to alert users to a possible virus in the orkut web community. All versions say that the file is not detected by McAfee or Norton AntiVirus, which is obviously true.
This email could in fact be considered a kind of virus, as it has the normal life cycle of a computer virus: it arrives in user mailboxes, harms the system (by deleting a file), and the message is then forwarded to multiple recipients to reinfect them. The difference is that all of those steps are executed by the user himself, making it a failproof virus (c.f. the honor system virus).
Beast, a Windows-based backdoor Trojan horse.
Bruce Schneier (born January 15, 1963, pronounced "shn-EYE-er") is an American cryptographer, computer security
specialist, and writer. He is the author of several books on computer security and cryptography, and is the founder
and chief technology officer of BT Counterpane, formerly Counterpane Internet Security, Inc.
Schneier's Applied Cryptography is a popular reference work for cryptography. Schneier has designed or
co-designed several cryptographic algorithms, including the Blowfish, Twofish and MacGuffin block ciphers, the
Helix and Phelix stream ciphers, and the Yarrow and Fortuna cryptographically secure pseudo-random number
generators. Solitaire is a cryptographic algorithm developed by Schneier for use by people without access to a
computer, called Pontifex in Neal Stephenson's novel Cryptonomicon. In October 2008, Schneier, together with seven others, introduced the Skein hash function family, a more secure and efficient alternative to older algorithms.[1]
• ...that Sasser.E removed all other variants of itself before taking over?
• ...that Storm Botnet used a custom peer-to-peer communication protocol called Stormnet?
• ...that the Monte Carlo method is named after the Monte Carlo casino and was invented by physicists working on nuclear weapons projects in the Los Alamos National Laboratory?
• Google may shut down Chinese operations due to censorship and cyber attacks
• European Parliament bounces back from suspected cyber attack[2]
• Antivirus and Content Security
• Sophos
• Antivirus tools
• Avast!
• Avira
• AVG Anti-Virus
• BitDefender
• Clam AntiVirus
• ClamWin
• Comodo Internet Security
• eScan
• Kaspersky Labs
• McAfee VirusScan
• Microsoft Security Essentials
• NOD32
• Norton AntiVirus
• Trend Micro Internet Security
• Windows Live OneCare
• Firewall
• AVG Plus Firewall Edition
• Comodo Internet Security
• McAfee Personal Firewall Plus
• Netfilter/iptables
• Norton Personal Firewall
• Windows Firewall
• ZoneAlarm
• Anti-Spyware tools
• Ad-Aware
• AVG Anti-Spyware
• McAfee AntiSpyware
• Spybot
• Spy Sweeper
• Spyware Doctor
• Trend Micro Internet Security
• Windows Defender
• STOPzilla
• Anti-Spam Software
• GWAVA
• Worms, Viruses, Trojan Horses
• ILOVEYOU
• Code Red worm
• Nimda
• SQL slammer worm
• Blaster worm
• Welchia
• Sobig worm
• Sober worm
• MyDoom
• Witty worm
• Sasser worm
• Santy
• Zotob worm
• Samy Virus
• Nyxem
• Stuxnet
• List of Trojan Horses
References
[1] "The Skein Hash Function Family" (http://www.schneier.com/skein.html). Retrieved 2008-10-31.
[2] http://news.idg.no/cw/art.cfm?id=AEA9B8AE-1A64-67EA-E4F946CE9B361D72
2009 Sidekick data loss
The Sidekick data outage of 2009 resulted in an estimated 800,000 smartphone users in the United States
temporarily losing personal data, such as emails, address books and photos from their mobile handsets. The
computer servers holding the data were run by Microsoft.[1] The brand of phone affected was the Danger Hiptop, also known as the "Sidekick"; the phones were connected via the T-Mobile cellular network. At the time, it was described as the biggest disaster in cloud computing history.[2]
T-Mobile Sidekick 2
The Sidekick smartphones were originally produced by Danger, Inc., a
company that was bought by Microsoft in February 2008. After the acquisition, the former Danger staff were absorbed into the Mobile Communications Business (MCB) of the Entertainment and Devices Division at Microsoft, where they worked on a future Microsoft mobile phone platform known as Project Pink.[3] However, most of the ex-Danger employees soon left Microsoft to pursue other things.[4] Microsoft took over the running of the data servers, and its data centers were hosting the customers' data at the time it was lost.[5]
In late September 2009, T-Mobile Sidekick phone users started noticing data service outages. The outages lasted approximately two weeks, and on 10 October 2009 T-Mobile announced that personal information stored on Sidekick phones would be permanently lost, which turned out to be incorrect.[6]
According to the Financial Times, Microsoft said the data centre it acquired from Danger 18 months previously had not been "updated to run on Microsoft technology."[1] A company statement said the mishap was due to "a confluence of errors from a server failure that hurt its main and backup databases supporting Sidekick users."[2] T-Mobile blamed Microsoft for the loss of data.[1]
The incident caused a public loss of confidence in the concept of cloud computing, which had been plagued by a series of outages and data losses in 2009.[7] It was also problematic for Microsoft, which at the time was trying to convince corporate clients to use its cloud computing services, such as Azure and My Phone.[1]
On 14 October 2009, a class action lawsuit was launched against Microsoft and T-Mobile. The lawsuit alleged:
T-Mobile and Microsoft promised to safeguard the most important data their customers possess and then
apparently failed to follow even the most basic data protection principles. What they did is unthinkable
in this day and age.[8]
On 15 October, Microsoft said they had been able to recover most or all of the data and would begin to restore it.[9][10]
Microsoft CEO Steve Ballmer disputed whether there had ever been a data loss, instead describing it as an outage. Ballmer said, "It is not clear there was data loss." However, he said the incident was "not good" for Microsoft.[11]
References
[1] "Data loss puts cloud on Microsoft" (http:/ / www.ft. com/ cms/ s/ 0/ cce17b14-b78e-11de-9812-00144feab49a.html?ftcamp=rss&
nclick_check=1). Financial Times. 13 October 2009. .
[2] "The Sidekick Cloud Disaster" (http:// www. bbc. co. uk/ blogs/ technology/ 2009/ 10/ the_sidekick_cloud_disaster. html). 13 October 2009.
.
[3] "Microsoft's Pink Struggles Spill Over To Sidekick" (http:/ / www.crn.com/ software/
220600334;jsessionid=KCFRRMAUPYH2NQE1GHOSKH4ATMY32JVN?pgno=1). ChannelWeb (UMB). 12 October 2009. .
[4] "The Sidekick catastrophe: A curse for Microsoft, but a blessing for Motorola?" (http:// www.betanews. com/ article/
The-Sidekick-catastrophe-A-curse-for-Microsoft-but-a-blessing-for-Motorola/1255361704). Betanews. 12 October 2009. .
[5] "Danger no backups" (http:// www.theinquirer.net/ inquirer/news/ 1558214/ danger-backups). The Inquirer. 12 October 2009. .
[6] "T-Mobile Sidekick data outage tests mettle of 800,000 customers, carrier" (http:/ / blogs. zdnet. com/ gadgetreviews/ ?p=8306&
tag=col1;post-11306). ZDNet. 12 October 2009. .
[7] "From Sidekick to Gmail: A short history of cloud computing outages" (http:/ / www. networkworld.com/ news/ 2009/
101209-sidekick-cloud-computing-outages-short-history.html?hpg1=bn). Network World. 13 October 2009. .
[8] "Class Action Suit Filed after T-Mobile and Microsoft Lose Data" (http:// www.prnewschannel.com/ absolutenm/ templates/ ?a=1732&
z=4). PR NewsChannel. 14 October 2009. .
[9] "UPDATE: Microsoft Says It Has Recovered Lost Sidekick Data" (http:/ / online.wsj.com/ article/ BT-CO-20091015-710685.html). The
Wall Street Journal. 15 October, 2009. . Retrieved 15 October, 2009.
[10] "Microsoft recovers Sidekick data" (http:/ / news. bbc. co. uk/ 2/ hi/ technology/8309218.stm). BBC News. 15 October, 2009. . Retrieved
15 October, 2009.
[11] "Ballmer: Sidekick outage 'not good'" (http:/ / www. networkworld.com/ news/ 2009/ 101909-microsoft-balmer-sidekick.html). Network
World. 19 October 2009. .
AAFID
AAFID stands for Autonomous Agents for Intrusion Detection, a distributed intrusion detection system. In this architecture, the nodes of the IDS are arranged hierarchically in a tree. The first generation of distributed intrusion detection systems had this hierarchical architecture.
The general types of distributed intrusion detection systems are as follows:
• hierarchical
• network architecture
• hybrid architecture
• mobile agent architecture
Agents in AAFID were not mobile. AAFID was one of the earliest hierarchical distributed intrusion detection systems.
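A toy Python sketch of the hierarchical arrangement described above, structure only (real AAFID agents performed host monitoring; the names here are illustrative):

    class Agent:
        def __init__(self, name, parent=None):
            self.name, self.parent, self.alerts = name, parent, []

        def report(self, alert):
            # Record locally, then escalate up the tree toward the root.
            self.alerts.append(alert)
            if self.parent:
                self.parent.report(self.name + ": " + alert)

    root = Agent("monitor")
    host = Agent("host-1", parent=root)
    host.report("repeated failed logins")
    print(root.alerts)   # ['host-1: repeated failed logins']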
Absolute Manage
Developer(s) Absolute Software
Stable release 5.2.2 / November 10, 2009
Operating system Windows, Mac OS X
Type Network management, Systems management, IT automation, Software Asset Management
License Proprietary
Website www.absolute.com [1]
Absolute Manage (formerly LANrev) is systems lifecycle management software for system administrators which automates IT administration tasks.[2] The product is composed of server and client ("agent") software that runs on Windows and Mac OS X.[3]
Vancouver-based Absolute Software acquired LANrev from Pole Position Software in December 2009, for US$12.1 million in cash and 500,000 shares of Absolute's common stock.[4] LANrev was rebranded as Absolute Manage in February 2010.[5]
Features
LANrev's features include:
• Asset inventory
• Theft Recovery
• Software distribution
• License management
• Patch management
• Remote management
• iPhone management
• Disk imaging
• Power management
• Package building (see LANrev InstallEase)
• Role-based administration
• Security configuration for FDCC
School webcam controversy
In the 2010 Robbins v. Lower Merion School District case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students by surreptitiously and remotely activating webcams embedded in school-issued laptops the students were using at home, thereby infringing on their privacy rights. The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms.[6][7]
LANrev software was used in the Lower Merion school district's student laptop program, overseen by network technician Michael Perbix.[8] In February 2010, Perbix and other administrators in the district were accused of using the software to take undisclosed and unauthorized photographs of students through the webcams on their Macintosh laptops.[9] The lawsuit was brought by the parents of a 15-year-old sophomore, Blake Robbins, who had allegedly been accused of illicit behavior on the basis of webcam images of him in his bedroom. The photographs, taken from a laptop that was reportedly not stolen, were then allegedly used as evidence in a disciplinary action.[10] The FBI investigated the incident, and a Philadelphia federal judge intervened to sort out issues relating to the lawsuit.[11][12]
Perbix had previously praised Theft Track, the name of the feature that lets administrators remotely photograph potential thieves if a computer is reported stolen, noting in a YouTube video he produced that:
It’s an excellent feature. Yes, we have used it, and yes, it has gleaned some results for us. But it, in and
of itself, is just a fantastic feature for trying to—especially when you’re in a school environment and you
have a lot of laptops and you’re worried about, you know, laptops getting up and missing. I’ve actually
had some laptops we thought were stolen which actually were still in a classroom, because they were
misplaced, and by the time we found out they were back, I had to turn the tracking off. And I had, you
know, a good twenty snapshots of the teacher and students using the machines in the classroom.[13]
LANrev's new owner, Absolute Software, staunchly denounced the use of its software for any illegal purpose, emphasizing that theft recovery should be left to law enforcement professionals.[14] The company further denied any knowledge of or complicity in either Perbix's or the school district's actions. Absolute stated that the next update of LANrev, which would ship in the next several weeks, would permanently disable Theft Track.[15]
Partners
• Enterprise Desktop Alliance
• Group Logic
• IBM
• Parallels, Inc.
• Web Help Desk
• Microsoft System Center Alliance
• LiveTime CMDB [16]
References
[1] http://www.absolute.com/
[2] Faas, Ryan (January 9, 2009). "The Top Five Solutions for Mac/Windows Client Deployment" (http://www.informit.com/articles/article.aspx?p=1315435). InformIT. Retrieved June 23, 2009.
[3] Best, Brian (2008). "Managing Your Loadset, Post-Deploy" (http://www.mactech.com/articles/mactech/Vol.24/24.01/ManagingYourLoadset-Post-Deploy/index.html). MacTech 24 (1). Retrieved June 23, 2009.
[4] Absolute Software (December 3, 2009). "Absolute Software Acquires LANrev product suite from Pole Position Software" (http://www.absolute.com/company/pressroom/news/2009/12/lanrev). Press release. Retrieved January 19, 2010.
[5] "Absolute Software Unveils New Cross-Platform IT Asset Management Solution" (http://www.lanrev.com/company/news/single/article/absolute-software-unveils-new-cross-platform-it-asset-management-solution.html). Press release. February 2, 2010.
[6] Doug Stanglin (February 18, 2010). "School district accused of spying on kids via laptop webcams" (http://content.usatoday.com/communities/ondeadline/post/2010/02/school-district-accused-of-issuing-webcam-laptops-to-spy-on-students/1). USA Today. Retrieved February 19, 2010.
[7] "Initial LANrev System Findings" (http://lmsd.org/documents/news/100503_l3_report.pdf), LMSD Redacted Forensic Analysis, L-3 Services – prepared for Ballard Spahr (LMSD's counsel), May 2010. Retrieved August 15, 2010.
[8] School District Faces Lawsuit Over Webcam Spying Claims (http://www.pcworld.com/businesscenter/article/190101/school_district_faces_lawsuit_over_webcam_spying_claims.html)
[9] Worden, Amy (February 22, 2010). "Laptop camera snapped away in one classroom" (http://www.philly.com/inquirer/front_page/20100222_Laptop_camera_snapped_away_in_one_classroom.html). Philly.com. Retrieved August 10, 2010.
[10] "Suit: Schools Spied on Students Via Webcam" (http://www.cbsnews.com/stories/2010/02/18/national/main6220751.shtml). CBS News. February 18, 2010. Retrieved August 10, 2010.
[11] Claburn, Thomas. "FBI Investigating Web Spycam" (http://www.informationweek.com/news/security/privacy/showArticle.jhtml?articleID=223100403). InformationWeek. Retrieved August 10, 2010.
[12] Tanfani, Joseph (February 23, 2010). "Rare ban in laptop lawsuit" (http://www.philly.com/philly/news/homepage/85021742.html). Philly.com. Retrieved August 10, 2010.
[13] "FBI, US Attorney Probing Penn. School District's Computer Spying" (http://www.democracynow.org/2010/2/24/headlines/fbi_us_attorney_probing_penn_school_districts_computer_spying). Democracynow.org. Retrieved August 10, 2010.
[14] http://www.computerworld.com/s/article/9160278/Software_maker_blasts_vigilantism_in_Pa._school_spying_case?taxonomyId=12
[15] "LANrev to lose Theft Track feature following Pa. school spying allegations" (http://blogs.techrepublic.com.com/itdojo/?p=1559). TechRepublic. February 23, 2010. Retrieved August 10, 2010.
[16] http://www.livetime.com/itil-service-management/service-manager/configuration-management-cmdb/
External links
• Official homepage (http://www.absolute.com/en/products/absolute-manage/features.aspx)
Accelops
AccelOps
Type Private
Industry Data Center Monitoring, Business Service Management, Security Information Event Management
Founded 2007
Headquarters Santa Clara, CA, USA
Website www.AccelOps.net [1]
AccelOps provides integrated datacenter monitoring and Business Service Management software delivered as a
Virtual Appliance or Software-as-a-Service (SaaS).
Overview
AccelOps offers an integrated, unified and service-oriented platform for monitoring, alerting, analyzing and
reporting across performance, availability, security and change management in the context of business services.
Details
AccelOps' all-in-one data center and IT service management platform, presented through a Web 2.0 GUI leveraging Adobe Flex, provides operational data collection, monitoring, predictive alerting, root-cause analysis and detailed reporting on all IT event, log and performance data, cutting across network, system, application, virtualization, vendor and technology boundaries.
Integrated datacenter monitoring functionality includes:
• Business service management
• Performance and Availability Management
• Security Information and Event Management
• CMDB and Change Management
• Compliance Automation
• Network Visualization and Enterprise Search
• Identity and Location Management
AccelOps can be deployed on-premise as a virtual appliance or delivered as Software-as-a-Service.
The solution integrates into the data center/IT infrastructure, employing an automated, intelligent and low-impact means to continuously discover, capture, analyze and store massive volumes of operational data. AccelOps does not require agents, and it is not an "inline" device.
Company and Founders
AccelOps is short for Accelerate Operations. The Silicon Valley-based company is privately held, venture-backed
and led by the same team that created the Cisco CS-MARS security appliance. After Imin Lee and Partha
Bhattacharya [2] left Cisco in 2007, they saw an opportunity to leverage their networking, systems and event
correlation experience to introduce a new product using a holistic approach to improve IT service reliability. [3]
References
• Silicon Valley Business Journal, AccelOps Solves Problems By Looking At Whole Datacenter [4]
• Computer Technology Review of AccelOps v1.5.1 [5]
• Enterprise Management Associates Report [6]
• Frost and Sullivan Report [7]
• Redmonk Research Review [8]
• CIO Magazine: High-Powered Data-Center Management Tools Come Down Market [9]
• CSO Magazine: Network and Security Operations Converge (a mini-case study) [10]
External links
• Website [11]
• News [12]
• Online Resources [13]
References
[1] http://www.AccelOps.net/
[2] http://www.accelops.net/company/management.php
[3] http://www.accelops.net/product/allinone.php
[4] http://sanjose.bizjournals.com/sanjose/stories/2009/11/16/smallb4.html?b=1258347600^2441501
[5] http://www.accelops.net/pdf/CTR_AccelOps_Review_010610.pdf
[6] http://www.accelops.net/pdf/EMA_AccelOps-ITSMforMidTier.pdf
[7] http://www.accelops.net/pdf/2009NorthAmericanNewProductInnovationAward.AccelOps.pdf
[8] http://www.redmonk.com/cote/2009/07/13/accelops-all-in-one-it-management
[9] http://www.cio.com/article/497910/High_Powered_Data_Center_Management_Tools_Come_Downmarket
[10] http://www.csoonline.com/article/507764/Network_and_Security_Operations_Convergence_A_Mini_Case_Study?page=1
[11] http://www.accelops.net
[12] http://www.accelops.net/news/index.php
[13] http://www.accelops.net/resources/resources.php
Acceptable use policy
An acceptable use policy (AUP; also sometimes acceptable usage policy or fair use policy) is a set of rules applied by the owner/manager of a network, website or large computer system that restrict the ways in which the network, site or system may be used. AUP documents are written for corporations,[1] businesses, universities,[2] schools,[3] internet service providers,[4] and website owners,[5] often to reduce the potential for legal action that may be taken by a user, and often with little prospect of enforcement.
Acceptable use policies are an integral part of the framework of information security policies; it is common practice to ask new members of an organization to sign an AUP before they are given access to its information
systems. For this reason, an AUP must be concise and clear, while at the same time covering the most important
points about what users are, and are not, allowed to do with the IT systems of an organization. It should refer users to
the more comprehensive security policy where relevant. It should also, and very notably, define what sanctions will
be applied if a user breaks the AUP. Compliance with this policy should, as usual, be measured by regular audits.
Terminology
AUP documents are similar to, and often do the same job as, documents labelled Terms of Service, as used for example by Google Gmail[6] and Yahoo![7], although not in every instance: in the case of IBM.com[8], the Terms of Use concern the way in which IBM presents the site to you and how IBM will interact with you through the site, with little to no instruction as to how you, the user, will use the site.
In some cases, AUP documents are named Internet and E-mail Policy[9], Internet AUP, Network AUP, or Acceptable IT Use Policy[10]. These documents, even though named differently, largely provide policy statements as to what behaviour is acceptable from users of the local network/Internet connected via the local network.
Common elements of AUP statements
In general, AUP statements/documents often begin with a statement of the philosophy[11] of the sponsoring organisation and the intended reason why Internet use is offered to the users of that organisation's network. For example, the sponsoring organisation adopts a philosophy of self-regulation and offers the user connection to the local network, and through it to the Internet, provided that the user accepts that he or she is going to be personally responsible for actions taken when connected to the network or Internet. This may mean that the organisation is not going to provide any warning system should the user contravene policy, maintaining that it is up to the user to know when his or her actions are in violation of policy. Often acceptable use policy documents provide a statement about the use of the network and/or Internet and its uses and advantages[12] to the business, school or other organisation sponsoring connection to the Internet. Such a statement may outline the benefit of email systems, the ability to gain information from websites, connection with other people through the use of instant messaging, and other similar benefits of various protocols including the relatively new VoIP services.
The most important part of an AUP document is the code of conduct[13] governing the behaviour of a user whilst connected to the network/Internet. The code of conduct may include some description of what may be called
connected to the network/Internet. The code of conduct may include some description of what may be called
netiquette which includes such items of conduct as using appropriate/polite language while online, avoiding illegal
activities, ensuring that activities the user may embark on should not disturb or disrupt any other user on the system,
and caution not to reveal personal information that could be the cause of identity theft.
Most AUP statements outline the consequences of violating[14] the policy. Such violations are met with consequences depending on the relationship of the user with the organisation. A common action that schools and universities take is to withdraw the service from the violator, and sometimes, if the activities are illegal, the organisation may involve appropriate authorities, such as the local police. Employers will at times withdraw the service from employees, although a more common action is to terminate employment when violations may be hurting the employer in some way, or may compromise security. Earthlink[14], an American Internet service provider, has a very clear policy relating to violations of its policy. The company identifies six levels of response to violations:
• issue warnings: written or verbal
• suspend the Member's newsgroup posting privileges
• suspend the Member's account
• terminate the Member's account
• bill the Member for administrative costs and/or reactivation charges
• bring legal action to enjoin violations and/or to collect damages, if any, caused by violations.
Central to most AUP documents is the section detailing unacceptable uses of the network, as displayed in the University of Chicago AUP[15]. Unacceptable behaviours may include the creation and transmission of offensive, obscene, or indecent documents or images; the creation and transmission of material which is designed to cause annoyance, inconvenience or anxiety; the creation of defamatory material; the creation and transmission of material that infringes the copyright of another person; the transmission of unsolicited commercial or advertising material; and deliberate unauthorised access to other services accessible using the connection to the network/Internet. Then there is the type of activity that wastes the time of technical staff, as indicated in SurfControl's advice on writing AUPs[16]: requiring staff to troubleshoot a problem the user caused, corrupting or destroying other users' data, violating the privacy of others online, using the network in such a way that it denies the service to others, continuing to use software or another system after already having been warned about it, and any other misuse of the network, such as the introduction of viruses.
Disclaimers are often added in order to absolve an organisation from responsibility under specific circumstances. For example, in the case of Anglia Ruskin University[17] a disclaimer is added absolving the University of errors or omissions and of any consequences arising from the use of information contained on the University website. While disclaimers may be added to any AUP, they are most often found on AUP documents relating to the use of a website, while those offering a service often fail to add such clauses. PsychologyUK[18], a magazine forum site, includes the type of disclaimer that can be used in an AUP for a website or online service of some type.
Particularly when an AUP is written for a college or school setting, AUPs remind students (or, in the case of a company, employees) that connection to the Internet, or use of a website, is a privilege and not a right, as demonstrated in Loughborough University's Janet Service AUP[10]. Through emphasising this "privilege" aspect, Northern Illinois University[19] then makes the connection that any abuse of that privilege can result in legal action from the University.
In a handbook for writing AUP documents[12], the Virginia Department of Education indicates that there are three other areas needing to be addressed in an AUP:
• a statement that the AUP is in compliance with state and national telecommunication rules and regulations
• a statement regarding the need to maintain personal safety and privacy while accessing the Internet
• a statement regarding the need to comply with Fair Use Laws and other copyright regulations while accessing the
Internet
A cursory reading of AUP statements found by a Google search[20] shows that the inclusion of these items in AUP documents is highly variable. However, statements in a school or university setting are more likely to include a statement addressing at least the "personal safety" issue.
Enforceability
An example jurisdiction clause, taken from a policy governed by English law:
6.3 This Policy shall be governed by the laws of England and the parties submit to the exclusive jurisdiction of the Courts of England and Wales.
With the ever-widening number of jurisdictions covered by the Internet, an AUP document needs to indicate the jurisdiction, meaning the laws that are applicable to and govern the use of the AUP. Even if a company is located in only one jurisdiction and the AUP applies only to its employees, naming the jurisdiction saves difficulties of interpretation should legal action be required to enforce its statements.
References
[1] "IS.SEC.005" (http:/ / google. com/ search?q=cache:hBXqLlQN39IJ:ec.hcahealthcare.com/ CPM/ ISSEC005.doc+ "internet+policy"+
site:hcahealthcare. com& hl=en& ct=clnk&cd=1& gl=us). Hospital Corporation of America. 2007-12-01. . Retrieved 2008-12-13.
[2] "Policy on Acceptable Use of Electronic Resources" (http:// www.upenn.edu/computing/ policy/ aup.html). University of Pennsylvania. .
Retrieved 2008-12-13.
[3] "2008-2009 Code of Student Conduct" (http:/ / www.ccps. k12.fl.us/ Downloads/ COSC0809. pdf) (pdf). Charlotte County Public Schools.
. Retrieved 2008-12-13.
[4] "EMBARQ ACCEPTABLE USE POLICY & VISITOR AGREEMENT" (http:// www2. embarq.com/ legal/ acceptableuse. html). Embarq.
2006-10-20. . Retrieved 2008-12-13.
[5] "MySpace.com Terms of Use Agreement" (http:/ / www. myspace. com/ index.cfm?fuseaction=misc.terms). Myspace. 2008-02-28. .
Retrieved 2008-12-13.
[6] http:/ / mail.google. com/ mail/ help/ terms_of_use. html
[7] http:/ / info.yahoo. com/ legal/ us/ yahoo/ utos/ utos-173. html
[8] http:// www. ibm. com/ legal/ us/
[9] http:/ / www. visiongateway. net/ support/ downloads/ document/ AUP%20-%20Employee.pdf
[10] http:// www. lboro.ac. uk/ computing/ policies/ loughborough-aup.html
[11] http:// www. solis. co. uk/ aup/ index
[12] http:/ / www. pen. k12. va. us/ VDOE/Technology/AUP/ home.shtml
[13] http:/ / title3.sde. state. ok. us/ technology/ aup. htm
[14] http:// www. earthlink.net/ about/ policies/ use. faces
[15] http:// nsit. uchicago. edu/ policies/ eaup/
[16] http:/ / www. surfcontrol.com/ uploadedfiles/ AUP_Booklet_10011_uk.pdf
[17] http:/ / www. anglia. ac. uk/ ruskin/ en/ home/ tools/ disclaimer.html
[18] http:/ / www. psychologyuk. co.uk/ forum/faq.php?faq=aup#faq_forum_disclaimer
[19] http:/ / www. niu. edu/ aup/
[20] http:// www. google. co. uk/ search?q=acceptable+use+ policy& sourceid=navclient-ff&ie=UTF-8&rlz=1B3GGGL_enAU213AU213
External links
• Responsible ISP Acceptable Use Policies (http://www.spamhaus.org/aups.html) on SpamHaus
• Critiquing Acceptable Use Policies (http://www.io.com/~kinnaman/aupessay.html) by Dave Kinnaman
• Virginia Department of Education Handbook for writing AUP documents (http://www.pen.k12.va.us/VDOE/Technology/AUP/home.shtml)
Access token
In Microsoft Windows operating systems, an access token contains the security information for a login session and
identifies the user, the user's groups, and the user's privileges.
Overview
An access token is an object encapsulating the security descriptor of a process.[1] Attached to a process, a security descriptor identifies the owner of the object (in this case, the process) and ACLs that specify the access rights allowed or denied to the owner of the object.[2][3] While a token is used to represent only the security information, it is technically free-form and can enclose any data. The access token is used by Windows when the process or thread tries to interact with objects whose security descriptors enforce access control (securable objects).[1] An access token is represented by the system object of type Token. Because a token is a regular system object, access to a token itself can be controlled by attaching a security descriptor, but this is generally never done in practice.
The access token is generated by the logon service when a user logs on to the system and the credentials provided by
the user are authenticated against the authentication database, by specifying the rights the user has in the security
descriptor enclosed by the token. The token is attached to every process created by the user session (processes whose
owner is the user).
[1]
Whenever such a process accesses any resource which has access control enabled, Windows
looks up in the security descriptor in the access token whether the user owning the process is eligible to access the
data, and if so, what operations (read, write/modify, etc.) the user is allowed to do. If the accessing operation is
allowed in the context of the user, Windows allows the process to continue with the operation, else it is denied
access.
Types of tokens
There are two types of tokens:
Primary token
Primary tokens can only be associated with processes, and they represent a process's security subject. The creation of primary tokens and their association with processes are both privileged operations, requiring two different privileges in the name of privilege separation; the typical scenario sees the authentication service creating the token and a logon service associating it with the user's operating system shell. Processes initially inherit a copy of the parent process's primary token.
Impersonation token
Impersonation is a security concept unique to Windows NT that allows a server application to temporarily "be" the client in terms of access to secure objects. Impersonation tokens can only be associated with threads, and they represent a client process's security subject. They are usually created and associated with the current thread implicitly, by IPC mechanisms such as DCE RPC, DDE and named pipes (see the sketch below). Impersonation has three possible levels: identification, letting the server inspect the client's identity; impersonation, letting the server act on behalf of the client; and delegation, the same as impersonation but extended to remote systems to which the server connects (through the preservation of credentials). The client can choose the maximum impersonation level (if any) available to the server as a connection parameter. Delegation and impersonation are privileged operations (impersonation initially was not, but historical carelessness in the implementation of client APIs, which failed to restrict the default level to "identification" and thereby let an unprivileged server impersonate an unwilling privileged client, called for the change).
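As a minimal sketch (assuming a Win32 named-pipe server that has already accepted a client connection; error handling is omitted), the implicit association of an impersonation token with the current thread looks like this in C:

/* The server temporarily "becomes" the client for access checks,
   then reverts to its own primary token. */
#include <windows.h>

void serve_request(HANDLE hPipe)
{
    if (ImpersonateNamedPipeClient(hPipe)) {
        /* Access checks in this block use the client's impersonation
           token attached to the current thread, not the server's
           primary token. */
        /* ... open objects on behalf of the client here ... */
        RevertToSelf();   /* restore the server's own security context */
    }
}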
Contents of a token
A token is composed of various fields, including but not limited to:
• an identifier.
• the identifier of the associated logon session. The session is maintained by the authentication service, and is
populated by the authentication packages with a collection of all the information (credentials) the user provided
when logging in. Credentials are used to access remote systems without the need for the user to re-authenticate
(single sign-on), provided that all the systems involved share an authentication authority (e.g. a Kerberos ticket
server).
• the user identifier. This field is the most important and it's strictly read-only.
• the identifiers of groups the user (or, more precisely, the subject) is part of. Group identifiers cannot be deleted,
but they can be disabled. At most one of the groups is designated as the session id, a volatile group representing
the logon session, allowing access to volatile objects associated to the session, such as the display.
• the restricting group identifiers (optional). This additional set of groups doesn't grant additional access, but further
restricts it: access to an object is only allowed if it's allowed also to one of these groups. Restricting groups cannot
be deleted nor disabled. Restricting groups are a recent addition, and they are used in the implementation of
sandboxes.
• the privileges, i.e. special capabilities the user has. Most privileges are disabled by default, to prevent damage from non-security-conscious programs. Starting in Windows XP Service Pack 2 and Windows Server 2003, privileges can be permanently removed from a token by a call to AdjustTokenPrivileges() with the SE_PRIVILEGE_REMOVED attribute (see the sketch after this list).
• the default owner, primary group and ACL for objects created by the subject associated to the token.
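As a minimal sketch of permanent privilege removal (assumptions: the current process token is used and SE_SHUTDOWN_NAME is merely an example privilege; error handling is reduced to bare returns):

/* Permanently remove the shutdown privilege from the current
   process token; unlike disabling, removal cannot be undone. */
#include <windows.h>

int remove_shutdown_privilege(void)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tp;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES, &hToken))
        return 0;
    if (!LookupPrivilegeValue(NULL, SE_SHUTDOWN_NAME,
                              &tp.Privileges[0].Luid)) {
        CloseHandle(hToken);
        return 0;
    }
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_REMOVED;
    /* Requires Windows XP SP2 / Server 2003 or later, as noted above. */
    AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL);
    CloseHandle(hToken);
    return 1;
}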
References
[1] "Access Tokens" (http:/ /msdn2. microsoft.com/ en-us/ library/Aa374909.aspx). MSDN. . Retrieved 2007-10-08.
[2] "Security Descriptors" (http:/ / msdn2. microsoft.com/ en-us/ library/aa379563. aspx). . Retrieved 2007-10-08.
[3] "Securable Objects" (http:/ / msdn2. microsoft.com/ en-us/ library/aa379557. aspx). . Retrieved 2007-10-08.
Advanced Persistent Threat
Advanced persistent threat (APT) usually refers to a group, such as a foreign nation state government, with both the capability and the intent to persistently and effectively target a specific entity. The term is commonly used to refer to cyber threats, in particular that of Internet-enabled espionage, but applies equally to other threats such as that of traditional espionage or attack.[1] Other recognised attack vectors include infected media, supply chain compromise, and social engineering. Individuals, such as an individual hacker, are not usually referred to as an APT, as they rarely have the resources to be both advanced and persistent even if they are intent on gaining access to, or attacking, a specific target.[2]
The global landscape of APTs from all sources is sometimes referred to in the singular as "the" APT, as are references to the actor behind a specific incident or series of incidents.
The Stuxnet computer worm could be considered the product of an advanced persistent threat, though classifying its creators as such implies an expectation of further sabotage of the Iranian nuclear program.
Within the computer security community, and increasingly within the media, the term is almost always used in reference to a long-term pattern of sophisticated, targeted hacking attacks aimed at governments, companies and political activists, and by extension, also to refer to the groups behind these attacks. A common misconception associated with the APT is that it specifically targets Western governments. While examples of technological APTs against Western governments may be more publicized in the West, actors in many nations have used the technological (cyber) APT as a means to gather intelligence on individuals and groups of individuals of interest.[3] [4] [5] The United States Cyber Command is tasked with coordinating the US military's response to this cyber threat.
Numerous sources have alleged that some groups involved in APTs are affiliated with, or agents of, nation-states.[6] [7] [8]
Definitions of precisely what an APT is can vary, but can be summarized by the named requirements below:[9] [10] [11]
• Advanced – Operators behind the threat have a full spectrum of intelligence gathering techniques at their disposal.
These may include computer intrusion technologies and techniques, but also extend to conventional intelligence
gathering techniques such as telephone interception technologies and satellite imaging. While individual
components of the attack may not be classed as particularly “advanced” (e.g. malware components generated from
commonly available do-it-yourself malware construction kits, or the use of easily procured exploit materials),
their operators can typically access and develop more advanced tools as required. They often combine multiple
targeting methods, tools and techniques in order to reach and compromise their target and maintain access to it.
• Persistent – Operators give priority to a specific task, rather than opportunistically seeking information for
financial or other gain. This distinction implies that the attackers are guided by external entities. The targeting is
conducted through continuous monitoring and interaction in order to achieve the defined objectives. It does not
mean a barrage of constant attacks and malware updates. In fact, a “low-and-slow” approach is usually more
successful. If the operators lose access to their target they usually reattempt access, and most often succeed.
• Threat – APTs are a threat because they have both capability and intent. There is a level of coordinated human
involvement in the attack, rather than a mindless and automated piece of code. The operators have a specific
objective and are skilled, motivated, organized and well funded.
References
[1] "Are you being targeted by an Advanced Persistent Threat?" (http:// www. commandfive.com/ apt. html). Command Five Pty Ltd. .
Retrieved 2011-03-31.
[2] "The changing threat environment..." (http:// www.commandfive.com/ threats.html). Command Five Pty Ltd. . Retrieved 2011-03-31.
[3] "An Evolving Crisis" (http:// www.businessweek. com/ magazine/ content/ 08_16/ b4080032220668. htm). BusinessWeek. April 10, 2008. .
Retrieved 2010-01-20.
[4] "The New E-spionage Threat" (http:// www.businessweek. com/ magazine/ content/ 08_16/ b4080032218430. htm). BusinessWeek. April
10, 2008. . Retrieved 2011-03-19.
[5] "Google Under Attack: The High Cost of Doing Business in China" (http:// www. spiegel.de/ international/ world/0,1518,672742,00. html).
Der Spiegel. 01/19/2010. . Retrieved 2010-01-20.
[6] "Under Cyberthreat: Defense Contractors" (http:// www.businessweek. com/ technology/ content/ jul2009/ tc2009076_873512.htm).
BusinessWeek. July 6, 2009. . Retrieved 2010-01-20.
[7] "Understanding the Advanced Persistent Threat" (http:// tominfosec.blogspot. com/ 2010/ 02/ understanding-apt. html). Tom Parker.
February 4, 2010. . Retrieved 2010-02-04.
[8] "Advanced Persistent Threat (or Informationized Force Operations)" (http://www. usenix. org/event/ lisa09/ tech/ slides/ daly.pdf). Usenix,
Michael K. Daly. November 4, 2009. . Retrieved 2009-11-04.
[9] "What's an APT? A Brief Definition" (http:// www.damballa. com/ solutions/ advanced-persistent-threats.php). Damballa. January 20, 2010.
. Retrieved 2010-01-20.
[10] "Are you being targeted by an Advanced Persistent Threat?" (http:/ / www.commandfive.com/ apt. html). Command Five Pty Ltd. .
Retrieved 2011-03-31.
[11] "The changing threat environment..." (http:// www.commandfive.com/ threats. html). Command Five Pty Ltd. . Retrieved 2011-03-31.
Air gap (networking)
An air gap or air wall[1] is a security measure often taken for computers and computer networks that must be extraordinarily secure. It consists of ensuring that a secure network is completely physically, electrically, and electromagnetically isolated from insecure networks, such as the public Internet or an insecure local area network. Limitations imposed on devices used in these environments may include a ban on wireless connections to or from the secure network, or similar restrictions on electromagnetic leakage from the secure network through the use of TEMPEST or a Faraday cage. It is most recognizable in the time-honored configuration known as "sneakernet", where the only connection between two devices or networks is a human being carrying media such as floppies, CDs, or USB drives. The term derives from the notion that one must put on sneakers and walk to transfer data.
In environments where networks or devices are rated to handle different levels of classified information, the two
(dis-)connected devices/networks are referred to as "low side" and "high side", low being unclassified and high
referring to classified, or classified at a higher level. This is also occasionally referred to as red or high (classified)
and black or low (unclassified). To move data from the high side to the low side, it is necessary to write the data to a physical medium and move it to a device on the latter network. Traditionally, based on the Bell-LaPadula confidentiality model, data can move low-to-high with minimal processes, while high-to-low requires much more stringent procedures to ensure protection of the data at a higher level of classification.
The concept represents the maximum protection one network can have from another (save turning the device off). It
is not possible for packets or datagrams to "leap" across the air gap from one network to another.
The upside is that such a network can generally be regarded as a closed system (in terms of information, signals, and emissions security), unable to be accessed from the outside world. The downside is that transferring information from the outside world to be analyzed by computers on the secure network is extraordinarily labor-intensive, often involving human security analysis of prospective programs or data to be entered onto air-gapped networks, and possibly even human manual re-entry of the data following security analysis.[2]
Examples of the types of networks or systems that may be air gapped include:
• Military/governmental computer networks/systems[3]
• Life-critical systems, such as:
• Controls of nuclear power plants;
• Computers used in aviation,[4] such as FADECs and avionics;
• Computerized medical equipment.
• Very simple systems, where there is no need to compromise security in the first place, such as:
• The engine control unit in an automobile;
• A digital thermostat for temperature and compressor regulation in home HVAC and refrigeration systems;
• Electronic sprinkler controls for watering of lawns.
References
[1] Wiktionary: Airwall (http://en.wiktionary.org/wiki/airwall), retrieved on 2010-05-13.
[2] Lemos, Robert (2001-02-01). "NSA attempting to design crack-proof computer" (http://news.zdnet.com/2100-9595_22-114035.html). ZDNet News. CBS Interactive, Inc. Retrieved 2009-01-16. "For example, top-secret data might be kept on a different computer than data classified merely as sensitive material. Sometimes, for a worker to access information, up to six different computers can be on a single desk. That type of security is called, in typical intelligence community jargon, an air gap."
[3] Rist, Oliver (2006-05-29). "Hack Tales: Air-gap networking for the price of a pair of sneakers" (http://www.infoworld.com/article/06/05/29/78289_22FEenterhack1_1.html). Infoworld. IDG Network. Retrieved 2009-01-16. "In high-security situations, various forms of data often must be kept off production networks, due to possible contamination from nonsecure resources — such as, say, the Internet. So IT admins must build enclosed systems to house that data — stand-alone servers, for example, or small networks of servers that aren't connected to anything but one another. There's nothing but air between these and other networks, hence the term air gap, and transferring data between them is done the old-fashioned way: moving disks back and forth by hand, via 'sneakernet'."
[4] Zetter, Kim (2008-01-04). "FAA: Boeing's New 787 May Be Vulnerable to Hacker Attack" (http://www.wired.com/politics/security/news/2008/01/dreamliner_security). Wired Magazine. Condénet, Inc. Retrieved 2009-01-16. "(...Boeing...) wouldn't go into detail about how (...it...) is tackling the issue but says it is employing a combination of solutions that involves some physical separation of the networks, known as air gaps, and software firewalls."
Ambient authority
Ambient authority is a term used in the study of access control systems.
A subject, such as a computer program, is said to be using ambient authority if it only needs to specify the names of the object(s) involved and the operation to be performed on them for a permitted action to succeed.
In this definition,
• a "name" is any way of referring to an object that does not itself include authorising information, and could
potentially be used by any subject;
• an action is "permitted" for a subject if there exists any request that that subject could make that would cause the
action to be carried out.
The authority is "ambient" in the sense that it exists in a broadly visible environment (often, but not necessarily a
global environment) where any subject can request it by name.
For example, suppose a C program opens a file for read access by executing the call:
open("filename", O_RDONLY, 0)
The desired file is designated by its name on the filesystem, which does not by itself include authorising information,
so the program is exercising ambient authority.
When ambient authority is requested, permissions are granted or denied based on one or more global properties of
the executing program, such as its identity or its role. In such cases, the management of access control is handled
separately from explicit communication to the executing program or process, through means such as access control
lists associated with objects or through Role-Based Access Control mechanisms. The executing program has no
means to reify the permissions that it was granted for a specific purpose as first-class values. So, if the program
should be able to access an object when acting on its own behalf but not when acting on behalf of one of its clients
(or, on behalf of one client but not another), it has no way to express that intention. This inevitably leads to such
programs being subject to the Confused deputy problem.
The term "ambient authority" is used primarily to contrast with capability-based security (including object-capability
models), in which executing programs receive permissions as they might receive data, as communicated first-class
object references. This allows them to determine where the permissions came from, and thus avoid the Confused
deputy problem. However, since there are additional requirements for a system to be considered a capability system
besides avoiding ambient authority, "non-ambient authority system" is not just a synonym for "capability system".
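To make the contrast concrete, here is a minimal sketch (assuming POSIX; the helper function is invented for illustration) in which authority is passed as a first-class value rather than looked up ambiently:

/* The caller exercises its ambient authority once, via open(), then
   hands the resulting descriptor to code that never sees the name. */
#include <fcntl.h>
#include <unistd.h>

static ssize_t read_first_byte(int fd, char *out)
{
    /* This function holds a capability (fd), not a name: it can read
       this one file but cannot open arbitrary files on its own. */
    return read(fd, out, 1);
}

int main(void)
{
    char c;
    int fd = open("filename", O_RDONLY);  /* ambient authority used here */
    if (fd < 0)
        return 1;
    read_first_byte(fd, &c);              /* authority passed as a value */
    close(fd);
    return 0;
}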
Ambient authority is the dominant form of access control in computer systems today. The user model of access control as used in Unix and in Windows systems is an ambient authority model because programs execute with the authorities of the user that started them. This not only means that executing programs are inevitably given more permissions (see Principle of least privilege) than they need for their task, but also that they are unable to determine the source or the number and types of permissions that they have. A program executing under an ambient authority access control model has little option but to designate permissions and try to exercise them, hoping for the best. This property requires an excess of permissions to be granted to users or roles in order for programs to execute without error.
Anomaly-based intrusion detection system
An anomaly-based intrusion detection system is a system for detecting computer intrusions and misuse by monitoring system activity and classifying it as either normal or anomalous. The classification is based on heuristics or rules, rather than patterns or signatures, and will detect any type of misuse that falls outside normal system operation. This is in contrast to signature-based systems, which can only detect attacks for which a signature has previously been created.[1]
In order to determine what attack traffic is, the system must be taught to recognize normal system activity. This can be accomplished in several ways, most often with artificial intelligence techniques. Systems using neural networks have been used to great effect. Another method is to define what normal usage of the system comprises using a strict mathematical model, and flag any deviation from this as an attack. This is known as strict anomaly detection.[2] A minimal sketch of the latter approach follows.
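The following sketch (assumptions: a single numeric feature such as requests per second, and a three-standard-deviation threshold; real systems model many features) illustrates strict anomaly detection in C:

/* Learn a simple statistical model of "normal" activity, then flag
   any observation that deviates too far from it as an attack. */
#include <math.h>

static double mean, stddev;   /* the learned model of normal activity */

void train(const double *samples, int n)
{
    double sum = 0.0, sq = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    mean = sum / n;
    for (int i = 0; i < n; i++)
        sq += (samples[i] - mean) * (samples[i] - mean);
    stddev = sqrt(sq / n);
}

int is_anomalous(double observation)
{
    /* Anything outside the strict model of normal operation is
       flagged, which is also why false positives are common. */
    return fabs(observation - mean) > 3.0 * stddev;
}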
Anomaly-based intrusion detection does have some shortcomings, namely a high false positive rate and the ability to be fooled by a correctly delivered attack.[2] Attempts have been made to address these issues through techniques used by PAYL[1] and MCPAD.[3]
References
[1] Wang, Ke. "Anomalous Payload-Based Network Intrusion Detection" (http://dx.doi.org/10.1007/978-3-540-30143-1_11). Recent Advances in Intrusion Detection. Springer Berlin. Retrieved 2011-04-22.
[2] A strict anomaly detection model for IDS, Phrack 56 0x11, Sasha/Beetle (http://artofhacking.com/files/phrack/phrack56/P56-11.TXT)
[3] Perdisci, Roberto; Davide Ariu, Prahlad Fogla, Giorgio Giacinto, and Wenke Lee (2009). "McPAD: A Multiple Classifier System for Accurate Payload-based Anomaly Detection" (http://3407859467364186361-a-1802744773732722657-s-sites.googlegroups.com/site/robertoperdisci/publications/publication-files/McPAD-revision1.pdf). Computer Networks, Special Issue on Traffic Classification and Its Applications to Modern Networks 5 (6): 864-881.
Application firewall
An application firewall is a form of firewall which controls input, output, and/or access from, to, or by an
application or service. It operates by monitoring and potentially blocking the input, output, or system service calls
which do not meet the configured policy of the firewall. The application firewall is typically built to monitor one or
more specific applications or services (such as a web or database service), unlike a stateful network firewall which
can provide some access controls for nearly any kind of network traffic. There are two primary categories of
application firewalls: network-based application firewalls and host-based application firewalls.
Network-based application firewalls
A network-based application layer firewall is a computer networking firewall operating at the application layer of a protocol stack,[1] and is also known as a proxy-based or reverse-proxy firewall. Application firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web application firewall. They may be implemented through software running on a host or a stand-alone piece of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing it on to the client or server. Because it acts on the application layer, it may inspect the contents of the traffic, blocking specified content such as certain websites, viruses, or attempts to exploit known logical flaws in client software.
Network-based application-layer firewalls work on the application level of the network stack (for example, all web
browser, telnet, or ftp traffic), and may intercept all packets traveling to or from an application. In principle,
application firewalls can prevent all unwanted outside traffic from reaching protected machines.
Modern application firewalls may also offload encryption from servers, block application input/output from detected
intrusions or malformed communication, manage or consolidate authentication, or block content which violates
policies.
History
Gene Spafford of Purdue University, Bill Cheswick at AT&T Laboratories, and Marcus Ranum described a third
generation firewall known as an application layer firewall. Marcus Ranum's work on the technology spearheaded the
creation of the first commercial product, released by DEC and named DEC SEAL. DEC's first major sale was on June 13, 1991, to a chemical company based on the East Coast of the USA.
TIS, under a broader DARPA contract, developed the Firewall Toolkit (FWTK), and made it freely available under
license on October 1, 1993. The purposes for releasing the freely-available, not for commercial use, FWTK were: to
demonstrate, via the software, documentation, and methods used, how a company with (at the time) 11 years'
experience in formal security methods, and individuals with firewall experience, developed firewall software; to
create a common base of very good firewall software for others to build on (so people did not have to continue to
"roll their own" from scratch); and to "raise the bar" of firewall software being used.
The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as
File Transfer Protocol, DNS, or web browsing), and it can detect whether an unwanted protocol is being sneaked
through on a non-standard port or whether a protocol is being abused in any harmful way.
Host-based application firewalls
A host-based application firewall can monitor any application input, output, and/or system service calls made from,
to, or by an application. This is done by examining information passed through system calls instead of or in addition
to a network stack. A host-based application firewall can only provide protection to the applications running on the
same host.
Application firewalls function by determining whether a process should accept any given connection. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model. Application firewalls that hook into socket calls are also referred to as socket filters. Application firewalls work much like a packet filter, but application filters apply filtering rules (allow/block) on a per-process basis instead of filtering connections on a per-port basis. Generally, prompts are used to define rules for processes that have not yet received a connection. It is rare to find application firewalls not combined or used in conjunction with a packet filter.[2] A simplified sketch of this per-process decision appears below.
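As an illustrative sketch of the per-process model (the rule table and process names are invented for illustration; real products inspect far more state), the decision function invoked from the socket-call hook might look like this:

/* Decide per process, not per port: look the requesting process up
   in a ruleset, and fall back to prompting for unknown processes. */
#include <string.h>

struct rule { const char *process; int allow; };

static const struct rule ruleset[] = {
    { "browser", 1 },   /* allow outbound connections for the browser */
    { "malware", 0 },   /* block a known-bad process */
};

int filter_connect(const char *process_name)
{
    for (size_t i = 0; i < sizeof ruleset / sizeof ruleset[0]; i++)
        if (strcmp(ruleset[i].process, process_name) == 0)
            return ruleset[i].allow;
    return -1;   /* unknown process: prompt the user to define a rule */
}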
Also, application firewalls further filter connections by examining the process ID of data packets against a ruleset for the local process involved in the data transmission. The extent of the filtering that occurs is defined by the provided ruleset. Given the variety of software that exists, application firewalls only have more complex rulesets for the standard services, such as sharing services. These per-process rulesets have limited efficacy in filtering every possible association that may occur with other processes. Also, these per-process rulesets cannot defend against modification of the process via exploitation, such as memory corruption exploits.[2] Because of these limitations, application firewalls are beginning to be supplanted by a new generation of application firewalls that rely on mandatory access control (MAC), also referred to as sandboxing, to protect vulnerable services. Examples of next-generation host-based application firewalls which control system service calls by an application are AppArmor[3] and the TrustedBSD MAC framework (sandboxing) in Mac OS X.[4]
Host-based application firewalls may also provide network-based application firewalling.
Examples
To better illustrate the concept, this section enumerates some specific application firewall examples.
Database firewall
A database firewall is an application firewall which protects databases from application attacks, for example SQL injection, database rootkits, and unauthorized information disclosure.
A database firewall is a computer application firewall operating at the database application layer of a protocol stack. Also known as a proxy-based firewall, it may be implemented as a piece of software running on a single computer, or a stand-alone piece of hardware. Often, it is a host using various forms of reverse proxy services to proxy traffic before passing it to a gateway router. Because it acts on the database application layer, it may inspect the contents of the traffic, blocking specified content such as certain websites, viruses, or attempts to exploit known logical flaws in client software.
Most often, database firewalls work at the SQL application level atop the TCP/IP stack, monitoring all applications' connections to the database or SQL management interfaces, and may intercept and enforce policy on all packets traveling to or from a database network or application interface.
Some database firewalls include automated SQL learning capabilities, which assist in policy configuration. The learning capabilities will list queries directed to a specific database; a minimal sketch of the idea follows.
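As an illustrative sketch (the exact-match policy and fixed-size table are assumptions, not any particular vendor's design; real products normalize queries first), a learning-mode firewall might record query shapes and later block anything unseen:

/* In learning mode, record each query; in enforcement mode, allow
   only queries that were seen during learning. Returns 1 to allow. */
#include <string.h>

#define MAX_LEARNED 128

static const char *learned[MAX_LEARNED];
static int n_learned;
static int learning_mode = 1;

int check_query(const char *query)
{
    for (int i = 0; i < n_learned; i++)
        if (strcmp(learned[i], query) == 0)
            return 1;
    if (learning_mode && n_learned < MAX_LEARNED) {
        learned[n_learned++] = query;  /* assumes caller keeps it alive */
        return 1;
    }
    return 0;   /* unknown query: block in enforcement mode */
}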
Implementations
There are various application firewalls available, including both free and open source software and commercial
products.
Mac OS X
Mac OS X, as of Leopard, includes an implementation of the TrustedBSD MAC framework, which is taken from FreeBSD.[5] The TrustedBSD MAC framework is used to sandbox some services, such as mDNSResponder, much like AppArmor is used to sandbox services in some Linux distributions. The TrustedBSD MAC framework provides a default layer of firewalling given the default configuration of the sharing services in Mac OS X Leopard and Snow Leopard.
The application firewall located in the security preferences of Mac OS X, starting with Leopard, provides the functionality of this type of firewall to a limited degree, via the use of code signing for apps added to the firewall list. For the most part, this application firewall only manages network connections by checking whether incoming connections are directed toward an app in the firewall list, and applies the rule (block/allow) specified for those apps.
Linux
This is a list of security software packages for Linux, allowing filtering of application-to-OS communication, possibly on a per-user basis:
• AppArmor
• ModSecurity - Also works under Mac OS X, Solaris and other versions of Unix.
• Systrace
• Zorp
Windows
• WinGate
• WinRoute
Network appliances
These devices are sold as hardware network appliances.
Specialized application firewalls
Specialized application firewalls offer a rich feature-set in protecting and controlling a specific application. Most
specialized network appliance application firewalls are for web applications.
History
Large-scale web server hacker attacks, such as the 1996 PHF CGI exploit,[6] led to the investigation of security models to protect web applications. This was the beginning of what is currently referred to as the web application firewall (WAF) technology family. Early entrants in the market started appearing in 1999, such as Perfecto Software's AppShield[7] (the company later changed its name to Sanctum and was acquired in 2004 by Watchfire,[8] itself acquired by IBM in 2007), which focused primarily on the e-commerce market and protected against illegal web page character entries. NetContinuum (acquired by Barracuda Networks in 2007) approached the issue by providing pre-configured 'security servers'. Such pioneers faced proprietary rule-set issues, business case obstacles and cost barriers to wide adoption; however, the need for such solutions was taking root.
Application firewall
42
In 2002 the industry took another major step forward when the open source project ModSecurity, run by Thinking Stone (acquired by Breach Security[9] in 2006), was formed with a mission to solve these obstacles and make WAF technology accessible for every company. With the release of the Core Rule Set, a unique open source rule set for protecting Web applications, based on the OASIS Web Application Security Technical Committee's (WAS TC) vulnerability work, the market had a stable, well documented and standardized model to follow.
In 2003, the WAS TC’s work was expanded and standardized across the industry through the work of the Open Web
Application Security Project's (OWASP) Top 10 List. This annual ranking is a classification scheme for web security vulnerabilities, a model to provide guidance on initial threat and impact, and a way to describe conditions that can be used by both assessment and protection tools, such as a WAF. This list would go on to become the industry benchmark for many compliance schemes.
In 2004, large traffic management and security vendors, primarily in the network layer space, entered the WAF market through a flurry of mergers and acquisitions. Key among these was the mid-year move by F5 to acquire Magnifire WebSystems[10] and the integration of the latter's TrafficShield software with the former's Big-IP traffic management system. This same year, F5 acquired AppShield and discontinued the technology. Further consolidation occurred in 2006 with the acquisition of Kavado by Protegrity,[11] and Citrix Systems' purchase of Teros.[12]
Until this point, the WAF market was dominated by niche providers who focused on web application layer security.
Now the market was firmly directed at integrating WAF products with the large network technologies – load
balancing, application servers, network firewalls, etc. – and began a rush of rebranding, renaming and repositioning
the WAF. Options were confusing, expensive and still hardly understood by the larger market.
In 2006, another milestone was reached when the Web Application Security Consortium formed to help make sense
of the now widely divergent WAF market. Dubbed the Web Application Firewall Evaluation Criteria project
(WAFEC), this open community of users, vendors, academia and independent analysts and researchers created a
common evaluation criterion for WAF adoption that is still maintained today.
Wide-scale interest in the WAF began in earnest, tied to the 2006 PCI Security Standards Council formation and
compliance mandate. Major payment card brands (AMEX, Visa, Master Card, etc.) formed PCI as a way to regulate
security practices across the industry and curtail the rampant credit card fraud taking place. In particular, this
standard mandated that all web applications must be secure, either through secure development or use of a WAF
(requirement 6.6). The OWASP Top 10 forms the backbone of this requirement.
With the increased focus on virtualization and cloud computing to maximize existing resources, scaling of WAF technology has become the most recent milestone, marked by the 2009 white paper Defining a dWAF to Secure Cloud Applications[13] from art of defence and the Guidance for Critical Areas of Focus in Cloud Computing[14] paper from the Cloud Security Alliance (CSA).
By 2010, the WAF market had matured to a market exceeding $200M in size according to Forrester. In a February
2010 report, Web Application Firewall: 2010 And Beyond, Forrester analyst Chenxi Wang wrote, "Forrester
estimates the 2009 market revenue of the WAF+ market to be nearly $200 million, and the market will grow by a
solid 20% in 2010. Security and risk managers can expect two WAF trends in 2010: 1) midmarket-friendly WAFs
will become available, and 2) larger enterprises will gravitate toward the increasingly prevalent WAF+ solutions."
She also wrote that "Imperva is the stand alone WAF leader."
Distributed web application firewalls
A distributed web application firewall (also called a dWAF) is a member of the web application firewall (WAF) and web application security family of technologies. Purely software-based, the dWAF architecture is designed as separate components able to physically exist in different areas of the network. This advance in architecture allows the resource consumption of the dWAF to be spread across a network rather than depending on one appliance, while allowing complete freedom to scale as needed. In particular, it allows the addition or subtraction of any number of components independently of each other for better resource management. This approach is ideal for large and distributed virtualized infrastructures such as private, public or hybrid cloud models.
Cloud-based web application firewalls
A cloud-based web application firewall is also a member of the WAF and web application security family of technologies. This technology is unique in that it is platform agnostic and does not require any hardware or software changes on the host, just a DNS change. By applying this DNS change, all web traffic is routed through the WAF, where it is inspected and threats are thwarted. Cloud-based WAFs are typically centrally orchestrated, which means that threat detection information is shared among all the tenants of the service. This collaboration results in improved detection rates and lower false positives. Like other cloud-based solutions, this technology is elastic, scalable and typically offered as a pay-as-you-grow service. This approach is ideal for cloud-based web applications and small or medium sized websites that require web application security but are not willing or able to make software or hardware changes to their systems.
In 2010, Imperva spun out Incapsula to provide a cloud-based WAF to small to medium sized businesses.
Web application firewalls
• Armorlogic Profense web application firewall
• Array Networks WebWall Multi-Layered Application Security
• Barracuda Web Application Firewall
• Cisco Application Control Engine (ACE) Web Application Firewall
• Citrix NetScaler Application Firewall
• F5 Networks Application Security Manager ASM
• Fortinet - Fortiweb web application firewall
• ModSecurity - Opensource web application firewall
• Radware AppWall Web Application Firewall
• SonicWALL - SonicWALL Web Application Firewall Service
• List of Additional Web Application Firewalls[15], Mosaic Security Research
Combination network and application firewalls
Combination network and application firewalls typically offer fewer features than specialized application firewalls.
Many of these require separate licenses to activate the full application firewall functionality.
• Cyberoam
• Check Point Security Gateways
• Cisco Adaptive Security Appliance
• Fortinet FortiGate firewalls
• Juniper Networks SRX services gateway and SSG firewalls
• SonicWALL firewalls
• WatchGuard firewalls
• McAfee Firewall Enterprise
• Palo Alto Networks next-generation firewalls
• Network Box[16]
• List of Additional Enterprise Firewalls[17], Mosaic Security Research
References
[1] Luis F. Medina (2003). The Weakest Security Link Series (http://books.google.com/books?id=Yz34zXV7VB8C&pg=PA54) (1st ed.). IUniverse. pp. 54. ISBN 978-0595264940.
[2] http://www.symantec.com/connect/articles/software-firewalls-made-straw-part-1-2
[3] "Firewall your applications with AppArmor" (http://www.linux.com/archive/feed/58789). Retrieved 2010-02-15.
[4] "The TrustedBSD Project" (http://www.trustedbsd.org/mac.html). The TrustedBSD Project. 2008-11-21. Retrieved 2010-02-15.
[5] http://www.trustedbsd.org/mac.html
[6] CERT (March 20, 1996). "CERT Advisory CA-1996-06 Vulnerability in NCSA/Apache CGI example code" (http://www.cert.org/advisories/CA-1996-06.html). CERT Coordination Center. Retrieved 2010-11-17.
[7] Ellen Messmer (September 7, 1999). "New tool blocks wily e-comm hacker tricks" (http://www.cnn.com/TECH/computing/9909/07/ecomm.hack.idg/index.html). CNN. Retrieved 2010-11-17.
[8] Jaikumar Vijayan (August 4, 2004). "Q&A: Watchfire CTO sees Sanctum acquisition as a good fit" (http://www.computerworld.com/s/article/95035/Q_A_Watchfire_CTO_sees_Sanctum_acquisition_as_a_good_fit). Computerworld. Retrieved 2010-11-17.
[9] http://www.infoworld.com/d/security-central/breach-security-acquires-rival-firewall-modsecurity-998
[10] http://www.networkworld.com/news/2004/0601f5buys.html
[11] http://www.computerworld.com/s/article/104063/Protegrity_acquires_Web_apps_security_vendor_Kavado
[12] http://www.networkcomputing.com/cloud-storage/citrix-picks-up-teros.php
[13] http://www.artofdefence.com/dokumente/Cloud_AppSec_Whitepaper.pdf
[14] Guidance for Critical Areas of Focus in Cloud Computing, Cloud Security Alliance (CSA).
[15] https://mosaicsecurity.com/categories/118-web-application-firewall
[16] http://www.network-box.com
[17] https://mosaicsecurity.com/categories/54-enterprise-firewall
External links
• Mac OS X 10.5 Leopard: About the Application Firewall (http://support.apple.com/kb/HT1810)
• Web Application Firewall (http://www.owasp.org/index.php/Web_Application_Firewall), Open Web Application Security Project
• Web Application Firewall Evaluation Criteria (http://www.webappsec.org/projects/wafec/), from the Web Application Security Consortium (http://www.webappsec.org)
• Safety in the cloud(s): 'Vaporizing' the Web application firewall to secure cloud computing (http://www.net-security.org/article.php?id=1270)
Application security
Application security encompasses measures taken throughout the application's life-cycle to prevent exceptions in
the security policy of an application or the underlying system (vulnerabilities) through flaws in the design,
development, deployment, upgrade, or maintenance of the application.
Applications only control the use of resources granted to them, and not which resources are granted to them. They, in turn, determine the use of these resources by users of the application through application security.
The Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC) publish updates on the latest threats that impair web-based applications. This aids developers, security testers and architects in focusing on better design and mitigation strategies. The OWASP Top 10 has become an industry norm for assessing web applications.
Methodology
According to the patterns & practices Improving Web Application Security book, a principle-based approach for application security includes:[1]
• Knowing your threats.
• Securing the network, host and application.
• Incorporating security into your software development process.
Note that this approach is technology/platform independent. It is focused on principles, patterns, and practices.
Threats, Attacks, Vulnerabilities, and Countermeasures
According to the patterns & practices Improving Web Application Security book, the following terms are relevant to application security:[1]
• Asset. A resource of value such as the data in a database or on the file system, or a system resource.
• Threat. A negative effect.
• Vulnerability. A weakness that makes a threat possible.
• Attack (or exploit). An action taken to harm an asset.
• Countermeasure. A safeguard that addresses a threat and mitigates risk.
Application Threats / Attacks
According to the patterns & practices Improving Web Application Security book, the following are classes of common application security threats and attacks:[1]
• Input Validation: Buffer overflow; cross-site scripting; SQL injection; canonicalization
• Authentication: Network eavesdropping; brute force attacks; dictionary attacks; cookie replay; credential theft
• Authorization: Elevation of privilege; disclosure of confidential data; data tampering; luring attacks
• Configuration management: Unauthorized access to administration interfaces; unauthorized access to configuration stores; retrieval of clear text configuration data; lack of individual accountability; over-privileged process and service accounts
• Sensitive information: Access to sensitive data in storage; network eavesdropping; data tampering
• Session management: Session hijacking; session replay; man in the middle
• Cryptography: Poor key generation or key management; weak or custom encryption
• Parameter manipulation: Query string manipulation; form field manipulation; cookie manipulation; HTTP header manipulation
• Exception management: Information disclosure; denial of service
• Auditing and logging: User denies performing an operation; attacker exploits an application without trace; attacker covers his or her tracks
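As a brief illustration of countering the first category above (a sketch assuming the SQLite C API; the schema and function name are invented for illustration), binding untrusted input as a parameter prevents it from being interpreted as SQL:

/* The untrusted string is bound as data; it can never change the
   structure of the query, which defeats classic SQL injection. */
#include <sqlite3.h>

int lookup_user(sqlite3 *db, const char *untrusted_name)
{
    sqlite3_stmt *stmt;
    int id = -1;

    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return id;
}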
Mobile Application Security
The proportion of mobile devices providing open platform functionality is expected to continue to increase over time. The openness of these platforms offers significant opportunities to all parts of the mobile ecosystem by delivering flexible program and service delivery options that may be installed, removed or refreshed multiple times in line with the user's needs and requirements. However, with openness comes responsibility, and unrestricted access to mobile resources and APIs by applications of unknown or untrusted origin could result in damage to the user, the device, the network or all of these, if not managed by suitable security architectures and network precautions. Mobile application security is provided in some form on most open OS mobile devices (Symbian OS,[2] Microsoft, BREW, etc.). Industry groups have also created recommendations, including the GSM Association and Open Mobile Terminal Platform (OMTP).[3]
Security testing for applications
Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave
applications open to exploitation. Ideally, security testing is implemented throughout the entire software
development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner.
Unfortunately, testing is often conducted as an afterthought at the end of the development cycle.
Vulnerability scanners, and more specifically web application scanners, otherwise known as penetration testing tools (i.e. ethical hacking tools), have historically been used by security organizations within corporations and by security consultants to automate the security testing of HTTP request/responses; however, this is not a substitute for the need for actual source code review. Physical code reviews of an application's source code can be accomplished manually or in an automated fashion. Given the common size of individual programs (often 500K lines of code or more), the human brain cannot execute the comprehensive data flow analysis needed to completely check all circuitous paths of an application program to find vulnerability points. The human brain is better suited to filtering, interrupting and reporting the outputs of commercially available automated source code analysis tools than to tracing every possible path through a compiled code base to find the root-cause-level vulnerabilities.
The two types of automated tools associated with application vulnerability detection (application vulnerability scanners) are Penetration Testing Tools (often categorized as Black Box Testing Tools) and static code analysis tools (often categorized as White Box Testing Tools). Tools in the Black Box Testing arena include IBM Rational AppScan, the HP Application Security Center[4] suite of applications (through the acquisition of SPI Dynamics[5]), and Nikto (open source). Tools in the static code analysis arena include Veracode,[6] Pre-Emptive Solutions,[7] and Parasoft.[8]
Banking and large e-commerce corporations have been the very early adopter customer profile for these types of tools. It is commonly held within these firms that both Black Box testing and White Box testing tools are needed in the pursuit of application security. Typically cited, Black Box testing (meaning penetration testing tools) are ethical hacking tools used to attack the application surface to expose vulnerabilities suspended within the source code hierarchy. Penetration testing tools are executed on the already deployed application. White Box testing (meaning source code analysis tools) are used by either the application security groups or application development groups. Typically introduced into a company through the application security organization, the White Box tools complement the Black Box testing tools in that they give specific visibility into the specific root vulnerabilities within the source code in advance of the source code being deployed. Vulnerabilities identified with White Box testing and Black Box testing are typically in accordance with the OWASP taxonomy for software coding errors. White Box testing vendors have recently introduced dynamic versions of their source code analysis methods, which operate on deployed applications. Given that the White Box testing tools have dynamic versions similar to the Black Box testing tools, both tools can be correlated in the same software error detection paradigm, ensuring full application protection to the client company.
The advances in professional malware targeted at the Internet customers of online organizations have seen a change in Web application design requirements since 2007. It is generally assumed that a sizable percentage of Internet users will be compromised through malware and that any data coming from their infected host may be tainted. Therefore application security has begun to manifest more advanced anti-fraud and heuristic detection systems in the back-office, rather than within the client-side or Web server code.[9]
Security standards and regulations
• Sarbanes-Oxley Act (SOX)
• Health Insurance Portability and Accountability Act (HIPAA)
• IEEE P1074
• ISO/IEC 7064:2003 Information technology -- Security techniques -- Check character systems
• ISO/IEC 9796-2:2002 Information technology -- Security techniques -- Digital signature schemes giving message
recovery -- Part 2: Integer factorization based mechanisms
• ISO/IEC 9796-3:2006 Information technology -- Security techniques -- Digital signature schemes giving message
recovery -- Part 3: Discrete logarithm based mechanisms
• ISO/IEC 9797-1:1999 Information technology -- Security techniques -- Message Authentication Codes (MACs) --
Part 1: Mechanisms using a block cipher
• ISO/IEC 9797-2:2002 Information technology -- Security techniques -- Message Authentication Codes (MACs) --
Part 2: Mechanisms using a dedicated hash-function
• ISO/IEC 9798-1:1997 Information technology -- Security techniques -- Entity authentication -- Part 1: General
• ISO/IEC 9798-2:1999 Information technology -- Security techniques -- Entity authentication -- Part 2:
Mechanisms using symmetric encipherment algorithms
• ISO/IEC 9798-3:1998 Information technology -- Security techniques -- Entity authentication -- Part 3:
Mechanisms using digital signature techniques
• ISO/IEC 9798-4:1999 Information technology -- Security techniques -- Entity authentication -- Part 4:
Mechanisms using a cryptographic check function
• ISO/IEC 9798-5:2004 Information technology -- Security techniques -- Entity authentication -- Part 5:
Mechanisms using zero-knowledge techniques
• ISO/IEC 9798-6:2005 Information technology -- Security techniques -- Entity authentication -- Part 6:
Mechanisms using manual data transfer
• ISO/IEC 14888-1:1998 Information technology -- Security techniques -- Digital signatures with appendix -- Part
1: General
• ISO/IEC 14888-2:1999 Information technology -- Security techniques -- Digital signatures with appendix -- Part
2: Identity-based mechanisms
• ISO/IEC 14888-3:2006 Information technology -- Security techniques -- Digital signatures with appendix -- Part
3: Discrete logarithm based mechanisms
• ISO/IEC 17799:2005 Information technology -- Security techniques -- Code of practice for information security
management
• ISO/IEC 24762:2008 Information technology -- Security techniques -- Guidelines for information and
communications technology disaster recovery services
• ISO/IEC 27006:2007 Information technology -- Security techniques -- Requirements for bodies providing audit
and certification of information security management systems
• Gramm-Leach-Bliley Act
• PCI Data Security Standard (PCI DSS)
References
[1] Improving Web Application Security: Threats and Countermeasures (http://msdn2.microsoft.com/en-us/library/ms994920.aspx), published by Microsoft Corporation.
[2] "Platform Security Concepts" (http://developer.symbian.com/main/documentation/books/books_files/sops/plat_sec_chap.pdf), Simon Higginson.
[3] Application Security Framework (https://www.omtp.org/Publications/Display.aspx?Id=c4ee46b6-36ae-46ae-95e2-cfb164b758b5), Open Mobile Terminal Platform.
[4] Application security: Find web application security vulnerabilities during every phase of the software development lifecycle (https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-201_4000_100__), HP center.
[5] HP acquires SPI Dynamics (http://news.cnet.com/8301-10784_3-9731312-7.html), CNET news.com.
[6] Veracode Security Static Analysis Solutions (http://www.veracode.com/solutions).
[7] Application Protection (http://www.preemptive.com/application-protection.html), Pre-Emptive Solutions.
[8] Parasoft Application Security Solution (http://www.parasoft.com/parasoft_security).
[9] "Continuing Business with Malware Infected Customers" (http://www.technicalinfo.net/papers/MalwareInfectedCustomers.html). Gunter Ollmann. October 2008.
External links
• Open Web Application Security Project (http://www.owasp.org)
• The Web Application Security Consortium (http://www.webappsec.org)
• The Microsoft Security Development Lifecycle (SDL) (http://msdn.microsoft.com/en-us/security/cc420639.aspx)
• patterns & practices Security Guidance for Applications (http://msdn.microsoft.com/en-gb/library/ms998408.aspx)
• QuietMove Web Application Security Testing Plug-in Collection for Firefox (https://addons.mozilla.org/en-US/firefox/collection/webappsec)
• Advantages of an integrated security solution for HTML and XML (http://community.citrix.com/blogs/citrite/sridharg/2008/11/17/Advantages+of+an+integrated+security+solution+for+HTML+and+XML)
• Security Solutions (http://www.securite-solutions.com)
• patterns & practices Application Security Methodology (http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.ApplicationSecurityMethodology)
• Understanding the Windows Mobile Security Model (http://technet.microsoft.com/en-us/library/cc512651.aspx), Windows Mobile Security
Asset (computer security)
In information security, computer security and network security, an asset is any data, device, or other component of the environment that supports information-related activities. Assets generally include hardware (e.g. servers and switches), software (e.g. mission critical applications and support systems) and confidential information.[1] [2] Assets should be protected from illicit access, use, disclosure, alteration, destruction, and/or theft, resulting in loss to the company.[3]
The CIA Triad
The goal of information security is to ensure the confidentiality, integrity and availability of assets against various threats. For example, a hacker might attack a system in order to steal credit card numbers by exploiting a vulnerability. Information security experts must assess the likely impact of an attack and employ appropriate countermeasures.[4] In this case they might put up a firewall and encrypt the credit card numbers.
Risk analysis
When performing risk analysis it is important to weigh how much to spend protecting each asset against the cost of losing the asset. It is also important to take into account the chance of each loss occurring. Intangible costs must also be factored in. If a hacker makes a copy of all of a company's credit card numbers, it costs the company nothing directly, but the loss in fines and reputation can be enormous. A minimal worked example of this trade-off follows.
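As a minimal worked sketch (the figures are invented for illustration; this is the simple annualized-loss comparison, not a full risk methodology):

/* Compare the expected annual loss from an asset against the cost
   of the countermeasure that would protect it. */
#include <stdio.h>

int main(void)
{
    double loss_per_incident = 50000.0;  /* cost of losing the asset  */
    double incidents_per_year = 0.1;     /* chance of the loss        */
    double expected_annual_loss = loss_per_incident * incidents_per_year;
    double countermeasure_cost = 3000.0; /* e.g. firewall, encryption */

    printf("Expected annual loss: %.2f\n", expected_annual_loss);
    printf("Countermeasure %s justified\n",
           countermeasure_cost < expected_annual_loss ? "is" : "is not");
    return 0;
}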
References
[1] ISO/IEC 13335-1:2004 Information technology -- Security techniques -- Management of information and communications technology security -- Part 1: Concepts and models for information and communications technology security management (http://www.iso.org/iso/catalogue_detail.htm?csnumber=39066)
[2] ENISA Glossary (http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/glossary#G3)
[3] "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006 (http://www.riskmanagementinsight.com/media/docs/FAIR_introduction.pdf)
[4] IETF RFC 2828
External links
• FISMApedia TERM (http://fismapedia.org/index.php?title=Term:Asset)
Attack (computer)
In computers and computer networks, an attack is any attempt to destroy, expose, alter, disable, steal or gain unauthorized access to or make unauthorized use of an asset.[1]
Definitions
IETF
The Internet Engineering Task Force defines attack in RFC 2828 as:[2]
an assault on system security that derives from an intelligent threat, i.e., an intelligent act that is a deliberate attempt (especially in the sense of a method or technique) to evade security services and violate the security policy of a system.
US Government
CNSS Instruction No. 4009, dated 26 April 2010, by the Committee on National Security Systems of the United States of America[3] defines an attack as:
Any kind of malicious activity that attempts to collect, disrupt, deny, degrade, or destroy information system
resources or the information itself.
The increasing dependence of modern society on information and computer networks (in both the private and public sectors, including the military)[4] [5] [6] has led to new terms like cyber attack and cyberwarfare.
CNSS Instruction No. 4009[3] defines a cyber attack as:
An attack, via cyberspace, targeting an enterprise’s use of cyberspace for the purpose of disrupting, disabling,
destroying, or maliciously controlling a computing environment/infrastructure; or destroying the integrity of
the data or stealing controlled information.
Phenomenology
An attack can be active or passive.[2]
An "active attack" attempts to alter system resources or affect their operation.
A "passive attack" attempts to learn or make use of information from the system but does not affect system
resources. (E.g., see: wiretapping.)
An attack can be perpetrated by an insider or from outside the organization:[2]
An "inside attack" is an attack initiated by an entity inside the security perimeter (an "insider"), i.e., an entity
that is authorized to access system resources but uses them in a way not approved by those who granted the
authorization.
An "outside attack" is initiated from outside the perimeter, by an unauthorized or illegitimate user of the
system (an "outsider"). In the Internet, potential outside attackers range from amateur pranksters to organized
criminals, international terrorists, and hostile governments.
The term "attack" relates to some other basic security terms as shown in the following diagram:
[2]
+ - - - - - - - - - - - - + + - - - - + + - - - - - - - - - - -+
| An Attack: | |Counter- | | A System Resource: |
| i.e., A Threat Action | | measure | | Target of the Attack |
| +----------+ | | | | +-----------------+ |
| | Attacker |<==================||<========= | |
Attack (computer)
51
| | i.e., | Passive | | | | | Vulnerability | |
| | A Threat |<=================>||<========> | |
| | Agent | or Active | | | | +-------|||-------+ |
| +----------+ Attack | | | | VVV |
| | | | | Threat Consequences |
+ - - - - - - - - - - - - + + - - - - + + - - - - - - - - - - -+
A resource (either physical or logical), called an asset, can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality, integrity or availability properties of resources (potentially different from the vulnerable one) belonging to the organization and other involved parties (customers, suppliers).
The so-called CIA triad is the basis of information security.
An attack is active when it attempts to alter system resources or affect their operation, so it compromises integrity or availability. A passive attack attempts to learn or make use of information from the system without affecting system resources, so it compromises confidentiality.
A Threat is a potential for violation of security, which exists when there is a circumstance, capability, action, or
event that could breach security and cause harm. That is, a threat is a possible danger that might exploit a
vulnerability. A threat can be either "intentional" (i.e., intelligent; e.g., an individual cracker or a criminal
organization) or "accidental" (e.g., the possibility of a computer malfunctioning, or the possibility of an "act of God"
such as an earthquake, a fire, or a tornado).[2]
A set of policies concerned with information security management, the Information Security Management System (ISMS), has been developed to manage countermeasures in accordance with risk management principles, in order to implement a security strategy that complies with the rules and regulations applicable in a given country.[7]
An attack can lead to a security incident, i.e. a security event that involves a security violation; in other words, a security-relevant system event in which the system's security policy is disobeyed or otherwise breached.
The overall picture represents the risk factors of the risk scenario.[8]
An organization should take steps to detect, classify and manage security incidents. The first logical step is to set up an incident response plan and eventually a computer emergency response team.
In order to detect attacks, a number of countermeasures can be set up at the organizational, procedural and technical levels. Computer emergency response teams, information technology security audits and intrusion detection systems are examples of these.[9]
Types of attacks
An attack is usually perpetrated by someone with bad intentions: black hat attacks fall into this category, while others perform penetration testing on an organization's information system to find out whether all foreseen controls are in place.
Attacks can be classified according to their origin, i.e. whether they are conducted using one computer or several; in the latter case the attack is called a distributed attack. Botnets are used to conduct distributed attacks.
Other classifications are according to the procedures used or the type of vulnerabilities exploited: attacks can be concentrated on network mechanisms or host features.
Some attacks are physical, i.e. the theft of or damage to computers and other equipment. Others are logical, attempting to force changes in the logic used by computers or network protocols in order to achieve a result unforeseen by the original designer but useful for the attacker. The general term for software used to logically attack computers is malware.
The following is a partial short list of attacks:
• Passive
• Network
• wiretapping
• Port scanner
• Idle scan
• Active
• Denial-of-service attack
• Spoofing
• Network
• Man in the middle
• ARP poisoning
• Ping flood
• Ping of death
• Smurf attack
• Host
• Buffer overflow
• Heap overflow
• Format string attack
Consequence of a potential attack
A whole industry works to minimize the likelihood and the consequences of an information attack. For a partial list, see Category:Computer security software companies.
They offer different products and services, aimed at:
• studying all possible attack categories
• publishing books and articles about the subject
• discovering vulnerabilities
• evaluating the risks
• fixing vulnerabilities
• inventing, designing and deploying countermeasures
• setting up contingency plans in order to be ready to respond
Many organizations try to classify vulnerabilities and their consequences: the most famous vulnerability database is Common Vulnerabilities and Exposures (CVE).
Computer emergency response teams are set up by governments and large organizations to handle computer security incidents.
References
[1] Free download of ISO/IEC 27000:2009 from ISO, via their ITTF web site (http://standards.iso.org/ittf/PubliclyAvailableStandards/c041933_ISO_IEC_27000_2009.zip)
[2] Internet Engineering Task Force RFC 2828 Internet Security Glossary
[3] CNSS Instruction No. 4009 (http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf) dated 26 April 2010
[4] Cortada, James W. (2003-12-04). The Digital Hand: How Computers Changed the Work of American Manufacturing, Transportation, and Retail Industries. USA: Oxford University Press. pp. 512. ISBN 0195165888
[5] Cortada, James W. (2005-11-03). The Digital Hand: Volume II: How Computers Changed the Work of American Financial, Telecommunications, Media, and Entertainment Industries. USA: Oxford University Press. ISBN 978-0195165876
[6] Cortada, James W. (2007-11-06). The Digital Hand, Vol 3: How Computers Changed the Work of American Public Sector Industries. USA: Oxford University Press. pp. 496. ISBN 978-0195165869
[7] Wright, Joe; Jim Harmening (2009). "15". Computer and Information Security Handbook. Morgan Kaufmann Publications, Elsevier Inc. p. 257. ISBN 978-0-12-374354-1
[8] ISACA, THE RISK IT FRAMEWORK (registration required) (http://www.isaca.org/Knowledge-Center/Research/Documents/RiskIT-FW-18Nov09-Research.pdf)
[9] Caballero, Albert (2009). "14". Computer and Information Security Handbook. Morgan Kaufmann Publications, Elsevier Inc. p. 225. ISBN 978-0-12-374354-1
External links
• Term in FISMApedia (http://fismapedia.org/index.php?title=Term:Attack)
AutoRun
AutoRun and the companion feature AutoPlay are components of the Microsoft Windows operating system that
dictate what actions the system takes when a drive is mounted.
AutoRun was introduced in Windows 95 to ease application installation for non-technical users and reduce the cost
of software support calls. When an appropriately configured CD-ROM is inserted into a CD-ROM drive, Windows
detects the arrival and checks the contents for a special file containing a set of instructions. For a commercial
application, these instructions normally initiate installation of the software from the CD-ROM. To maximise the
likelihood of installation success, AutoRun also acts when the drive is accessed ("double-clicked") in Windows
Explorer (or "My Computer").
Until the introduction of Windows XP, the terms AutoRun and AutoPlay were used interchangeably, developers
often using the former term and end users the latter. This tendency is reflected in Windows Policy settings named
AutoPlay that change Windows Registry entries named AutoRun, and in the autorun.inf file which causes
"AutoPlay" to be added to drives’ context menus. The terminology was of little importance until the arrival of
Windows XP and its addition of a new feature to assist users in selecting appropriate actions when new media and
devices were detected. This new feature was called AutoPlay and a differentiation between the two terms was
created.[1]
AutoRun
AutoRun, a feature of Windows Explorer (actually of the shell32 dll) introduced in Windows 95, enables media and
devices to launch programs by use of commands listed in a file called autorun.inf, stored in the root directory of
the medium.
Primarily used on installation CD-ROMs, the applications called are usually application installers. The autorun.inf
file can also specify an icon which will represent the device visually in Explorer along with other advanced
features.[1]
The terms AutoRun and AutoPlay tend to be used interchangeably when referring to the initiating action, the action
that detects and starts reading from discovered volumes. The flowchart illustration in the AutoPlay article shows how
AutoRun is positioned as a layer between AutoPlay and the Shell Hardware Detection service and may help in
understanding the terminology. However, to avoid confusion, this article uses the term AutoRun when referring to
the initiating action.
AutoPlay
[Figure: AutoPlay in Windows Vista]
AutoPlay is a feature introduced in Windows XP which examines
removable media and devices and, based on content such as pictures,
music or video files, launches an appropriate application to play or
display the content.[1] If available, settings in an autorun.inf file can add to the options presented to the user.
AutoPlay is based on a set of handler applications registered with the AutoPlay system. Each media type (pictures, music, video) can have a set of registered handlers which can deal with playing or displaying that type of media.
Each hardware device can have a default action occurring on discovery of a particular media type, or the AutoPlay dialog can prompt the user for the action to take.
AutoRun activation
The AutoRun sequence starts with the initial discovery of a new device or new piece of media. Following this, notification of interested parties occurs, of which the Windows Explorer shell is of primary interest. After checking certain Registry settings to see if AutoRun can proceed, parsing of an optional autorun.inf may occur, and any necessary actions are taken.
The initial sequence is handled much the same in every version of Windows from Windows 95. However, the way
the autorun.inf file is read and acted upon and the level of integration of AutoRun with AutoPlay has changed
significantly from the time AutoPlay was introduced in Windows XP until the present handling in Windows 7.
Initiation and notification
When a device with AutoRun-compatible drivers receives new media, a "Media Change Notification" event occurs.
The Windows OS then notifies interested applications that a device change has occurred. The notification method
used can change depending on the device type.
If the device changed is a volume (like a CD) or a port (like a serial port), Windows broadcasts a WM_DEVICECHANGE notification to all top-level windows.[2] [3] Windows calls this a "basic" notification. A top-level window is one which is a descendant of the desktop.
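For illustration, a sketch of a window procedure that reacts to these basic notifications; the constants come from <dbt.h> and the procedure would be wired into an ordinary Win32 message loop (the function name is hypothetical):

#include <windows.h>
#include <dbt.h>

// Sketch: handle the "basic" WM_DEVICECHANGE broadcast sent to top-level windows.
LRESULT CALLBACK DeviceAwareWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if (msg == WM_DEVICECHANGE) {
        if (wParam == DBT_DEVICEARRIVAL) {
            // lParam points to a DEV_BROADCAST_HDR describing the device.
            DEV_BROADCAST_HDR* hdr = (DEV_BROADCAST_HDR*)lParam;
            if (hdr->dbch_devicetype == DBT_DEVTYP_VOLUME)
                OutputDebugStringW(L"A volume (e.g. a CD or USB drive) arrived\n");
        } else if (wParam == DBT_DEVICEREMOVECOMPLETE) {
            OutputDebugStringW(L"A device was removed\n");
        }
        return TRUE;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}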
However, if the device changed is not one of these types, an application can use the RegisterDeviceNotification[4] function to register to receive device notifications.
An article on the CodeProject website, "Detecting Hardware Insertion and/or Removal",[5] with clarifications[6] from a blog by Doron Holan, is of particular technical interest here.
Non-volume devices are those devices that do not appear as drive letters in "My Computer". These are not handled
by any part of AutoRun - any actions taken for these devices are taken either by device specific software or by
AutoPlay. See AutoPlay#Devices that are not drives.
When Explorer receives notification of a volume change, it performs a number of actions:[7] [8]
1. Checks to see if AutoRun has been disabled through the Registry. If AutoRun is disabled for that drive or drive
type, Explorer does not proceed further. There have been bugs in this area.
2. Checks that the root directory of the inserted media contains an autorun.inf file, which might be read. See below.
3. Sends a QueryCancelAutoPlay message to the foreground window. An application which has registered its interest in receiving this message using RegisterWindowMessage can respond to this message to halt AutoRun (and thus AutoPlay) at this point; a minimal sketch appears after this list. Any application, foreground or not, can also be notified by using the IQueryCancelAutoPlay COM interface[9] available in Windows XP and later.
4. Alters double-click and contextual menu behaviours. When a user double clicks on the drive icon in Explorer or
right clicks to get a context menu, what happens is fully programmable by settings in the autorun.inf file.
5. Adds an autorun.inf controllable icon and descriptive text to the drive icon.
6. Checks to see if the ⇧ Shift key is held down. If it is, then Windows Vista (and later Windows versions) will invoke the AutoPlay dialog regardless of settings to the contrary.[10] Previous versions of Windows will not continue with the process.[8]
7. Finally, if this point has been reached, either:
• takes no further action.
• executes the "AutoRun task", the application optionally specified in the open or shellexecute keys in an
autorun.inf's [autorun] section.
• invokes AutoPlay.
Which choice is made depends on the version of Windows in use, instructions from the autorun.inf if available
and the type of the media discovered.
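As promised above, a minimal sketch of how an application might answer the QueryCancelAutoPlay message; the window procedure name is hypothetical, but RegisterWindowMessage and the message name are as documented:

#include <windows.h>

// Registered once; Windows delivers this message to the foreground window.
static UINT g_queryCancelAutoPlay = 0;

// Sketch: return non-zero to halt AutoRun/AutoPlay while this window is foreground.
LRESULT CALLBACK AutoPlayBlockingWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if (g_queryCancelAutoPlay == 0)
        g_queryCancelAutoPlay = RegisterWindowMessageW(L"QueryCancelAutoPlay");
    if (msg == g_queryCancelAutoPlay)
        return 1;  // non-zero cancels AutoPlay; return 0 to let it proceed
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}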
Changing behaviour
Before AutoPlay
On Windows versions prior to Windows XP, an autorun.inf file on any drive type will be read and its instructions followed. The AutoRun task, if specified, is executed immediately without user interaction.[11] This includes the DRIVE_REMOVABLE, DRIVE_FIXED and DRIVE_REMOTE drive types.
AutoRun will work with network drives (the DRIVE_REMOTE drive type) that are mapped to a drive letter. AutoRun will also work with floppy drives that are provided with autorun-compatible drivers.[8]
The default Registry settings on Windows versions prior to Windows XP (see NoDriveTypeAutoRun) disable Remote and Removable drives from AutoRun initiation, leaving Fixed and CDROM drive types active by default.
Introducing AutoPlay
With the introduction of AutoPlay in Windows XP, the final stage action (stage 7 above) for some drive types
changed from executing an application to invoking AutoPlay. From Windows Vista, the AutoPlay system is
integrated into every aspect of media handling and there is no automatic execution of the AutoRun task.
The default Registry settings add Removable drives to those that initiate AutoRun. In Windows XP and higher,
except Windows Server 2003, only the Unknown and Remote drive types are not active for AutoRun.
The handling of the autorun.inf file changes very significantly between each Windows version. The details can be
found in the autorun.inf article. The current handling in Windows 7 is that only drives of type DRIVE_CDROM may
specify an AutoRun task, alter double-click behaviour or change context menus.
The AutoPlay safety net
It would appear that AutoPlay, by transferring control of what were previously automatic and invisible actions to
AutoPlay, acts to increase user control and safety. This applies especially from Windows Vista, where all media and
devices fall under AutoPlay control.
However, it is important to note that:
• A user can instruct AutoPlay to make automatic choices on their behalf, including the execution of any AutoRun
task.
• When a user double clicks on the drive icon in Explorer or right clicks to get a context menu, what happens next
is fully programmable by the autorun.inf file and is essentially outside AutoPlay's purview. This is true under any
Windows operating system.
• Disabling AutoRun may force a user to double click the drive icon to get a contents list, thus potentially increasing the chance of malware infiltration.
Registry settings
AutoRun consults Windows Registry values to decide whether to initiate actions for any particular drive or drive
type. These values can be changed using several methods, one of which is using Group Policy.
The primary relevant Registry entry names are NoDriveTypeAutoRun and NoDriveAutoRun. These exist in
both per-machine and per-user settings and their location and priority in the Registry are described in further detail
below.
Drive types
The drive types are distinguished by type name as follows:[12]

DRIVE_UNKNOWN: the drive type cannot be determined
DRIVE_REMOVABLE: the drive has removable media (floppy drive, USB flash drive)
DRIVE_FIXED: the disk cannot be removed from the drive (hard disk)
DRIVE_REMOTE: the drive is a remote (network) drive
DRIVE_CDROM: the drive is a CD-ROM or DVD-ROM drive
DRIVE_RAMDISK: the drive is a RAM disk
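These type names are the values returned by the Win32 GetDriveType function; a minimal sketch querying one (hypothetical) drive letter:

#include <windows.h>
#include <stdio.h>

int main() {
    const wchar_t* root = L"D:\\";  // hypothetical drive to query
    switch (GetDriveTypeW(root)) {
        case DRIVE_REMOVABLE: wprintf(L"removable\n"); break;
        case DRIVE_FIXED:     wprintf(L"fixed\n");     break;
        case DRIVE_REMOTE:    wprintf(L"remote\n");    break;
        case DRIVE_CDROM:     wprintf(L"CD-ROM\n");    break;
        case DRIVE_RAMDISK:   wprintf(L"RAM disk\n");  break;
        default:              wprintf(L"unknown\n");   break;
    }
    return 0;
}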
Registry terminology
The Windows Registry is a hierarchical database that stores configuration settings and options for the operating
system. The terminology is somewhat misleading so it is briefly summarised here.
A Registry key is similar to a folder: in addition to values, each key can contain subkeys, which in turn may contain further subkeys, and so on.
A Registry value consists of a name-data pair. Microsoft documentation commonly uses the term "entry" as an
equivalent term. It also uses "value" for "data" when it is obvious what is meant. To avoid confusion, this article
always uses the term "entry" when referring to the name-data pair.
Two Registry keys that are very commonly referred to are HKEY_LOCAL_MACHINE, which contains per-machine settings, and HKEY_CURRENT_USER, which contains settings for the currently logged-on user. These are almost always abbreviated as HKLM and HKCU respectively. There may be many users of a machine; their settings are stored in HKEY_USERS, and HKCU is actually just a link to the appropriate place in HKEY_USERS.
Changing Registry settings
Registry settings may be changed directly by using the GUI regedit tool or the command line reg.exe utility.
Settings can also be placed in a text file[13] named with a .reg extension type, for example "mychanges.reg". When the file is double clicked, the settings in the file are entered into the Registry, permissions allowing.
They can be changed indirectly by using Group Policy, applied locally to a single computer with GPEdit.msc or
to a domain with gpmc.msc.
It may be necessary to either log out or restart the computer in order for any Registry changes to take effect.
Evaluation order
The NoDriveAutoRun and NoDriveTypeAutoRun Registry entries can exist in two places, the per-user
setting (under HKEY_CURRENT_USER) and the per-machine setting (under HKEY_LOCAL_MACHINE). If an
entry appears under HKEY_LOCAL_MACHINE, then any corresponding entry under HKEY_CURRENT_USER is
completely ignored. The data values are not merged in any way.
When deciding whether to activate AutoRun, both NoDriveAutoRun and NoDriveTypeAutoRun Registry
entries are consulted. If either value indicates a drive should be disabled then AutoRun is disabled for that drive.
Thus in the following example:

    HKLM: NoDriveAutoRun = 0x08, NoDriveTypeAutoRun = (not present)
    HKCU: NoDriveAutoRun = 0x03FFFFFF, NoDriveTypeAutoRun = 0x95

the data value taken for NoDriveAutoRun is 0x08, disabling drive D, and the data value taken for NoDriveTypeAutoRun is 0x95, disabling removable and network drives. The per-user NoDriveAutoRun entry is never used.
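The precedence rule can be expressed compactly in code. A sketch of the documented HKLM-over-HKCU behaviour only, not of how Windows itself is implemented (the function name is hypothetical):

#include <windows.h>

// Sketch: read an AutoRun policy entry, honouring the rule that a per-machine
// (HKLM) entry makes the per-user (HKCU) entry completely ignored.
// Returns 0 when the entry is absent in both hives (OS defaults not modelled).
DWORD ReadAutoRunPolicy(const wchar_t* entryName) {
    const wchar_t* subkey =
        L"Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer";
    DWORD value = 0, size = sizeof(value);
    if (RegGetValueW(HKEY_LOCAL_MACHINE, subkey, entryName, RRF_RT_REG_DWORD,
                     NULL, &value, &size) == ERROR_SUCCESS)
        return value;  // per-machine entry wins outright
    size = sizeof(value);
    if (RegGetValueW(HKEY_CURRENT_USER, subkey, entryName, RRF_RT_REG_DWORD,
                     NULL, &value, &size) == ERROR_SUCCESS)
        return value;  // fall back to the per-user entry
    return 0;
}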
NoDriveTypeAutoRun
HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
Entry name: NoDriveTypeAutoRun. Data type: REG_DWORD. Range: 0x00 to 0xFF. Default: 0x95 or 0x91.
This Registry entry disables or enables the AutoRun feature on all drives of the type specified.[14] It reflects the
It reflects the
setting of the relevant Autoplay Group Policy. Valid data ranges from 0x00 to 0xFF in hexadecimal notation. If the
entry is not present, the default data value is either 0x95 or 0x91 depending on the version of Windows used. An
entry present in HKLM overrides any entry present in HKCU.
The entry data is a bitmapped value, where a bit set to 1 disables AutoRun on a particular type of drive. The bit settings for each type of drive are shown below:

    0x01  DRIVE_UNKNOWN
    0x02  (unused)
    0x04  DRIVE_REMOVABLE
    0x08  DRIVE_FIXED
    0x10  DRIVE_REMOTE
    0x20  DRIVE_CDROM
    0x40  DRIVE_RAMDISK
    0x80  DRIVE_UNKNOWN (unrecognised drive types)

Note that bit number 1 is unused and that the "Unknown" type is represented twice. Setting all bits to 1 would give a hexadecimal value of 0xFF, decimal 255, and would disable AutoRun on all types of drives.
The default setting for this entry depends on the version of Windows being used:[11] [15]
Windows 7: 0x91
Windows Server 2008: 0x91
Windows Vista: 0x91
Windows Server 2003: 0x95
Windows XP: 0x91
Windows 2000: 0x95
Windows 95/98: 0x95
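For illustration, composing the 0x95 default bit by bit and writing it programmatically; a sketch only (a .reg file or Group Policy is the usual route, and writing to HKLM requires administrative rights):

#include <windows.h>

int main() {
    // Compose 0x95: unknown (0x01) + removable (0x04) + remote (0x10)
    // + the second "unknown" slot (0x80).
    DWORD value = 0x01 | 0x04 | 0x10 | 0x80;  // == 0x95

    HKEY key;
    if (RegCreateKeyExW(HKEY_LOCAL_MACHINE,
            L"Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer",
            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL) == ERROR_SUCCESS) {
        RegSetValueExW(key, L"NoDriveTypeAutoRun", 0, REG_DWORD,
                       (const BYTE*)&value, sizeof(value));
        RegCloseKey(key);
    }
    return 0;
}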
NoDriveAutoRun
HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
Entry name: NoDriveAutoRun. Data type: REG_DWORD. Range: 0x0 to 0x03FFFFFF. Default: 0x0.
This Registry entry disables or enables the AutoRun feature on individual drives.[16]
It is not associated with a Group
Policy and does not exist by default. The data value is taken to be 0 if the entry is not present. An entry present in
HKLM overrides any entry present in HKCU.
The data is a 32 bit (DWORD) bitmapped value, of which the lower 26 bits are used to represent each of the 26 drive
letters from A to Z. Thus the valid data range is from 0x0 to 0x03FFFFFF. The least significant bit (the right most
bit) represents drive A, and the 26th bit from the right represents drive Z.
A bit set to 1 disables AutoRun on a particular drive. For example, if the data value is set to 0x8 (1000 binary),
AutoRun is disabled on drive D.
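The per-letter bit is easy to compute; a one-function sketch (the helper name is hypothetical):

#include <cctype>
#include <cstdio>

// Bit 0 is drive A, bit 25 is drive Z; a set bit disables AutoRun for that drive.
unsigned long MaskForDrive(char letter) {
    return 1ul << (std::toupper((unsigned char)letter) - 'A');
}

int main() {
    std::printf("0x%lX\n", MaskForDrive('D'));  // prints 0x8, as in the example above
    return 0;
}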
Group Policy
[Figure: The Group Policy settings dialog]
The only Group Policy settings available for AutoRun affect the NoDriveTypeAutoRun Registry entries. The policy is available on either a per-machine or a per-user basis, reflecting the Registry entry location in either HKLM or HKCU.[14] [16] As described above, a per-machine policy setting will cause the per-user policy setting to be ignored.
When a policy is Enabled, Group Policy will add the
NoDriveTypeAutoRun entry to the Registry. If the policy is
Disabled or set to Not configured, Group Policy deletes this entry from
the Registry. System defaults may then take effect as described in the
NoDriveTypeAutoRun section.
The policy names, locations and possible settings vary slightly between Windows versions. The list of settings is relatively short and is always additional to the system default setting. Therefore, on Windows 2000, enabling the "Disable Autoplay" policy and setting it to "CD-ROM drives" disables AutoRun (as distinct from AutoPlay) for CD-ROM and DVD drives, removable drives, network drives, and drives of unknown type.
This setting cannot be used to enable AutoRun on drives on which it is disabled by default or disable AutoRun for
drives not listed. To disable or enable any particular drives or drive types, the Registry must be edited manually.
Windows Server 2003, Windows XP, and Windows 2000
The per-machine policy location is:
Group Policy \ Computer Configuration \ Administrative Templates \ System
The per-user policy location is:
Group Policy \ User Configuration \ Administrative Templates \ System
The relevant policy is "Turn off Autoplay". In Windows 2000 the policy is called "Disable Autoplay" instead.
Once the policy is Enabled it can be set to "All drives" or "CD-ROM drives". The latter setting adds CD-ROM drives
to the existing list of disabled drive types as described above.
Windows Vista, Windows Server 2008
The per-machine policy location is:[17]
Computer Configuration \ Administrative Templates \ Windows Components \ Autoplay Policies
The per-user policy location is:
User Configuration \ Administrative Templates \ Windows Components \ AutoPlay Policies
The relevant policy is "Turn off Autoplay" and can be set for CD-ROM, DVD-ROM and removable drives or all
drives.
Two related policies were added in Vista and Server 2008:[18]
Default behavior for AutoRun
HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
Entry name: NoAutoRun. Data type: REG_DWORD.
Sets the default behavior for AutoRun commands found in autorun.inf files.
Prior to Windows Vista, when media containing an autorun.inf specifying an AutoRun task was inserted, the default
action was to automatically execute the program without user intervention. From Windows Vista the default
behaviour is to invoke AutoPlay and represent the AutoRun task as one of the dialog options. This is also the
behaviour when this policy is Not configured or Disabled.
If this policy is Enabled, the behaviour can be changed to either:
• Completely disable autorun.inf commands or
• Automatically execute the autorun.inf command as per previous Windows versions.
Don't set the always do this checkbox
HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
Entry name: DontSetAutoplayCheckbox. Data type: REG_DWORD.
If this policy is Enabled, the "Always do this..." checkbox in the AutoPlay dialog will not be set by default when the
dialog is shown.
Windows 7, Windows Server 2008 R2
In these versions of Windows, the ability of an autorun.inf file to set an AutoRun task, alter double-click behaviour
or change context menus is restricted to drives of type DRIVE_CDROM. There are no policy settings that will
override this behaviour. Policy locations and settings are as per Windows Vista and Windows Server 2008 above, with the addition of:[18]
Turn off Autoplay for non-volume devices
HKLM\Software\Policies\Microsoft\Windows\Explorer
HKCU\Software\Policies\Microsoft\Windows\Explorer
Entry name: NoAutoplayfornonVolume.
If this policy is enabled, AutoPlay will be disabled for non-volume devices.
Altering AutoRun behaviour
Pressing the Shift key
If the ⇧ Shift key is held down at a certain point in the execution sequence, Windows Vista invokes the AutoPlay dialog regardless of any AutoPlay settings to the contrary.[10] Previous versions of Windows do not execute the AutoRun task.[8]
Given that Shift must be held down until Windows checks for it, it may be a considerable amount of time before it
becomes effective. The time taken primarily depends on the time to recognise the new hardware and the time taken
for CD-ROMs to spin up. It is unsafe to rely on this method.
Auto Insert Notification
Certain Media Change Notification events may be suppressed by altering certain Registry entries. "Media Change
Notification" is the generic term; for CD-ROM drives, the specific term is "Auto Insert Notification".
HKLM\SYSTEM\CurrentControlSet\Services\Cdrom
Entry name: AutoRun. Data type: REG_DWORD. Range: 0 or 1. Default: 1.
For CD-ROM drives, changing the value of this Registry entry to 0 will disable Auto Insert Notification for CD-ROM drives only.[19] A Windows restart will be necessary.
Data values: 0 = does not send an MCN message; 1 = sends an MCN message.
Under Windows 95/98/ME, this setting can be changed under Device Manager, accessible from the System icon in
Control Panel.
[Figure: Auto insert notification under Windows 98]
Although the Registry entry is named "AutoRun", it only suppresses
the MCN message. The MCN message does trigger AutoRun initiation
but it also instructs the Explorer shell to update its views and contents.
Thus, as a side effect only, this disables AutoRun for CD-ROM drives.
However, Explorer will no longer update its view when a new CD is
inserted; it will show the contents of the previous CD until F5 is
pressed or View/Refresh is selected from the Explorer menu. This
could result in severe confusion for users.
For this reason the Media Change Notification message should not be
disabled unless there is absolutely no alternative; AutoRun can be
disabled for individual drives using Group Policy or the Registry.
HKLM\SYSTEM\CurrentControlSet\Services\Cdrom
Entry name: AutoRunAlwaysDisable. Data type: REG_MULTI_SZ.
This entry is used to suppress the MCN message for specifically listed types of CD-ROM drives,[20] primarily
CD-ROM changers. The data is a set of device identifiers, which matches those identifiers reported to the system by
the devices themselves.
The default value for this entry consists of products identified by Microsoft testing as being unable to support
AutoRun. This entry should not be altered from its default.
Editing Group Policy
AutoRun may be suppressed on particular drives and drive types by using the methods described in the Group Policy
section. However, the Group Policy Editor is not available on Home versions of Windows XP[21] and does not provide any fine-grained drive selection facilities.
However, Group Policy would be the accepted method of disabling AutoRun on an entire Windows domain.
Registry files
A Registry setting file can be created that, when executed, makes the desired changes in the Registry.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
Note that the actual file should always end with a blank line; this is not optional.[13]
In the above example, AutoRun would be disabled for all drives and for all users. This example would need to be run
as Administrator and a reboot would be needed for the setting to take complete effect.
Initialization file mapping
Windows Vista and later versions of Windows have a policy setting, "Default behavior for AutoRun", that can be set
to disallow the reading of an autorun.inf file on any volume. This avoids certain scenarios where malware leverages
autorun.inf functionality to infect a machine. Previous versions of Windows do not have this policy setting but the
use of initialisation file mapping is an effective workaround.[22]
As an autorun.inf file is a standard Windows INI file, the appropriate API calls are used by Windows when fetching
its settings. These API calls can be redirected using the INI file mapping method. The following Registry file
illustrates the workaround, where all autorun.inf settings are taken solely from the
HKEY_LOCAL_MACHINE\Software\DoesNotExist Registry key:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\IniFileMapping\Autorun.inf]
@="@SYS:DoesNotExist"
As this key does not exist, it is as if the autorun.inf file contains no settings information. It is important to note that
this applies to any autorun.inf in any location and on any drive.
Both the policy setting and this workaround have the drawback that installation of software from an autorunning install CD or DVD is no longer automatic. It becomes necessary to view the CD's autorun.inf file and then execute the appropriate install program manually.
Issues and security
The AutoRun disable bug
From Windows 2000 through to Windows Server 2008, AutoRun-relevant Registry entries were not handled properly, leading to a security vulnerability.[23] Windows 95 and Windows 98 were not affected.
When AutoRun is disabled, Windows should not proceed further through the activation sequence than the Registry
check. However, it parses any autorun.inf found and does everything except the final action to invoke AutoPlay or
execute an application.
This leaves the user open to attack from malware which uses the autorun.inf to alter the double-click and contextual menu behaviours. Double clicking the drive icon will infect the machine. Right-clicking and selecting the "Explore" or "Open" options from the context menu is not a workaround, as these menu items can be co-opted by the appropriate autorun.inf entries.
This bug was fixed in a number of security updates, detailed in Microsoft Knowledge Base article 967715.[15]
Other issues
• If you add the computer to an Active Directory domain, the NoDriveTypeAutoRun value may be reset to a default value.[24] This is due to Group Policy settings in the domain taking effect. This is not a bug.
• Some programs may deliberately change AutoRun Registry settings. Early versions of CD burning software like Roxio have been known to change settings in this way.[25]
• If the Group Policy "Restrict CD-ROM access to locally logged-on user only" security option under:
Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options
is turned on (Enabled), then AutoRun may not function.[25] Windows Installers will also malfunction because "Local System" access to the CD-ROM will be denied.[26] This Group Policy setting reflects the value of the Registry entry:
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon
Entry name: allocatecdroms. Data type: REG_SZ. Range: 0 or 1. Default: 0.
This entry should be set to 0.
• Real Player 10 interferes with AutoPlay functionality to the extent that it may look as if AutoRun or AutoPlay is not working at all.[27] [28] [29]
Attack vectors
AutoRun functionality has been used as a malware vector for some time. Prior to Windows Vista, the default action
with a CD-ROM drive type was to follow any autorun.inf file instructions without prompts or warnings. This makes
rogue CD-ROMs one possible infection vector.
In the same category are mixed content CD-ROMs. An audio CD, that a user would not expect to contain software at
all, can contain a data section with an autorun.inf. Some companies, such as Sony BMG, have used this vector to
install malware that attempts to protect against copying of the audio tracks.
U3 enabled flash drives, by emulating a CD-ROM unit, can also cause Windows to execute commands from the
autorun.inf found on the emulated CD-ROM.
Devices like the Huawei E220 HSDPA modem validly use this method to autoinstall drivers for the modem itself. However, plugging in a flash drive from an unknown source is an unwise move. USB Switchblade[30] and other similar tools have made U3 flash drive attacks trivial. Given the ease of writing script-based attacks, anti-virus
software may be ineffective in preventing data and password stealing.
[Figure: Social engineering: the Conficker worm in action[31]]
With a standard flash drive, social engineering attacks can be
employed to entice a user to click on the appropriate item in the
AutoPlay dialog. An alluring action string promising free games or
pornography would lure many users into the trap. At any time, double clicking on the drive icon will use the autorun.inf automatically, a trap that even more advanced users could fall into.
Any user can configure AutoPlay to make various decisions for them;
by checking the appropriate box in the AutoPlay dialog, running flash
drive malware becomes silent and automatic.
AutoRun malware has been extended to use hard drives,[32] picture frames and other digital devices.[33] Care in dealing with external devices is a security priority.
Attack mitigation
In addition to basic security precautions, which include:[34]
• following the principle of least privilege by not habitually running with Administrator privileges and
• applying all relevant security patches and updates,
exposure to these attacks can be minimised through the appropriate use of Group Policy and Registry settings. The
following security policies are a summary of those described within this article:
• Disable AutoRun (but see the AutoRun disable bug)
• Use the "Default behavior for AutoRun" Group Policy under Vista (see above) to disable autorun.inf commands
• Use initialization file mapping to nullify autorun.inf sections
• Under Windows 7, only CD and DVD drives may specify applications like the AutoRun task in the autorun.inf file. Windows XP and later can be patched to behave in the same way with update KB971029.[35] In February 2011, this patch was added to the official Windows Update channel.[36] The Windows 7 AutoRun task behaviour now becomes the default for all current versions of the Windows OS.
In addition, the following actions have been recommended by Microsoft, primarily during the Conficker worm
attacks:
• Prevent autorun.inf invocation from network shares by:[15]
1. Deleting any existing autorun.inf file from the root of a mapped network drive
2. Denying Create rights to the root of a mapped network drive
• Prevent the use of USB storage devices by means of:
• USB settings within the System BIOS
• Appropriate Registry settings as described in Knowledge Base article 823732[37]
• Setting USB devices to read only to prevent propagation of unknown worms (and theft of proprietary data)[38]
References
[1] What's the difference between AutoPlay and AutoRun? (http://windowshelp.microsoft.com/Windows/en-us/help/a19ac945-1007-4638-9615-e2c3bfd92b751033.mspx), Microsoft, Windows Vista Help
[2] How to receive notification of CD-ROM insertion or removal (http://support.microsoft.com/kb/q163503/), Microsoft, Knowledge Base
[3] Detecting media insertion or removal (http://msdn.microsoft.com/en-us/library/aa363215(VS.85).aspx), Microsoft, MSDN Library
[4] RegisterDeviceNotification function (http://msdn.microsoft.com/en-us/library/aa363431(VS.85).aspx), Microsoft, MSDN Library
[5] http://www.codeproject.com/KB/system/HwDetect.aspx
[6] http://blogs.msdn.com/doronh/archive/2006/02/15/532679.aspx
[7] Creating an AutoRun-Enabled Application (http://msdn.microsoft.com/en-us/library/cc144206(VS.85).aspx), Microsoft, MSDN Library
[8] Enabling and Disabling AutoRun (http://msdn.microsoft.com/en-us/library/cc144204(VS.85).aspx), Microsoft, MSDN Library
[9] IQueryCancelAutoPlay Interface (http://msdn.microsoft.com/en-us/library/bb761373(VS.85).aspx), Microsoft, MSDN Library
[10] AutoPlay: frequently asked questions (http://windowshelp.microsoft.com/Windows/en-us/help/7e1fe788-0747-4e00-895b-c3461b1ddd971033.mspx), Microsoft, Windows Vista Help
[11] How to Test autorun.inf Files (http://support.microsoft.com/kb/136214), Microsoft, Knowledge Base
[12] GetDriveType Function (http://msdn.microsoft.com/en-us/library/aa364939.aspx), Microsoft, MSDN Library
[13] How to use a registration entries file (http://support.microsoft.com/kb/310516), Microsoft, Knowledge Base
[14] Windows 2000 Registry: NoDriveTypeAutoRun (http://technet.microsoft.com/en-us/library/cc959381.aspx), Microsoft, TechNet
[15] How to disable the Autorun functionality in Windows (http://support.microsoft.com/kb/967715), Microsoft, Knowledge Base
[16] Windows 2000 Registry: NoDriveAutoRun (http://technet.microsoft.com/en-us/library/cc959387.aspx), Microsoft, TechNet
[17] Windows Vista Security Guide, Chapter 3 (http://technet.microsoft.com/en-us/library/bb629455.aspx), Microsoft, TechNet
[18] Group Policy Settings Reference for Windows and Windows Server (http://www.microsoft.com/downloads/details.aspx?familyid=18C90C80-8B0A-4906-A4F5-FF24CC2030FB&displaylang=en), Microsoft, Downloads, Excel Spreadsheets
[19] Windows 2000 Registry: AutoRun (http://technet.microsoft.com/en-gb/library/cc976182.aspx), Microsoft, TechNet
[20] Windows 2000 Registry: AutoRunAlwaysDisable (http://technet.microsoft.com/en-gb/library/cc960238.aspx), Microsoft, TechNet
[21] Windows XP Pro Resource Kit, Differences with Windows XP Home Edition (http://technet.microsoft.com/en-us/library/bb457127.aspx), Microsoft, TechNet
[22] Memory stick worms (http://nick.brown.free.fr/blog/2007/10/memory-stick-worms.html), Nick Brown's blog
[23] Windows Vista fails to properly handle the NoDriveTypeAutoRun registry value (http://www.kb.cert.org/vuls/id/889747), US-CERT
[24] The NoDriveTypeAutoRun subkey value is reset... (http://support.microsoft.com/kb/895108), Microsoft, Knowledge Base
[25] The AutoRun feature or the AutoPlay feature does not work... (http://support.microsoft.com/kb/330135), Microsoft, Knowledge Base
[26] You receive an "Installation ended prematurely because..." (http://support.microsoft.com/kb/230895), Microsoft, Knowledge Base
[27] Camera and Scanner Wizard, stopped working (http://gladiator-antivirus.com/forum/index.php?s=&showtopic=15090&view=findpost&p=54229), Gladiator Security chat forum
[28] Autoplay not working with digital camera (http://www.vista-xp.co.uk/forums/hardware-operating-problems/3469-autoplay-not-working-digital-camera-sorted.html#post30012), vista-xp chat forum
[29] How to repair your camera Autoplay download (http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.photos/2005-01/0365.html), tech-archive.net chat forum
[30] http://wiki.hak5.org/wiki/USB_Switchblade
[31] http://isc.sans.org/diary.html?storyid=5695
[32] Chinese Trojan on Maxtor HDDs spooks Taiwan (http://www.channelregister.co.uk/2007/11/12/maxtor_infected_hdd_updated/), The Register, 12 November 2007
[33] Malware hitches a ride on digital devices (http://www.theregister.co.uk/2008/01/11/malware_digital_devices/), The Register, 11 January 2008
[34] Virus alert about the Win32/Conficker worm (http://support.microsoft.com/kb/962007), Microsoft, Knowledge Base
[35] Update to the AutoPlay functionality in Windows (http://support.microsoft.com/kb/971029), Microsoft, Knowledge Base
[36] Deeper insight into the Security Advisory 967940 update (http://blogs.technet.com/b/msrc/archive/2011/02/08/deeper-insight-into-the-security-advisory-967940-update.aspx), Microsoft, Security Response Center blogs
[37] How can I prevent users from connecting to a USB storage device? (http://support.microsoft.com/kb/823732/), Microsoft, Knowledge Base
[38] Removable storage devices are not recognized after installing Windows XP SP2 (http://support.microsoft.com/kb/555443), Microsoft, Knowledge Base
External links
• AutoRun and AutoPlay Reference (http://msdn.microsoft.com/en-us/library/cc136610(VS.85).aspx), Microsoft, MSDN Library
• Memory stick worms (http://nick.brown.free.fr/blog/2007/10/memory-stick-worms.html), Nick Brown's blog
• Dan McCloy's Autorun Reference Guide (http://autorun.synthasite.com/)
• Security Watch Island Hopping: The Infectious Allure of Vendor Swag (http://technet.microsoft.com/en-us/magazine/cc137730.aspx), TechNet Magazine
• Figure 4: querycancelautoplay example code (http://www.microsoft.com/msj/0998/win320998.aspx), Microsoft Systems Journal, September 1998
• AutoPlay Repair Wizard (http://www.microsoft.com/downloads/details.aspx?familyid=C680A7B6-E8FA-45C4-A171-1B389CFACDAD&displaylang=en), Microsoft Download Center
• Test your defenses against malicious USB flash drives (http://blogs.computerworld.com/test_your_defenses_against_malicious_usb_flash_drives), Computerworld blog, January 24, 2009
• The best way to disable Autorun for protection from infected USB flash drives (http://blogs.computerworld.com/the_best_way_to_disable_autorun_to_be_protected_from_infected_usb_flash_drives), Computerworld blog, January 30, 2009
• How To Remove AutoRun Virus (http://www.yahowto.com/How_To_Remove_Autorun_Virus)
• Microsoft PowerToys (http://www.microsoft.com/windowsxp/Downloads/powertoys/Xppowertoys.mspx), Microsoft, Tweak UI
• Online Information Resource (http://www.autostart-cd-rom.com/en/) on the autorun/autostart feature of Microsoft Windows
• AutoRunConf (http://tabgen.com/autorunconf/autorunconf.htm), a simple configuration tool for AutoRun settings
• Disable Autorun (http://www.disableautorun.com), turn on/off the autorun/autoplay feature of Windows
Blacklist (computing)
In computing, a blacklist or block list is a basic access control mechanism that allows everyone access except for the members of the blacklist (i.e. a list of denied accesses). The opposite is a whitelist, which allows nobody access except for the members of the whitelist. As a sort of middle ground, a greylist contains entries that are temporarily blocked or temporarily allowed. Greylist items may be reviewed or further tested for inclusion in a blacklist or whitelist.
An organization may keep a blacklist of software or websites in its computer system. Titles on the list would be
banned and everything else would be allowed. For example, a school might blacklist Limewire and ICQ; other
Internet services would still be allowed.
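The default-allow semantics are worth contrasting with a whitelist's default-deny; a minimal C++ sketch (the application names are hypothetical):

#include <iostream>
#include <string>
#include <unordered_set>

int main() {
    // Blacklist: everything is allowed unless it is listed.
    const std::unordered_set<std::string> blacklist = {"limewire", "icq"};

    for (const std::string& app : {std::string("limewire"), std::string("firefox")}) {
        bool allowed = blacklist.count(app) == 0;  // a whitelist would test != 0 instead
        std::cout << app << ": " << (allowed ? "allowed" : "blocked") << "\n";
    }
    return 0;
}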
Examples
• Companies like Google, Norton and Sucuri keep internal blacklists of sites known to have malware and they
display a warning before allowing the user to click them.
• Content-control software such as DansGuardian and SquidGuard may work with a blacklist in order to block
URLs of sites deemed inappropriate for a work or educational environment.
• An e-mail spam filter may keep a blacklist of addresses, any mail from which would be prevented from reaching
its intended destination. A popular technique for implementing blacklists is DNS blacklisting (DNSBL).
• A firewall or IDS may also use a blacklist to block known hostile IPs and/or networks. An example for such a list
would be the OpenBL project.
• Many copy protection schemes include software blacklisting.
• Members of online auction sites may add other members to a personal blacklist. This means that the listed members cannot bid on or ask questions about the blacklisting member's auctions, nor can they use a "buy it now" function on their items.
• Yet another form of list is the yellow list, which is a list of email server IP addresses that send mostly good email but do send some spam. Examples include Yahoo, Hotmail, and Gmail. A yellow-listed server is a server that should never be accidentally blacklisted. The yellow list is checked first; if a server is listed there, blacklist tests are ignored.
• In Linux modprobe, the blacklist modulename entry in a modprobe configuration file indicates that all of the
particular module's internal aliases are to be ignored. There are cases where two or more modules both support the
same devices, or a module invalidly claims to support a device.
• Many browsers have the ability to consult anti-phishing blacklists in order to warn users who unwittingly attempt to visit a fraudulent website.
• Many P2P programs support blacklists that block access from sites known to be owned by Intellectual Property
holders. An example is the Bluetack [1] blocklist set.
References
[1] http://webcache.googleusercontent.com/search?q=cache:Wdc1QjvkOh0J:www.bluetack.co.uk/forums/index.php%3Fautocom%3Dfaq%26CODE%3D02%26qid%3D18+bluetack+list&cd=6&hl=en&ct=clnk&gl=us&client=safari&source=www.google.com
[2] http://code.google.com/apis/safebrowsing/
[3] http://sucuri.net/tools/blacklist/
[4] http://safeweb.norton.com/
Blue Cube Security
Blue Cube Security Ltd is an independent IT solutions provider delivering enterprise-wide IT security solutions.
Blue Cube was founded by CEO Gary Haycock-West in 2000. Blue Cube's headquarters are in Forest Row, East
Sussex in the UK, with a further office in Wellington, New Zealand.
Blue Cube provides solutions in enterprise-wide security, including authentication, digital certificates, intrusion detection, intrusion prevention, email and file encryption, firewalls, VPNs, and vulnerability scanning.
References
• Pro Security Zone - The Choice Between Managing Internet Access And Blocking It[1]
• Pro Security Zone - PINsafe Added To Security Tools Offered By Blue Cube[2]
• Info Security Magazine - Data Lost, Not Found[3]
• HR Magazine - Collaboration: Learn from an organisation in the same sector as your own[4]
• Post Magazine - Policyholder Security - The key to data protection[5]
• SC Magazine - Blue Cube and SC survey will map the effects of recession on IT security[6]
• Computer Weekly - Data leakage protection: how to secure your most vital assets[7]
• CRN - Mature Imperva opts for two tiers[8]
• Financial Times - How to Survive an IT Squeeze[9]
• Post Magazine - The ego and the ID[10]
External links
• www.bluecubesecurity.com[11]
References
[1] http://www.prosecurityzone.com/Customisation/News/IT_Security/Internet_Security_and_Content_Filtering/The_Choice_Between_Managing_Internet_Access_And_Blocking_It.asp
[2] http://www.prosecurityzone.com/Customisation/News/IT_Security/Data_Protection/PINsafe_Added_To_Security_Tools_Offered_By_Blue_Cube.asp
[3] http://www.infosecurity-magazine.com/view/2297/data-lost-not-found-why-data-loss-is-still-prevalent-in-many-organisations-/
[4] http://www.hrmagazine.co.uk/news/search/909971/Collaboration-Learn-organisation-sector-own/
[5] http://www.postonline.co.uk/post/analysis/1222314/policyholder-security-the-key-protection
[6] http://www.scmagazineuk.com/blue-cube-and-sc-survey-will-map-the-effects-of-recession-on-it-security/article/123513/
[7] http://www.computerweekly.com/Articles/2009/01/06/233890/Data-leakage-protection-how-to-secure-your-most-vital.htm
[8] http://www.channelweb.co.uk/crn/news/2231450/mature-imperva-opts-two-tiers-4366224
[9] http://www.ft.com/cms/s/0/31b34948-aadc-11dd-897c-000077b07658.html?nclick_check=1
[10] http://www.postonline.co.uk/post/analysis/1217189/the-ego-id
[11] http://www.bluecubesecurity.com
BlueHat
BlueHat or Blue Hat is a term used to refer to outside computer security consulting firms that are employed to bug test a system prior to its launch, looking for exploits so they can be closed. In particular, Microsoft uses the term to refer to the computer security professionals it invites to find vulnerabilities in its products, such as Windows.[1] [2] [3]
Blue Hat Microsoft Hacker Conference
The Blue Hat Microsoft Hacker Conference is an event intended to open communication between Microsoft engineers and hackers. The event has led to both mutual understanding as well as the occasional confrontation. Microsoft developers were visibly uncomfortable when Metasploit was demonstrated.[4]
References
[1] "Blue hat hacker Definition" (http:/ / www.pcmag. com/ encyclopedia_term/0,2542,t=blue+hat+ hacker&i=56321,00. asp). PC Magazine
Encyclopedia. . Retrieved 31 May 2010. "A security professional invited by Microsoft to find vulnerabilities in Windows."
[2] Fried, Ina (June 15, 2005). ""Blue Hat" summit meant to reveal ways of the other side" (http:// news.cnet.com/
Microsoft-meets-the-hackers/2009-1002_3-5747813.html). Microsoft meets the hackers. CNET News. . Retrieved 31 May 2010.
[3] Markoff, John (October 17, 2005). "At Microsoft, Interlopers Sound Off on Security" (http:// www.nytimes. com/ 2005/ 10/ 17/ technology/
17hackers. html?pagewanted=1& _r=1). New York Times. . Retrieved 31 May 2010.
[4] cNet news (http:// www.news. com) - Microsoft Meets the Hackers (http:// www.news. com/ Microsoft-meets-the-hackers/
2009-1002_3-5747813.html) - Ina Fried (staff writer)
External links
• Microsoft's BlueHat Security Briefings (http://www.microsoft.com/technet/security/bluehat/default.mspx)
• BlueHat Security Briefings Blog (http://blogs.technet.com/bluehat/)
Centurion guard
The Centurion Guard is a PC hardware- and software-based security product developed by Centurion Technologies (www.centuriontech.com[1]) and released in 1996. There were many different releases and versions of this product, and many were distributed in the Bill & Melinda Gates Foundation computers that were donated to libraries.
Operating system compatibility
• Microsoft Windows 7
• Microsoft Windows Vista
• Microsoft Windows XP
References
[1] http://www.centuriontech.com
Client honeypot
Honeypots are security devices whose value lies in being probed and compromised. Traditional honeypots are servers (or devices that expose server services) that wait passively to be attacked. Client honeypots are active security devices in search of malicious servers that attack clients. The client honeypot poses as a client and interacts with the server to examine whether an attack has occurred. Often the focus of client honeypots is on web browsers, but any client that interacts with servers can be part of a client honeypot (for example ftp, ssh, email, etc.).
There are several terms that are used to describe client honeypots. Besides client honeypot, which is the generic
classification, honeyclient is the other term that is generally used and accepted. However, there is a subtlety here, as
"honeyclient" is actually a homograph that could also refer to the first open source client honeypot implementation
(see below), although this should be clear from the context.
Architecture
A client honeypot is composed of three components. The first component, a queuer, is responsible for creating a list of servers for the client to visit. This list can be created, for example, through crawling. The second component is the client itself, which is able to make requests to the servers identified by the queuer. After the interaction with the server has taken place, the third component, an analysis engine, is responsible for determining whether an attack has taken place on the client honeypot.
In addition to these components, client honeypots are usually equipped with some sort of containment strategy to
prevent successful attacks from spreading beyond the client honeypot. This is usually achieved through the use of
firewalls and virtual machine sandboxes.
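A skeleton of the three components might look like the following sketch, where Visit() and the snapshot comparison are stubs standing in for a real driven client and analysis engine (all names are hypothetical):

#include <iostream>
#include <queue>
#include <string>

struct Snapshot { /* files, processes, registry state would be recorded here */ };

Snapshot TakeSnapshot() { return Snapshot{}; }                   // stub
void Visit(const std::string& url) { std::cout << "visiting " << url << "\n"; }
bool Changed(const Snapshot&, const Snapshot&) { return false; } // stub

int main() {
    std::queue<std::string> queuer;        // component 1: the queuer
    queuer.push("http://example.com/");    // e.g. filled by a crawler

    while (!queuer.empty()) {
        Snapshot before = TakeSnapshot();
        Visit(queuer.front());             // component 2: the client
        Snapshot after = TakeSnapshot();
        if (Changed(before, after))        // component 3: the analysis engine
            std::cout << queuer.front() << " appears malicious\n";
        queuer.pop();
    }
    return 0;
}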
Analogous to traditional server honeypots, client honeypots are mainly classified by their interaction level, high or low, which denotes the level of functional interaction the server can utilize on the client honeypot. In addition, there are also newer hybrid approaches which use both high and low interaction detection techniques.
High interaction
High interaction client honeypots are fully functional systems comparable to real systems with real clients. As such,
no functional limitations (besides the containment strategy) exist on high interaction client honeypots. Attacks on
high interaction client honeypots are detected via inspection of the state of the system after a server has been
interacted with. The detection of changes to the client honeypot may indicate the occurrence of an attack that has exploited a vulnerability of the client. An example of such a change is the presence of a new or altered file.
High interaction client honeypots are very effective at detecting unknown attacks on clients. However, the tradeoff
for this accuracy is a performance hit from the amount of system state that has to be monitored to make an attack
assessment. Also, this detection mechanism is prone to various forms of evasion by the exploit. For example, an
attack could delay the exploit from immediately triggering (time bombs) or could trigger upon a particular set of
conditions or actions (logic bombs). Since no immediate, detectable state change occurred, the client honeypot is
likely to incorrectly classify the server as safe even though it did successfully perform its attack on the client.
Finally, if the client honeypots run in virtual machines, an exploit may try to detect the presence of the virtual environment and refrain from triggering, or behave differently.
Capture-HPC
Capture is a high interaction client honeypot developed by researchers at Victoria University of Wellington, NZ. Capture differs from existing client honeypots in various ways. First, it is designed to be fast: state changes are detected using an event-based model, allowing Capture to react to them as they occur. Second, Capture is designed to be scalable: a central Capture server can control numerous clients across a network. Third, Capture is designed as a framework that allows different clients to be utilized. The initial version of Capture supports Internet
Explorer, but the current version supports all major browsers (Internet Explorer, Firefox, Opera, Safari) as well as
other HTTP aware client applications, such as office applications and media players.
HoneyClient
HoneyClient is a web browser based (IE/Firefox) high interaction client honeypot designed by Kathy Wang in 2004 and subsequently developed at MITRE. It was the first open source client honeypot and is a mix of
Perl, C++, and Ruby. HoneyClient is state-based and detects attacks on Windows clients by monitoring files, process
events, and registry entries. It has integrated the Capture-HPC real-time integrity checker to perform this detection.
HoneyClient also contains a crawler, so it can be seeded with a list of initial URLs from which to start and can then
continue to traverse web sites in search of client-side malware.
HoneyMonkey
HoneyMonkey is a web browser based (IE) high interaction client honeypot implemented by Microsoft
in 2005. It is not available for download. HoneyMonkey is state based and detects attacks on clients by monitoring
files, registry, and processes. A unique characteristic of HoneyMonkey is its layered approach to interacting with
servers in order to identify zero-day exploits. HoneyMonkey initially crawls the web with a vulnerable configuration.
Once an attack has been identified, the server is reexamined with a fully patched configuration. If the attack is still
detected, one can conclude that the attack utilizes an exploit for which no patch has been publicly released yet and
therefore is quite dangerous.
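The layered logic lends itself to a very small sketch (Python; visit_and_check() is a hypothetical placeholder for driving a browser inside a VM at the given patch level and detecting resulting state changes):

def visit_and_check(url, patch_level):
    """Return True if visiting `url` under `patch_level` triggers an exploit.
    Placeholder for the VM-based, state-change detector."""
    raise NotImplementedError

def classify(url):
    if not visit_and_check(url, patch_level="unpatched"):
        return "no exploit observed"
    # The attack worked against a vulnerable configuration; re-examine the
    # same server with a fully patched system.
    if visit_and_check(url, patch_level="fully-patched"):
        return "likely zero-day: exploit succeeds even when fully patched"
    return "known exploit: stopped by existing patches"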
SHELIA
SHELIA is a high interaction client honeypot developed by Joan Robert Rocaspana at Vrije Universiteit Amsterdam. It integrates with an email reader and processes each email it receives (URLs and attachments). Depending on the type of URL or attachment received, it opens a different client application (e.g. browser, office application, etc.). It monitors whether executable instructions are executed in the data area of memory (which would indicate that a buffer overflow exploit has been triggered). With such an approach, SHELIA is not only able to detect exploits, but can actually prevent exploits from triggering.
UW Spycrawler
The Spycrawler, developed at the University of Washington, is yet another browser based (Mozilla) high interaction client honeypot, developed by Moshchuk et al. in 2005. This client honeypot is not available for download. The Spycrawler is state based and detects attacks on clients by monitoring files, processes, the registry, and browser crashes. Spycrawler's detection mechanism is event based. Further, it increases the passage of time of the virtual machine the Spycrawler is operating in to overcome (or rather reduce the impact of) time bombs.
Web Exploit Finder
WEF is an implementation of automatic drive-by-download detection in a virtualized environment, developed by Thomas Müller, Benjamin Mack and Mehmet Arziman, three students from the Hochschule der Medien (HdM), Stuttgart, during the summer term of 2006. WEF can be used as an active HoneyNet with a complete virtualization architecture underneath for rollbacks of compromised virtualized machines.
Low interaction
Low interaction client honeypots differ from high interaction client honeypots in that they do not utilize an entire real system, but rather use lightweight or simulated clients to interact with the server (in the browser world, they are similar to web crawlers). Responses from servers are examined directly to assess whether an attack has taken place. This could be done, for example, by examining the response for the presence of malicious strings.
Low interaction client honeypots are easier to deploy and operate than high interaction client honeypots and also
perform better. However, they are likely to have a lower detection rate since attacks have to be known to the client
honeypot in order for it to detect them; new attacks are likely to go unnoticed. They also suffer from the problem of
evasion by exploits, which may be exacerbated due to their simplicity, thus making it easier for an exploit to detect
the presence of the client honeypot.
HoneyC
HoneyC is a low interaction client honeypot developed at Victoria University of Wellington by Christian Seifert in 2006. HoneyC is a platform independent open source framework written in Ruby. It currently concentrates on driving a web browser simulator to interact with servers. Malicious servers are detected by statically examining the web server's response for malicious strings through the usage of Snort signatures.
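At its core, detection of this kind reduces to pattern matching over server responses. A minimal sketch in Python (the two signatures are invented for illustration and are not taken from any real Snort rule set):

import re
import urllib.request

# Illustrative signatures only; a real deployment would compile rules from
# a maintained source such as a Snort rule set.
SIGNATURES = [
    re.compile(rb"unescape\(\s*['\"]%u9090%u9090", re.I),   # NOP-sled shellcode
    re.compile(rb"<iframe[^>]+width\s*=\s*['\"]?0", re.I),  # hidden iframe
]

def is_malicious(url):
    """Fetch the server's response and statically scan it for known-bad
    patterns, as a low interaction client honeypot does."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    return any(sig.search(body) for sig in SIGNATURES)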
Monkey-Spider
Monkey-Spider is a low-interaction client honeypot initially developed at the University of Mannheim by Ali Ikinci. Monkey-Spider is a crawler-based client honeypot, initially utilizing anti-virus solutions to detect malware. It is claimed to be fast and expandable with other detection mechanisms. The work started as a diploma thesis and has been continued and released as free software under the GPL.
PhoneyC
PhoneyC is a low-interaction client honeypot developed by Jose Nazario. PhoneyC mimics legitimate web browsers and can understand dynamic content by de-obfuscating malicious content for detection. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques.
SpyBye
SpyBye is a low interaction client honeypot developed by Niels Provos. SpyBye allows a web master to determine whether a web site is malicious by applying a set of heuristics and scanning content against the ClamAV engine.
Hybrid Client Honeypots
Hybrid client honeypots combine both low and high interaction client honeypots to gain from the advantages of both
approaches.
HoneySpider
The HoneySpider Network is a hybrid client honeypot developed as a joint venture between NASK/CERT Polska, GOVCERT.NL [1] and SURFnet [2]. The project's goal is to develop a complete client honeypot system, based on existing client honeypot solutions and a crawler designed specially for the bulk processing of URLs.
Literature
• Jan Göbel, Andreas Dewald, Client-Honeypots: Exploring Malicious Websites, Oldenbourg Verlag 2010, ISBN 978-3-486-70526-3 [3], This book at Amazon [4]
Papers
• M. Egele, P. Wurzinger, C. Kruegel, and E. Kirda, Defending Browsers against Drive-by Downloads: Mitigating Heap-spraying Code Injection Attacks, Secure Systems Lab, 2009. Available from http://www.iseclab.org/papers/driveby.pdf; accessed on 15 May 2009.
• Feinstein, Ben. Caffeine Monkey: Automated Collection, Detection and Analysis of JavaScript. BlackHat USA.
Las Vegas, 2007.
• Ikinci, A., Holz, T., Freiling, F.C.: Monkey-Spider: Detecting Malicious Websites with Low-Interaction Honeyclients. Sicherheit 2008: 407-421 [5]
• Moshchuk, A., Bragin, T., Gribble, S.D. and Levy, H.M. A Crawler-based Study of Spyware on the Web. In 13th
Annual Network and Distributed System Security Symposium (NDSS). San Diego, 2006. The Internet Society.
• Provos, N., Holz, T. Virtual Honeypots: From Botnet Tracking to Intrusion Detection. Addison-Wesley. Boston,
2007.
• Provos, N., Mavrommatis, P., Abu Rajab, M., Monrose, F. All Your iFRAMEs Point to Us. Google Technical
Report. Google, Inc., 2008.
• Provos, N., McNamee, D., Mavrommatis, P., Wang, K., Modadugu, N. The Ghost In The Browser: Analysis of
Web-based Malware. Proceedings of the 2007 HotBots. Cambridge, April 2007. USENIX.
• Seifert, C., Endicott-Popovsky, B., Frincke, D., Komisarczuk, P., Muschevici, R. and Welch, I., Justifying the
Need for Forensically Ready Protocols: A Case Study of Identifying Malicious Web Servers Using Client
Honeypots. in 4th Annual IFIP WG 11.9 International Conference on Digital Forensics, Kyoto, 2008.
• Seifert, C. Know Your Enemy: Behind The Scenes Of Malicious Web Servers. The Honeynet Project. 2007.
• Seifert, C., Komisarczuk, P. and Welch, I. Application of divide-and-conquer algorithm paradigm to improve the detection speed of high interaction client honeypots. 23rd Annual ACM Symposium on Applied Computing. Ceara, Brazil, 2008.
• Seifert, C., Steenson, R., Holz, T., Yuan, B., Davis, M. A. Know Your Enemy: Malicious Web Servers. The Honeynet Project. 2007. (available at http://www.honeynet.org/papers/mws/index.html)
• Seifert, C., Welch, I. and Komisarczuk, P. HoneyC: The Low-Interaction Client Honeypot. Proceedings of the
2007 NZCSRCS. Waikato University, Hamilton, New Zealand. April 2007.
• C. Seifert, V. Delwadia, P. Komisarczuk, D. Stirling, and I. Welch, Measurement Study on Malicious Web Servers in the .nz Domain, in 14th Australasian Conference on Information Security and Privacy (ACISP), Brisbane, 2009.
• C. Seifert, P. Komisarczuk, and I. Welch, True Positive Cost Curve: A Cost-Based Evaluation Method for
High-Interaction Client Honeypots, in SECURWARE, Athens, 2009.
• C. Seifert, P. Komisarczuk, and I. Welch, Identification of Malicious Web Pages with Static Heuristics, in Australasian Telecommunication Networks and Applications Conference, Adelaide, 2008.
• Stuurman, Thijs, Verduin, Alex. Honeyclients - Low interaction detection method. Technical Report. University
of Amsterdam. February 2008.
• Wang, Y.-M., Beck, D., Jiang, X., Roussev, R., Verbowski, C., Chen, S. and King, S. Automated Web Patrol with
Strider HoneyMonkeys: Finding Web Sites That Exploit Browser Vulnerabilities. In 13th Annual Network and
Distributed System Security Symposium (NDSS). San Diego, 2006. The Internet Society.
• Zhuge, Jianwei, Holz, Thorsten, Guo, Jinpeng, Han, Xinhui, Zou, Wei. Studying Malicious Websites and the
Underground Economy on the Chinese Web. Proceedings of the 2008 Workshop on the Economics of Information
Security. Hanover, June 2008.
Presentations
• Presentation by Websense on their Honeyclient infrastructure and the next generation of Honeyclients they are currently working on; April 2008 at RSA-2008 [6]
• The Honeynet Project [7]
• Virtuelle Leimrouten (in German) [8]
• Video of Michael Davis' Client Honeypot Presentation at HITB 2006 [9]
• Video of Kathy Wang's Presentation of HoneyClient at Recon 2005 [10]
• Video of Wolfgarten's presentation at CCC conference [11]
Sites
• https://projects.honeynet.org/capture-hpc
• http://handlers.dshield.org/rdanford/pub/2nd_generation_honeyclients.ppt
• https://projects.honeynet.org/honeyc
• http://www.honeyclient.org/trac
• http://www.honeyspider.net
• http://monkeyspider.sourceforge.net/
• http://code.google.com/p/phoneyc/
• http://www.cs.vu.nl/~herbertb/misc/shelia/
• http://www.spybye.org/
• http://www.xnos.org/security/overview.html
• http://code.mwcollect.org/
• http://nz-honeynet.org
References
[1] http://www.govcert.nl/
[2] http://www.surfnet.nl/
[3] http://www.oldenbourg-verlag.de/wissenschaftsverlag/client-honeypots/9783486705263
[4] http://www.amazon.de/Client-Honeypots-Exploring-Jan-Gerrit-G%C3%B6bel/dp/3486705261/
[5] http://pi1.informatik.uni-mannheim.de/filepool/publications/monkey-spider.pdf
[6] http://securitylabs.websense.com/images/alerts/rsa_2008_honeyclient_preso.mov
[7] http://www.honeynet.org
[8] http://www.devtarget.org/downloads/kes-2-2006-wolfgarten-honeypots.pdf
[9] http://www.secguru.com/link/client_honeypots_its_not_only_the_network_video
[10] http://www.archive.org/details/Recon2005_Kathy_Wang
[11] http://dewy.fem.tu-ilmenau.de/CCC/22C3/video/mp4/22C3-871-en-honeymonkeys.mp4
Cloud computing security
Cloud computing security (sometimes referred to simply as "cloud security") is an evolving sub-domain of computer security, network security, and, more broadly, information security. It refers to a broad set of policies, technologies, and controls deployed to protect data, applications, and the associated infrastructure of cloud computing. Cloud security is not to be confused with security software offerings that are "cloud-based" (a.k.a. security-as-a-service). Many commercial software vendors have offerings such as cloud-based anti-virus or vulnerability management. [1]
Security issues associated with the cloud
There are a number of security issues/concerns [2] associated with cloud computing, but these issues fall into two broad categories: security issues faced by cloud providers (organizations providing Software-, Platform-, or Infrastructure-as-a-Service via the cloud) and security issues faced by their customers. In most cases, the provider must ensure that their infrastructure is secure and that their clients' data and applications are protected, while the customer must ensure that the provider has taken the proper security measures to protect their information.
Dimensions of cloud security
While cloud security concerns can be grouped into any number of dimensions (Gartner names seven, [3] while the Cloud Security Alliance identifies thirteen areas of concern [4]), these dimensions have been aggregated into three general areas [5]: Security and Privacy, Compliance, and Legal or Contractual Issues.
Security and privacy
In order to ensure that data is secure (that it cannot be accessed by unauthorized users or simply lost) and that data privacy is maintained, cloud providers attend to the following areas: [5]
Data protection
To be considered protected, data from one customer must be properly segregated from that of another; it must be
stored securely when “at rest” and it must be able to move securely from one location to another. Cloud providers
have systems in place to prevent data leaks or access by third parties. Proper separation of duties should ensure that
auditing and/or monitoring cannot be defeated, even by privileged users at the cloud provider.
Identity management
Every enterprise will have its own identity management system to control access to information and computing
resources. Cloud providers either integrate the customer’s identity management system into their own infrastructure,
using federation or SSO technology, or provide an identity management solution of their own.
Physical and personnel security
Providers ensure that physical machines are adequately secure, and that access to these machines, as well as to all relevant customer data, is not only restricted but also documented.
Availability
Cloud providers assure customers that they will have regular and predictable access to their data and applications.
Application security
Cloud providers ensure that applications available as a service via the cloud are secure by implementing testing and acceptance procedures for outsourced or packaged application code. Application security also requires that measures such as application-level firewalls be in place in the production environment.
Privacy
Finally, providers ensure that all critical data (credit card numbers, for example) are masked and that only authorized
users have access to data in its entirety. Moreover, digital identities and credentials must be protected as should any
data that the provider collects or produces about customer activity in the cloud.
Compliance
Numerous regulations pertain to the storage and use of data, including Payment Card Industry Data Security
Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes-Oxley Act,
among others. Many of these regulations require regular reporting and audit trails. Cloud providers must enable their
customers to comply appropriately with these regulations.
Business continuity and data recovery
Cloud providers have business continuity and data recovery plans in place to ensure that service can be maintained in
case of a disaster or an emergency and that any data lost will be recovered. These plans are shared with and reviewed
by their customers.
Logs and audit trails
In addition to producing logs and audit trails, cloud providers work with their customers to ensure that these logs and
audit trails are properly secured, maintained for as long as the customer requires, and are accessible for the purposes
of forensic investigation (e.g., eDiscovery).
Unique compliance requirements
In addition to the requirements to which customers are subject, the data centers maintained by cloud providers may
also be subject to compliance requirements.
Legal and contractual issues
Aside from the security and compliance issues enumerated above, cloud providers and their customers will negotiate terms around liability (stipulating how incidents involving data loss or compromise will be resolved, for example), intellectual property, and end-of-service (when data and applications are ultimately returned to the customer).
Public records
Legal issues may also include records-keeping requirements in the public sector, where many agencies are required
by law to retain and make available electronic records in a specific fashion. This may be determined by legislation,
or law may require agencies to conform to the rules and practices set by a records-keeping agency. Public agencies
using cloud computing and storage must take these concerns into account.
References
[1] "Cloud-based Security Software Directory" (https:// mosaicsecurity.com/ categories/ 7-securityasaservice). Mosaic Security Research. .
[2] ""Swamp Computing" a.k.a. Cloud Computing" (http:// security. sys-con. com/node/ 1231725). Web Security Journal. 2009-12-28. .
Retrieved 2010-01-25.
[3] "Gartner: Seven cloud-computing security risks" (http:/ / www.infoworld.com/ d/ security-central/
gartner-seven-cloud-computing-security-risks-853). InfoWorld. 2008-07-02. . Retrieved 2010-01-25.
[4] "Security Guidance for Critical Areas of Focus in Cloud Computing" (https:// cloudsecurityalliance.org/ research/projects/
security-guidance-for-critical-areas-of-focus-in-cloud-computing/). Cloud Security Alliance. 2011. . Retrieved 2011-05-04.
[5] "Cloud Security Front and Center" (http:// blogs. forrester.com/ srm/ 2009/ 11/ cloud-security-front-and-center.html). Forrester Research.
2009-11-18. . Retrieved 2010-01-25.
External links
• Cloud Security Alliance (http://www.cloudsecurityalliance.org/)
• Jericho Forum (http://www.opengroup.org/jericho/)
• Insecure State of the Union - US Government Cloud Compromised (http://www.theaeonsolution.com/security/?p=199)
• Cloud Computing Security (http://www.oyyas.com/articles/cloud-computing-security)
• Cloud Computing: An auditor's perspective, ISACA Journal, 2009 Volume 6 (http://www.isaca.org/Journal/Past-Issues/2009/Volume-6/Pages/Cloud-Computing-An-Auditor-s-Perspective1.aspx)
Collaboration-oriented architecture
Collaboration Oriented Architecture is a concept used to describe the design of a computer system that is designed to collaborate with, or use services from, systems that are outside of your locus of control. Collaboration Oriented Architecture will often utilize Service Oriented Architecture to deliver the technical framework. Collaboration Oriented Architecture is the ability to collaborate between systems that are based on the Jericho Forum principles or "Commandments". [1]
Bill Gates and Craig Mundie (Microsoft) [2] [3] clearly articulated the need for people to work outside of their organizations in a secure and collaborative manner in their opening keynote to the RSA Security Conference in February 2007.
Successful implementation of a Collaboration Oriented Architecture implies the ability to successfully inter-work
securely over the Internet and will typically mean the resolution of the problems that come with de-perimeterisation.
Origin of the term
The term Collaboration Oriented Architectures [4] was defined and developed by the Jericho Forum at a meeting held at HSBC [5] on 6 July 2007.
Definition of a Collaboration Oriented Architecture
The key elements that qualify a security architecture as a Collaboration Oriented Architecture are as follows:
• Protocol: Systems use appropriately secure protocols to communicate.
• Authentication: The protocol is authenticated with user and/or system credentials.
• Federation: User and/or system credentials are accepted and validated by systems that are not under your (locus of) control.
• Network Agnostic: The design does not rely on a secure network, so it will operate securely anywhere from an intranet to the raw Internet.
• Trust: The collaborating systems are able to confirm, to a specified degree of confidence, the trustworthiness of the components in a transaction chain.
• Risk: The collaborating systems can make a risk assessment of any transaction based on the communicated levels of required trust, i.e. the required degree of identity, confidentiality, integrity, and availability.
Authentication in a Collaboration Oriented Architecture
Working in a collaborative multi-sourced environment implies the need for authentication, authorization and accountability that must interoperate/exchange outside of your locus/area of control. [6]
• People/systems must be able to manage permissions of resources and rights of users they don't control
• There must be a capability of trusting an organization, which can authenticate individuals or groups, thus eliminating the need to create separate identities
• In principle, only one instance of a person/system/identity may exist, but privacy necessitates the support for multiple instances, or one instance with multiple facets, often referred to as personas
• Systems must be able to pass on security credentials/assertions
• Multiple loci (areas) of control must be supported
References
[1] Jericho Forum, "Commandments", Jericho Forum Commandments (http://www.jerichoforum.org/commandments_v1.2.pdf), May 2007.
[2] Bill Gates, Craig Mundie: RSA Conference 2007. Transcript of keynote discussion between Microsoft Chairman Bill Gates and Chief Research & Strategy (http://www.microsoft.com/Presspass/exec/billg/speeches/2007/02-06RSA.mspx)
[3] Bill Gates Webcast, Bill Gates and Craig Mundie Keynote at RSA Conference 2007: Advancing Trust in Today's Connected World (http://www.microsoft.com/winme/0702/29377/RSA_mbr.asx)
[4] https://www.opengroup.org/jericho/COA_v2.0.pdf
[5] http://www.hsbc.com
[6] Jericho Forum Commandment #8 (http://www.jerichoforum.org/commandments_v1.2.pdf)
External links
• http://www.jerichoforum.org
• Open SOA Collaboration (http://www.osoa.org)
• Service Component Architecture Specifications (http://www.osoa.org/display/Main/Service+Component+Architecture+Specifications)
• A collaboration-oriented software architecture modeling system (http://doi.ieeecomputersociety.org/10.1109/ECBS.2006.5)
• Enterprise collaboration with Service Oriented Architecture (SOA) (http://www.ibm.com/systems/z/soa/)
• Collaboration Services in a Services Oriented Architecture (http://whitepapers.silicon.com/0,39024759,60288728p-39000639q,00.htm)
Committee on National Security Systems
Committee on National Security Systems (CNSS)
Agency overview
Formed: 16 October 2001
Preceding agency: National Security Telecommunications and Information Systems Security Committee (NSTISSC)
Jurisdiction: United States
Headquarters: Fort Meade, Maryland
Agency executive: John G. Grimes, Chair
Parent agency: Intergovernmental, chaired by DoD
The Committee on National Security Systems (CNSS) is a United States intergovernmental organization that sets policy for the security of US national security systems.
Charter, mission, and leadership
The National Security Telecommunications and Information Systems Security Committee (NSTISSC) was
established under National Security Directive 42, "National Policy for the Security of National Security
Telecommunications and Information Systems", dated 5 July 1990. On October 16, 2001, President George W. Bush
signed Executive Order 13231, the Critical Infrastructure Protection in the Information Age, re-designating the
National Security Telecommunications and Information Systems Security Committee (NSTISSC) as the
Committee on National Security Systems. The CNSS holds discussions of policy issues, sets national policy,
directions, operational procedures, and guidance for the information systems operated by the U.S. Government, its
contractors or agents that either contain classified information, involve intelligence activities, involve cryptographic
activities related to national security, involve command and control of military forces, involve equipment that is an
integral part of a weapon or weapons system(s), or are critical to the direct fulfillment of military or intelligence
missions.
The Department of Defense chairs the committee. Membership consists of representatives from 21 U.S. Government departments and agencies with voting privileges, including the CIA, DIA, DOD, DOJ, FBI, NSA, the National Security Council, and all United States military services. Non-voting members include the DISA, NGA, NIST, and the NRO. The operating agency for CNSS appears to be the National Security Agency, which serves as the primary contact for public inquiries.
Education certification
The CNSS defines several standards, which include standards on training in IT security. Current certifications
include:
• NSTISSI-4011 National Training Standard for Information Systems Security (INFOSEC) Professionals
• CNSSI-4012 National Information Assurance Training Standard for Senior Systems Managers
• CNSSI-4013 National Information Assurance Training Standard For System Administrators
• CNSSI-4014 Information Assurance Training Standard for Information Systems Security Officers
• NSTISSI-4015 National Training Standard for Systems Certifiers
• CNSSI-4016 National Information Assurance Training Standard For Risk Analysts
External links
• CNSS official site [1]
References
[1] http://www.cnss.gov/
Computer Law and Security Report
Computer Law & Security Review
Abbreviated title
(ISO)
CLSR
Discipline Intellectual Property, Information Technology, Telecommunications law, Data protection, software protection, IT contracts,
Internet law, Electronic commerce, Computer Law
Language English
Publication details
Publisher
Elsevier
[1]
(UK)
Publication history 1985 to present
Indexing
ISSN
0267-3649
[2]
Links
• Journal homepage
[3]
The Computer Law & Security Review is a journal accessible to a wide range of professional legal and IT
practitioners, businesses, academics, researchers, libraries and organisations in both the public and private sectors,
the Computer Law and Security Review regularly covers:
• CLSR Briefing with special emphasis on UK/US developments
• European Union update
• National news from 10 European jurisdictions
• Pacific rim news column
• Refereed practitioner and academic papers on topics such as Web 2.0, IT security, Identity management, ID cards,
RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property,
software law, e-commerce, outsourcing, data protection and freedom of information and many other topics.
The Journal's Correspondent Panel includes more than 40 specialists in IT law and security - between them offering
expert analysis on all aspects of this fast moving field of law - spotting trends, highlighting practical concerns,
monitoring new problems, and outlining key developments.
Each issue contains well-researched reliable and thought provoking articles, case law analysis and current news -
ensuring that you do not miss out on the emerging issues worldwide and that you understand the problems of
managing the legal and security requirements of information and communications technology.
Special Features
• High quality peer reviewed papers from internationally renowned practitioner and academic experts
• Latest developments reported in situ by more than 20 leading law firms from around the world
• Highly experienced and respected editor and correspondents panel
• Online access to all 23 volumes of CLSR with embedded web links to primary sources
• Contact details of all authors
• A pool of expertise that can collectively identify the key topics that need to be examined.
External links
• Elsevier.com - Computer Law & Security Review [3]
• Computer Law & Security Review [4]
References
[1] http://www.elsevier.com
[2] http://www.worldcat.org/issn/0267-3649
[3] http://www.elsevier.com/wps/find/journaldescription.cws_home/422550/description#description
[4] http://www.sciencedirect.com/science/journal/02673649
Computer security compromised by hardware failure
Computer security compromised by hardware failure is a branch of computer security applied to hardware. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. [1] Such secret information could be retrieved in different ways. This article focuses on the retrieval of data through misused hardware or hardware failure. Hardware can be misused or exploited to obtain secret data, and this article collects the main types of attack that can lead to data theft.
Computer security can be compromised by devices, such as keyboards, monitors or printers (through electromagnetic or acoustic emanations, for example) or by components of the computer, such as the memory, the network card or the processor (through time or temperature analysis, for example).
Devices
Monitor
The monitor, the main output device of the computer, can be used to retrieve data. Even though monitors seem harmless, they radiate or reflect data into their environment, letting attackers learn useful information about the content displayed on the monitor.
Electromagnetic emanations
As previously said, video display units radiate:
• narrowband harmonics of the digital clock signals;
• broadband harmonics of the various 'random' digital signals such as the video signal. [2]
Known as compromising emanations or TEMPEST radiation (after a code word for a U.S. government programme aimed at attacking the problem), the electromagnetic broadcast of data has been a significant concern in sensitive computer applications. Eavesdroppers can reconstruct video screen content from radio frequency emanations. [3] Each (radiated) harmonic of the video signal shows a remarkable resemblance to a broadcast TV signal. It is therefore possible to reconstruct the picture displayed on the video display unit from the radiated emission by means of a normal television receiver. [2] If no preventive measures are taken, eavesdropping on a video display unit is possible at several hundred metres distance, using only a normal black-and-white TV receiver, a directional antenna and an antenna amplifier. It is even possible to pick up information from some types of video display units at a distance of over 1 kilometre. If more sophisticated receiving and decoding equipment is used, the maximum distance can be much greater. [4]
Compromising reflections
What is displayed by the monitor is also reflected by the environment. The time-varying diffuse reflections of the light emitted by a CRT monitor can be exploited to recover the original monitor image. [5] This is an eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors.
The technique exploits reflections of the screen's optical emanations in various objects that one commonly finds in close proximity to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. This attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed this attack to be conducted from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a close-by building. [6]
Many objects that may be found at a usual workplace can be exploited to retrieve information on a computer's display by an outsider. [7] Particularly good results were obtained from reflections in a user's eyeglasses or a tea pot located on the desk next to the screen. Reflections that stem from the eye of the user also provide good results. However, eyes are harder to spy on at a distance because they are fast-moving objects and require high exposure times. Using more expensive equipment with lower exposure times helps to remedy this problem. [8]
The reflections gathered from curved surfaces on close-by objects indeed pose a substantial threat to the confidentiality of data displayed on the screen. Fully invalidating this threat without at the same time hiding the screen from the legitimate user seems difficult without using curtains on the windows or similar forms of strong optical shielding. Most users, however, will not be aware of this risk and may not be willing to close the curtains on a nice day. [9] The reflection of an object, a computer display, in a curved mirror creates a virtual image that is located behind the reflecting surface. For a flat mirror this virtual image has the same size and is located behind the mirror at the same distance as the original object. For curved mirrors, however, the situation is more complex. [10]
Keyboard
Electromagnetic emanations
Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes. [11] Electromagnetic emanations have turned out to constitute a security threat to computer equipment. [9] The figure below presents how a keystroke is retrieved and what material is necessary.
Diagram presenting all material necessary to detect keystrokes
The approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Thanks to this method, four different kinds of compromising electromagnetic emanations have been detected, generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. The best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance of up to 20 meters, even through walls. [11] Because each keyboard has a specific fingerprint based on clock frequency inconsistencies, the source keyboard of a compromising emanation can be determined, even if multiple keyboards of the same model are used at the same time. [12]
The four different kinds of compromising electromagnetic emanations are described below.
The Falling Edge Transition Technique
When a key is pressed, released or held down, the keyboard sends a packet of information known as a scan code to the computer. [13] The protocol used to transmit these scan codes is a bidirectional serial communication, based on four wires: Vcc (5 volts), ground, data and clock. [13] Clock and data signals are identically generated. Hence, the compromising emanation detected is the combination of both signals. However, the edges of the data and the clock lines are not superposed. Thus, they can be easily separated to obtain independent signals. [14]
The Generalized Transition Technique
The Falling Edge Transition attack is limited to a partial recovery of the keystrokes. This is a significant limitation. [15] The GTT is an improved falling edge transition attack, which recovers almost all keystrokes. Indeed, between two traces, there is exactly one data rising edge. If attackers are able to detect this transition, they can fully recover the keystrokes. [15]
The Modulation Technique
Harmonics compromising electromagnetic emissions come from unintentional emanations such as radiation emitted by the clock, non-linear elements, crosstalk, ground pollution, etc. Theoretically determining the causes of these compromising radiations is a very complex task. [16] These harmonics correspond to a carrier of approximately 4 MHz, which is very likely the internal clock of the micro-controller inside the keyboard. These harmonics are correlated with both clock and data signals, which describe modulated signals (in amplitude and frequency) and the full state of both clock and data signals. This means that the scan code can be completely recovered from these harmonics. [16]
The Matrix Scan Technique
Keyboard manufacturers arrange the keys in a matrix. The keyboard controller, often an 8-bit processor, parses columns one-by-one and recovers the state of 8 keys at once. This matrix scan process can be described as 192 keys (some keys may not be used; for instance, modern keyboards use 104/105 keys) arranged in 24 columns and 8 rows. [17] These columns are continuously pulsed one-by-one for at least 3μs. Thus, these leads may act as an antenna and generate electromagnetic emanations. If an attacker is able to capture these emanations, he can easily recover the column of the pressed key. Even if this signal does not fully describe the pressed key, it still gives partial information on the transmitted scan code, i.e. the column number. [17]
Note that the matrix scan routine loops continuously. When no key is pressed, we still have a signal composed of multiple equidistant peaks. These emanations may be used to remotely detect the presence of powered computers. Concerning wireless keyboards, the wireless data burst transmission can be used as an electromagnetic trigger to detect exactly when a key is pressed, while the matrix scan emanations are used to determine the column it belongs to. [17]
Summary
Some techniques can only target certain keyboards. This table sums up which technique could be used to recover keystrokes for different kinds of keyboard.

Technique name                     Wired keyboard   Laptop keyboard   Wireless keyboard
Falling Edge Transition Technique  Yes              Yes               No
Generalized Transition Technique   Yes              Yes               No
Modulation Technique               Yes              Yes               No
Matrix Scan Technique              Yes              Yes               Yes
In their paper "Compromising Electromagnetic Emanations of Wired and Wireless Keyboards", Martin Vuagnoux and Sylvain Pasini tested 12 different keyboard models, with PS/2 or USB connectors or wireless communication, in different setups: a semi-anechoic chamber, a small office, an adjacent office and a flat in a building. The table below presents their results.

Type of keyboard  Number of keyboards tested  FETT  GTT  MT   MST
PS/2              7                           7/7   6/7  4/7  5/7
USB               2                           0/2   0/2  0/2  2/2
Laptop            2                           1/2   1/2  0/2  2/2
Wireless          1                           0/1   0/1  0/1  1/1
Acoustic emanations
Attacks against emanations caused by human typing have attracted interest in recent years. In particular, these works showed that keyboard acoustic emanations do leak information that can be exploited to reconstruct the typed text. [18] PC keyboards and notebook keyboards are vulnerable to attacks based on differentiating the sound emanated by different keys. [19] One attack takes as input an audio signal containing a recording of a single word typed by a single person on a keyboard, and a dictionary of words. It is assumed that the typed word is present in the dictionary. The aim of the attack is to reconstruct the original word from the signal. [20] Another attack takes as input a 10-minute sound recording of a user typing English text on a keyboard and recovers up to 96% of the typed characters. [21] This attack is inexpensive, because the only other hardware required is a parabolic microphone, and non-invasive, because it does not require physical intrusion into the system. The attack employs a neural network to recognize the key being pressed, [19] and combines signal processing with efficient data structures and algorithms to successfully reconstruct single words of 7-13 characters from a recording of the clicks made when typing them on a keyboard. [18] The sound of clicks can differ slightly from key to key, because the keys are positioned at different positions on the keyboard plate, although the clicks of different keys sound similar to the human ear. [19]
On average, there were only 0.5 incorrect recognitions per 20 clicks, which shows the exposure of keyboards to eavesdropping using this attack. [22] The attack is very efficient, taking under 20 seconds per word on a standard PC. It achieves a 90% or better success rate of finding the correct word for words of 10 or more characters, and a success rate of 73% over all the words tested. [18] In practice, a human attacker can typically determine whether text is random. An attacker can also identify occasions when the user types user names and passwords. [23] Short audio signals containing a single word seven or more characters long were considered. This means that the signal is only a few seconds long. Such short words are often chosen as passwords. [18] The dominant factors affecting the attack's success are the word length and, more importantly, the number of repeated characters within the word. [18]
This is a procedure that makes it possible to efficiently uncover a word from audio recordings of keyboard click sounds. [24] More recently, extracting information out of another type of emanation was demonstrated: acoustic emanations from mechanical devices such as dot-matrix printers. [18]
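A much-simplified sketch of the classification step is shown below (Python with NumPy). The plain FFT features and the nearest-neighbour matcher stand in for the neural networks, Hidden Markov Models and language models of the published attacks; all names are illustrative.

import numpy as np

def click_features(click, n_fft=1024):
    """Normalized FFT magnitude spectrum of one keystroke sound; `click` is
    a 1-D array of audio samples around a detected key press."""
    spectrum = np.abs(np.fft.rfft(click, n=n_fft))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def train(labelled_clicks):
    """`labelled_clicks`: iterable of (key_label, samples) pairs recorded
    during a training phase on the same or a similar keyboard."""
    return [(label, click_features(samples)) for label, samples in labelled_clicks]

def classify(model, click):
    """Assign an unknown click to the training key with the most similar
    spectrum (cosine similarity, since the features are unit-normalized)."""
    feats = click_features(click)
    return max(model, key=lambda kv: float(np.dot(kv[1], feats)))[0]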
Video Eavesdropping on Keyboard
While extracting private information by watching somebody typing on a keyboard might seem to be an easy task, it becomes extremely challenging if it has to be automated. However, an automated tool is needed in the case of long-lasting surveillance procedures or long user activity, as a human being is able to reconstruct only a few characters per minute. The paper "ClearShot: Eavesdropping on Keyboard Input from Video" presents a novel approach to automatically recovering the text being typed on a keyboard, based solely on a video of the user typing. [25]
Automatically recognizing the keys being pressed by a user is a hard problem that requires sophisticated motion analysis. Experiments show that, for a human, reconstructing a few sentences requires hours of slow-motion analysis of the video. [26] The attacker might install a surveillance device in the room of the victim, might take control of an existing camera by exploiting a vulnerability in the camera's control software, or might simply point a mobile phone with an integrated camera at the laptop's keyboard when the victim is working in a public space. [26]
Balzarotti's analysis is divided into two main phases (figure below). The first phase analyzes the video recorded by the camera using computer vision techniques. For each frame of the video, the computer vision analysis computes the set of keys that were likely pressed, the set of keys that were certainly not pressed, and the position of space characters. Because the results of this phase of the analysis are noisy, a second phase, called the text analysis, is required. The goal of this phase is to remove errors using both language and context-sensitive techniques. The result of this phase is the reconstructed text, where each word is represented by a list of possible candidates, ranked by likelihood. [26]
Diagram presenting steps to go through when detecting keystroke with video input
Printer
Acoustic emanations
With acoustic emanations, an attack that recovers what a dot-matrix printer processing English text is printing is possible. It is based on a recording of the sound the printer makes, if the microphone is close enough to it. This attack recovers up to 72% of printed words, and up to 95% if knowledge about the text is assumed, with a microphone at a distance of 10 cm from the printer. [5]
After an upfront training phase ("a" in the picture below), the attack ("b" in the picture below) is fully automated and uses a combination of machine learning, audio processing, and speech recognition techniques, including spectrum features, Hidden Markov Models and linear classification. [9] The fundamental reason why the reconstruction of the printed text works is that the emitted sound becomes louder if more needles strike the paper at a given time. [9] There is a correlation between the number of needles and the intensity of the acoustic emanation. [9]
A training phase was conducted in which words from a dictionary are printed and the characteristic sound features of these words are extracted and stored in a database. The trained characteristic features were then used to recognize the printed English text. [9] But this task is not trivial. Major challenges include:
1. Identifying and extracting sound features that suitably capture the acoustic emanation of dot-matrix printers;
2. Compensating for the blurred and overlapping features that are induced by the substantial decay time of the emanations;
3. Identifying and eliminating wrongly recognized words to increase the overall percentage of correctly identified words (recognition rate). [9]
Diagram presenting phases when retrieving data from a printer
Computer components
Network Interface Card
Timing attack
Timing attacks enable an attacker to extract secrets maintained in a security system by observing the time it takes the system to respond to various queries. [27]
SSH is designed to provide a secure channel between two hosts. Despite the encryption and authentication mechanisms it uses, SSH has weaknesses. In interactive mode, every individual keystroke that a user types is sent to the remote machine in a separate IP packet immediately after the key is pressed, which leaks the inter-keystroke timing information of users' typing. Below, the picture represents the command su processed through an SSH connection.
Network messages sent between the host and the client for the command 'su' - numbers are the size of the network packets in bytes
Very simple statistical techniques suffice to reveal sensitive information such as the length of users' passwords or even root passwords. By using advanced statistical techniques on timing information collected from the network, an eavesdropper can learn significant information about what users type in SSH sessions. [28] Because the time it takes the operating system to send out the packet after the keypress is in general negligible compared to the inter-keystroke timing, this also enables an eavesdropper to learn the precise inter-keystroke timings of users' typing from the arrival times of packets. [29]
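At its simplest, the eavesdropper only needs packet arrival times. A minimal sketch (Python; the timestamps are a made-up capture, and in the published attack the resulting latencies feed a Hidden Markov Model over key pairs):

def inter_keystroke_times(arrival_times):
    """Given the arrival timestamps (in seconds) of the client-to-server
    packets of an interactive SSH session, return the inter-keystroke
    latencies the traffic leaks."""
    return [b - a for a, b in zip(arrival_times, arrival_times[1:])]

# Packets captured while a user typed a short password: the interval count
# already reveals the password length, and the interval values constrain
# which key pairs were typed.
timestamps = [0.000, 0.142, 0.301, 0.389, 0.552]   # hypothetical capture
print(inter_keystroke_times(timestamps))           # 4 intervals -> 5 keystrokes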
Memory
Physical chemistry
Data remanence problems affect not only obvious areas such as RAM and non-volatile memory cells, but can also occur in other areas of the device through hot-carrier effects (which change the characteristics of the semiconductors in the device) and various other effects, which are examined alongside the more obvious memory-cell remanence problems. [30] It is possible to analyse and recover data from these cells and from semiconductor devices in general long after it should (in theory) have vanished. [31]
Electromigration, which physically moves atoms to new locations (physically altering the device itself), is another type of attack. [30] It involves the relocation of metal atoms due to high current densities, a phenomenon in which atoms are carried along by an "electron wind" in the opposite direction to the conventional current flow, producing voids at the negative electrode and hillocks and whiskers at the positive electrode. Void formation leads to a local increase in current density and Joule heating (the interaction of electrons and metal ions to produce thermal energy), producing further electromigration effects. When the external stress is removed, the disturbed system tends to relax back to its original equilibrium state, resulting in a backflow which heals some of the electromigration damage. In the long term, though, this can cause device failure; in less extreme cases it simply serves to alter a device's operating characteristics in noticeable ways.
For example, the excavation of voids leads to increased wiring resistance, and the growth of whiskers leads to contact formation and current leakage. [30] An example of a conductor which exhibits whisker growth due to electromigration is shown in the figure below:
Whisker growth due to electromigration
One example which exhibits void formation (in this case severe enough to have led to complete failure) is shown in this figure:
Void formation due to electromigration
Temperature
Contrary to popular assumption, the DRAMs used in most modern computers retain their contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. [32]
Many products perform cryptographic and other security-related computations using secret keys or other variables that the equipment's operator must not be able to read out or alter. The usual solution is for the secret data to be kept in volatile memory inside a tamper-sensing enclosure. Security processors typically store secret key material in static RAM, from which power is removed if the device is tampered with. At temperatures below −20°C, the contents of SRAM can be 'frozen'. It is interesting to know the period of time for which a static RAM device will retain data once the power has been removed. Low temperatures can increase the data retention time of SRAM to many seconds or even minutes. [33]
Read/write exploits via FireWire
Maximillian Dornseif presented a technique in a set of slides that let him take control of an Apple computer via an iPod. The attack needed a first generic phase in which the iPod software was modified so that it behaved as master on the FireWire bus. The iPod then had full read/write access to the Apple computer whenever it was plugged in. [34] FireWire is used by audio devices, printers, scanners, cameras, GPS devices, and so on. Generally, a device connected by FireWire has full access (read/write). Indeed, the OHCI standard (the FireWire standard) reads:
"Physical requests, including physical read, physical write and lock requests to some CSR registers (section 5.5), are handled directly by the Host Controller without assistance by system software."
—OHCI Standard
So, any device connected by FireWire can read and write data in the computer's memory. For example, a device can:
• grab the screen contents;
• simply search the memory for strings such as logins or passwords (see the sketch after this list);
• scan for possible key material;
• search for cryptographic keys stored in RAM;
• parse the whole physical memory to understand the logical memory layout;
or
• mess up the memory;
• change screen content;
• change the UID/GID of a certain process;
• inject code into a process;
• inject an additional process.
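As an illustration of the string-searching capability mentioned in the list above, the sketch below (Python; memory.dump is a hypothetical file holding physical memory already acquired over the bus) scans raw memory for credential-like strings:

import re

DUMP_PATH = "memory.dump"  # hypothetical memory image acquired via FireWire DMA

# Credential-like markers followed by a short run of printable bytes.
PATTERN = re.compile(
    rb"(login|user(name)?|pass(word)?)[=:\s]{1,4}([\x20-\x7e]{1,32})", re.I)

with open(DUMP_PATH, "rb") as f:
    dump = f.read()

for match in PATTERN.finditer(dump):
    print(hex(match.start()), match.group(0).decode("ascii", "replace"))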
Processor
Cache attack
To increase computational power, processors are generally equipped with a cache memory, which decreases the memory access latency. Below, the figure shows the hierarchy between the processor and the memory. First the processor looks for the data in the cache L1, then L2, then in the memory.
Processor cache hierarchy
When the data is not where the processor looks for it, this is called a cache miss. Below, the pictures show how the processor fetches data when there are two cache levels.
Data A is in the L1-Cache
Data A is in the L2-Cache
Data A is in the memory
Unfortunately, caches contain only a small portion of the application data and can introduce additional latency to the memory transaction in the case of a miss. This also involves additional power consumption, due to the activation of memory devices further down in the memory hierarchy. The miss penalty has already been used to attack symmetric encryption algorithms, like DES. [35] The basic idea proposed in this paper is to force a cache miss while the processor is executing the AES encryption algorithm on a known plain text. [36] The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. [37]
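The logic of a cache attack can be illustrated with a toy prime-and-probe simulation (Python; the direct-mapped cache model and the victim are deliberately simplistic and invented for illustration). The attacker fills every cache set with its own data, lets the victim run, then re-touches its own data: the set that now misses is the set the victim used, which leaks the secret-dependent index.

import random

NUM_SETS = 16  # toy direct-mapped cache: one line per set

class ToyCache:
    def __init__(self):
        self.lines = [None] * NUM_SETS
    def access(self, addr):
        s = addr % NUM_SETS           # set index derived from the address
        hit = self.lines[s] == addr
        self.lines[s] = addr          # fill the line on a miss
        return hit

def victim(cache, secret):
    # Models a table lookup whose index depends on a secret, as with AES
    # S-box accesses: the touched cache set leaks the index.
    cache.access(0x1000 + secret)

cache = ToyCache()
secret = random.randrange(NUM_SETS)

# Prime: fill every set with attacker-controlled addresses.
attacker_addrs = [0x2000 + s for s in range(NUM_SETS)]
for a in attacker_addrs:
    cache.access(a)

victim(cache, secret)

# Probe: a miss on one of the attacker's own lines exposes the victim's set.
for a in attacker_addrs:
    if not cache.access(a):
        print("victim touched set", a % NUM_SETS, "- actual secret:", secret)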
Timing attack
By carefully measuring the amount of time required to perform private key operations, attackers may be able to find fixed Diffie-Hellman exponents, factor RSA keys, and break other cryptosystems. Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext. [38] The attack can be treated as a signal detection problem. The signal consists of the timing variation due to the target exponent bit, and the noise results from measurement inaccuracies and timing variations due to unknown exponent bits. The properties of the signal and noise determine the number of timing measurements required for the attack. Timing attacks can potentially be used against other cryptosystems, including symmetric functions. [39]
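The signal detection view can be made concrete with a toy model (Python; the linear timing model and the per-bit cost function are invented for illustration and do not reproduce Kocher's exact procedure). Each set bit of the secret adds a message-dependent, attacker-predictable amount of time, so correlating measured total times against the predicted per-bit cost recovers each bit.

import random

BITS, MESSAGES, NOISE = 16, 2000, 5.0

def bit_cost(msg, i):
    # Message-dependent time (arbitrary units) spent on bit i when it is 1,
    # a stand-in for a data-dependent modular multiplication. Deterministic,
    # so the attacker can predict it for any message.
    return random.Random(msg * 131071 + i).randrange(10)

rng = random.Random(42)
secret = [rng.randint(0, 1) for _ in range(BITS)]
msgs = [rng.getrandbits(32) for _ in range(MESSAGES)]
# Observed total running time per message, with measurement noise.
times = [sum(b * bit_cost(m, i) for i, b in enumerate(secret))
         + rng.gauss(0, NOISE) for m in msgs]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# A set bit makes the total time correlate with that bit's predicted cost.
guess = [1 if corr([bit_cost(m, i) for m in msgs], times) > 0.1 else 0
         for i in range(BITS)]
print("recovered" if guess == secret else "failed", guess)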
Privilege escalation
A simple and generic processor backdoor can be used by attackers as a means of privilege escalation to obtain privileges equivalent to those of any given running operating system. [40] Also, a non-privileged process of one of the non-privileged guest domains running on top of a virtual machine monitor can obtain privileges equivalent to those of the virtual machine monitor. [40]
Loïc Duflot studied Intel processors in the paper "CPU bugs, CPU backdoors and consequences on security"; he explains that the processor defines four different privilege rings numbered from 0 (most privileged) to 3 (least privileged). Kernel code usually runs in ring 0, whereas user-space code generally runs in ring 3. The use of some security-critical assembly language instructions is restricted to ring 0 code. In order to escalate privilege through the backdoor, the attacker must: [41]
1. activate the backdoor by placing the CPU in the desired state;
2. inject code and run it in ring 0;
3. get back to ring 3 in order to return the system to a stable state. Indeed, when code is running in ring 0, system calls do not work: leaving the system in ring 0 and running a random system call (exit() typically) is likely to crash the system.
The backdoors Loïc Duflot presents are simple, as they only modify the behavior of three assembly language instructions and have very simple and specific activation conditions, so that they are very unlikely to be accidentally activated.
References
[1] Computer security
[2] Eck, 1985, p.2
[3] Kuhn,1998, p.1
[4] Eck, 1985, p.3
[5] Backes, 2010, p.4
[6] Backes, 2008, p.1
[7] Backes, 2008, p.4
[8] Backes, 2008, p.11
[9] Backes, 2008, p.2
[10] Backes, 2008, p.3
[11] Vuagnoux, 2009, p.1
[12] Vuagnoux, 2009, p.2
[13] Vuagnoux, 2009, p.5
[14] Vuagnoux, 2009, p.6
[15] Vuagnoux, 2009, p.7
[16] Vuagnoux, 2009, p.8
[17] Vuagnoux, 2009, p.9
[18] Berger, 2006, p.1
[19] Asonov, 2004, p.1
[20] Berger, 2006, p.2
[21] Zhuang, 2005, p.1
[22] Asonov, 2004, p.4
[23] Zhuang, 2005, p.4
[24] Berger, 2006, p.8
[25] Balzarotti, 2008, p.1
[26] Balzarotti, 2008, p.2
[27] Brumley, 2003, p.1
[28] Song, 2001, p.1
[29] Song, 2001, p.2
[30] Gutmann, 2001, p.1
[31] Gutmann, 2001, p.4
[32] Halderman, 2008, p.1
[33] Skorobogatov, 2002, p.3
[34] Dornseif, 2004
[35] Bertoni, 2005, p.1
[36] Bertoni, 2005, p.3
[37] Shamir, 2005, p.1
[38] Kocher, 1996, p.1
[39] Kocher, 1996, p.9
[40] Duflot, 2008, p.1
[41] Duflot, 2008, p.5
Bibliography
Acoustic
• Asonov, D.; Agrawal, R. (2004), "Keyboard acoustic emanations" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.8231&rep=rep1&type=pdf), Proceedings 2004 IEEE Symposium on Security and Privacy: 3–11, doi:10.1109/SECPRI.2004.1301311, ISBN 0-7695-2136-3, ISSN 1081-6011, retrieved 1 June 2004
• Zhuang, Li; Zhou, Feng; Tygar, J.D. (2005), "Keyboard acoustic emanations revisited" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.5791&rep=rep1&type=pdf), Proceedings of the 12th ACM Conference on Computer and Communications Security (Alexandria, Virginia, USA: ACM New York, NY, USA) 13 (1): 373–382, doi:10.1145/1609956.1609959, ISBN 1-59593-226-7, ISSN 1094-9224
• Berger, Yigael; Wool, Avishai; Yeredor, Arie (2006), "Dictionary attacks using keyboard acoustic emanations" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.8028&rep=rep1&type=pdf), Proceedings of the 13th ACM conference on Computer and communications security (Alexandria, Virginia, USA: ACM New York, NY, USA): 245–254, doi:10.1145/1180405.1180436, ISBN 1-59593-518-5
• Backes, Michael; Dürmuth, Markus; Gerling, Sebastian; Pinkal, Manfred; Sporleder, Caroline (2010), "Acoustic Side-Channel Attacks on Printers" (http://www.usenix.org/events/sec10/tech/full_papers/Backes.pdf), Proceedings of the 19th USENIX Security Symposium (Washington, DC), ISBN 978-1-931971-77-5
Cache attack
• Osvik, Dag Arne; Shamir, Adi; Tromer, Eran (2006), "Cache Attacks and Countermeasures: The Case of AES" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.1857&rep=rep1&type=pdf), Topics in Cryptology CT-RSA (San Jose, California, USA: Springer-Verlag Berlin, Heidelberg) 3860: 1–20, doi:10.1007/11605805_1, ISBN 3-540-31033-9, ISSN 0302-9743
• Page, Daniel (2005), "Partitioned cache architecture as a side-channel defence mechanism" (http://eprint.iacr.org/2005/280.pdf), Cryptology ePrint Archive
• Bertoni, Guido; Zaccaria, Vittorio; Breveglieri, Luca; Monchiero, Matteo; Palermo, Gianluca (2005), "AES Power Attack Based on Induced Cache Miss and Countermeasure" (http://home.dei.polimi.it/gpalermo/papers/ITCC05.pdf), International Conference on Information Technology: Coding and Computing (ITCC'05) (Washington, DC, USA: IEEE Computer Society, Los Alamitos, California, USA) 1: 586–591, doi:10.1109/ITCC.2005.62, ISBN 0-7695-2315-3
Chemical
• Gutmann, Peter (2001), "Data Remanence in Semiconductor Devices" (http://www.cypherpunks.to/~peter/usenix01.pdf), Proceedings of the 10th conference on USENIX Security Symposium SSYM'01 (USENIX Association Berkeley, California, USA) 10: 4
Electromagnetic
• Kuhn, Markus G.; Anderson, Ross J. (1998), "Soft Tempest: Hidden Data Transmission Using Electromagnetic Emanations" (http://www.springerlink.com/content/dm6kgf2p4mnrp0uv/), Lecture Notes in Computer Science: 124–142, doi:10.1007/3-540-49380-8_10, ISBN 3-540-65386-4
• Van Eck, Wim (1985), "Electromagnetic Radiation from Video Display Units: An Eavesdropping Risk?" (http://portal.acm.org/citation.cfm?id=7308), Computers & Security: 269–286, doi:10.1016/0167-4048(85)90046-X
• Kuhn, Markus G. (2002), "Optical Time-Domain Eavesdropping Risks of CRT Displays" (http://www.computer.org/portal/web/csdl/doi/10.1109/SECPRI.2002.1004358), Proceedings of the 2002 IEEE Symposium on Security and Privacy: 3–18, doi:10.1109/SECPRI.2002.1004358, ISBN 0-7695-1543-6
• Vuagnoux, Martin; Pasini, Sylvain (2009), "Compromising electromagnetic emanations of wired and wireless keyboards" (http://www.usenix.org/events/sec09/tech/full_papers/vuagnoux.pdf), Proceedings of the 18th conference on USENIX security symposium (SSYM'09): 1–16
• Backes, Michael; Dürmuth, Markus; Unruh, Dominique (2008), "Compromising Reflections-or-How to Read LCD Monitors around the Corner" (http://crypto.m2ci.org/unruh/publications/reflections.pdf), Proceedings of the IEEE Symposium on Security and Privacy (Oakland, California, USA): 158–169, ISBN 978-0-7695-3168-7
FireWire
• Dornseif, Maximillian (2004), "0wned by an iPod" (http://pi1.informatik.uni-mannheim.de/filepool/publications/13.pdf), PacSec
• Dornseif, Maximillian (2005), "FireWire: all your memory are belong to us" (http://md.hudora.de/presentations/firewire/2005-firewire-cansecwest.pdf), CanSecWest
Processor bug and backdoors
• Duflot, Loïc (2008), "CPU bugs, CPU backdoors and consequences on security" (http://www.springerlink.com/index/jp07870p24560678.pdf), ESORICS '08 Proceedings of the 13th European Symposium on Research in Computer Security: 580–599, doi:10.1007/978-3-540-88313-5_37, ISBN 978-3-540-88312-8
• Duflot, Loïc (2008), "Using CPU System Management Mode to Circumvent Operating System Security Functions" (http://www.ssi.gouv.fr/fr/sciences/fichiers/lti/cansecwest2006-duflot-paper.pdf), Proceedings of CanSecWest
Temperature
• Skorobogatov, Sergei (2002), Low temperature data remanence in static RAM (http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-536.pdf), Cambridge, UK: University of Cambridge Computer Laboratory, ISSN 1476-2986
• Halderman, J. Alex; Schoen, Seth D.; Heninger, Nadia; Clarkson, William; Paul, William; Calandrino, Joseph A.; Feldman, Ariel J.; Appelbaum, Jacob et al. (2008), "Lest We Remember: Cold Boot Attacks on Encryption Keys" (http://citp.princeton.edu/pub/coldboot.pdf), Proceedings of the USENIX Security Symposium (ACM New York, New York, USA) 52 (5): 45–60, doi:10.1145/1506409.1506429, ISBN 978-1-931971-60-7, ISSN 0001-0782
Timing attacks
• Song, Dawn Xiaodong; Wagner, David; Tian, Xuqing (2001), "Timing analysis of keystrokes and timing attacks on SSH" (http://www.usenix.org/events/sec01/full_papers/song/song.pdf), Proceedings of the 10th conference on USENIX Security Symposium (Washington, D.C., USA: USENIX Association Berkeley, California, USA) 10: 337–352
• Kocher, Paul C. (1996), "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.5024&rep=rep1&type=pdf), Proceedings of the 16th Annual International Cryptology Conference on Advances in Cryptology — CRYPTO '96, Lecture Notes in Computer Science (Santa Barbara, California, USA: Springer-Verlag, London, UK) 1109: 104–113, doi:10.1007/3-540-68697-5_9, ISBN 3-540-61512-1
• Brumley, David; Boneh, Dan (2003), "Remote timing attacks are practical" (http://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf), Proceedings of the 12th conference on USENIX Security Symposium SSYM'03 (Washington, DC, USA: USENIX Association Berkeley, California, USA) 12, doi:10.1016/j.comnet.2005.01.010
Other
• Balzarotti, D.; Cova, M.; Vigna, G. (2008), "ClearShot: Eavesdropping on Keyboard Input from Video" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4531152), Security and Privacy, 2008. SP 2008. IEEE Symposium on (Oakland, CA): 170–183, doi:10.1109/SP.2008.28, ISBN 978-0-7695-3168-7, ISSN 1081-6011
• Duflot, Loïc (2007) (in French), Contribution à la sécurité des systèmes d'exploitation et des microprocesseurs (http://www.ssi.gouv.fr/archive/fr/sciences/fichiers/lti/these-duflot.pdf)
Computer security incident management
In the fields of computer security and information technology, computer security incident management involves
the monitoring and detection of security events on a computer or computer network, and the execution of proper
responses to those events. Computer security incident management is a specialized form of incident management, the primary purpose of which is the development of a well-understood and predictable response to damaging events and computer intrusions.[1]
Incident management requires a process and a response team which follows this process. This definition of computer
security incident management follows the standards and definitions described in the National Incident Management
System (NIMS). The incident coordinator manages the response to an emergency security incident. In a natural disaster or other event requiring response from emergency services, the incident coordinator would act as a liaison to the emergency services incident manager.[2]
Overview
Computer security incident management is an administrative function of managing and protecting computer assets,
networks and information systems. These systems continue to become more critical to the personal and economic
welfare of our society. Organizations (public and private sector groups, associations and enterprises) must
understand their responsibilities to the public good and to the welfare of their memberships and stakeholders. This
responsibility extends to having a management program for “what to do, when things go wrong.” Incident
management is a program which defines and implements a process that an organization may adopt to promote its
own welfare and the security of the public.
Components of an incident
Events
An event is an observable change to the normal behavior of a system, environment, process, workflow or person
(components). There are three basic types of events:
1. Normal – a normal event does not affect critical components or require change controls prior to the implementation of a resolution. Normal events do not require the participation of senior personnel or management notification of the event.
2. Escalation – an escalated event affects critical production systems or requires implementation of a resolution that must follow a change control process. Escalated events require the participation of senior personnel and stakeholder notification of the event.
3. Emergency – an emergency is an event which may
1. impact the health or safety of human beings
2. breach primary controls of critical systems
3. materially affect component performance, or, because of impact to component systems, prevent activities which protect or may affect the health or safety of individuals
4. be deemed an emergency as a matter of policy or by declaration by the available incident coordinator
Computer security and information technology personnel must handle emergency events according to a well-defined computer security incident response plan.
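The three event types above map naturally onto a small classification routine. The following Python sketch simply encodes the definitions given in this section; the flag names are invented for illustration and are not part of any standard.

from enum import Enum

class Severity(Enum):
    NORMAL = 1
    ESCALATION = 2
    EMERGENCY = 3

def classify_event(affects_health_or_safety: bool,
                   breaches_critical_controls: bool,
                   declared_emergency: bool,
                   affects_critical_production: bool,
                   needs_change_control: bool) -> Severity:
    # Emergency: health/safety impact, breached primary controls of
    # critical systems, or a declaration by policy/incident coordinator.
    if affects_health_or_safety or breaches_critical_controls or declared_emergency:
        return Severity.EMERGENCY
    # Escalation: critical production systems affected, or the fix
    # must go through a change control process.
    if affects_critical_production or needs_change_control:
        return Severity.ESCALATION
    return Severity.NORMAL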
Incident
An incident is an event attributable to a human root cause. This distinction is particularly important when the event is
the product of malicious intent to do harm. An important note: all incidents are events, but many events are not incidents. A system or application failure due to age or defect may be an emergency event, but a random flaw or failure is not an incident.
Incident response team
The incident coordinator manages the response process and is responsible for assembling the team. The coordinator
will ensure the team includes all the individuals necessary to properly assess the incident and make decisions
regarding the proper course of action. The incident team meets regularly to review status reports and to authorize
specific remedies. The team should utilize a pre-allocated physical and virtual meeting place.[3]
Incident investigation
The investigation seeks to determine the human perpetrator who is the root cause of the incident. Very few incidents
will warrant or require an investigation. However, investigation resources like forensic tools, dirty networks,
quarantine networks and consultation with law enforcement may be useful for the effective and rapid resolution of an
emergency incident.
Process
Initial incident management process
[Process flowchart by Michael Berman (tanjstaffl)]
1. Employee, vendor, customer, partner, device or sensor reports event to
Help Desk.
2. Prior to creating the ticket, the help desk may filter the event as a false
positive. Otherwise, the help desk system creates a ticket that captures
the event, event source, initial event severity and event priority.
1. The ticket system creates a unique ID for the event. IT Personnel
must use the ticket to capture email, IM and other informal
communication.
2. Subsequent activities like change control, incident management
reports and compliance reports must reference the ticket number.
3. In instances where event information is “Restricted Access,” the
ticket must reference the relevant documents in the secure
document management system.
3. The First Level Responder captures additional event data and
performs preliminary analysis. The First Responder determines
criticality of the event. At this level, it is either a Normal or an
Escalation event.
1. Normal events do not affect critical production systems or require
change controls prior to the implementation of a resolution.
2. Events that affect critical production systems or require change
controls must be escalated.
3. Organization management may request an immediate escalation
without first level review – 2nd tier will create ticket.
4. The event is ready to resolve. The resource enters the resolution and
the problem category into the ticket and submits the ticket for closure.
5. The ticket owner (employee, vendor, customer or partner) receives the resolution. They determine that the
problem is resolved to their satisfaction or escalate the ticket.
6. The escalation report is updated to show this event and the ticket is assigned a second tier resource to investigate
and respond to the event.
7. The Second Tier resource performs additional analysis and re-evaluates the criticality of the ticket. When
necessary, the Second Tier resource is responsible for implementing a change control and notifying IT
Management of the event.
8. Emergency Response:
1. Events may follow the escalation chain until it is determined that an emergency response is necessary.
2. Top-level organization management may determine that an emergency response is necessary and invoke this
process directly.
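As a rough illustration of the ticket flow in steps 1-8, here is a minimal Python sketch; the class and field names are invented and do not correspond to any particular help-desk product.

import itertools

_ids = itertools.count(1)

class Ticket:
    def __init__(self, source: str, description: str):
        self.ticket_id = next(_ids)   # unique ID created by the ticket system
        self.source = source
        self.description = description
        self.severity = "normal"
        self.resolution = None
        self.closed = False
        self.log = []                 # captures email, IM and other informal notes

    def escalate(self, reason: str):
        # Events affecting critical production systems, or requiring
        # change control, must be escalated to a second-tier resource.
        self.severity = "escalation"
        self.log.append(f"escalated: {reason}")

    def resolve(self, resolution: str, category: str):
        self.resolution = (resolution, category)
        self.log.append("submitted for closure")

    def close_or_reopen(self, owner_satisfied: bool):
        # The ticket owner accepts the resolution or escalates further.
        if owner_satisfied:
            self.closed = True
        else:
            self.escalate("owner rejected resolution")

t = Ticket("sensor", "repeated failed logins on production server")
t.escalate("affects critical production system")
t.resolve("blocked offending IP range", "unauthorized access attempt")
t.close_or_reopen(owner_satisfied=True)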
Emergency response detail
1. Emergency response is initiated by escalation of a security event or by direct declaration by the CIO or other
executive organization staff. The CIO may assign the incident coordinator, but by default, the coordinator will be
the most senior security staff member available at the time of the incident.
2. The incident coordinator assembles the incident response team. The team meets using a pre-defined conference
meeting space. One of the (CIO, CSO or Director IT) must attend each incident team meeting.
3. The meeting minutes capture the status, actions and resolution(s) for the incident. The incident coordinator
reports on the cost, exposure and continuing business risk of the incident. The incident response team determines
the next course of action: (go to 4, 5, or 6)
4. Lock-down and Repair – Perform the actions necessary to prevent further damage to the organization, repair
impacted systems and perform changes to prevent a re-occurrence.
5. False Positive – The incident team determines that this issue did not warrant an emergency response. The team provides a written report to senior management and the issue is handled as a normal incident (see the initial incident management process above), or closed.
6. Monitor and Capture – Perform a thorough investigation with continued monitoring to detect and capture the
perpetrator. This process must include notification to the following senior and professional staff:
1. CEO and CFO
2. Corporate Attorney and Public Relations
7. Review and analyze log data to determine nature and scope of incident. This step would include utilizing virus,
spyware, rootkit and other detection tools to determine necessary mitigation and repair.
8. Repair systems, eliminate the vector of attack, and mitigate exploitable vulnerabilities.
9. The Test Report documents the validation of the repair process.
1. Test Systems to ensure compliance with policy and risk mitigation.
2. Perform additional repairs to resolve all current vulnerabilities.
10. Investigate incident to determine source of attack and capture perpetrator. This will require the use of forensics
tools, log analysis, clean lab and dirty lab environments and possible communication with Law Enforcement or
other outside entities.
11. The “Investigation Status Report” captures all current information regarding the incident. The Incident response
team uses this information to determine the next course of action. (See Ref 2 and Ref 3)
Definitions
First Responder/First level review
first person to be on scene or to receive notification of an event; organizations should provide training to the first responder to recognize and properly react to emergency circumstances.
Help Desk Ticket (Control)
an electronic document captured in a database and issue tracking/resolution system
Ticket Owner
person reporting the event, the principal owner of the assets associated with the event or the common law or
jurisdictional owner.
Escalation Report (Control)
First Responder’s documentation for ticket escalation, the Responder writes this information into the ticket or
the WIKI log for the event. The ticket references the WIKI log for the event.
Second Tier
Senior technical resources assigned to resolve an escalated event.
Incident Coordinator
individual assigned by organization senior management to assemble the incident response team, manage and
document response to the incident.
Investigation Status Report (Control)
documentation of the current investigation results, the coordinator may document this material in the ticket,
WIKI or an engineer’s journal.
Meeting Minutes (Control)
documentation of the incident team meeting, the minutes document the attendees, current nature of the
incident and the recommended actions. The coordinator may document this material in the ticket, WIKI or an
engineer’s journal.
Lock-down Change Control
a process ordered as a resolution to the incident. This process follows the same authorization and response
requirements as an Emergency Change Control.
Test Report (Control)
this report validates that IT personnel have performed all necessary and available repairs to systems prior to bringing them back online.
War Room
a secure environment for review of confidential material and the investigation of a security incident.
Report to Senior Management (Control)
the incident coordinator is responsible for drafting a senior management report. The coordinator may
document this material in the ticket, WIKI or an engineer's journal.
References
[1] "ISO/IEC 17799:2005(E)" (http://www.iso.org). Information technology - Security techniques - Code of practice for information security management. ISO copyright office. 2005-06-15. pp. 90–94.
[2] "NIMS - The Incident Command System" (http://web.archive.org/web/20070318154341/http://www.nimsonline.com/nims_3_04/incident_command_system.htm). National Incident Management System. Department of Homeland Security. 2004-03-01. Archived from the original (http://www.nimsonline.com/nims_3_04/incident_command_system.htm) on 2007-03-18. Retrieved 2007-04-08.
[3] "Creating a Computer Security Incident Response Team" (http://www.cert.org/archive/pdf/csirt-handbook.pdf). Computer Emergency Response Team. US-CERT. 2003-04-01. Retrieved 2007-04-08.
Further reading
• Handbook for Computer Security Incident Response Teams (CSIRTs): http://www.sei.cmu.edu/library/abstracts/reports/03hb002.cfm
• National Incident Management System
Computer security model
A computer security model is a scheme for specifying and enforcing security policies. A security model may be
founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no
particular theoretical grounding at all.
For a more complete list of available articles on specific security models, see Category:Computer security models.
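As a taste of what such a model looks like in practice, here is a minimal Python sketch of the two access rules of the Bell-La Padula model listed below ("no read up, no write down"); the security levels and labels are illustrative.

# A toy check of the Bell-La Padula rules, one of the formal models
# listed under Selected Topics. Levels are invented for illustration.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: a subject may read only at or below its level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    # *-property: a subject may write only at or above its level,
    # so secrets cannot leak downward.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("secret", "confidential")       # read down: allowed
assert not may_read("confidential", "secret")   # read up: denied
assert not may_write("secret", "unclassified")  # write down: denied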
Selected Topics
• Access control list (ACL)
• Capability-based security
• Multi-level security (MLS)
• Role-based access control (RBAC)
• Context-based access control (CBAC)
• Lattice-based access control (LBAC)
• Bell-La Padula model
• Biba model
• Clark-Wilson model
• Graham-Denning model
• Take-grant protection model
• Object-capability model
• Brewer and Nash model
• Non-interference (security)
References
• Krutz, Ronald L. and Vines, Russell Dean, The CISSP Prep Guide; Gold Edition, Wiley Publishing, Inc.,
Indianapolis, Indiana, 2003.
• CISSP Boot Camp Student Guide, Book 1 (v.082807), Vigilar, Inc.
Computer surveillance
Computer surveillance is the act of performing surveillance of computer activity, and of data stored on a hard drive
or being transferred over the Internet.
Computer surveillance programs are widespread today, and almost all Internet traffic is closely monitored for clues
of illegal activity.
Supporters say that watching all Internet traffic is important, because by knowing everything that everyone is reading
and writing, they can identify terrorists and criminals, and protect society from them.
Critics cite concerns over privacy and the possibility of a totalitarian state where political dissent is impossible and
opponents of state policy are removed in COINTELPRO-like purges. Such a state may be referred to as an
Electronic Police State, in which the government aggressively uses electronic technologies to record, organize,
search and distribute forensic evidence against its citizens.
Network surveillance
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet.[1] In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies.[2][3][4]
Packet sniffing is the monitoring of data traffic on a computer network. Computers communicate over the Internet by
breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are
routed through a network of computers, until they reach their destination, where they are assembled back into a
complete "message" again. Packet sniffers are programs that intercept these packets as they are travelling through the
network, in order to examine their contents using other programs. A packet sniffer is an information gathering tool,
but not an analysis tool. That is, it gathers "messages" but it does not analyze them and figure out what they mean.
Other programs are needed to perform traffic analysis and sift through intercepted data looking for important/useful
information. Under the Communications Assistance For Law Enforcement Act all U.S. telecommunications
providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence
agencies to intercept all of their customers' broadband Internet traffic.
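As a concrete illustration of the mechanism (a toy sketch, not a depiction of any agency's actual systems), the following Python program captures raw Ethernet frames on Linux and flags packets containing a watch-list keyword; it assumes root privileges and a kernel that supports AF_PACKET sockets, and the filter terms are invented.

import socket

# ETH_P_ALL = 0x0003: receive frames of every protocol. Requires root on Linux.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

WATCH_LIST = [b"example-keyword"]   # illustrative filter terms

while True:
    frame, _addr = sniffer.recvfrom(65535)
    payload = frame[14:]            # skip the 14-byte Ethernet header
    if any(term in payload for term in WATCH_LIST):
        # A real system would hand the packet to analysis tools;
        # here we just report that something "interesting" passed by.
        print(f"matched frame of {len(frame)} bytes")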
There is far too much data gathered by these packet sniffers for human investigators to manually search through all of it. So automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, and filter out and report to human investigators those bits of information which are "interesting" – such as the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group.[5] Billions of dollars per year are spent by agencies such as the Information Awareness Office, NSA, and the FBI, to develop, purchase, implement, and operate systems which intercept and analyze all of this data, and extract only the information which is useful to law enforcement and intelligence agencies.[6]
Similar systems are now operated by the Iranian secret police to identify and suppress dissidents. All required hardware and software has allegedly been installed by Germany's Siemens AG and Finland's Nokia.[7]
Corporate surveillance
Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor its products and/or services to be desirable to its customers. Or the data can be sold to other corporations, so that they can use it for the aforementioned purpose. Or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of the search engine by analyzing their search history and emails[8] (if they use free webmail services), which is kept in a database.[9]
For instance, Google, the world's most popular search engine, stores identifying information for each web search. An IP address and the search phrase used are stored in a database for up to 18 months.[10] Google also scans the content of emails of users of its Gmail webmail service, in order to create targeted advertising based on what people are talking about in their personal email correspondence.[11] Google is, by far, the largest Internet advertising agency: millions of sites place Google's advertising banners and links on their websites, in order to earn money from visitors who click on the ads. Each page containing Google ads adds, reads, and modifies "cookies" on each visitor's computer.[12] These cookies track the user across all of these sites, and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts and search engine histories, is stored by Google to build a profile of the user and deliver better-targeted advertising.[11]
The United States government often gains access to these databases, either by producing a warrant for it, or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies for augmenting the profiles of individuals that it is monitoring.[9]
Malicious software
For a more detailed discussion of topics mentioned in this section see: Spyware, Computer virus, Trojan (computer
security), Keylogger, Backdoor (computing)
In addition to monitoring information sent over a computer network, there is also the need to examine data stored on
a computer's hard drive, and to monitor the activities of a person using the computer. A surveillance program
installed on a computer can search the contents of the hard drive for suspicious data, can monitor computer use,
collect passwords, and/or report back activities in real-time to its operator through the Internet connection.
There are multiple ways of installing such software. The most common is remote installation, using a backdoor
created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to
surveillance. Viruses often spread to thousands or millions of computers, and leave "backdoors" which are accessible
over a network connection, and enable an intruder to remotely install software and execute commands. These viruses
and trojans are sometimes developed by government agencies, such as CIPAV and Magic Lantern. More often,
however, viruses created by other people or spyware installed by marketing agencies can be used to gain access
through the security breaches that they create.
Another method is "cracking" into the computer to gain access over a network. An attacker can then install
surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable
to this type of attack.
One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and installing it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer.
Social network analysis
One common form of surveillance is to create maps of social networks based on data from social networking sites as well as from traffic analysis information from phone call records such as those in the NSA call database,[13] and internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities.[14][15][16]
Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are currently investing heavily in research involving social network analysis.[17][18] The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network.[16][19]
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the
Scalable Social Network Analysis Program developed by the Information Awareness Office:
The purpose of the SSNA algorithms program is to extend techniques of social network analysis to assist with
distinguishing potential terrorist cells from legitimate groups of people ... In order to be successful SSNA will
require information on the social interactions of the majority of people around the globe. Since the Defense
Department cannot easily distinguish between peaceful citizens and terrorists, it will be necessary for them to
gather data on innocent civilians as well as on potential terrorists.
—Jason Ethier[16]
Emanations
It has been shown that it is possible to surveil computers from a distance, with only commercially available equipment, by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters.[20][21][22]
IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when
pressed. The differences are individually identifiable under some conditions, and so it's possible to log key strokes
without actually requiring logging software to run on the associated computer.
It has also been shown, by Adi Shamir et al., that even the high-frequency noise emitted by a CPU includes information about the instructions being executed.
References
[1] Diffie, Whitfield; Susan Landau (August 2008). "Internet Eavesdropping: A Brave New World of Wiretapping" (http://www.sciam.com/article.cfm?id=internet-eavesdropping). Scientific American. Retrieved 2009-03-13.
[2] "CALEA Archive - Electronic Frontier Foundation" (http://w2.eff.org/Privacy/Surveillance/CALEA/?f=archive.html). Electronic Frontier Foundation (website). Retrieved 2009-03-14.
[3] "CALEA: The Perils of Wiretapping the Internet" (http://www.eff.org/issues/calea). Electronic Frontier Foundation (website). Retrieved 2009-03-14.
[4] "CALEA: Frequently Asked Questions" (http://www.eff.org/pages/calea-faq). Electronic Frontier Foundation (website). Retrieved 2009-03-14.
[5] Hill, Michael (October 11, 2004). "Government funds chat room surveillance research" (http://www.usatoday.com/tech/news/surveillance/2004-10-11-chatroom-surv_x.htm). Associated Press (USA Today). Retrieved 2009-03-19.
[6] McCullagh, Declan (January 30, 2007). "FBI turns to broad new wiretap method" (http://news.zdnet.com/2100-9595_22-151059.html). ZDNet News. Retrieved 2009-03-13.
[7] "First round in Internet war goes to Iranian intelligence" (http://www.debka.com/article.php?aid=1396) by Debka.com
[8] Story, Louise (November 1, 2007). "F.T.C. to Review Online Ads and Privacy" (http://www.nytimes.com/2007/11/01/technology/01Privacy.html?_r=1). New York Times. Retrieved 2009-03-17.
[9] Butler, Don (February 24, 2009). "Surveillance in society" (http://www.thestarphoenix.com/Technology/Surveillance+society/1322333/story.html). The Star Phoenix (CanWest). Retrieved 2009-03-17.
[10] Soghoian, Chris (September 11, 2008). "Debunking Google's log anonymization propaganda" (http://news.cnet.com/8301-13739_3-10038963-46.html). CNET News. Retrieved 2009-03-21.
[11] Joshi, Priyanki (March 21, 2009). "Every move you make, Google will be watching you" (http://www.business-standard.com/india/news/every-move-you-make-google-will-be-watching-you/57071/on). Business Standard. Retrieved 2009-03-21.
[12] "Advertising and Privacy" (http://www.google.com/privacy_ads.html). Google (company page). 2009. Retrieved 2009-03-21.
[13] Keefe, Patrick (March 12, 2006). "Can Network Theory Thwart Terrorists?". New York Times.
[14] Albrechtslund, Anders (March 3, 2008). "Online Social Networking as Participatory Surveillance" (http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2142/1949). First Monday 13 (3). Retrieved March 14, 2009.
[15] Fuchs, Christian (2009). Social Networking Sites and the Surveillance Society. A Critical Case Study of the Usage of studiVZ, Facebook, and MySpace by Students in Salzburg in the Context of Electronic Surveillance (http://fuchs.icts.sbg.ac.at/SNS_Surveillance_Fuchs.pdf). Salzburg and Vienna: Forschungsgruppe Unified Theory of Information. ISBN 978-3-200-01428-2. Retrieved March 14, 2009.
[16] Ethier, Jason. "Current Research in Social Network Theory" (http://www.ccs.neu.edu/home/perrolle/archive/Ethier-SocialNetworks.html). Northeastern University College of Computer and Information Science. Retrieved 2009-03-15.
[17] Marks, Paul (June 9, 2006). "Pentagon sets its sights on social networking websites" (http://www.newscientist.com/article/mg19025556.200?DCMP=NLC-nletter&nsref=mg19025556.200). New Scientist. Retrieved 2009-03-16.
[18] Kawamoto, Dawn (June 9, 2006). "Is the NSA reading your MySpace profile?" (http://news.cnet.com/8301-10784_3-6082047-7.html). CNET News. Retrieved 2009-03-16.
[19] Ressler, Steve (July 2006). "Social Network Analysis as an Approach to Combat Terrorism: Past, Present, and Future Research" (http://www.hsaj.org/?fullarticle=2.2.8). Homeland Security Affairs II (2). Retrieved March 14, 2009.
[20] McNamara, Joel. "Complete, Unofficial Tempest Page" (http://www.eskimo.com/~joelm/tempest.html). Retrieved 2009-03-12.
[21] Van Eck, Wim (1985). "Electromagnetic Radiation from Video Display Units: An Eavesdropping Risk?" (http://jya.com/emr.pdf). Computers & Security 4: 269–286. doi:10.1016/0167-4048(85)90046-X.
[22] Kuhn, M.G. (2004). "Electromagnetic Eavesdropping Risks of Flat-Panel Displays" (http://www.cl.cam.ac.uk/~mgk25/pet2004-fpd.pdf). 4th Workshop on Privacy Enhancing Technologies: 23–25.
Confused deputy problem
A confused deputy is a computer program that is innocently fooled by some other party into misusing its authority.
It is a specific type of privilege escalation. In information security, the confused deputy problem is often cited as an
example of why capability-based security is important.
Example
In the original example of a confused deputy, there is a program that provides compilation services to other
programs. Normally, the client program specifies the name of the input and output files, and the server is given the
same access to those files that the client has.
The compiler service is pay-per-use, and the compiler program has access to a file (dubbed BILL) where it stores
billing information. Clients obviously cannot write into the billing file.
Now suppose a client calls the service and specifies BILL as the name of the output file. The service opens the output
file. Even though the client did not have access to that file, the service does, so the open succeeds, and the server
writes the compilation output to the file, overwriting it, and thus destroying the billing information.
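A minimal Python simulation of this scenario may help; the file names follow the example above, and the permission model is invented rather than taken from any real operating system.

# Simulated permission model: which files each principal may write.
PERMISSIONS = {
    "client":   {"output.o"},
    "compiler": {"output.o", "BILL"},   # the deputy can write its billing file
}

def write_file(principal: str, name: str, data: str) -> None:
    if name not in PERMISSIONS[principal]:
        raise PermissionError(f"{principal} may not write {name}")
    print(f"{principal} wrote {len(data)} bytes to {name}")

def compile_service(output_name: str, source: str) -> None:
    # The deputy opens the output file with ITS OWN authority, not the
    # client's -- this implicit use of the server's permission is the flaw.
    write_file("compiler", output_name, f"compiled({source})")

compile_service("output.o", "main.c")   # legitimate use
compile_service("BILL", "main.c")       # confused deputy: billing file clobbered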
The confused deputy
In this example, the compilation service is the deputy because it is acting at the request of the client. It is confused
because it was tricked into overwriting its billing file.
Whenever a program tries to access a file, the operating system needs to know two things: which file the program is
asking for, and whether the program has permission to access the file. In the example, the file is designated by its
name, “BILL”. The server receives the file name from the client, but does not know whether the client had
permission to write the file. When the server opens the file, the system uses the server’s permission, not the client’s.
When the file name was passed from the client to the server, the permission did not go along with it; the permission
was increased by the system silently and automatically.
It is not essential to the attack that the billing file is designated by a name represented as a string. The essential points
are that:
• the designator for the file does not carry the full authority needed to access the file;
• the server's own permission to the file is used implicitly.
Other examples
A cross-site request forgery (CSRF) is an example of a confused deputy attack against a web browser. In this case a
client's web browser has no means to distinguish the authority of the client from any authority of a "cross" site that
the client is accessing.
Clickjacking is another category of web attacks that can be analysed as confused deputy attacks, where the user acts
as the confused deputy, tricked into activating a control that does something dangerous.
[1]
An FTP bounce attack can allow an attacker to indirectly connect to TCP ports that the attacker's machine has no
access to, using a remote FTP server as the confused deputy.
Another example relates to personal firewall software. It can restrict internet access for specific applications. Some
applications circumvent this by starting a browser with a specific URL. The browser has authority to open a network
connection, even though the application does not. Firewall software can attempt to address this by prompting the
user in cases where one program starts another which then accesses the network. However, the user frequently does
not have sufficient information to determine whether such an access is legitimate—false positives are common, and
there is a substantial risk that even sophisticated users will become habituated to clicking 'OK' to these prompts.
[2]
Not every program that misuses authority is a confused deputy. Sometimes misuse of authority is simply a result of a
program error. The confused deputy problem occurs when the designation of an object is passed from one program
to another, and the associated permission changes unintentionally, without any explicit action by either party. It is
insidious because neither party did anything explicit to change the authority.
Solutions
In some systems, it is possible to ask the operating system to open a file using the permissions of another client. This
solution has some drawbacks:
• It requires explicit attention to security by the server. A naive or careless server might not take this extra step.
• It becomes more difficult to identify the correct permission if the server is in turn the client of another service and
wants to pass along access to the file.
• It requires the server to be trusted with the permissions of the client. Note that intersecting the server and client's
permissions does not solve the problem either, because the server may then have to be given very wide
permissions (all of the time, rather than those needed for a given request) in order to act for arbitrary clients.
The simplest way to solve the confused deputy problem is to bundle together the designation of an object and the permission to access that object. This is exactly what a capability is.
Using capability security in the compiler example, the client would pass to the server a capability to the output file, not the name of the file. Since it lacks a capability to the billing file, it cannot designate that file for output. In the cross-site request forgery example, a URL supplied "cross"-site would include its own authority independent of that of the client of the web browser.
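Continuing the earlier simulation, the sketch below replaces the file name with an object that bundles designation and authority, so the deputy can no longer be confused into writing the billing file; again, this is an illustration, not a real capability system.

import io

class Capability:
    # Designation and authority travel together: holding the object
    # IS the permission to use it.
    def __init__(self, name: str):
        self.name = name
        self.stream = io.StringIO()

def compile_service_cap(output: Capability, source: str) -> None:
    # The server never resolves a name with its own authority;
    # it can only write where the client could already write.
    output.stream.write(f"compiled({source})")

client_output = Capability("output.o")   # the client holds this capability
compile_service_cap(client_output, "main.c")
# The client has no Capability("BILL"), so it cannot even designate
# the billing file as the output.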
References
[1] The Confused Deputy rides again! (http://waterken.sourceforge.net/clickjacking/)
[2] Alfred Spiessens: Patterns of Safe Collaboration, PhD thesis, Section 8.1.5. http://www.evoluware.eu/fsp_thesis.pdf
External links
• Norman Hardy, The Confused Deputy: (or why capabilities might have been invented), ACM SIGOPS Operating
Systems Review, Volume 22, Issue 4 (October 1988).
• ACM published document (http://portal.acm.org/citation.cfm?id=871709).
• Document text on Norm Hardy's website (http://cap-lore.com/CapTheory/ConfusedDeputy.html).
• Document text on University of Pennsylvania's website (http://www.cis.upenn.edu/~KeyKOS/ConfusedDeputy.html).
• Citeseer cross reference (http://citeseer.ist.psu.edu/hardy94confused.html).
• Capability Theory Notes from several sources (collated by Norm Hardy) (http://cap-lore.com/CapTheory/).
• Everything2: Confused Deputy (http://www.everything2.com/index.pl?node=confused deputy) (some introductory-level text).
Countermeasure (computer)
In computer security, a countermeasure is an action, device, procedure, or technique that reduces a threat, a
vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering
and reporting it so that corrective action can be taken.
The definition is as in IETF RFC 2828,[1] which is the same as in CNSS Instruction No. 4009, dated 26 April 2010, by the Committee on National Security Systems of the United States of America.[2]
According to the Glossary[3] by InfosecToday,[4] the meaning of countermeasure is:
The deployment of a set of security services to protect against a security threat.
A synonym is security control.[2][5] In telecommunications, communication countermeasures are defined as security services as part of the OSI Reference Model by the ITU-T X.800 Recommendation. X.800 and ISO 7498-2 (Information processing systems – Open systems interconnection – Basic Reference Model – Part 2: Security architecture) are technically aligned.
The following diagram explains the relationships between these concepts and terms:
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
| An Attack:              |  |Counter- |  | A System Resource:   |
| i.e., A Threat Action   |  | measure |  | Target of the Attack |
| +----------+            |  |         |  | +-----------------+  |
| | Attacker |<==================||<=========                 |  |
| |   i.e.,  |   Passive  |  |         |  | |  Vulnerability  |  |
| | A Threat |<=================>||<========>                 |  |
| |  Agent   |  or Active |  |         |  | +-------|||-------+  |
| +----------+   Attack   |  |         |  |         VVV          |
|                         |  |         |  | Threat Consequences  |
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
A resource (either physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality, integrity or availability properties of resources (potentially different from the vulnerable one) of the organization and of other involved parties (customers, suppliers).
The so-called CIA triad is the basis of information security.
The attack can be active when it attempts to alter system resources or affect their operation: so it compromises
Integrity or Availability. A "passive attack" attempts to learn or make use of information from the system but does
not affect system resources: so it compromises Confidentiality.
A threat is a potential for violation of security, which exists when there is a circumstance, capability, action, or event that could breach security and cause harm. That is, a threat is a possible danger that might exploit a vulnerability. A threat can be either "intentional" (i.e., intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the possibility of a computer malfunctioning, or the possibility of an "act of God" such as an earthquake, a fire, or a tornado).[1]
A set of policies concerned with information security management, the Information Security Management System (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to accomplish a security strategy set up following the rules and regulations applicable in a country.[5]
References
[1] RFC 2828 Internet Security Glossary
[2] CNSS Instruction No. 4009 (http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf) dated 26 April 2010
[3] InfosecToday Glossary (http://www.infosectoday.com/Articles/Glossary.pdf)
[4] http://www.infosectoday.com
[5] Wright, Joe; Jim Harmening (2009). "15". Computer and Information Security Handbook. Morgan Kaufmann Publications, Elsevier Inc. p. 257. ISBN 978-0-12-374354-1
External links
• Term in FISMApedia (http://fismapedia.org/index.php?title=Term:Countermeasure)
CPU modes
CPU modes (also called processor modes, CPU states, CPU privilege levels and other names) are operating modes
for the central processing unit of some computer architectures that place restrictions on the type and scope of
operations that can be performed by certain processes being run by the CPU. This design allows the operating system
to run with more privileges than application software.
Ideally, only highly-trusted kernel code is allowed to execute in the unrestricted mode; everything else (including non-supervisory portions of the operating system) runs in a restricted mode and must use a system call to request that the kernel perform on its behalf any operation that could damage or compromise the system, making it impossible for untrusted programs to alter or damage other programs (or the computing system itself).
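The boundary is visible even from user space: a program cannot perform privileged I/O itself and must trap into the kernel. Below is a minimal Python sketch assuming Linux on x86-64, where write is system call number 1; the platform choice is ours, not the article's.

import ctypes

libc = ctypes.CDLL(None, use_errno=True)

# A user-mode process cannot drive the console hardware itself; it asks
# the kernel via a system call, which switches the CPU into kernel mode,
# performs the privileged work, and returns to user mode.
SYS_write = 1                     # syscall number for write on Linux x86-64
msg = b"hello from user mode\n"
ret = libc.syscall(SYS_write, 1, msg, len(msg))   # fd 1 = stdout
assert ret == len(msg)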
In practice, however, system calls take time and can hurt the performance of a computing system, so it is not
uncommon for system designers to allow some time-critical software (especially device drivers) to run with full
kernel privileges.
Multiple modes can be implemented—allowing a hypervisor to run multiple operating system supervisors beneath it,
which is the basic design of many virtual machine systems available today.
Mode types
At a minimum, any CPU architecture supporting protected execution will offer two distinct operating modes; at least
one of the modes must allow completely unrestricted operation of the processor. The unrestricted mode is often
called kernel mode, but many other designations exist (master mode, supervisor mode, privileged mode, supervisor
state, etc.). Restricted modes are usually referred to as user modes, but are also known by many other names (slave
mode, user mode, problem state, etc.).
In kernel mode, the CPU may perform any operation allowed by its architecture; any instruction may be executed,
any I/O operation initiated, any area of memory accessed, and so on. In the other CPU modes, certain restrictions on
CPU operations are enforced by the hardware. Typically, certain instructions are not permitted (especially
those—including I/O operations—that could alter the global state of the machine), some memory areas cannot be
accessed, etc. User-mode capabilities of the CPU are typically a subset of those available in kernel mode but in some
cases, such as hardware emulation of non-native architectures, they may be significantly different from those
available in standard kernel mode.
Some CPU architectures support multiple user modes, often with a hierarchy of privileges. These architectures are
often said to have ring-based security, wherein the hierarchy of privileges resembles a set of concentric rings, with
the kernel mode in the center. Multics hardware was the first significant implementation of ring security, but many
other hardware platforms have been designed along similar lines, including the Intel 80286 protected mode, and the
IA-64 as well, though it is referred to by a different name in these cases.
Mode protection may extend to resources beyond the CPU hardware itself. Hardware registers track the current
operating mode of the CPU, but additional virtual-memory registers, page-table entries, and other data may track
mode identifiers for other resources. For example, a CPU may be operating in Ring 0 as indicated by a status word in
the CPU itself, but every access to memory may additionally be validated against a separate ring number for the
virtual-memory segment targeted by the access, and/or against a ring number for the physical page (if any) being
targeted. This has been demonstrated with the PSP handheld system.
For details about interoperation between CPU and OS levels of abstraction, see the dedicated section in the Ring
(computer security) article.
Hardware that supports the Popek and Goldberg virtualization requirements makes writing software to efficiently
support a virtual machine much simpler. Such a system can run software that "believes" it is running in supervisor
mode, but is actually running in user mode.
Crackme
A crackme (often abbreviated cm) is a small program designed to test a programmer's reverse engineering skills.[1]
They are programmed by other reversers as a legal way to "crack" software, since no company is being infringed
upon.
Crackmes, reversemes and keygenmes generally have similar protection schemes and algorithms to those found in commercial protections. However, due to the wide use of packers/protectors in commercial software, many crackmes are actually more difficult, as the algorithm is harder to find and track than in commercial software.
A keygenme specifically is designed for the reverser not only to find the algorithm used in the application, but also to write a small keygen for it in the programming language of their choice. Most keygenmes, properly manipulated, can be made self-keygenning.
[Image: an example of a keygenme.]
Often anti-debugging and anti-disassembly routines are used to confuse debuggers or make the disassembly useless.[2] Code obfuscation is also used to make the reversing even harder.[3]
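For flavor, here is a toy keygenme-style check in Python; the serial scheme is invented and deliberately weak, and real crackmes are normally distributed as compiled binaries rather than source.

# A toy "keygenme": the serial must equal a simple function of the name.
def expected_serial(name: str) -> str:
    # Weak scheme on purpose: scaled sum of character codes, printed in hex.
    return format(sum(ord(c) for c in name) * 1337, "x")

def check(name: str, serial: str) -> bool:
    return serial == expected_serial(name)

# The reverser's goal: recover expected_serial() from the binary and
# reimplement it as a keygen. Here the "keygen" is the function itself.
name = "reverser"
print(check(name, "wrong"))                  # False
print(check(name, expected_serial(name)))    # True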
External links
• crackmes.de [4] - a site for testing your reversing skills; crackmes range from very easy to very hard [1-9], for many operating systems. (As of June 11, 2011, crackmes.de was down due to two viruses found in its database.)
• tdhack.com [5] - a lot of challenges, including cryptographic riddles, hackmes and software applications to crack, for both Windows and Linux. Polish and English languages are supported.
References
[1] http://www.crackmes.de/faq/
[2] http://www.securityfocus.com/infocus/1893
[3] http://palisade.plynt.com/issues/2005Aug/code-obfuscation/
[4] http://www.crackmes.de
[5] http://tdhack.com
Cross-site printing
Cross-site printing or XSP is a variation of cross-site scripting (XSS) that allows attackers to spam intranet printers. Network printers in a LAN often listen on TCP port 9100 to receive a print job in RAW or PostScript format. The security around this is usually minimal: anyone on the same LAN as the printer can send anything to that port. With a crafted website, an attacker can send commands via the user's browser to the network printer. For example, an attacker could print a PostScript document or ASCII art with advertisements.
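To make the mechanism concrete, here is a minimal Python sketch that sends an unauthenticated raw job straight to a printer's TCP port 9100, the same kind of request an XSP page coerces a victim's browser into issuing; the printer address is a placeholder.

import socket

PRINTER = "192.0.2.10"   # placeholder address of an intranet printer
PORT = 9100              # raw print-job port; typically no authentication

job = b"SPAM! Visit example.invalid for great deals!\n\x0c"  # \x0c = form feed

with socket.create_connection((PRINTER, PORT), timeout=5) as s:
    # Anything written to this port is treated as a print job.
    s.sendall(job)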
External links
• Cross-site printing explanation
[1]
References
[1] http://www.net-security.org/dl/articles/CrossSitePrinting.pdf
CryptoRights Foundation
The CryptoRights Foundation, Inc. (CRF) is a 501(c)(3) non-profit organization based in San Francisco and
established in 1998, notable for the development of HighFire and work on other encryption standards, such as PGP
and IPsec. The organization supports the use of cryptography to protect the privacy and security of communications,
ensure freedom of expression and the press, and to protect the privacy of individuals from surveillance and consumer profiling that could negatively affect the work of social justice, journalism and human rights organizations.[1]
Significant technology projects include the development of HighFire (from "Human rights Firewall"), a secure,
distributed communications platform for private NGO communications, and the related HighWire, a secure wireless
human rights communications networking project based on the pioneering open source Software Defined Radio
source code now maintained at GnuRadio.[2] As of 2011, CRF is quiet but still active, continuing to provide free security training and support for human rights and journalism organizations on the use of cryptography,[1] and doing early research and development on a new private identity and medical information security project known only by the cryptic codename "P6".
The organization was conceived and founded on a ship during a total solar eclipse by five cryptography experts and
cyberliberty activists led by Dave Del Torto (an early PGP volunteer and employee at PGP, Inc and co-founder of
the OpenPGP Working Group at the IETF) and John Gilmore,[3] co-founder of the Electronic Frontier Foundation.
CRF has included directors, staff, advisors, volunteers and engineers such as Eric Blossom, Jon Callas, David
Chaum, Cindy Cohn, Whit Diffie, Jennifer Granick, Peter Hope-Tindall, Joichi Ito, Stanton "Mech" McCandlish,
Declan McCullagh, Sameer Parekh and other notable cryptography, computer security, civil liberties and privacy
activists.
References
• Will Rodger, "Safe Haven", Interactive Week, v.8, no. 28, p.30 (July 16, 2001).
• Patrick Goodenough, "'Data Haven' Offers Snooping-Free Internet Service," CNSNews.com, July 28, 2000 (available at cnsnews.com[4])
• "Encryption Backers Brace for New Threats", AP/CNN, March 31, 2003
• Steve Kettmann, "Hackers: Wake Up and Be Useful", Wired News, Aug. 13, 2001 (available at wired.com[5]) (covering event about CryptoRights Foundation; see also "Crime: A Social Hacker's Duty", National Journal's Technology Daily, Aug. 14, 2001, covering the same event)
• CryptoRights corporate history documents (origin date: February 26, 1998 during an IFCA[6] financial cryptography conference on Anguilla, BWI).
Further reading
• James Glave, "Is Strong Crypto a Human Right?", Wired News, Dec. 10, 1998 (available at wired.com[7])
Footnotes
[1] Will Rodger, "Safe Haven", eWeek, July 2001.
[2] http://gnuradio.org
[3] Thom Stark, "They Might Be Giants", Boardwatch Magazine, n.12, v.14, p.122 (Dec. 1, 2000).
[4] http://www.cnsnews.com/ViewForeignBureaus.asp?Page=/ForeignBureaus/archive/200007/For20000728c.html
[5] http://www.wired.com/culture/lifestyle/news/2001/08/46035
[6] http://www.ifca.ai/
[7] http://www.wired.com/politics/law/news/1998/12/16768
External links
• inf0 [at] cryptorights.org
CVSS
Common Vulnerability Scoring System (CVSS) is an industry standard for assessing the severity of computer
system security vulnerabilities. It attempts to establish a measure of how much concern a vulnerability warrants,
compared to other vulnerabilities, so that remediation efforts can be prioritized. The score is computed from a series of measurements (called metrics) based on expert assessment.
Metrics
The CVSS assessment measures three areas of concern:
1. Base Metrics for qualities intrinsic to a vulnerability.
2. Temporal Metrics for characteristics that evolve over the lifetime of a vulnerability.
3. Environmental Metrics for characteristics of a vulnerability that depend on a particular implementation or
environment.
Base Metrics
1. Is the vulnerability exploitable remotely (as opposed to only locally)?
2. How complex must an attack be to exploit the vulnerability?
3. Is authentication required to attack?
4. Does the vulnerability expose confidential data?
5. Can attacking the vulnerability damage the integrity of the system?
6. Does it impact availability of the system?
Temporal Metrics
1. How complex is it (or how long will it take) to exploit the vulnerability?
2. How hard (or how long) will it take to remediate the vulnerability?
3. How certain is the vulnerability's existence?
Environmental Metrics
1. Potential to cause collateral damage.
2. How many systems (or how much of a system) does the vulnerability impact?
3. Security requirements for confidentiality, integrity and availability (CIA).
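To make the arithmetic concrete, the following is a minimal sketch of the CVSS version 2 base-score equation. The metric weights and constants are the ones published in the CVSS v2 specification; the function name and the sample vector at the end are illustrative choices, not part of the standard.

```python
# Metric weights from the CVSS v2 specification.
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.66}  # shared by C, I, A

def cvss_v2_base_score(av, ac, au, conf, integ, avail):
    """Combine the six base metrics into a single 0.0-10.0 score."""
    impact = 10.41 * (1 - (1 - IMPACT[conf]) * (1 - IMPACT[integ]) * (1 - IMPACT[avail]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# A remotely exploitable, low-complexity, no-authentication flaw with
# complete confidentiality/integrity/availability impact scores 10.0
# (vector AV:N/AC:L/Au:N/C:C/I:C/A:C).
print(cvss_v2_base_score("network", "low", "none", "complete", "complete", "complete"))
```

Temporal and environmental metrics then adjust this base score up or down; the full equations are available on the FIRST CVSS site linked below.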
External links
• The Forum of Incident Response and Security Teams (FIRST) CVSS site [1]
• National Vulnerability Database (NVD) CVSS site [2]
• Security-Database online CVSS 2.0 calculator [3]
• A list of early adopters [4]
• All software/hardware vulnerabilities in the NVD are scored with CVSS and can be viewed at the NVD site [5]
• Security-Database vulnerabilities dashboard scored with CVSS and other open standards: CVE, CPE, CWE, CAPEC, OVAL [6]
References
[1] http://www.first.org/cvss
[2] http://nvd.nist.gov/cvss.cfm
[3] http://www.security-database.com/cvss.php
[4] http://www.first.org/cvss/eadopters.html
[5] http://nvd.nist.gov/
[6] http://www.security-database.com/dashboard.php
Control system security
Control system security is the prevention of intentional or unintentional interference with the proper operation of
industrial automation and control systems. These control systems manage essential services including electricity,
petroleum production, water, transportation, manufacturing, and communications. They rely on computers,
networks, operating systems, applications, and programmable controllers, each of which could contain security
vulnerabilities. The 2010 discovery of the Stuxnet worm demonstrated the vulnerability of these systems to cyber
incidents. The United States and other governments have passed cyber-security regulations requiring enhanced
protection for control systems operating critical infrastructure.
Control system security is known by several other names such as SCADA security, PCN security, industrial network
security, and control system cyber security.
Risks
Insecurity of industrial automation and control systems can lead to the following risks:
• Safety
• Environmental impact
• Lost production
• Equipment damage
• Information theft
• Company image
Vulnerability of control systems
Industrial automation and control systems have become far more vulnerable to security incidents due to the
following trends that have occurred over the last 10 to 15 years.
• Heavy use of Commercial Off-the-Shelf (COTS) technology and protocols - Integration of technology such as MS Windows, SQL, and Ethernet means that process control systems are now vulnerable to the same viruses, worms and trojans that affect IT systems
• Increased Connectivity - Enterprise integration (using plant, corporate and even public networks) means that legacy process control systems are now being subjected to stresses they were not designed for
• Demand for Remote Access - 24/7 access for engineering, operations or technical support means more insecure or rogue connections to control systems
• Public Information - Manuals on how to use control systems are publicly available to would-be attackers as well as to legitimate users
Regulation of control system security is rare. The United States, for example, only does so for the nuclear power and the chemical industries.[1]
Government efforts
The United States Computer Emergency Readiness Team (US-CERT) [2] has instituted a Control Systems Security Program (CSSP) [3] which has made available a large set of free National Institute of Standards and Technology (NIST) standards documents regarding control system security.[4]
Control system security standards
ISA99
ISA99 is the Industrial Automation and Control System Security Committee of the International Society for
Automation (ISA). The committee is developing a multi-part series of standards and technical reports on the subject,
several of which have been publicly released. Work products from the ISA99 committee are also submitted to the IEC as standards and specifications in the IEC 62443 series.
• ISA-99.01.01 (formerly referred to as "Part 1") (ANSI/ISA 99.00.01 [5]) is approved and published.
• ISA-TR99.01.02 is a master glossary of terms used by the committee. This document is still a working draft, but the content is available on the committee Wiki site (http://isa99.isa.org/ISA99%20Wiki/Master%20Glossary.aspx)
• ISA-99.01.03 identifies a set of compliance metrics for IACS security. This document is currently under development.
• ISA-99.02.01 (formerly referred to as "Part 2") (ANSI/ISA 99.02.01-2009 [6]) addresses how to establish an IACS security program. This standard is approved and published. It has also been approved and published by the IEC as IEC 62443-2-1 [7]
• ISA-99.02.02 addresses how to operate an IACS security program. This standard is currently under development.
• ISA-TR99.02.03 is a technical report on the subject of patch management. This report is currently under development.
• ISA-TR99.03.01 [8] is a technical report on the subject of suitable technologies for IACS security. This report is approved and published.
• ISA-99.03.02 addresses how to define security assurance levels using the zones and conduits concept. This standard is currently under development.
• ISA-99.03.03 defines detailed technical requirements for IACS security. This standard is currently under development.
• ISA-99.03.04 addresses the requirements for the development of secure IACS products and solutions. This standard is currently under development.
• Standards in the ISA-99.04.xx series address detailed technical requirements at the component level. These standards are currently under development.
More information about the activities and plans of the ISA99 committee is available on the committee Wiki site.[9]
American Petroleum Institute
API 1164, Pipeline SCADA Security [10]
North American Electric Reliability Corporation (NERC)
NERC Critical Infrastructure Protection (CIP) Standards [11]
Guidance documents
American Chemistry Council
ChemITC Guidance Documents [12]
Insightful Articles
Industrial Networking Security [13]
Control system security certification
ISA Security Compliance Institute
Related to the work of ISA99 is the work of the ISA Security Compliance Institute.[14] The ISA Security Compliance Institute (ISCI) has developed compliance test specifications for ISA99 and other control system security standards. It has also created an ANSI-accredited [15] certification program called ISASecure for the certification of industrial automation devices such as programmable logic controllers (PLC), distributed control systems (DCS) and safety instrumented systems (SIS). These types of devices provide automated control of industrial processes such as those found in the oil & gas, chemical, electric utility, manufacturing, food & beverage and water/wastewater processing industries. There is growing concern from both governments and private industry regarding the risk that these systems could be intentionally compromised by "evildoers" such as hackers, disgruntled employees, organized criminals, terrorist organizations or even state-sponsored groups. The recent news about the industrial control system malware known as Stuxnet has heightened concerns about the vulnerability of these systems.
References
[1] Gross, Michael Joseph (April 2011). "A Declaration of Cyber-War" (http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104). Vanity Fair. Condé Nast. Retrieved March 3, 2011.
[2] http://www.us-cert.gov/
[3] http://www.us-cert.gov/control_systems/
[4] http://www.us-cert.gov/control_systems/csstandards.html
[5] http://www.isa.org/Template.cfm?Section=Shop_ISA&Template=/Ecommerce/ProductDisplay.cfm&Productid=9661
[6] http://www.isa.org/Template.cfm?Section=Standards2&template=/Ecommerce/ProductDisplay.cfm&ProductID=10243
[7] http://www.iec.ch/cgi-bin/procgi.pl/www/iecwww.p?wwwlang=E&wwwprog=pro-det.p&He=IEC&Pu=62443&Pa=2&Se=1&Am=&Fr=&TR=&Ed=1
[8] http://www.isa.org/Template.cfm?Section=Standards&template=/Ecommerce/ProductDisplay.cfm&ProductID=9665
[9] http://isa99.isa.org/ISA99%20Wiki/Home.aspx
[10] http://www.api.org/Standards/new/api-standard-1164.cfm
[11] http://www.nerc.com/page.php?cid=2|20
[12] http://www.americanchemistry.com/s_chemitc/sec.asp?CID=1641&DID=6201
[13] http://www.bin95.com/Industrial-network-security.htm
[14] http://www.isasecure.org/
[15] https://www.ansica.org/wwwversion2/outside/PROpilotISA.asp?menuID=1
External links
• ISA99 Standards (http://www.isa.org/isa99/)
• ISA Security Compliance Institute (http://www.isasecure.org/)
• NERC Standards (see CIP 002-009) (http://www.nerc.com/page.php?cid=220/)
• NIST webpage (http://www.nist.gov)
• The Repository of Industrial Security Incidents (http://www.securityincidents.org/)
Cyber security standards
Cyber security standards are security standards which enable organizations to practice safe security techniques to
minimize the number of successful cyber security attacks. These guides provide general outlines as well as specific
techniques for implementing cyber security. For certain specific standards, cyber security certification by an
accredited body can be obtained. There are many advantages to obtaining certification including the ability to get
cyber security insurance.
History
Cyber security standards have been created recently because sensitive information is now frequently stored on
computers that are attached to the Internet. Also many tasks that were once done by hand are carried out by
computer; therefore there is a need for Information Assurance (IA) and security. Cyber security is important in order
to guard against identity theft. Businesses also have a need for cyber security because they need to protect their trade
secrets, proprietary information, and personally identifiable information (PII) of their customers or employees. The
government also has a need to secure its information. One of the most widely used security standards today is ISO/IEC 27002, which started in 1995. This standard consists of two basic parts, BS 7799 part 1 and BS 7799 part 2, both of which were created by the British Standards Institution (BSI). Recently this standard has become ISO 27001. The National Institute of Standards and Technology (NIST) has released several special publications addressing cyber security. Three of these are particularly relevant: 800-12, titled "Computer Security Handbook"; 800-14, titled "Generally Accepted Principles and Practices for Securing Information Technology"; and 800-26, titled "Security Self-Assessment Guide for Information Technology Systems". The International Society of Automation (ISA) developed cyber security standards for industrial automation and control systems (IACS) that are broadly applicable across manufacturing industries. The series of ISA industrial cyber security standards is known as ISA-99 and is being expanded to address new areas of concern.
ISO 27002
ISO 27002 incorporates both parts of the BS 7799 standard. Sometimes ISO/IEC 27002 is referred to as BS 7799
part 1 and sometimes it refers to part 1 and part 2. BS 7799 part 1 provides an outline for cyber security policy;
whereas BS 7799 part 2 provides a certification. The outline is a high level guide to cyber security. It is most
beneficial for an organization to obtain a certification to be recognized as compliant with the standard. The
certification once obtained lasts three years and is periodically checked by the BSI to ensure an organization
continues to be compliant throughout that three year period. ISO 27001 (ISMS) replaces BS 7799 part 2, but since it
is backward compatible any organization working toward BS 7799 part 2 can easily transition to the ISO 27001
certification process. There is also a transitional audit available to make it easier once an organization is BS 7799
part 2-certified for the organization to become ISO 27001-certified. ISO/IEC 27002 states that information security
is characterized by integrity, confidentiality, and availability. The ISO/IEC 27002 standard is arranged into eleven control areas: security policy, organizing information security, asset management, human resources security, physical and environmental security, communication and operations, access controls, information systems
Cyber security standards
119
acquisition/development/maintenance, incident handling, business continuity management, and compliance.
Standard of good practice
In the 1990s, the Information Security Forum (ISF) published a comprehensive list of best practices for information security, known as the Standard of Good Practice (SoGP). The ISF continues to update the SoGP every two years; the latest version was published in February 2007.
Originally the Standard of Good Practice was a private document available only to ISF members, but the ISF has
since made the full document available to the general public at no cost.
Among other programs, the ISF offers its member organizations a comprehensive benchmarking program based on
the SoGP.
NERC
The North American Electric Reliability Corporation (NERC) has created many standards. The most widely
recognized is NERC 1300 which is a modification/update of NERC 1200. The newest version of NERC 1300 is
called CIP-002-1 through CIP-009-2 (CIP=Critical Infrastructure Protection). These standards are used to secure
bulk electric systems although NERC has created standards within other areas. The bulk electric system standards
also provide network security administration while still supporting best practice industry processes.
NIST
1. Special publication 800-12 provides a broad overview of computer security and control areas. It also emphasizes
the importance of the security controls and ways to implement them. Initially this document was aimed at the
federal government although most practices in this document can be applied to the private sector as well.
Specifically it was written for those people in the federal government responsible for handling sensitive systems.
2. Special publication 800-14 describes common security principles that are used. It provides a high level
description of what should be incorporated within a computer security policy. It describes what can be done to
improve existing security as well as how to develop a new security practice. Eight principles and fourteen
practices are described within this document.
3. Special publication 800-26 provides advice on how to manage IT security. This document emphasizes the
importance of self assessments as well as risk assessments.
4. Special publication 800-37, updated in 2010, provides a new risk approach: "Guide for Applying the Risk Management Framework to Federal Information Systems".
5. Special publication 800-53 rev3, "Guide for Assessing the Security Controls in Federal Information Systems",
updated in August 2009, specifically addresses the 194 security controls that are applied to a system to make it
"more secure."
ISO 15408
This standard develops what is called the “Common Criteria”. It allows many different software applications to be
integrated and tested in a secure way.
RFC 2196
RFC 2196 is a memorandum published by the Internet Engineering Task Force for developing security policies and procedures for information systems connected to the Internet. RFC 2196 provides a general and broad overview of information security, including network security, incident response and security policies. The document is very practical and focuses on day-to-day operations.
ISA-99
ISA99 is the Industrial Automation and Control System Security Committee of the International Society for
Automation (ISA). The committee is developing a multi-part series of standards and technical reports on the subject,
several of which have been publicly released. Work products from the ISA99 committee are also submitted to the IEC as standards and specifications in the IEC 62443 series.
• ISA-99.01.01 (formerly referred to as "Part 1") (ANSI/ISA 99.00.01 [5]) is approved and published.
• ISA-TR99.01.02 is a master glossary of terms used by the committee. This document is still a working draft, but the content is available on the committee Wiki site (http://isa99.isa.org/ISA99%20Wiki/Master%20Glossary.aspx)
• ISA-99.01.03 identifies a set of compliance metrics for IACS security. This document is currently under development.
• ISA-99.02.01 (formerly referred to as "Part 2") (ANSI/ISA 99.02.01-2009 [6]) addresses how to establish an IACS security program. This standard is approved and published. It has also been approved and published by the IEC as IEC 62443-2-1 [7]
• ISA-99.02.02 addresses how to operate an IACS security program. This standard is currently under development.
• ISA-TR99.02.03 is a technical report on the subject of patch management. This report is currently under development.
• ISA-TR99.03.01 [8] is a technical report on the subject of suitable technologies for IACS security. This report is approved and published.
• ISA-99.03.02 addresses how to define security assurance levels using the zones and conduits concept. This standard is currently under development.
• ISA-99.03.03 defines detailed technical requirements for IACS security. This standard is currently under development.
• ISA-99.03.04 addresses the requirements for the development of secure IACS products and solutions. This standard is currently under development.
• Standards in the ISA-99.04.xx series address detailed technical requirements at the component level. These standards are currently under development.
More information about the activities and plans of the ISA99 committee is available on the committee Wiki site.[9]
ISA Security Compliance Institute
Related to the work of ISA99 is the work of the ISA Security Compliance Institute.[14] The ISA Security Compliance Institute (ISCI) has developed compliance test specifications for ISA99 and other control system security standards. It has also created an ANSI-accredited [15] certification program called ISASecure for the certification of industrial automation devices such as programmable logic controllers (PLC), distributed control systems (DCS) and safety instrumented systems (SIS). These types of devices provide automated control of industrial processes such as those found in the oil & gas, chemical, electric utility, manufacturing, food & beverage and water/wastewater processing industries. There is growing concern from both governments and private industry regarding the risk that these systems could be intentionally compromised by "evildoers" such as hackers, disgruntled employees, organized criminals, terrorist organizations or even state-sponsored groups. The recent news about the industrial control system malware known as Stuxnet has heightened concerns about the vulnerability of these systems.
References
1. Department of Homeland Security, A Comparison of Cyber Security Standards Developed by the Oil and Gas Segment (November 5, 2004)
2. Guttman, M., Swanson, M., National Institute of Standards and Technology; Technology Administration; U.S. Department of Commerce, Generally Accepted Principles and Practices for Securing Information Technology Systems (800-14) (September 1996)
3. National Institute of Standards and Technology; Technology Administration; U.S. Department of Commerce, An Introduction to Computer Security: The NIST Handbook, Special Publication 800-12
4. Swanson, M., National Institute of Standards and Technology; Technology Administration; U.S. Department of Commerce, Security Self-Assessment Guide for Information Technology Systems (800-26)
5. The North American Electric Reliability Corporation (NERC). http://www.nerc.com. Retrieved November 12, 2005.
External links
• ISA99 committee site [1]
• ISA Security Compliance Institute [2]
• NEWS about ISO 27002 [3]
• BS 7799 certification [4]
• ISO webpage [5]
• BSI website [6]
• NERC Standards (see CIP 002-009) [11]
• NIST webpage [7]
• The Information Security Forum (ISF) [8]
• The Standard of Good Practice (SoGP) [9]
• CYBER-ATTACKS! Trends in US Corporations [10]
• Securing Cyberspace-Media [11]
• Presentation by Professor William Sanders, University of Illinois [12]
• Carnegie Mellon University Portal for Cyber Security [13]
• Critical Infrastructure Protection [14]
• Cybertelecom :: Security [15], surveying federal cyber security work
• Global Cybersecurity Policy Conference [16]
• The Repository of Industrial Security Incidents [17]
• Rsam: Standards Based IT GRC Management Platform [18]
References
[1] http://www.isa.org/isa99
[2] http://www.isasecure.org
[3] http://www.molemag.net
[4] http://www.itmanagementnews.com/itmanagementnews-54-20040224BS7799CompliancyandCertification.html
[5] http://www.iso.org/iso/en/ISOOnline.frontpage
[6] http://www.bsi-global.com/index.xalter
[7] http://www.nist.gov
[8] http://www.securityforum.org
[9] http://www.isfsecuritystandard.com
[10] http://www.bizforum.org/whitepapers/rand001.htm
[11] http://hsgac.senate.gov/index.cfm?Fuseaction=Hearings.Detail&HearingID=261
[12] http://media.cs.uiuc.edu/DCS/2007-08/ETSI-2008-02-27.asx
[13] https://www.mysecurecyberspace.com/
[14] https://inlportal.inl.gov/portal/server.pt?open=514&objID=1275&parentname=CommunityPage&parentid=5&mode=2&in_hi_userid=200&cached=true
[15] http://www.cybertelecom.org/security/
[16] http://www.stevens.edu/cyberpolicy/
[17] http://www.securityincidents.org/
[18] http://www.rsam.com/products_iso.htm/
Cyber spying
Cyber spying or cyber espionage is the act or practice of obtaining secrets without the permission of the holder of the information (personal, sensitive, proprietary or of a classified nature) from individuals, competitors, rivals, groups, governments and enemies, for personal, economic, political or military advantage, using illegal exploitation methods on the Internet, networks or individual computers through the use of cracking techniques and malicious software including Trojan horses and spyware. It may be perpetrated wholly online from the computer desks of professionals in faraway countries, may involve infiltration at home by computer-trained conventional spies and moles, or in other cases may be the criminal handiwork of amateur malicious hackers and software programmers.
Cyber spying typically involves the use of such illegally gained access to secrets and classified information, or of illegally gained control of individual computers or whole networks, for an unethical and illegal strategic advantage and for psychological, political and physical subversion activities and sabotage.
References
• Bill Schiller, Asia Bureau (Apr 01, 2009), Chinese ridicule U of T spy report - But government officials choose words carefully, never denying country engages in cyber-espionage (http://www.thestar.com/News/World/article/611481), Toronto, Ontario, Canada, retrieved 2009-04-04
• Kelly, Cathal (Mar 31, 2009), Cyberspies' code a click away - Simple Google search quickly finds link to software for Ghost Rat program used to target governments (http://www.thestar.com/News/World/Article/610860), Toronto, Ontario, Canada, retrieved 2009-04-04
• All about Chinese cyber spying (http://infotech.indiatimes.com/quickiearticleshow/4334292.cms), infotech.indiatimes.com (Times of India), March 30, 2009, retrieved 2009-04-01
• Cooper, Alex (March 30, 2009), We can lead in cyber spy war, sleuth says; Toronto investigator helped expose hacking of embassies, NATO (http://www.thestar.com/news/canada/article/610329), Toronto, Ontario, Canada, retrieved 2009-03-31
• Chinese-based cyber spy network exposes need for better security: Cdn researchers (http://ca.news.yahoo.com/s/capress/090330/national/computer_spying), Yahoo News Canada, March 30, 2009, retrieved 2009-03-31
Cyber spying
123
• Steve Herman (30 March 2009), Exiled Tibetan Government Expresses Concern over Cyber-Spying Traced to China (http://www.globalsecurity.org/intell/library/news/2009/intell-090330-voa01.htm), New Delhi: GlobalSecurity.org, retrieved 2009-03-31
• "Chinese government accused of cyber spying" (http://www.belfasttelegraph.co.uk/news/world-news/chinese-government-accused-of-cyber-spying-14248347.html), Belfast Telegraph, 30 March 2009
• Patrick Goodenough, International Editor (March 30, 2009), China Rejects Cyber Spying Allegations; 'Dalai Lama Propaganda' (http://www.cnsnews.com/public/content/article.aspx?RsrcID=45797), CNSNews.com, retrieved 2009-03-31
• Harvey, Mike (March 29, 2009), "'World's biggest cyber spy network' snoops on classified documents in 103 countries" (http://www.timesonline.co.uk/tol/news/uk/crime/article5996253.ece), The Times (London), retrieved 2009-03-30
• Major cyber spy network uncovered (http://news.bbc.co.uk/2/hi/americas/7970471.stm), BBC News, 29 March 2009, retrieved 2009-03-30
• Cyber spy network 'smoking gun' for China: expert (http://www.ctv.ca/servlet/ArticleNews/story/CTVNews/20090329/China_Hackers_090329/20090329?hub=SciTech), CTV Canada, March 29, 2009, retrieved 2009-03-30
• Kim Covert (March 28, 2009), "Canadian researchers uncover vast Chinese cyber spy network" (http://www.nationalpost.com/news/story.html?id=1440426), National Post, Don Mills, Ontario, Canada (Canwest News Service)
• US warned of China 'cyber-spying' (http://news.bbc.co.uk/2/hi/asia-pacific/7740483.stm), BBC News, 20 November 2008, retrieved 2009-04-01
• Mark Hosenball (June 2, 2008), "INTELLIGENCE - Cyber-Spying for Dummies" (http://www.newsweek.com/id/138520), Newsweek
• Walton, Gregory (April 2008). "Year of the Gh0st RAT" (http://www.beijing2008conference.com/articles.php?id=101). World Association of Newspapers. Retrieved 2009-04-01.
• German court limits cyber spying (http://news.bbc.co.uk/2/hi/europe/7266543.stm), BBC News, 27 February 2008
• Rowan Callick; Jane Macartney (December 7, 2007), "Chinese fury at cyber spy claims" (http://www.theaustralian.news.com.au/story/0,25197,22882854-2703,00.html), The Australian
External links
• Congress to Investigate Google Charges Of Chinese Internet Spying (AHN) (http://www.allheadlinenews.com/articles/7017511426)
• Information Warfare Monitor - Tracking Cyberpower (University of Toronto, Canada/Munk Centre) (http://infowar-monitor.net/index.php)
• Twitter: InfowarMonitor (http://twitter.com/InfowarMonitor)
• Spy software for cyber spies (http://www.SnoopPal.com)
Cyber Storm Exercise
The Cyber Storm exercise was a simulated exercise overseen by the Department of Homeland Security that took place February 6 through February 10, 2006, with the purpose of testing the nation's defenses against digital espionage.[1] [2] The simulation was targeted primarily at American security organizations, but officials from Britain, Canada, Australia and New Zealand participated as well.[3]
Simulation
The exercise simulated a large-scale attack on critical digital infrastructure such as communications, transportation, and energy production. The simulation comprised a series of incidents, including:[4]
• Washington, D.C. metro trains mysteriously shutting down.
• Bloggers revealing locations of railcars containing hazardous materials.
• The airport control towers of Philadelphia and Chicago mysteriously shutting down.
• A mysterious liquid appearing on a London subway.
• Significant numbers of people on "no fly" lists suddenly appearing at airports all over the nation.
• Planes flying too close to the White House.
• Water utilities in Los Angeles getting compromised.
Internal difficulties
During the exercise the computers running the simulation came under attack by the players themselves. Heavily censored files released to the Associated Press reveal that at some point during the exercise the organizers sent everyone involved an e-mail marked "IMPORTANT!" telling the participants in the simulation not to attack the game's control computers.[4]
Performance of participants
The Cyber Storm exercise highlighted the gaps and shortcomings of the nation's cyber defenses. The Cyber Storm exercise report found that institutions under attack had a hard time getting the bigger picture and instead focused on single incidents, treating them as "individual and discrete".[5] In light of the test, the Department of Homeland Security raised concern that the relatively modest resources assigned to cyber-defense would be "overwhelmed in a real attack".[6]
References
[1] Fact Sheet: Cyber Storm Exercise (http://www.dhs.gov/xnews/releases/pr_1158340980371.shtm), Department of Homeland Security. Accessed February 1, 2008.
[2] Cyber Storm Exercise Report (http://www.dhs.gov/xlibrary/assets/prep_cyberstormreport_sep06.pdf), Department of Homeland Security
[3] Kapica, Jack. A blogger's paranoia (http://www.theglobeandmail.com/servlet/story/RTGAM.20080131.WBcyberia20080131112514/WBStory/WBcyberia), The Globe and Mail. Accessed February 1, 2008.
[4] Bridis, Ted. "Threats From Everywhere in Cyber Storm" (http://ap.google.com/article/ALeqM5gd_SXvPiXXwcW63vRuZtrAn2IX5AD8UH1N480), Associated Press. Accessed February 1, 2008.
[5] Wait, Patience. Cyber Storm exercise challenged coordination, communications (http://www.gcn.com/online/vol1_no1/42017-1.html), Government Computer News. Accessed February 1, 2008.
[6] DHS releases report on Cyber Storm exercise (http://www.securityfocus.com/brief/303). Accessed February 18, 2008.
Cyber Storm II
Cyber Storm II was an international cyber security exercise sponsored by the United States Department of Homeland Security in 2008. The week-long exercise was centered in Washington, DC and concluded on March 15.[1]
References
[1] Ian Grant. "Cyber Storm 2 exercise reveals security preparedness" (http://www.computerweekly.com/Articles/2008/03/18/229909/cyber-storm-2-exercise-reveals-security-preparedness.htm), ComputerWeekly.com. Accessed March 21, 2008.
Cyberheist
A cyberheist is an attack in which cyber criminals empty an organization's bank accounts. Attacks by cyber criminals are rapidly getting more sophisticated; they now go after employees, bypassing antivirus security software and 'social engineering' employees into clicking on something. From that point forward the attackers hack into the network and put keyloggers on accounting systems. A few days later the organization's bank accounts are empty: a cyberheist.
Brian Krebs, a Washington Post columnist, has written a good article about one such cyberheist.[1]
A map of recent cyberheist victims is available online.[2]
Cyberheist is also the title of a book released in 2011 [3] by Stu Sjouwerman.[4]
References
[1] http://voices.washingtonpost.com/securityfix/2009/10/avoid_windows_malware_bank_on.html
[2] http://www.knowbe4.com/resources/cyberheist-map/
[3] http://www.amazon.com/Cyberheist-financial-American-businesses-ebook/dp/B004XDE20O/
[4] http://www.sjouwerman.com/
Dancing pigs
In computer security, the dancing pigs problem (also known as the dancing bunnies problem) is a statement on user
attitudes to computer security: that users primarily desire features without considering security, and so security must
be designed in without the computer having to ask a technically ignorant user. The term has its origin in a remark by
Edward Felten and Gary McGraw:

Given a choice between dancing pigs and security, users will pick dancing pigs every time.[1]
Bruce Schneier expands on this remark as follows:
If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead
gets a hortatory message describing the potential dangers of the applet — he's going to choose dancing pigs
over computer security any day. If the computer prompts him with a warning screen like: "The applet
DANCING PIGS could contain malicious code that might do permanent damage to your computer, steal your
life's savings, and impair your ability to have children," he'll click OK without even reading it. Thirty seconds
later he won't even remember that the warning screen even existed.[2]
The Mozilla Security Reviewers' Guide states:
Many of our potential users are inexperienced computer users, who do not understand the risks involved in
using interactive Web content. This means we must rely on the user's judgement as little as possible.[3]
A widely-publicized 2009 paper[4] directly addresses the dancing pigs quotation and suggests that users' behavior is entirely rational:
entirely rational:
While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything
else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips. These
sometimes carry vague and tentative suggestions of reduced risk, never security.[5]
Experimental support
One study of phishing[6] found that people really do prefer dancing animals to security. The study showed participants a number of phishing sites, including one that copied the Bank of the West home page:
For many participants the "cute" design, the level of detail and the fact that the site does not ask for a great
deal of information were the most convincing factors. Two participants mentioned the animated bear video that appears on the page (e.g., "because that would take a lot of effort to copy"). Participants in general found
this animation appealing and many reloaded the page just to see the animation again.
References
[1] Gary McGraw and Edward Felten: Securing Java (http://www.securingjava.com/) (John Wiley & Sons, 1999; ISBN 0-471-31952-X), Chapter one, Part seven
[2] Bruce Schneier: Secrets and Lies (John Wiley & Sons, 2000; ISBN 0-471-45380-3), p. 262
[3] Mozilla Security Reviewers' Guide (http://www.mozilla.org/projects/security/components/reviewguide.html) (mozilla.org)
[4] Mark Pothier (2010-04-11). "Please Do Not Change Your Password" (http://www.boston.com/bostonglobe/ideas/articles/2010/04/11/please_do_not_change_your_password/). The Boston Globe. Retrieved 2011-05-25.
[5] Cormac Herley (2009). "So Long and No Thanks for the Externalities: the Rational Rejection of Security Advice by Users" (http://research.microsoft.com/en-us/um/people/cormac/papers/2009/SoLongAndNoThanks.pdf). New Security Paradigms Workshop (http://www.nspw.org/2009/).
[6] Rachna Dhamija, J. D. Tygar and Marti Hearst. "Why Phishing Works" (http://people.seas.harvard.edu/~rachna/papers/why_phishing_works.pdf). Retrieved 2011-05-25.
External links
• Larry Osterman's WebLog: Beware of the dancing bunnies (http://blogs.msdn.com/larryosterman/archive/2005/07/12/438284.aspx)
• HoneyMonkey Project (http://research.microsoft.com/honeymonkey/)
Data breach
A data breach is the intentional or unintentional release of secure information to an untrusted environment. Other terms for this phenomenon include unintentional information disclosure, data leak and data spill. Incidents range from concerted attacks by black hats with the backing of organized crime or national governments to careless disposal of used computer equipment or data storage media. A data breach may be defined as "a security incident in which sensitive, protected or confidential data is copied, transmitted, viewed, stolen or used by an individual unauthorized to do so." Data breaches may involve financial information such as credit card or bank details, personal health information (PHI), personally identifiable information (PII), trade secrets of corporations or intellectual property. According to the nonprofit consumer organization Privacy Rights Clearinghouse, a total of 227,052,199 individual records containing sensitive personal information were involved in security breaches in the United States between January 2005 and May 2008, excluding incidents where sensitive data was apparently not actually exposed.[1]
Definition
Incidents constituting a data breach may include: theft or loss of digital media such as computer tapes, hard drives, or laptop computers upon which such information is stored unencrypted; posting such information on the World Wide Web or on a computer otherwise accessible from the Internet without proper information security precautions; transfer of such information to a system which is not completely open but is not appropriately or formally accredited for security at the approved level, such as unencrypted e-mail; or transfer of such information to the information systems of a possibly hostile agency, such as a competing corporation or a foreign nation, where it may be exposed to more intensive decryption techniques.[2]
Trusted environment
The notion of a trusted environment is somewhat fluid. The departure of a trusted staff member with access to
sensitive information can become a data breach if the staff member retains access to the data subsequent to
termination of the trust relationship. In distributed systems, this can also occur with a breakdown in a web of trust.
Data privacy
Most such incidents publicized in the media involve private information on individuals, such as social security numbers. Loss of corporate information such as trade secrets, sensitive corporate information, details of contracts, etc. or
of government information is frequently unreported, as there is no compelling reason to do so in the absence of
potential damage to private citizens, and the publicity around such an event may be more damaging than the loss of
the data itself.
Data breach
128
Consequences
Although such incidents pose the risk of identity theft or other serious consequences, in most cases there is no lasting
damage; either the breach in security is remedied before the information is accessed by unscrupulous people, or the
thief is only interested in the hardware stolen, not the data it contains. Nevertheless, when such incidents become
publicly known, it is customary for the offending party to attempt to mitigate damages by providing to the victims
subscription to a credit reporting agency, for instance.
Major incidents
Well known incidents include:
2011
• In April 2011, Sony experienced a data breach within its PlayStation Network. It is estimated that the information of 100 million users was compromised.
2009
• In December 2009 a RockYou! password database was breached, containing 32 million user names and plaintext passwords, further compromising the use of weak passwords for any purpose.
• In January 2009 Heartland Payment Systems announced that it had been "the victim of a security breach within its processing system", possibly part of a "global cyber fraud operation".[3] The intrusion has been called the largest criminal breach of card data ever, with estimates of up to 100 million cards from more than 650 financial services companies compromised.[4]
2008
• In January 2008, GE Money, a division of General Electric, disclosed that a magnetic tape containing 150,000 social security numbers and in-store credit card information from 650,000 retail customers was missing from an Iron Mountain Incorporated storage facility. J.C. Penney is among 230 retailers affected.[5]
• Horizon Blue Cross and Blue Shield of New Jersey, January, 300,000 members[1]
• Lifeblood, February, 321,000 blood donors[1]
• British National Party membership list leak[6]
2007
• The 2007 loss of Ohio and Connecticut state data by Accenture
• TJ Maxx, data for 45 million credit and debit accounts[7]
• 2007 UK child benefit data scandal
• CGI Group, August, 283,000 retirees from New York City[1]
• The Gap, September, 800,000 job applicants[1]
• Memorial Blood Center, December, 268,000 blood donors[1]
• Davidson County Election Commission, December, 337,000 voters[1]
2006
• AOL search data scandal (sometimes referred to as a "Data Valdez"[8][9][10] due to its size)
• Department of Veterans Affairs, May, 28,600,000 veterans, reserves, and active duty military personnel[1][11]
• Ernst & Young, May, 234,000 customers of Hotels.com (after a similar loss of data on 38,000 employees of Ernst & Young clients in February)[1]
• Boeing, December, 382,000 employees (after similar losses of data on 3,600 employees in April and 161,000 employees in November, 2005)[1]
2005
• Ameriprise Financial, stolen laptop, December 24, 260,000 customer records[1]
References
[1] " A Chronology of Data Breaches (http:/ / www.privacyrights.org/ ar/ChronDataBreaches. htm)", Privacy Rights Clearinghouse
[2] When we discuss incidents occurring on NSSs, are we using commonly defined terms? (http:/ / www.archives.gov/ isoo/ faqs/
agency-declass-plans. html), "Frequently Asked Questions on Incidents and Spills", National Archives Information Security Oversight Office
[3] Heartland Payment Systems Uncovers Malicious Software In Its Processing System (http:// www.2008breach.com/ Information20090120.
asp)
[4] Lessons from the Data Breach at Heartland (http:// www. businessweek. com/ technology/ content/ jul2009/ tc2009076_891369. htm),
MSNBC, July 7, 2009
[5] GE Money Backup Tape With 650,000 Records Missing At Iron Mountain - Iron Mountain (http:// www.informationweek.com/ news/
showArticle.jhtml?articleID=205901244)
[6] BNP activists' details published - BBC News (http:// news. bbc. co.uk/ 1/ hi/uk/ 7736405. stm)
[7] "T.J. Maxx data theft worse than first reported" (http:/ / www. msnbc. msn. com/ id/ 17853440/ ). msnbc.com. 2007-03-29. . Retrieved
2009-02-16.
[8] data Valdez (http:// www. doubletongued. org/ index. php/ dictionary/data_valdez/ ) Doubletongued dictionary
[9] AOL's Massive Data Leak (http:// www. eff.org/Privacy/ AOL/), Electronic Frontier Foundation
[10] data Valdez (http:/ / www. netlingo. com/ lookup. cfm?term=data Valdez), Net Lingo
[11] " Active-duty troop information part of stolen VA data (http:// www.networkworld.com/ news/ 2006/
060606-active-duty-troop-information-part-of.html?nwwpkg=slideshows)", Network World, June 6, 2006
External links
• " Most Recent Data Breaches (http:/ / www. teamshatter. com/ breaches/ )", TeamSHATTER, updated regularly
• " A Chronology of Data Breaches (http:// www.privacyrights.org/ar/ ChronDataBreaches. htm)", Privacy
Rights Clearinghouse, updated twice a week
• " Identity Theft Resource Center - Data Breaches (http:// www. idtheftcenter.org/artman2/ publish/ lib_survey/
ITRC_2008_Breach_List. shtml)", Updated weekly with statistical analyses
• " Data Loss Database (http:/ / datalossdb. org/) Open Security Foundation's research project documenting data
loss incidents worldwide.
• " Office of Inadequate Security (http:// www. databreaches. net/ )", Breach incidents reported in the media and
from primary sources, worldwide.
• " Personal Health Information Privacy (http:/ / www.phiprivacy.net/ )", Breach incidents from the health care
sector, worldwide.
• " Notices of Security Breaches (http:/ / doj.nh. gov/ consumer/ breaches. html)", New Hampshire Department of
Justice
• " Maryland Notice of Information Security Breaches (http:// www.oag. state. md.us/ idtheft/ breacheNotices.
htm)", Maryland Attorney General's Office
• " Breaches Affecting 500 or More Individuals (http:/ / www.hhs. gov/ ocr/privacy/ hipaa/ administrative/
breachnotificationrule/ breachtool.html)", Breaches reported to the United States Department of Health and
Human Services by HIPAA-covered (Health Insurance Portability and Accountability Act) entities.
Data breach
130
• " Information That Matter (http:// www. infothatmatter.org/ )", A data breach responsible disclosure project
associated with OWASP Singapore.
• " The Breach Blog (http:/ / www. breachblog.com/ )", Data breach commentary and analysis.
• " SC Magazine Data Breach Blog (http:/ / breach.scmagazineblogs. com/ )", The SC Magazine Data Breach
Blog.
Data loss prevention software
Data Loss Prevention (DLP) is a computer security term referring to systems that identify, monitor, and protect
data in use (e.g. endpoint actions), data in motion (e.g. network actions), and data at rest (e.g. data storage)
through deep content inspection, contextual security analysis of transactions (attributes of originator, data object, medium, timing, recipient/destination and so on) and a centralized management framework. Systems are designed to detect and prevent unauthorized use and transmission of confidential information. Vendors refer to the term as Data Leak Prevention, Information Leak Detection and Prevention (ILDP), Information Leak Prevention (ILP), Content Monitoring and Filtering (CMF), Information Protection and Control (IPC) or Extrusion Prevention System, by analogy to intrusion prevention system.
Types of DLP Systems
Network DLP (aka Data in Motion <DiM>)
Typically a software or hardware solution that is installed at network egress points near the perimeter. It analyzes
network traffic to detect sensitive data that is being sent in violation of information security policies.
Storage DLP (aka Data at Rest <DaR>)
Typically a software solution that is installed in data centers to discover whether confidential data is stored in inappropriate and/or unsecured locations (e.g. an open file share).
Endpoint DLP (aka Data in Use <DiU>)
Such systems run on end-user workstations or servers in the organization. Like network-based systems, endpoint-based systems can address internal as well as external communications, and can therefore be used to control
information flow between groups or types of users (e.g. 'Chinese walls'). They can also control email and Instant
Messaging communications before they are stored in the corporate archive, such that a blocked communication (i.e.,
one that was never sent, and therefore not subject to retention rules) will not be identified in a subsequent legal
discovery situation. Endpoint systems have the advantage that they can monitor and control access to physical
devices (such as mobile devices with data storage capabilities) and in some cases can access information before it
has been encrypted. Some endpoint-based systems can also provide application controls to block attempted
transmissions of confidential information, and provide immediate feedback to the user. They have the disadvantage
that they need to be installed on every workstation in the network, cannot be used on mobile devices (e.g., cell
phones and PDAs) or where they cannot be practically installed (for example on a workstation in an internet café).
Data identification
DLP solutions include a number of techniques for identifying confidential or sensitive information. Sometimes
confused with discovery, data identification is a process by which organizations use a DLP technology to determine
what to look for (in motion, at rest, or in use). DLP solutions use multiple methods for deep content analysis, ranging
from keywords, dictionaries, and regular expressions to partial document matching and fingerprinting. The strength
of the analysis engine directly correlates to its accuracy. The accuracy of DLP identification is important to
lowering/avoiding false positives and negatives. Accuracy can depend on many variables, some of which may be
situational or technological. Testing for accuracy is recommended to ensure a solution has virtually zero false
positives/negatives.
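As a concrete illustration of the simplest identification technique mentioned above, the sketch below uses regular expressions to flag two common sensitive-data patterns, with a Luhn checksum to filter out most random card-like digit runs. This is an illustrative toy under assumed patterns, not any vendor's detection engine; production systems layer dictionaries, fingerprinting and partial document matching on top of such checks.

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")      # U.S. SSN shape
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")    # card-like digit run

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: rejects most random digit runs matched by CARD_RE."""
    nums = [int(d) for d in digits][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def find_sensitive(text: str):
    """Return (kind, match) pairs for content that looks sensitive."""
    hits = [("ssn", m.group()) for m in SSN_RE.finditer(text)]
    for m in CARD_RE.finditer(text):
        if luhn_ok(re.sub(r"[ -]", "", m.group())):
            hits.append(("card", m.group()))
    return hits

print(find_sensitive("Card 4111 1111 1111 1111, SSN 078-05-1120."))
```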
Data leakage detection
Sometimes a data distributor gives sensitive data to a set of third parties. Some time later, some of the data is found
in an unauthorized place (e.g., on the web or on a user's laptop). The distributor must then investigate whether the data leaked from one or more of the third parties, or whether it was independently gathered by other means.[1]
References
[1] Panagiotis Papadimitriou, Hector Garcia-Molina (January 2011), "Data Leakage Detection", IEEE Transactions on Knowledge and Data
Engineering 23 (1): 51–63, doi:10.1109/TKDE.2010.100
External links
• Data Loss Database (http://datalossdb.org/), maintained by attrition.org
• Cost of a Data Breach (http://www.ponemon.org/blog/post/cost-of-a-data-breach-climbs-higher), maintained by ponemon.org
Data validation
In computer science, data validation is the process of ensuring that a program operates on clean, correct and useful
data. It uses routines, often called "validation rules" or "check routines", that check for correctness, meaningfulness,
and security of data that are input to the system. The rules may be implemented through the automated facilities of a
data dictionary, or by the inclusion of explicit application program validation logic.
For business applications, data validation can be defined through declarative data integrity rules, or procedure-based business rules.[1] Data that does not conform to these rules will negatively affect business process execution. Therefore, data validation should start with business process definition and the set of business rules within this process. Rules can be collected through the requirements capture exercise.[2]
The simplest data validation verifies that the characters provided come from a valid set. For example, telephone
numbers should include the digits and possibly the characters +, -, (, and ) (plus, minus, and brackets). A more
sophisticated data validation routine would check to see that the user had entered a valid country code, i.e., that the number of digits entered matched the convention for the country or area specified.
Incorrect data validation can lead to data corruption or a security vulnerability. Data validation checks that data are
valid, sensible, reasonable, and secure before they are processed.
Validation methods
Allowed character checks
Checks that ascertain that only expected characters are present in a field. For example a numeric field may
only allow the digits 0-9, the decimal point and perhaps a minus sign or commas. A text field such as a
personal name might disallow characters such as < and >, as they could be evidence of a markup-based
security attack. An e-mail address might require exactly one @ sign and various other structural details.
Regular expressions are effective ways of implementing such checks. (See also data type checks below)
Batch totals
Checks for missing records. Numerical fields may be added together for all records in a batch. The batch total
is entered and the computer checks that the total is correct, e.g., add the 'Total Cost' field of a number of
transactions together.
Cardinality check
Checks that a record has a valid number of related records. For example, if a Contact record is classified as a Customer, it must have at least one associated Order (cardinality > 0). If no order exists for a "customer" record, then the record must either be changed to "seed" or the order must be created. This type of rule can be complicated by additional conditions. For example, if a contact record in a payroll database is marked as "former employee", then this record must not have any associated salary payments after the date on which the employee left the organisation (cardinality = 0).
Check digits
Used for numerical data. An extra digit is added to a number, and is calculated from the other digits. The computer checks this calculation when data are entered. For example, the last digit of an ISBN for a book is a check digit calculated modulus 10 [3] (a worked sketch of this and several other checks appears after this list).
Consistency checks
Checks fields to ensure data in these fields corresponds, e.g., If Title = "Mr.", then Gender = "M".
Control totals
This is a total done on one or more numeric fields which appears in every record. This is a meaningful total,
e.g., add the total payment for a number of Customers.
Cross-system consistency checks
Compares data in different systems to ensure it is consistent, e.g., The address for the customer with the same
id is the same in both systems. The data may be represented differently in different systems and may need to
be transformed to a common format to be compared, e.g., one system may store customer name in a single
Name field as 'Doe, John Q', while another in three different fields: First_Name (John), Last_Name (Doe) and
Middle_Name (Quality); to compare the two, the validation engine would have to transform data from the
second system to match the data from the first, for example, using SQL: Last_Name || ', ' || First_Name ||
substr(Middle_Name, 1, 1) would convert the data from the second system to look like the data from the first
'Doe, John Q'
Data type checks
Checks the data type of the input and gives an error message if the input data does not match the chosen data type, e.g., in an input box accepting numeric data, if the letter 'O' was typed instead of the number zero, an error message would appear.
File existence check
Checks that a file with a specified name exists. This check is essential for programs that use file handling.
Format or picture check
Checks that the data is in a specified format (template), e.g., dates have to be in the format DD/MM/YYYY.
Regular expressions should be considered for this type of validation.
Hash totals
This is just a batch total done on one or more numeric fields which appears in every record. This is a
meaningless total, e.g., add the Telephone Numbers together for a number of Customers.
Limit check
Unlike range checks, data is checked for one limit only, upper OR lower, e.g., data should not be greater than
2 (<=2).
Logic check
Checks that an input does not yield a logical error, e.g., an input value should not be 0 when there will be a
number that divides it somewhere in a program.
Presence check
Checks that important data are actually present and have not been missed out, e.g., customers may be required
to have their telephone numbers listed.
Range check
Checks that the data lie within a specified range of values, e.g., the month of a person's date of birth should lie
between 1 and 12.
Referential integrity
In modern relational databases, values in two tables can be linked through foreign key and primary key. If values in the primary key field are not constrained by the database's internal mechanism,[4] then they should be validated. Validation of the foreign key field checks that the referencing table must always refer to a valid row in the referenced table.[5]
Spelling and grammar check
Looks for spelling and grammatical errors.
Uniqueness check
Checks that each value is unique. This can be applied to several fields (e.g. Address, First Name, Last Name).
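The sketch below works through a few of the checks above: a format (picture) check via a regular expression, a range check on a birth month, and the modulus-10 check digit of an ISBN-13. The helper names are illustrative assumptions, not part of any standard library.

```python
import re

def valid_date_format(s: str) -> bool:
    """Format or picture check: dates must look like DD/MM/YYYY."""
    return re.fullmatch(r"\d{2}/\d{2}/\d{4}", s) is not None

def valid_birth_month(month: int) -> bool:
    """Range check: the month of a date of birth must lie between 1 and 12."""
    return 1 <= month <= 12

def isbn13_ok(isbn: str) -> bool:
    """Check digit: the last ISBN-13 digit is calculated modulus 10
    from the first twelve digits, weighted alternately 1 and 3."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    weighted = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    return (10 - weighted % 10) % 10 == digits[12]

print(valid_date_format("31/12/2011"))   # True
print(valid_birth_month(13))             # False: fails the range check
print(isbn13_ok("978-0-306-40615-7"))    # True for this sample ISBN
```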
External links
• Data validation in Microsoft Excel [6]
• Flat File Checker - data validation tool [7]
• Unix shell script based input validation function [8]
References
[1] Data Validation, Data Integrity, Designing Distributed Applications with Visual Studio .NET (http://msdn.microsoft.com/en-us/library/aa291820(VS.71).aspx)
[2] Arkady Maydanchik (2007), "Data Quality Assessment", Technics Publications, LLC
[3] International ISBN Agency Frequently Asked Questions: What is the format of an ISBN? (http://www.isbn-international.org/faqs/view/5#q_5)
[4] Oracle Foreign Keys (http://www.techonthenet.com/oracle/foreign_keys/foreign_keys.php)
[5] Referential Integrity, Designing Distributed Applications with Visual Studio .NET (http://msdn.microsoft.com/en-us/library/aa292166(VS.71).aspx)
[6] http://www.contextures.com/xlDataVal01.html
[7] http://www.flat-file.net
[8] http://blog.anantshri.info/2009/06/08/input_validation_shell_script/
Digital self-defense
Digital self-defense is the use of self-defense strategies by Internet users to ensure digital security; that is to say, the protection of confidential personal electronic information.[1] Internet security software provides initial protection by setting up a firewall, as well as scanning computers for malware, viruses, Trojan horses, worms and spyware. However, the information most at risk includes personal details such as birthdates, phone numbers, bank account and schooling details, sexuality, religious affiliations, email addresses and passwords. This information is often openly revealed on social networking sites, leaving Internet users vulnerable to social engineering and possibly Internet crime. Mobile devices, especially those with Wi-Fi, allow this information to be shared inadvertently.[2]
Digital self-defense requires Internet users to take an active part in guarding their own personal information. Four
key strategies are frequently suggested to assist that protection.
Computer security
Computer security in this context refers to Internet security software. The ongoing security of private information requires frequent updating of virus and spyware definitions so that ongoing developments in malicious software can't interfere with, or copy, private information.[3]
Email Accounts and Usernames
Choice of Appropriate Email Account
The practice of utilising more than one email account to separate personal and business usage from recreational
usage is a strategy commonly used to manage personal privacy. The free and ready availability of email accounts
from sites such as Yahoo, Google or Hotmail allows the protection of personal identity through the use of different
names to identify each email account. These throw-away accounts can be discarded or replaced at will, providing
another level of protection.
Choice of Username
A username is required to set up email accounts and to open accounts for various official, commercial, recreational
and social networking sites. In many cases an email address may also be utilised as a username. Usernames that
correlate with personal information such as names or nicknames are more at risk than ones that are cryptic or
anonymous, particularly on social and recreational sites.
Password Strength
A password is a mandatory security measure that accompanies usernames. The use of personal data to construct passwords, i.e., family members' names, pets' names or birth dates, increases the risk to confidential information: such passwords are easier to crack than long, complicated ones, so password strength is a key strategy for protecting personal information. A password can be weak or strong: a weak password is cutekittens; a strong password is ?lACpAs56IKMs. According to Microsoft, an ideal password should be at least 14 characters in length and have letters, punctuation, symbols, and numbers, where complexity is added by the inclusion of uppercase letters.[4]
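A minimal sketch of such a strength test, loosely following the Microsoft guidance cited above, might look as follows in Python; the rule set is illustrative, not an official algorithm.

import string

def is_strong(password):
    # At least 14 characters, with letters, uppercase, digits and symbols.
    return (len(password) >= 14
            and any(c.isalpha() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("cutekittens"))       # False: short, one character class
print(is_strong("?lACpAs56IKMs"))     # False here: good mix but only 13 characters
print(is_strong("?lACpAs56IKMs+Z"))   # True: 15 characters, all classes present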
Managing Personal Information Using Privacy Options
Social networking sites pose greater security risks to personal electronic information because sensitive, private or confidential information such as personal identifiers is routinely used to create public profiles.[5] Many websites give options to suppress the amount of personal information revealed through the customisation of privacy settings. However, privacy settings can reset if changes to the website occur.[6]
References
[1] "Components of Security", http://nms.csail.mit.edu/~snoeren/stp307/ppt/sld002.htm
[2] "Protect yourself in the online, social network community", Creston News Advertiser, 11 Feb 2011. http://www.crestonnewsadvertiser.com/articles/ara/2011/02/11/8044960708/index.xml
[3] "Secure your computer", © Commonwealth of Australia 2010 and © Stay Smart Online. http://www.staysmartonline.gov.au/home_internet_users/secure_your_computer
[4] "Create Strong Passwords", Microsoft Safety and Security Center. http://www.microsoft.com/security/online-privacy/passwords-create.aspx
[5] "Safer Social Networking", © Commonwealth of Australia 2010. http://www.cybersmart.gov.au/Parents/Brochures%20and%20posters%20and%20contacts/Cybersmart%20contacts.aspx#Information
[6] "Protect yourself in the online, social network community", Creston News Advertiser, 11 Feb 2011. http://www.crestonnewsadvertiser.com/articles/ara/2011/02/11/8044960708/index.xml
Dolev-Yao model
The Dolev-Yao model is a formal model used to prove properties of interactive protocols.
The network
The network is represented by a set of abstract machines that can exchange messages. These messages consist of
formal terms.
The adversary
The adversary in this model can overhear, intercept, and synthesise any message and is only limited by the constraints of the cryptographic methods used. In other words: "the attacker carries the message." This omnipotence has been very difficult to model, and many threat models simplify it, as, for example, the attacker in ubiquitous computing.
The algebraic model
Cryptographic primitives are modeled by abstract operators. Asymmetric encryption for a user x, for example, is represented by the encryption function E_x and the decryption function D_x. Their main properties are that their composition is the identity function (D_x(E_x(M)) = M) and that an encrypted message E_x(M) reveals nothing about M. Unlike in the real world, the adversary can neither manipulate the encryption's bit representation nor guess the key.
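The algebra can be sketched with symbolic terms, where decryption cancels encryption only under the matching key. The toy Python model below is purely illustrative and is not a protocol verification tool.

def E(key, message):
    # E_x(M): encryption is an opaque formal term, not a bit string.
    return ("enc", key, message)

def D(key, term):
    # D_x(E_x(M)) = M; under any other key the term stays opaque.
    if isinstance(term, tuple) and term[0] == "enc" and term[1] == key:
        return term[2]
    return ("dec", key, term)

M = "M"
assert D("x", E("x", M)) == M   # composition is the identity function
print(D("y", E("x", M)))        # wrong key: reveals nothing about M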
References
• Dolev, D.; Yao, A. C. (1983), "On the security of public key protocols"[1], IEEE Transactions on Information Theory, IT-29: 198–208
• Backes, Michael; Pfitzmann, Birgit; Waidner, Michael (2006), "Soundness Limits of Dolev-Yao Models"[2], Workshop on Formal and Computational Cryptography (FCC'06), affiliated with ICALP'06
• "Secure Transaction Protocol Analysis: Models and Applications"[3], Lecture Notes in Computer Science / Programming and Software Engineering, 2008
References
[1] http://www.cs.huji.ac.il/~dolev/pubs/dolev-yao-ieee-01056650.pdf
[2] http://www.infsec.cs.uni-saarland.de/~backes/papers/backes06soundness.html
[3] http://books.google.com/books?id=IMIuV_tUYfMC&printsec=frontcover&dq=secure+transaction+protocol+analysis+models+applications&source=bl&ots=7iqJoLjEmJ&sig=8SSMmTl8djd4St90QW6zlYPzcDA&hl=en&ei=OPMNTavHIIrmvQP1qLXLDQ&sa=X&oi=book_result&ct=result&resnum=3&ved=0CC8Q6AEwAg#v=onepage&q&f=false
DREAD: Risk assessment model
DREAD is part of a system for classifying computer security threats used at Microsoft. It provides a mnemonic for
risk rating security threats using five categories.
The categories are:
• Damage - how bad would an attack be?
• Reproducibility - how easy is it to reproduce the attack?
• Exploitability - how much work is it to launch the attack?
• Affected users - how many people will be impacted?
• Discoverability - how easy is it to discover the threat?
The DREAD name comes from the initials of the five categories listed. It was initially proposed for threat modeling,
but is now used more broadly.
When a given threat is assessed using DREAD, each category is given a rating. For example, 3 for high, 2 for
medium, 1 for low and 0 for none. The sum of all ratings for a given exploit can be used to prioritize among different
exploits.
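A rating scheme of this kind reduces to a simple sum, as in the Python sketch below; the example ratings are invented.

CATEGORIES = ("damage", "reproducibility", "exploitability",
              "affected_users", "discoverability")

def dread_score(ratings):
    # Each category is rated 0 (none) to 3 (high); the sum ranks exploits.
    assert set(ratings) == set(CATEGORIES)
    assert all(0 <= value <= 3 for value in ratings.values())
    return sum(ratings.values())

threat = {"damage": 3, "reproducibility": 2, "exploitability": 1,
          "affected_users": 3, "discoverability": 2}
print(dread_score(threat))  # 11 out of a possible 15; higher totals come first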
External links
• Improving Web Application Security: Threats and Countermeasures[1]
• DREADful, an MSDN blog post[2]
References
[1] http://msdn.microsoft.com/en-us/library/aa302419.aspx#c03618429_011
[2] http://blogs.msdn.com/david_leblanc/archive/2007/08/13/dreadful.aspx
Dynamic SSL
Dynamic SSL is an endpoint security technology developed by Daniel McCann and Nima Sharifimehr of NetSecure
Technologies Ltd. Dynamic SSL was created to solve the endpoint security problem in public networks by
transparently correcting the implementation flaws in SSL systems that expose sensitive data to interception and
tampering. Dynamic SSL is sometimes referred to as Dynamic TLS.
Endpoint vulnerabilities in SSL/TLS systems
Most implementations of SSL assume that the client computer is a secure environment for key negotiation, key storage, and encryption. This is untrue in principle and in practice, as malicious technologies such as Spyware, KeyJacking, and Man in the Browser have proven to be able to circumvent SSL by obtaining sensitive data prior to encryption.[1][2] Furthermore, the reliance on the host PC for PKI certificate validation renders the infrastructure vulnerable to man-in-the-middle attacks.[3]
Challenges for public networks
Traditional solutions to endpoint security rely on custom protocols or proprietary authentication architectures that are not interoperable with SSL. In many circumstances, particularly in anonymous or distributed environments where interoperability with SSL is a requirement, synchronization of client and server systems with a proprietary security protocol is simply not feasible. This is known as the Endpoint Security Problem in public networks.
In layman's terms, the Endpoint Security Problem essentially asserts that any anonymous transaction through a web browser must inherently be at risk, and that it cannot be fixed without removing the anonymity of the transaction. Since virtually all web transactions assume that the client is anonymous at the protocol level,[4] this means that any proposed third-party solution will essentially break the system.[5] This is a fundamental vulnerability that undermines the entire web security infrastructure, rendering virtually every web transaction at risk.
Dynamic SSL solves this fundamental problem by focusing on transparently closing the implementation
vulnerabilities rather than on redefining a new protocol. Therefore, the existing SSL system remains intact as the
default secure communication protocol. However, implementation vulnerabilities are solved to achieve endpoint
security.
Principles of Dynamic SSL
The underlying principle in Dynamic SSL is that encryption of sensitive information cannot be performed in an
untrusted environment, such as most personal computers, where the security of the encryption process could be
compromised. Rather, encryption of sensitive information must be done outside of the personal computer. Dynamic
SSL assumes that the end user's computer is an untrusted environment, and can only be used as the channel to
transmit sensitive information. Dynamic SSL thus guarantees the security of sensitive data by ensuring that it is
never present in the insecure environment.
Implementations
Typical implementations involve two core components: a secure environment which hosts the sensitive data to be
protected, and a cryptographic proxy, which securely redirects the encryption of the host process of the insecure
environment to a cryptographic provider within the secure environment which hosts the sensitive data. An optional
third component which controls the tokenization process can be inserted at the application layer. Nothing is required
at the server end.
Generally speaking, the only change which is required on the endpoint computer is the replacement of the default
SSL cryptographic provider with a Dynamic SSL cryptographic proxy. Most operating systems and web browsers
support pluggable cryptographic providers, meaning that most implementations will require no changes whatsoever
to the application on either end of the system.
Dynamic SSL works by using a tokenization system for sensitive data in conjunction with secure redirection of
cryptographic operations to a secure environment. For example, rather than typing in your online banking password
into a web browser, you would type in a meaningless token instead. When the form is submitted, an HTTP request
containing the token is generated, and sent to the SSL cryptographic provider for encryption and secure transmission
to the remote server. Instead of the encryption happening on the host PC, which may be compromised, the session is
redirected to the secure environment (secure input device, server, etc.) which would contain your real online
banking password. Inside the secure environment, the token within the HTTP request would get swapped out with
your real online banking password at the moment of encryption. The encrypted packet, containing your real online
banking password, would then be returned to the SSL system of the host PC for transmission to the remote server.
From the server's perspective, the request appears like any other regular keyed-in request. The packet was encrypted with the SSL session key negotiated with the client, so the server is able to decrypt and process the packet as normal. It cannot tell the difference between a regular SSL transaction and a Dynamic SSL transaction.
The only difference in the transaction was that the sensitive data was never present on the host PC. Any malicious
attempt to harvest the data from the host PC would be unable to locate the data.
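The token-substitution step can be sketched conceptually as below. This is an illustrative Python toy, not NetSecure's implementation: the vault contents, token format and the stand-in cipher are all invented, and a real system would use the negotiated SSL/TLS session cipher.

SECURE_VAULT = {"TOKEN-7f3a": "real-banking-password"}  # lives off the host PC

def host_builds_request(token):
    # Untrusted host: only the meaningless token is ever present here.
    return "POST /login password=%s" % token

def xor_encrypt(text, key):
    # Placeholder cipher for illustration only.
    return bytes(b ^ key for b in text.encode())

def secure_environment_encrypt(request, session_key):
    # Inside the secure environment: swap the token for the real secret
    # at the moment of encryption, then encrypt for the remote server.
    for token, secret in SECURE_VAULT.items():
        request = request.replace(token, secret)
    return xor_encrypt(request, session_key)

packet = secure_environment_encrypt(host_builds_request("TOKEN-7f3a"), 0x42)
print(packet)  # ciphertext handed back to the host for transmission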
A whitepaper[6] is available which describes the process of Dynamic SSL in further detail.
Strengths and weaknesses
Dynamic SSL is the only known approach to endpoint security that requires no changes to existing server systems,
and can therefore be used to transparently retrofit existing systems for endpoint security, while retaining the benefits
of using a proven standard like SSL. By offloading cryptographic operations to a secure environment which acts as
the point of origin for sensitive data, thereby ensuring the endpoint computer does not have access to said sensitive
data, proactive protection from endpoint threats can in theory be achieved.
However, since Dynamic SSL is simply a process that is applied to SSL implementations, rather than a new protocol, it remains vulnerable to protocol vulnerabilities inherent within SSL, namely man-in-the-middle attacks.[7] Sharifimehr has proposed a supplementary solution involving man-in-the-middle protection for Dynamic SSL.[8] His algorithm uses a combination of redundant certificate verification and key tagging to prevent man-in-the-middle attacks and keyjacking. Most known implementations of Dynamic SSL include Sharifimehr's additional process, described below:
Man-in-the-Middle protection
Known valid root certificates are digitally signed by an independent third party. When an X.509 certificate containing the authentication information for a remote website arrives as part of the SSL authentication phase, the root certificate signatures are redundantly verified in a secure environment against a pre-verified list of valid digital signatures for known valid root certificates; this prevents any compromise via tampering with the certificate authentication chain. Phony certificates or certification chains can therefore be detected and the session rejected before it begins.
Keyjacking protection
Session and authentication keys are contextually bound to the operations which they are semantically required to perform, and may not be exported. An encryption key, for example, may not be exported and then used to decrypt the ciphertext it produced. In layman's terms, keys are "tagged" to ensure that they can never be exported for use outside of their intended purpose.
Commercial applications
A consumer product called SmartSwipe[9] is the first known commercial application of Dynamic SSL. It claims to provide security against malware and other client-side attacks while providing universal support for virtually every eCommerce merchant that uses SSL.[10] It is currently unknown whether other products are using this technology.
References
[1] Philip Guhring, "Concepts against Man in the Browser Attack", 2006. http://www2.futureware.at/svn/sourcerer/CAcert/SecureClient.pdf
[2] John Marchesini, S. W. Smith, Meiyuan Zhao, "Keyjacking: risks of the current client-side infrastructure," Proceedings of the 2nd Annual PKI Research Workshop, 2003. pp. 80-95
[3] Serpanos D. N., Lipton R. J., "Defense Against Man-in-the-Middle Attack in Client-Server Systems with Secure Servers", 2003
[4] http://www.ietf.org/rfc/rfc2246.txt
[5] Note: SSL client authentication does not solve this problem. Since all clients in a web session are anonymous until authenticated, the act of authentication is therefore inherently a risky transaction which is subject to endpoint vulnerabilities.
[6] http://208.68.104.126/smartswipe/dynamic-ssl/dynamic-ssl-white-paper.html
[7] http://www.opera.com/support/kb/view/944/
[8] http://www.smartswipe.ca/images/stories/site/dynamic-ssl-white-paper.pdf
[9] http://www.smartswipe.ca
[10] http://www.smartswipe.ca/benefits
External links
• www.dynamic-ssl.com (http://www.dynamic-ssl.com/)
• Dynamic SSL White Paper (http://www.smartswipe.ca/images/stories/site/dynamic-ssl-white-paper.pdf)
Economics of security
The economics of information security addresses the economic aspects of privacy and computer security.
Economics of information security includes models of the strictly rational “homo economicus” as well as behavioral
economics. Economics of security addresses individual and organizational decisions and behaviors with respect to
security and privacy as market decisions.
Economics of security addresses a core question: why do agents choose technical risks when there exist technical solutions to mitigate security and privacy risks? Economics not only addresses this question, but also informs design decisions in security engineering.
Emergence of economics of security
National security is the canonical public good. The economic status of information security came to the intellectual
fore around 2000. As is the case with innovations, it arose simultaneously in multiple venues.
In 2000, Ross Anderson wrote Why Computer Security is Hard.[1] Anderson explained that a significant difficulty
in optimal development of security technology is that incentives must be aligned with the technology to enable
rational adoption. Thus, economic insights should be integrated into technical design. A security technology should
enable the party at risk to invest to limit that risk. Otherwise, the designers are simply counting on altruism for
adoption and diffusion. Many consider this publication the birth of economics of security.
Also in 2000 at Harvard, Camp at the School of Government and Wolfram in the Department of Economics argued that security is not a public good but rather that each extant vulnerability has an associated negative externality value. Vulnerabilities were defined in this work as tradable goods. Six years later, iDEFENSE,[2] ZDI[3] and Mozilla[4] have extant markets for vulnerabilities. Vulnerabilities are also known as computer security exploits.
In 2000, the scientists at the Computer Emergency Response Team at Carnegie Mellon University proposed an early mechanism for risk assessment. The Hierarchical Holographic Model provided the first multi-faceted evaluation tool to guide security investments using the science of risk. Since that time, CERT has developed a suite of systematic mechanisms for organizations to use in risk evaluations, depending on the size and expertise of the organization: OCTAVE.[5] The study of computer security as an investment in risk avoidance has become standard practice.
In 2001, in an unrelated development, Larry Gordon and Marty Loeb published A framework on using information security as a response to competitor analysis systems.[6] These professors at Maryland's Smith School of Business examined the strategic use of security information from a classical business perspective.
The authors came together to develop and expand a series of flagship events under the name Workshop on the Economics of Information Security.
Examples of findings in economics of security
Proof of work is a security technology designed to stop spam by altering its economics. An early paper in economics of information security argued that proof of work cannot work. In fact, the finding was that proof of work cannot work without price discrimination, as illustrated by a later paper, Proof of Work can Work.[7]
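A minimal hashcash-style sketch of proof of work in Python follows: the sender must find a nonce whose hash carries a run of leading zeros, making bulk sending costly while verification stays cheap. The difficulty parameter and message format are illustrative.

import hashlib
from itertools import count

def mint(message, zeros=4):
    # Search for a nonce whose SHA-256 digest starts with `zeros` hex zeros.
    target = "0" * zeros
    for nonce in count():
        digest = hashlib.sha256(("%s:%d" % (message, nonce)).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(message, nonce, zeros=4):
    digest = hashlib.sha256(("%s:%d" % (message, nonce)).encode()).hexdigest()
    return digest.startswith("0" * zeros)

stamp = mint("to:alice@example.com")
print(stamp, verify("to:alice@example.com", stamp))  # costly to mint, cheap to check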
Another finding, one that is critical to an understanding of current American data practices, is that the opposite of privacy is not, in economic terms, anonymity, but rather price discrimination. Privacy and price discrimination[8] was authored by Andrew Odlyzko and illustrates that what may appear as information pathology in the collection of data is in fact rational organizational behavior.
Hal Varian presented three models of security using the metaphor of the height of walls around a town to show security as a normal good, public good, or good with externalities. Free riding[9] is the end result, in any case.
References
[1] http://www.acsac.org/2001/papers/110.pdf
[2] http://idefense.com/
[3] http://zerodayinitiative.com/
[4] http://www.mozilla.org/security/bug-bounty.html
[5] http://www.cert.org/octave
[6] http://old-www.rhsmith.umd.edu/accounting/mloeb
[7] http://weis2006.econinfosec.org/docs/50.pdf
[8] http://citeseer.ist.psu.edu/odlyzko03privacy.html
[9] http://www.sims.berkeley.edu/resources/affiliates/workshops/econsecurity/econws/49.pdf
External links
• Economics of Information Security (http://infosecon.net/) links to all the past workshops, with the corresponding papers, as well as current conferences and calls for papers.
Centers that study economics of security
• Carnegie Mellon University Heinz School (http://www.heinz.cmu.edu/)
• Carnegie Mellon University Privacy Lab (http://privacy.cs.cmu.edu/)
• Cambridge University Computer Science Laboratory (http://www.cl.cam.ac.uk/research/security/)
• Indiana University School of Informatics (http://informatics.indiana.edu/)
• University of Minnesota (http://www.dtc.umn.edu/)
• University of Michigan School of Information (http://www.si.umich.edu/)
• Harvard University Division of Engineering and Applied Sciences (http://www.eecs.harvard.edu/index/cs/cs_index.php)
• Dartmouth hosts the I3P (http://www.thei3p.org/), which includes the Tuck School as well as the Computer Science Department in studying economics of information security.
Resources in economics of security
• Ross Anderson maintains the Economics of Information Security (http://www.cl.cam.ac.uk/~rja14/econsec.html) page.
• Alessandro Acquisti (http://www.heinz.cmu.edu/~acquisti) has the corresponding Economics of Privacy Resources (http://www.heinz.cmu.edu/~acquisti/economics-privacy.html) page.
• Economics of Information Security (http://infosecon.net/) provides events, books, past workshops, and an annotated bibliography.
• Return on Information Security Investment (http://www.adrianmizzi.com/) provides a self-assessment questionnaire, papers and links to information security economics resources.
Enterprise information security architecture
Enterprise information security architecture (EISA) is a part of enterprise architecture focusing on information
security throughout the enterprise.
Overview
Enterprise information security architecture (EISA) is the practice of applying a comprehensive and rigorous method
for describing a current and/or future structure and behavior for an organization's security processes, information
security systems, personnel and organizational sub-units, so that they align with the organization's core goals and
strategic direction. Although often associated strictly with information security technology, it relates more broadly to
the security practice of business optimization in that it addresses business security architecture, performance
management and security process architecture as well.
Enterprise information security architecture is becoming a common practice within the financial institutions around
the globe. The primary purpose of creating an enterprise information security architecture is to ensure that business
strategy and IT security are aligned. As such, enterprise information security architecture allows traceability from the
business strategy down to the underlying technology.
Enterprise information security architecture topics
Positioning
Enterprise information security architecture was first formally positioned by Gartner in their whitepaper called "Incorporating Security into the Enterprise Architecture Process".[1] This was published on 24 January 2006. Since this publication, security architecture has moved from being a silo-based architecture to an enterprise-focused solution that incorporates business, information and technology. The picture below represents a one-dimensional view of enterprise architecture as a service-oriented architecture. It also reflects the new addition to the enterprise architecture family called "Security". Business architecture, information architecture and technology architecture used to be called BIT for short. Now, with security as part of the architecture family, it has become BITS.
Security architectural change imperatives now include things like:
• Business roadmaps
• Legislative and legal requirements
• Technology roadmaps
• Best practices
• Industry trends
• Visionaries
Goals
• Provide structure, coherence and cohesiveness.
• Must enable business-to-security alignment.
• Defined top-down beginning with business strategy.
• Ensure that all models and implementations can be traced back to the business strategy, specific business
requirements and key principles.
• Provide abstraction so that complicating factors, such as geography and technology religion, can be removed and
reinstated at different levels of detail only when required.
• Establish a common "language" for information security within the organization
Methodology
The practice of enterprise information security architecture involves developing an architecture security framework
to describe a series of "current", "intermediate" and "target" reference architectures and applying them to align
programs of change. These frameworks detail the organizations, roles, entities and relationships that exist or should
exist to perform a set of business processes. This framework will provide a rigorous taxonomy and ontology that
clearly identifies what processes a business performs and detailed information about how those processes are
executed and secured. The end product is a set of artifacts that describe in varying degrees of detail exactly what and
how a business operates and what security controls are required. These artifacts are often graphical.
Given these descriptions, whose levels of detail will vary according to affordability and other practical
considerations, decision makers are provided the means to make informed decisions about where to invest resources,
where to realign organizational goals and processes, and what policies and procedures will support core missions or
business functions.
A strong enterprise information security architecture process helps to answer basic questions like:
• Is the current architecture supporting and adding value to the security of the organization?
• How might a security architecture be modified so that it adds more value to the organization?
• Based on what we know about what the organization wants to accomplish in the future, will the current security
architecture support or hinder that?
Implementing enterprise information security architecture generally starts with documenting the organization's
strategy and other necessary details such as where and how it operates. The process then cascades down to
documenting discrete core competencies, business processes, and how the organization interacts with itself and with
external parties such as customers, suppliers, and government entities.
Having documented the organization's strategy and structure, the architecture process then flows down into the
discrete information technology components such as:
• Organization charts, activities, and process flows of how the IT Organization operates
• Organization cycles, periods and timing
• Suppliers of technology hardware, software, and services
• Applications and software inventories and diagrams
• Interfaces between applications - that is: events, messages and data flows
• Intranet, Extranet, Internet, eCommerce, EDI links with parties within and outside of the organization
• Data classifications, Databases and supporting data models
• Hardware, platforms, hosting: servers, network components and security devices and where they are kept
• Local and wide area networks, Internet connectivity diagrams
Wherever possible, all of the above should be related explicitly to the organization's strategy, goals, and operations.
The enterprise information security architecture will document the current state of the technical security components
listed above, as well as an ideal-world desired future state (Reference Architecture) and finally a "Target" future state
which is the result of engineering tradeoffs and compromises vs. the ideal. Essentially the result is a nested and
interrelated set of models, usually managed and maintained with specialised software available on the market.
Such exhaustive mapping of IT dependencies has notable overlaps with both metadata in the general IT sense, and
with the ITIL concept of the Configuration Management Database. Maintaining the accuracy of such data can be a
significant challenge.
Along with the models and diagrams goes a set of best practices aimed at securing adaptability, scalability,
manageability etc. These systems engineering best practices are not unique to enterprise information security
architecture but are essential to its success nonetheless. They involve such things as componentization, asynchronous
communication between major components, standardization of key identifiers and so on.
Successful application of enterprise information security architecture requires appropriate positioning in the
organization. The analogy of city-planning is often invoked in this connection, and is instructive.
An intermediate outcome of an architecture process is a comprehensive inventory of business security strategy,
business security processes, organizational charts, technical security inventories, system and interface diagrams, and
network topologies, and the explicit relationships between them. The inventories and diagrams are merely tools that
support decision making. But this is not sufficient. It must be a living process.
The organization must design and implement a process that ensures continual movement from the current state to the future state. The future state will generally be a combination of one or more of the following:
• Closing gaps that are present between the current organization strategy and the ability of the IT security
dimensions to support it
• Closing gaps that are present between the desired future organization strategy and the ability of the security
dimensions to support it
• Necessary upgrades and replacements that must be made to the IT security architecture based on supplier
viability, age and performance of hardware and software, capacity issues, known or anticipated regulatory
requirements, and other issues not driven explicitly by the organization's functional management.
• On a regular basis, the current state and future state are redefined to account for evolution of the architecture,
changes in organizational strategy, and purely external factors such as changes in technology and
customer/vendor/government requirements.
High-level security architecture framework
Enterprise information security architecture frameworks are only a subset of enterprise architecture frameworks. If we had to simplify the conceptual abstraction of enterprise information security architecture within a generic framework, the picture on the right would be acceptable as a high-level conceptual security architecture framework.
Other open enterprise architecture frameworks are:
• The U.S. Department of Defense (DoD) Architecture Framework (DoDAF)
• Extended Enterprise Architecture Framework (E2AF) from the Institute For Enterprise Architecture Developments.[2]
• Federal Enterprise Architecture of the United States Government (FEA)
• Capgemini's Integrated Architecture Framework[3]
• The UK Ministry of Defence (MOD) Architecture Framework (MODAF)
• NIH Enterprise Architecture Framework[4]
• Open Security Architecture[5]
• Information Assurance Enterprise Architectural Framework (IAEAF)
• SABSA framework and methodology
• Service-Oriented Modeling Framework (SOMF)
• The Open Group Architecture Framework (TOGAF)
• Zachman Framework
Relationship to other IT disciplines
Enterprise information security architecture is a key component of the information security technology governance
process at any organization of significant size. More and more companies are implementing a formal enterprise
security architecture process to support the governance and management of IT.
However, as noted in the opening paragraph of this article it ideally relates more broadly to the practice of business
optimization in that it addresses business security architecture, performance management and process security
architecture as well. Enterprise Information Security Architecture is also related to IT security portfolio management
and metadata in the enterprise IT sense.
References
[1] http://www.gartner.com/DisplayDocument?ref=g_search&id=488575
[2] Extended Enterprise Architecture Framework (http://www.enterprise-architecture.info).
[3] Capgemini's Integrated Architecture Framework (http://www.capgemini.com/services/soa/ent_architecture/iaf/)
[4] NIH Enterprise Architecture Framework (http://enterprisearchitecture.nih.gov/About/Approach/Framework.htm)
[5] Open Security Architecture (http://opensecurityarchitecture.org)
Further reading
• Carbone, J. A. (2004). IT architecture toolkit. Enterprise computing series. Upper Saddle River, NJ, Prentice Hall
PTR.
• Cook, M. A. (1996). Building enterprise information architectures : reengineering information systems.
Hewlett-Packard professional books. Upper Saddle River, NJ, Prentice Hall.
• Fowler, M. (2003). Patterns of enterprise application architecture. The Addison-Wesley signature series. Boston,
Addison-Wesley.
• TOGAF Guide to Security Architecture (http://www.opengroup.org/pubs/catalog/w055.htm)
• Groot, R., M. Smits and H. Kuipers (2005). "A Method to Redesign the IS Portfolios in Large Organisations (http://doi.ieeecomputersociety.org/10.1109/HICSS.2005.25)", Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05). Track 8, p. 223a. IEEE.
• Steven Spewak and S. C. Hill (1993). Enterprise architecture planning : developing a blueprint for data,
applications, and technology. Boston, QED Pub. Group.
External links
• Open Security Architecture - Controls and patterns to secure IT systems (http://www.opensecurityarchitecture.org)
Entrust
Entrust, Inc.
Industry: Online information and identity protection
Founded: 1994
Headquarters: Dallas, TX, United States
Number of locations: 15
Key people: Bill (F. William) Conner, President and CEO
Services: Public key infrastructure, Secure Socket Layer certificates, multifactor authentication, fraud detection, digital certificates and mobile authentication
Employees: 350
Divisions: CygnaCom Solutions
Website: www.entrust.com[1]
Entrust Inc. is a $100 million privately-owned software company with 350 employees. It provides identity-based security software and services in the areas of public key infrastructure (PKI), multifactor authentication, Secure Socket Layer certificates, fraud detection, digital certificates and mobile authentication.[2] Headquartered in the Dallas-Fort Worth Metroplex, the company's largest office is near Ottawa, Ontario, Canada. It also has offices in London, Tokyo, Washington, D.C. and other cities internationally.[3]
Bill (F. William) Conner, President and CEO of
Entrust, speaking on global cybersecurity before
the INTERPOL 79th General Assembly in Doha,
Qatar, November 2010.
Entrust reports having customers at public and private organizations in 60 countries, with 125 patents either granted or pending in the areas of authentication, physical/logical access, certificates, e-content delivery and citizen identities.[4]
Entrust lists customers including the U.S. Departments of Energy,
Homeland Security, State, Treasury and Labor; Citibank;
Expedia.com; the FBI; Credit Suisse; SWIFT; the Government of the
United Kingdom; Kingdom of Saudi Arabia Government; the United
Arab Emirates; the Danish National Police; the Royal Bank of
Scotland; NASA; the Federal Reserve Bank; Chase Manhattan Bank;
the State of Illinois; Hotwire.com; the Quebec Ministry of Justice; and
other government entities and business enterprises.[5]
Previously a publicly-traded company, in July 2009 Entrust was
acquired by Thoma Bravo, a U.S.-based private equity firm, for $124
million.[6]
Current President and CEO Bill (F. William) Conner speaks regularly
on global and national cybersecurity and infrastructure issues.
• In May 2011, he gave opening remarks for a luncheon at Cards
Middle East 2011 in Abu Dhabi.[7]
• In November 2010, he was invited to address global security and law enforcement officers at the 79th General Assembly of INTERPOL in Doha, Qatar.[8]
• In June 2010, he addressed the United Nations on global challenges in cybercrime.[9]
History
Gartner, an information technology research and advisory firm headquartered in Stamford, Conn., listed Entrust as a "leader" in its 2009 Magic Quadrant for Web Fraud Detection,[10] released in February 2009. Based on organizations' fraud detection and multifactor authentication solutions, Gartner's Magic Quadrant for Web Fraud Detection placed Entrust in the leadership position for its capabilities in several unbiased categories.
In September 2008, Entrust participated in the ePassports EAC Conformity & Interoperability Tests in Prague,
Czech Republic.[11]
Facilitated by a consortium of the European Commission, Brussels Interoperability Group (BIG)
and the European Commission Joint Research Centre, the Prague tests allowed European countries to verify
conformance of their second-generation ePassports containing fingerprint biometric data protected by Extended
Access Control functions, commonly referred to as EAC. Additional testing included verification of crossover
interoperability between EAC inspection systems and ePassports from different countries.
Prior to becoming a privately-held company, Entrust was included on the Russell 3000 Index in July 2008.[12] In July 2007, Entrust contributed public key infrastructure (PKI) technology to the open-source community through Sun Microsystems, Inc. and the Mozilla Foundation. Specifically, Entrust supplied certificate revocation list distribution points (CRL-DP), Patent 5,699,431, to Sun under a royalty-free license for incorporation of that capability into the Mozilla open-source libraries.[13]
In July 2006, Entrust acquired Business Signatures Corporation,[14] a leading supplier of non-invasive fraud
detection solutions, for $50 million (USD). From a GAAP accounting perspective, the total purchase price was
approximately $55.0 million, including assumed stock options, transaction expenses and net asset value. Giving
Entrust a West coast presence in Redwood City, Calif., Business Signatures was founded in 2001 by former
executives from Oracle, HP and Cisco. It originally was funded by the Texas Pacific Group, Walden International,
Ram Shriram of Google and Dave Roux of Silver Lake Partners. The company had just more than 40 employees
before the acquisition.
Entrust acquired Orion Security Solutions, a supplier of public key infrastructure services, in June 2006.[15]
In mid-2004, Entrust acquired AmikaNow! Corporation's advanced content scanning, analysis and compliance
technology.[16]
Using highly sophisticated content analysis tools, the technology is designed to automatically analyze
and categorize email message and document content based on the contextual meaning, not simply pre-defined word
lists. Policies can be customized to suit the corporate environment and be automatically enforced at the boundary to
help customers reduce business risk and help in their compliance with privacy and securities laws including HIPAA,
Gramm-Leach-Bliley Act, Personal Information Protection and Electronic Documents Act and various U.S.
Securities and Exchange Commission (SEC) regulations.
In April 2002, Entrust’s PKI technology served as the foundation for the prototype of what is now the United States
Federal Bridge Certification Authority (FBCA). The Federal Bridge certificate authority is a fundamental element of
the trust infrastructure that provides the basis for intergovernmental and cross-governmental secure communications.
Acting as a trust conduit, the FBCA extends the benefits that agencies and government organizations achieve
through the use of Public Key technology to a broader set of applications and transactions. Entrust's PKI serves as a
core element of the Federal Bridge and demonstrates interoperability with all major FBCA vendors.
Through its acquisition of enCommerce in May 2000, Entrust combined authentication and authorization
technologies in a single security infrastructure. In 1994, Entrust built and sold the first commercially available PKI to
make it possible to manage the keys and certificates that enable encryption and digital signatures.
References
[1] http://www.entrust.com
[2] Entrust Profile, Hoovers, Inc. (subscription required) (http://premium.hoovers.com/subscribe/co/factsheet.xhtml?ID=hjtccksschcfjr)
[3] Entrust global office locations, from Entrust.com (http://entrust.com/contact/offices.htm)
[4] Entrust service areas and customer statistics, from Entrust.com (http://entrust.com/corporate/factsheet.htm)
[5] Entrust reported customers and statistics, from Entrust.com (http://www.entrust.com/success/index.htm)
[6] "Thoma Bravo Completes Entrust Acquisition," Joint News Release, July 29, 2009 (http://www.entrust.com/news/index.php?s=43&item=689)
[7] Release about speech and Entrust's presence at Cards Middle East 2011 (http://www.thestreet.com/story/11121518/1/big-presence-at-cards-middle-east-2011--entrust-ceo-bill-conner-shows-strong-support-for-identity-based-security-approach.html)
[8] Speech transcript, Address to Global Security and Law Enforcement Officers, INTERPOL 79th General Assembly, Doha, Qatar, November 2010 (http://www.interpol.int/Public/ICPO/speeches/2010/79thAGEdapsSpeech.pdf)
[9] Address to United Nations on global challenges in cybercrime, June 2010 (http://www.entrust.com/bill-conner-united-nations/index.htm)
[10] "Gartner Names Entrust as 'Leader' in 2009 Web Fraud Detection Magic Quadrant" (http://news.prnewswire.com/DisplayReleaseContent.aspx?ACCT=104&STORY=/www/story/02-09-2009/0004968555&EDATE=). PR Newswire. Entrust. 9 February 2009.
[11] http://www.e-passports2008.org/exhibitors/
[12] "Entrust added to Russell 3000 Index," Dallas Business Journal, July 3, 2008 (http://www.bizjournals.com/dallas/stories/2008/06/30/daily41.html)
[13] "Entrust offers certificate technology to Mozilla," ComputerWorld Canada, July 25, 2007 (http://www.itworldcanada.com/a/Daily-News/e19751df-078f-4ca9-ac98-330260d9ee68.html)
[14] "Entrust acquires Business Signatures for $50M," Computer World, July 20, 2006 (http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=Intellectual_Property_and_DRM&articleId=9001907&taxonomyId=144&intsrc=kc_li_story)
[15] "Entrust buys Orion Security Solutions," Washington Technology, June 15, 2006 (http://www.washingtontechnology.com/online/1_1/28767-1.html)
[16] "Entrust buys up AmikaNow!," Ottawa Business Journal, May 19, 2004 (http://www.canadait.com/cfm/index.cfm?It=106&Id=19998&Se=2&Sv=Company&Lo=2)
External links
• Main website of Entrust (http://www.entrust.com)
• Entrust SSL certificates website (http://www.entrust.net)
• Entrust company profile on LinkedIn (http://www.linkedin.com/company/entrust)
• Profile of President and CEO Bill (F. William) Conner on LinkedIn (http://www.linkedin.com/pub/bill-conner/14/157/243)
• Profile of President and CEO Bill (F. William) Conner on ProfNet speakers website (http://www.profnetconnect.com/bill_conner)
Evasion (network security)
Evasion is a term used to describe techniques for bypassing an information security device in order to deliver an exploit, attack, or other malware to a target network or system without detection. Evasions are typically used to counter network-based intrusion detection and prevention systems (IPS, IDS) but can also be used to bypass firewalls. A further goal of evasions can be to crash a network security device, rendering it ineffective against subsequent targeted attacks.
Evasions can be particularly nasty because a well-planned and implemented evasion can enable full sessions to be
carried forth in packets that evade an IDS. Attacks carried in such sessions will happen right under the nose of the
network and service administrators.
The security systems are rendered ineffective against well-designed evasion techniques, in the same way a stealth
fighter can attack without detection by radar and other defensive systems.
A good analogy for evasions is a system designed to recognize keywords in speech patterns on a phone system, such as "break into system X". A simple evasion would be to use a language other than English that both parties can still understand, ideally a language that as few other people as possible can speak.
Various advanced and targeted evasion attacks have been known since the mid-1990s:
• A seminal text describing attacks against IDS systems appeared in 1997.[1]
• One of the first comprehensive descriptions of such attacks was reported by Ptacek and Newsham in a technical report in 1998.[2]
• Also in 1998, an article in Phrack Magazine described ways to bypass network intrusion detection.[3]
The 1997 article[1] mostly discusses various shell-scripting and character-based tricks to fool an IDS. The Phrack Magazine article[3] and the technical report by Ptacek et al.[2] discuss TCP/IP protocol exploits and evasions, among others. More recent discussions on evasions include the report by Kevin Timm.[4]
The challenge in protecting servers from evasions is to model the end-host operation at the network security device, i.e., the device should be able to know how the target host would interpret the traffic, and whether it would be harmful or not. A key solution in protecting against evasions is traffic normalization at the IDS/IPS device.[5]
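As a small illustration of normalization, the Python sketch below decodes percent-encoding and resolves "../" segments the way a target web server would before matching a signature; the signature and request path are made-up examples, and real normalizers cover far more (TCP reassembly, overlapping segments, and so on).

from urllib.parse import unquote
import posixpath

SIGNATURE = "/etc/passwd"  # hypothetical signature

def normalize(url_path):
    decoded = unquote(unquote(url_path))  # undo (possibly double) %-encoding
    return posixpath.normpath(decoded)    # resolve "./" and "../" segments

def alert(url_path):
    return SIGNATURE in normalize(url_path)

print(alert("/cgi-bin/%2e%2e/%2e%2e/etc/pass%77d"))  # True once normalized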
Lately there has been discussion of putting more effort into research on evasion techniques. A presentation at Hack.lu discussed some potentially new evasion techniques and how to apply multiple evasion techniques to bypass network security devices.[6]
References
[1] 50 Ways to Defeat Your Intrusion Detection System (http://all.net/journal/netsec/1997-12.html)
[2] Ptacek, Newsham: Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection, Technical report, 1998.
[3] Defeating Sniffers and Intrusion Detection Systems (http://www.phrack.com/issues.html?issue=54&id=10)
[4] IDS Evasion Techniques and Tactics (http://www.symantec.com/connect/articles/ids-evasion-techniques-and-tactics)
[5] M. Handley, V. Paxson, C. Kreibich, Network intrusion detection: evasion, traffic normalization, and end-to-end protocol semantics, Usenix Security Symposium, 2001.
[6] Advanced Network Based IPS Evasion Techniques (http://2009.hack.lu/index.php/Workshops#Advanced_Network_Based_IPS_Evasion_Techniques)
Event data
Event data is a synonym for an audit trail. Modern computer software applications and IT infrastructure have adopted the term event data over audit trail. Events are typically recorded in logs, and there is no standard format for event data.
Examples of the use of this newer term to describe audit trails are becoming more common, and the term is cited in the documentation of the Microsoft Event Viewer, which provides visibility into events in the following logs: Application log, Security log, System log, Directory service log, File Replication service log and DNS server log.[1]
Definition
Event data records are created whenever some sort of transaction occurs. Event data records are generated at an extremely granular level by business applications, IT infrastructure, and security systems. Almost any type of record that is created to record a transaction and affixed with a timestamp meets the definition of event data.
The contents of event data records are extremely crude and often meaningless unless correlated with other event data records.
Examples include business applications such as SAP, Oracle, IIS and thousands of others.
Examples of IT infrastructure include servers, internetworking devices manufactured by Cisco and others, telecommunication switches, SANs and message queues between systems.
Examples of security systems range from authentication applications, including LDAP and RACF, to IDS applications and other security systems.
A typical organization will have hundreds of sources of event records.
A single business transaction such as withdrawing cash from an Automated teller machine (ATM) or a customer
placing an order will generate several hundred event data records in dozens of federated log files. It is not
uncommon for organizations to generate terabytes of event data every day.
The retention and ability to quickly inspect event data records has become a necessity for the purposes of detecting
suspicious activity, insider threats and other security breaches.
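Since there is no standard format, what follows is only one illustrative way to emit timestamped event records with Python's standard logging module; the field names, host name and log layout are invented.

import logging

logging.basicConfig(
    filename="atm_events.log",
    format="%(asctime)s host=atm-042 app=%(name)s %(message)s",
    level=logging.INFO,
)
log = logging.getLogger("cash_withdrawal")
log.info("event=pin_verified card=****1234 result=ok")
log.info("event=dispense amount=60 currency=USD result=ok")
# Each call appends one granular, timestamped record; correlating such
# records across the ATM, switch and core-banking logs is what makes
# them meaningful.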
Regulatory compliance implications
Since the passage of the Sarbanes-Oxley Act of 2002 and other regulatory compliance mandates, the requirement for
retention of event data has become mandatory for passing audits.[2]
EU Data Retention Directive implications
New legislation tied to combating terrorism, such as the EU Data Retention Directive, which the European Union says is necessary to help fight terrorism and organized crime, was passed by justice ministers in Brussels in 2006. Internet service providers and fixed-line and mobile operators will now be forced to keep details of their customers' communications for up to two years.
Information including the date, destination and duration of communications will be stored and made available to law
enforcement authorities for between six and 24 months, although the content of such communications will not be
recorded. Service providers will have to bear the costs of the storage themselves.
EU countries will now have until August 2007 to implement the directive, which was initially proposed after the Madrid train bombings in 2004.[3]
References
[1] http://technet2.microsoft.com/WindowsServer/f/?en/library/0cc21369-d815-40ad-8325-97e3762107b91033.mspx
[2] http://www.pcaobus.org/Standards/Standards_and_Related_Rules/Auditing_Standard_No.2.aspx
[3] http://www.ispai.ie/DR%20as%20published%20OJ%2013-04-06.pdf
Federal Desktop Core Configuration
The Federal Desktop Core Configuration is a list of security settings recommended by the National Institute of
Standards and Technology for general-purpose microcomputers that are connected directly to the network of a
United States government agency.
FDCC Major Version 1.1 (as with all previous versions) applies only to Windows XP and Vista desktop and laptop
computers.
History
On 20 March 2007 the Office of Management and Budget issued a memorandum instructing United States government agencies to develop plans for using the Microsoft Windows XP and Vista security configurations.[1][2] The United States Air Force common security configurations for Windows XP were proposed as an early model on which standards could be developed.[2]
The FDCC baseline was developed (and is maintained) by the National Institute of Standards and Technology in collaboration with OMB, DHS, DOI, DISA, NSA, USAF, and Microsoft,[2] with input from public comment.[3] It applies to Windows XP Professional and Vista systems only; these security policies are not tested (and according to NIST, will not work) on Windows 9x/ME/NT/2000 or Windows Server 2003.[3]
Requirements
Organizations required to document FDCC compliance can do so by using SCAP tools.
Released on 20 June 2008, FDCC Major Version 1.0 specifies 674 settings.[3] For example, "all wireless interfaces should be disabled".[4] In recognition that not all recommended settings will be practical for every system, exceptions (such as "authorized enterprise wireless networks") can be made if documented in an FDCC deviation report.[2][4]
Major Version 1.1 (released 31 October 2008) has no new or changed settings, but expands SCAP reporting options.[3] As with all previous versions, the standard is applicable to general-purpose workstations and laptops for end users. Windows XP and Vista systems in use as servers are exempt from this standard. Also exempt are embedded computers and "special purpose" systems (defined as specialized scientific, medical, process control, and experimental systems), though NIST still recommends that the FDCC security configuration be considered "where feasible and appropriate".[5]
External links
• "U.S. Government Configuration Baseline solution"
[6]
. Microsoft Corporation. Retrieved 2010-06-19.
References
[1] "F D C C Additional NIST Frequently Asked Questions – How do I report compliance and deviations?" (http:/ / nvd.nist. gov/ fdcc/
fdcc_faqs_20080128.cfm#q13). National Vulnerability Database. National Institute of Standards and Technology. .
[2] Evans, Karen S. (2007-03-20) (DOC). Managing Security Risk By Using Common Security Configurations (http:// www.cio.gov/
documents/ Windows_Common_Security_Configurations. doc). . Retrieved 2009-03-02.
[3] "F D C C download page" (http:/ / nvd. nist. gov/ fdcc/download_fdcc.cfm). National Vulnerability Database. National Institute of
Standards and Technology. .
[4] "F D C C Additional NIST Frequently Asked Questions – Are there any conditions under which wireless is allowed?" (http:/ / nvd. nist. gov/
fdcc/fdcc_faqs_20080128.cfm#q10). National Vulnerability Database. National Institute of Standards and Technology. .
[5] "F D C C Additional NIST Frequently Asked Questions – Is FDCC applicable to special purpose (e.g., scientific, medical, process control,
and experimental systems) computers?" (http:// nvd. nist. gov/ fdcc/fdcc_faqs_20080128. cfm#q1). National Vulnerability Database.
National Institute of Standards and Technology. .
[6] http:/ / www. microsoft.com/ industry/ government/ solutions/ FDCC/ default.aspx
Federal Information Security Management Act of 2002
The Federal Information Security Management Act of 2002 ("FISMA", 44 U.S.C. § 3541[1], et seq.) is a United States federal law enacted in 2002 as Title III of the E-Government Act of 2002 (Pub.L. 107-347[2], 116 Stat. 2899). The act recognized the importance of information security to the economic and national security interests of the United States.[3] The act requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.[3]
FISMA has brought attention within the federal government to cybersecurity and explicitly emphasized a "risk-based policy for cost-effective security."[3] FISMA requires agency program officials, chief information officers, and inspectors general (IGs) to conduct annual reviews of the agency's information security program and report the results to the Office of Management and Budget (OMB). OMB uses this data to assist in its oversight responsibilities and to prepare an annual report to Congress on agency compliance with the act.[4] In FY 2008, federal agencies spent $6.2 billion securing the government's total information technology investment of approximately $68 billion, or about 9.2 percent of the total information technology portfolio.[5]
Purpose of the act
FISMA assigns specific responsibilities to federal agencies, the National Institute of Standards and Technology
(NIST) and the Office of Management and Budget (OMB) in order to strengthen information system security. In
particular, FISMA requires the head of each agency to implement policies and procedures to cost-effectively reduce
information technology security risks to an acceptable level.[4]
According to FISMA, the term information security means protecting information and information systems from
unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide integrity,
confidentiality and availability.
Implementation of FISMA
In accordance with FISMA, NIST is responsible for developing standards, guidelines, and associated methods and
techniques for providing adequate information security for all agency operations and assets, excluding national
security systems. NIST works closely with federal agencies to improve their understanding and implementation of
FISMA to protect their information and information systems and publishes standards and guidelines which provide
the foundation for strong information security programs at agencies. NIST performs its statutory responsibilities
through the Computer Security Division of the Information Technology Laboratory.[6] NIST develops standards, metrics, tests, and validation programs to promote, measure, and validate the security in information systems and services. NIST hosts the following:
• FISMA implementation project[7]
• Information Security Automation Program (ISAP)
• National Vulnerability Database (NVD) – the U.S. government content repository for ISAP and SCAP. NVD is the U.S. government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA)[8]
Compliance framework defined by FISMA and supporting standards
FISMA defines a framework for managing information security that must be followed for all information systems
used or operated by a U.S. federal government agency or by a contractor or other organization on behalf of a federal
agency. This framework is further defined by the standards and guidelines developed by NIST.[9]
Inventory of information systems
FISMA requires that agencies have in place an information systems inventory. According to FISMA, the head of each agency shall develop and maintain an inventory of major information systems (including major national security systems) operated by or under the control of such agency.[9] The identification of information systems in an inventory under this subsection shall include an identification of the interfaces between each such system and all other systems or networks, including those not operated by or under the control of the agency.[9] The first step is to determine what constitutes the "information system" in question. Computers do not map one-to-one to information systems; rather, an information system may be a collection of individual computers put to a common purpose and managed by the same system owner. NIST SP 800-18, Revision 1, Guide for Developing Security Plans for Federal Information Systems[10] provides guidance on determining system boundaries.
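To make the inventory requirement concrete, here is a minimal Python sketch of an inventory record that captures a system boundary and its interfaces; the field names and example system are invented for illustration, not drawn from FISMA or NIST SP 800-18:

    from dataclasses import dataclass, field

    @dataclass
    class SystemInventoryEntry:
        """One major information system in the agency inventory."""
        system_name: str
        system_owner: str
        components: list                                 # hosts grouped under one system boundary
        interfaces: list = field(default_factory=list)   # connections to other systems/networks

    # A single "information system" may span many computers under one owner.
    hr_system = SystemInventoryEntry(
        system_name="HR Benefits System",
        system_owner="Office of Human Resources",
        components=["hr-db-01", "hr-app-01", "hr-app-02"],
    )
    # FISMA also requires recording interfaces, including externally operated ones.
    hr_system.interfaces.append({"peer": "Payroll System", "operated_by_agency": False})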
Categorize information and information systems according to risk level
All information and information systems should be categorized based on the objectives of providing appropriate levels of information security according to a range of risk levels.[9] The first mandatory security standard required by the FISMA legislation, namely FIPS PUB 199 "Standards for Security Categorization of Federal Information and Information Systems",[11] provides the definitions of security categories. The guidelines are provided by NIST SP 800-60 "Guide for Mapping Types of Information and Information Systems to Security Categories."[12]
The overall FIPS PUB 199 system categorization is the "high water mark" for the impact rating of any of the criteria
for information types resident in a system. For example, if one information type in the system has a rating of "Low"
for "confidentiality," "integrity," and "availability," and another type has a rating of "Low" for "confidentiality" and
"availability" but a rating of "Moderate" for "integrity," then the entire system has a FIPS PUB 199 categorization of
"Moderate."
Security controls
Federal information systems must meet the minimum security requirements.[9] These requirements are defined in the second mandatory security standard required by the FISMA legislation, namely FIPS 200 "Minimum Security Requirements for Federal Information and Information Systems".[11]
Organizations must meet the minimum security
requirements by selecting the appropriate security controls and assurance requirements as described in NIST Special
Publication 800-53, "Recommended Security Controls for Federal Information Systems". The process of selecting
the appropriate security controls and assurance requirements for organizational information systems to achieve
adequate security is a multifaceted, risk-based activity involving management and operational personnel within the
organization. Agencies have flexibility in applying the baseline security controls in accordance with the tailoring
guidance provided in Special Publication 800-53. This allows agencies to adjust the security controls to more closely
fit their mission requirements and operational environments. The controls selected or planned must be documented
in the System Security Plan.
Risk assessment
The combination of FIPS 200 and NIST Special Publication 800-53 requires a foundational level of security for all
federal information and information systems. The agency's risk assessment validates the security control set and
determines if any additional controls are needed to protect agency operations (including mission, functions, image,
or reputation), agency assets, individuals, other organizations, or the Nation. The resulting set of security controls
establishes a level of “security due diligence” for the federal agency and its contractors.[13] A risk assessment starts
by identifying potential threats and vulnerabilities and mapping implemented controls to individual vulnerabilities.
One then determines risk by calculating, for each vulnerability, the likelihood that it could be exploited and the impact if it were, taking existing controls into account. The risk assessment culminates in showing the calculated risk for all vulnerabilities and describing whether each risk should be accepted or mitigated. If a risk is to be mitigated, one describes what additional security controls will be added to the system.
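One common way to operationalize this step is to score each vulnerability as likelihood times impact and compare the result to an acceptance threshold. The sketch below is illustrative only; the scales, threshold, and findings are invented rather than taken from FIPS 200 or NIST SP 800-53:

    # Illustrative 1-5 scales; agencies define their own scoring schemes.
    ACCEPTANCE_THRESHOLD = 6

    def assess(vulnerabilities):
        """Return (name, risk, decision) for each vulnerability."""
        results = []
        for name, (likelihood, impact) in vulnerabilities.items():
            risk = likelihood * impact
            decision = "accept" if risk <= ACCEPTANCE_THRESHOLD else "mitigate"
            results.append((name, risk, decision))
        return results

    # Hypothetical findings: (likelihood, impact) after existing controls
    findings = {
        "unpatched web server": (4, 4),
        "weak wireless passphrase": (2, 2),
    }
    for name, risk, decision in assess(findings):
        print(f"{name}: risk={risk} -> {decision}")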
NIST also initiated the Information Security Automation Program (ISAP) and Security Content Automation Protocol
(SCAP) that support and complement the approach for achieving consistent, cost-effective security control
assessments.
System security plan
Agencies should develop policy on the system security planning process.[9] NIST SP 800-18 introduces the concept of a System Security Plan.[10] System security plans are living documents that require periodic review, modification, and plans of action and milestones for implementing security controls. Procedures should be in place outlining who reviews the plans, keeps the plan current, and follows up on planned security controls.[10]
The system security plan is the major input to the security certification and accreditation process for the system. During the security certification and accreditation process, the system security plan is analyzed, updated, and accepted. The certification agent confirms that the security controls described in the system security plan are consistent with the FIPS 199 security category determined for the information system, and that the threat and vulnerability identification and initial risk determination are identified and documented in the system security plan, risk assessment, or equivalent document.[10]
Certification and accreditation
Once the system documentation and risk assessment have been completed, the system's controls must be reviewed and certified to be functioning appropriately. Based on the results of the review, the information system is accredited. The certification and accreditation process is defined in NIST SP 800-37 "Guide for the Security Certification and Accreditation of Federal Information Systems".[14] Security accreditation is the official management decision given by a senior agency official to authorize operation of an information system and to explicitly accept the risk to agency
operations, agency assets, or individuals based on the implementation of an agreed-upon set of security controls.
Required by OMB Circular A-130, Appendix III, security accreditation provides a form of quality control and
challenges managers and technical staffs at all levels to implement the most effective security controls possible in an
information system, given mission requirements, technical constraints, operational constraints, and cost/schedule
constraints. By accrediting an information system, an agency official accepts responsibility for the security of the
system and is fully accountable for any adverse impacts to the agency if a breach of security occurs. Thus,
responsibility and accountability are core principles that characterize security accreditation. It is essential that agency
officials have the most complete, accurate, and trustworthy information possible on the security status of their
information systems in order to make timely, credible, risk-based decisions on whether to authorize operation of
those systems.[14]
The information and supporting evidence needed for security accreditation is developed during a detailed security
review of an information system, typically referred to as security certification. Security certification is a
comprehensive assessment of the management, operational, and technical security controls in an information system,
made in support of security accreditation, to determine the extent to which the controls are implemented correctly,
operating as intended, and producing the desired outcome with respect to meeting the security requirements for the
system. The results of a security certification are used to reassess the risks and update the system security plan, thus
providing the factual basis for an authorizing official to render a security accreditation decision.[14]
Continuous monitoring
All accredited systems are required to monitor a selected set of security controls, and the system documentation must be updated to reflect changes and modifications to the system. Large changes to the security profile of the system should trigger an updated risk assessment, and controls that are significantly modified may need to be re-certified.
Continuous monitoring activities include configuration management and control of information system components,
security impact analyses of changes to the system, ongoing assessment of security controls, and status reporting. The
organization establishes the selection criteria and subsequently selects a subset of the security controls employed
within the information system for assessment. The organization also establishes the schedule for control monitoring
to ensure adequate coverage is achieved.
Critique
Security experts Bruce Brody, a former federal chief information security officer, and Alan Paller, director of research for the SANS Institute, have described FISMA as a well-intentioned but fundamentally flawed tool, arguing that the compliance and reporting methodology mandated by FISMA measures security planning rather than measuring information security.[15] Former federal chief technology officer Keith Rhodes said that FISMA can and has helped government system security, but that implementation is everything, and if security people view FISMA as just a checklist, nothing is going to get done.[16]
Status
As of June 2010, multiple bills in Congress are proposing changes to FISMA, including shifting focus from periodic
assessment to real-time assessment and increasing use of automation for reporting.[17]
References
[1] http://www.law.cornell.edu/uscode/44/3541.html
[2] http://www.gpo.gov/fdsys/pkg/PLAW-107publ347/content-detail.html
[3] NIST: FISMA Overview (http://csrc.nist.gov/groups/SMA/fisma/overview.html)
[4] FY 2005 Report to Congress on Implementation of The Federal Information Security Management Act of 2002
[5] FY 2008 Report to Congress on Implementation of The Federal Information Security Management Act of 2002
[6] NIST Computer Security Division 2008 report (http://csrc.nist.gov)
[7] FISMA implementation (http://csrc.nist.gov/groups/SMA/fisma/overview.html)
[8] National Vulnerability Database (http://nvd.nist.gov/)
[9] The 2002 Federal Information Security Management Act (FISMA)
[10] NIST SP 800-18, Revision 1, "Guide for Developing Security Plans for Federal Information Systems"
[11] Catalog of FIPS publications (http://csrc.nist.gov/publications/PubsFIPS.html)
[12] Catalog of NIST SP-800 publications (http://csrc.nist.gov/publications/PubsSPs.html)
[13] NIST SP 800-53A, "Guide for Assessing the Security Controls in Federal Information Systems"
[14] NIST SP 800-37, "Guide for the Security Certification and Accreditation of Federal Information Systems"
[15] Government Computer News, "FISMA's effectiveness questioned", 2007 (http://gcn.com/Articles/2007/03/18/FISMAs-effectiveness-questioned.aspx?Page=2)
[16] Government Computer News, "Effective IT security starts with risk analysis, former GAO CTO says" (http://gcn.com/Articles/2009/06/15/Interview-Keith-Rhodes-IT-security.aspx?sc_lang=en&Page=2)
[17] "Cybersecurity moving up on Congress' to-do list" (http://gcn.com/Articles/2010/06/03/Cybersecurity-congressional-priority.aspx?sc_lang=en)
External links
• NIST: FISMA Implementation Project (http://csrc.nist.gov/groups/SMA/fisma/index.html)
• FISMApedia project (http://www.fismapedia.org)
• FISMA Guidance (http://www.fismaresources.com)
• NIST SP 800 Series Special Publications Library (http://csrc.nist.gov/publications/nistpubs/index.html)
• NIST FISMA Implementation Project Home Page (http://csrc.nist.gov/sec-cert/)
• Full text of FISMA (http://csrc.nist.gov/drivers/documents/FISMA-final.pdf)
• Report on 2004 FISMA scores (http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1059656,00.html)
• FISMA Resources (http://www.fismacenter.com/default.asp?lnc=resources)
• Rsam: Automated Platform for FISMA Compliance and Continuous Monitoring (http://www.rsam.com/rsam_fisma.htm)
Flaw hypothesis methodology
Flaw hypothesis methodology is a systems analysis and penetration prediction technique in which a list of hypothesized flaws in a system is compiled through analysis of the system's specifications and documentation. The list of hypothesized flaws is then prioritized on the basis of the estimated probability that a flaw actually exists and on the ease of exploiting it to the extent of control or compromise. The prioritized list is used to direct the actual testing of the system.
Footprinting
Footprinting is the technique of gathering information about computer systems and the entities they belong to. This is done by employing various computer security techniques, such as:
• DNS queries
• Network enumeration
• Network queries
• Operating system identification
• Organizational queries
• Ping sweeps
• Point of contact queries
• Port Scanning
• Registrar queries (WHOIS queries)
When used in the computer security lexicon, "footprinting" generally refers to one of the pre-attack phases; tasks
performed prior to doing the actual attack. Some of the tools used for footprinting are Sam Spade, nslookup,
traceroute, Nmap and neotrace.
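As a minimal illustration of two of these techniques (a DNS query and a rudimentary port scan), the Python sketch below resolves a hostname and probes a few well-known ports. The target is a placeholder; real footprinting would rely on dedicated tools such as Nmap, and should only be performed against systems one is authorized to test:

    import socket

    target = "example.org"  # placeholder target; footprint only systems you are authorized to test

    # DNS query: map the hostname to an IP address
    ip = socket.gethostbyname(target)
    print(f"{target} resolves to {ip}")

    # Rudimentary port scan over a few well-known ports
    for port in (22, 25, 80, 443):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            status = "open" if s.connect_ex((ip, port)) == 0 else "closed/filtered"
            print(f"port {port}: {status}")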
Forward anonymity
Forward anonymity is analogous to forward secrecy.
When speaking of forward secrecy, system designers attempt to prevent an attacker who has recorded past
communications from discovering the contents of said communications later on. One example of a system which
satisfies the perfect forward secrecy property is one in which a compromise of one key by an attacker (and
consequent decryption of messages encrypted with that key) does not undermine the security of previously used
keys.
When speaking of forward anonymity, system designers attempt to prevent an attacker who has recorded past
communications from discovering the identities of the participants, even after the fact. This property is not to be
confused with sender (receiver) anonymity, in which the identity of the sender (receiver) remains unknown to all
entities in the system.
Four Horsemen of the Infocalypse
The Four Horsemen of the Infocalypse is a term for internet criminals, or the imagery of internet criminals.
A play on Four Horsemen of the Apocalypse, it refers to types of criminals who use the internet to facilitate crime
and consequently jeopardize the rights of honest internet users. There does not appear to be an exact definition for
who the Horsemen are, but they are usually described as terrorists, drug dealers, pedophiles, and organized crime.
Other sources use slightly different descriptions but generally refer to the same types of criminals. The term was
coined by Timothy C. May in 1988, who referred to them as "child pornographers, terrorists, abortionists, abortion protestors, etc."[1] when discussing the reasons for limited civilian use of cryptography tools. Among the most famous uses of the term is in the Cypherpunk FAQ,[2] which states:
8.3.4. "How will privacy and anonymity be attacked?"
[...]
- like so many other "computer hacker" items, as a tool for
the "Four Horsemen": drug-dealers, money-launderers,
terrorists, and pedophiles.
17.5.7. "What limits on the Net are being proposed?"
[...]
+ Newspapers are complaining about the Four Horsemen of the
Infocalypse:
- terrorists, pedophiles, drug dealers, and money
launderers
The term seems to be used less often in discussions about online criminal activity, and more often in discussions about the negative, or chilling, effects such activity has had on regular users' daily experiences online. It is also frequently used to describe the political tactic "think of the children". A message from the same mailing list states:[3]
How to get what you want in 4 easy stages:
1. Have a target "thing" you wish to stop, yet lack any moral, or
practical reasons for doing so?
2. Pick a fear common to lots of people, something that will evoke a
gut reaction: terrorists, pedophiles, serial killers.
3. Scream loudly to the media that "thing" is being used by
perpetrators. (Don't worry if this is true, or common to all other
things, or less common with "thing" than with other long established
systems - payphones, paper mail, private hotel rooms, lack of bugs in
all houses etc)
4. Say that the only way to stop perpetrators is to close down
"thing", or to regulate it to death, or to have laws forcing en-mass
tapability of all private communications on "thing". Don't worry if
communicating on "thing" is a constitutionally protected right, if you
have done a good job in choosing and publicising the horsemen in 2, no
one will notice, they will be too busy clamouring for you to save them
from the supposed evils.
The four supposed threats may be used all at once or individually, depending on the circumstances:[4]
Pedophiles fill in the gaps when the terrorists aren't doing anything. I mean, how many more buildings have
fallen here in the U.S. since 9/11? Not many. So, given the absence of an active external threat, an internal one
must be manufactured.
References
[1] Carey, Robert; Jacquelyn Burkell (August 2007). "Revisiting the Four Horsemen of the Infopocalypse: Representations of anonymity and the Internet in Canadian newspapers" (http://www.firstmonday.org/issues/issue12_8/carey/index.html). First Monday 12 (8).
[2] May, Timothy C. (1994-09-10). "§8.3.4. How will privacy and anonymity be attacked?" (http://www.swiss.ai.mit.edu/6805/articles/crypto/cypherpunks/cyphernomicon/CP-FAQ). Cypherpunk FAQ.
[3] aba@dcs.exeter.ac.uk (1995-10-16). "The Four Horsemen" (http://web.archive.org/web/20061029141026/http://www.shipwright.com/horsemen.html). Archived from the original (http://www.shipwright.com/horsemen.html) on 2006-10-29.
[4] "ScrewMaster" (2008-08-19). "Re:The devil is in the details" (http://news.slashdot.org/comments.pl?sid=650749&cid=24666825). Judge Rules Man Cannot Be Forced To Decrypt HD. Slashdot.
External links
• Bernal, Javier (1999-08-06). "Big Brother is On-line: Public and Private Security in the Internet" (http://www.cybersociology.com/files/6_publicandprivatesecurity.html). Cybersociology Magazine.
• McCullagh, Declan (2007-10-11). "McCullagh's Law: When politicians invoke the do-this-or-Americans-will-die argument" (http://www.news.com/8301-13578_3-9795316-38.html). News.com (CNET Networks, Inc.).
• Grossman, Wendy M. (2004-02-27). "eCrimes of the century" (http://www.theinquirer.net/en/inquirer/news/2004/02/27/ecrimes-of-the-century). The Inquirer.
Fragmented distribution attack
A fragmented distribution attack is a malware distribution technique in computer security that aims to bypass protection systems by sending fragments of code over the network.
The technique was first described in a paper presented at the Virus Bulletin 2009 annual conference by Anoirel Issa, a malware analyst for Symantec Hosted Services (formerly MessageLabs).
Method of attack
The malware is split into several fragments that are embedded in innocent-looking files, and these fragments are sent over a protected network. The fragmented malware bypasses firewalls, intrusion detection systems, and anti-virus software undetected, and is then re-assembled on the victim's system. The re-assembler is a separate program which is not necessarily malware itself and can therefore evade security measures; it locates the malware fragment carriers and assembles the malware in memory. The re-assembler may write the code to disk, and then executes the re-assembled code either in memory or on disk.
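To make the detection problem concrete, the sketch below shows why a scanner that matches signatures against individual messages can miss a payload split into fragments. The "signature" and payload are harmless stand-ins invented for illustration:

    SIGNATURE = b"EVIL_PATTERN"                 # stand-in for an anti-virus byte signature
    payload = b"prefix EVIL_PATTERN suffix"     # stand-in for a malicious payload

    def scanner_flags(message: bytes) -> bool:
        """Naive per-message signature scanner."""
        return SIGNATURE in message

    # Split the payload into small fragments, as in a fragmented distribution attack
    fragments = [payload[i:i + 5] for i in range(0, len(payload), 5)]

    print(any(scanner_flags(f) for f in fragments))   # False: no single fragment matches
    print(scanner_flags(b"".join(fragments)))         # True: only the re-assembled stream matches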
Consequences
If successfully carried out, a fragmented distribution attack can have serious consequences, depending on the victim's level of protection. The consequences are not easily predictable but can include:
• Data and intellectual property leakage
• Government, military, or industrial espionage
• Irreversible financial losses
External links
• Virus Bulletin conference white paper[1]
References
[1] http://www.virusbtn.com/pdf/conference_slides/2009/Issa-VB2009.pdf
Higgins project
Higgins is an open source project dedicated to giving individuals more control over their personal identity, profile
and social network data.
The project is organized into three main areas:
1. Active Clients - An active client integrates with a browser and runs on a computer or mobile device.
• Higgins 1.X: the active client supports the OASIS IMI protocol and performs the functions of an Information
Card selector.
• Higgins 2.0: the plan is to move beyond selector functionality to add support for managing passwords and Higgins relationship cards, as well as other protocols such as OpenID. It also becomes a client for the Personal Data Store (see below) and thereby provides a kind of dashboard for personal information and a place to manage "permissioning" – deciding who gets access to what slice of the user's data.
2. Personal Data Store (PDS) is a new work area under development for Higgins 2.0. A PDS stores local personal data, controls access to remotely hosted personal data, and synchronizes personal data to other devices and computers. Accessed directly or via a PDS client, it allows the user to share selected aspects of their information with people and organizations that they trust.
3. Identity Services - Code for (i) an IMI and SAML compatible Identity Provider and (ii) enabling websites to be
IMI and OpenID compatible.
History
The initial code for the Higgins Project[1] was written by Paul Trevithick in the summer of 2003. In 2004 the effort became part of SocialPhysics.org[2], a collaboration between Paul and Mary Ruddy of Azigo[3] (formerly Parity Communications, Inc.) and Meristic[4], and John Clippinger at the Berkman Center for Internet and Society at Harvard University[5]. Higgins, under its original name Eclipse Trust Framework, was accepted into the Eclipse Foundation in early 2005. Mary and Paul are the project co-leads. IBM and Novell's participation in the project was announced in early 2006.[6][7] Higgins has received technology contributions from IBM, Novell, Oracle, CA, Serena, Google, and Corisecio,[8] as well as from several other firms and individuals. Version 1.0 was released in February 2008.[9]
References
[1] Eclipse Higgins Project (http://www.eclipse.org/higgins/) - home page
[2] http://socialphysics.org/
[3] http://azigo.com
[4] http://meristic.com
[5] http://cyber.law.harvard.edu/
[6] Open Source Initiative to Give People More Control Over Their Personal Online Information (http://www-03.ibm.com/press/us/en/pressrelease/19280.wss) - IBM press release
[7] IBM/Novell unveil rival to Microsoft Infocard (http://www.vnunet.com/vnunet/news/2151060/ibm-backs-open-source)
[8] http://www.corisecio.com/
[9] Eclipse press release (http://www.eclipse.org/org/press-release/20080221_higgins.php)
High Assurance Guard
A High Assurance Guard (HAG) is a multilevel security computer device used to communicate between different security domains, such as NIPRNet to SIPRNet. A HAG is one example of a controlled interface between security levels. HAGs are approved through the Common Criteria process.
Operation
A HAG runs multiple virtual machines: one subsystem for the lower classification and one subsystem for the higher classification. The hardware runs a type of knowledge management software that examines data coming out of the higher-classification subsystem and rejects any data that is classified higher than the lower classification. In general, a HAG allows lower-classified data that resides on a higher-classified system to be moved to another lower-classified system. For example, in the US, it would allow unclassified information residing on a classified Secret system to be moved to another unclassified system. Through various rules and filters, the HAG ensures that data is of the lower classification and then allows the transfer.
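A drastically simplified sketch of the guard's release rule: data leaving the high side is transferred only if its label does not exceed the destination domain's level. The label ordering and function are illustrative; real guards apply many content filters beyond a label comparison:

    # Illustrative classification lattice, lowest to highest
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def release(message_label: str, destination_level: str) -> bool:
        """Allow transfer only if the message's label is at or below
        the destination domain's classification level."""
        return LEVELS[message_label] <= LEVELS[destination_level]

    print(release("UNCLASSIFIED", "UNCLASSIFIED"))  # True: may leave the Secret enclave
    print(release("SECRET", "UNCLASSIFIED"))        # False: rejected by the guard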
Importance, risks
The HAG is mostly used in email and DMS environments, as certain organizations may have only unclassified network access yet need to send a message to an organization that has only Secret network access. The HAG provides this capability.
External links
• http://www.deep-secure.com/
Host Based Security System
The Host Based Security System (HBSS) is the official name given to the Department of Defense (DOD)
commercial-off-the-shelf (COTS) suite of software applications used within the DOD to monitor, detect and counter
attacks against the DOD computer networks and systems. The Enterprise-wide Information Assurance and computer
Network Defense Solutions Steering Group (ESSG) sponsored the acquisition of the HBSS System for use within
the DOD Enterprise Network. HBSS is deployed on both the Non-Classified Internet Protocol Routed Network
(NIPRNet) and Secret Internet Protocol Routed Network (SIPRNet) networks, with priority given to installing it on
the NIPRNet. HBSS is based on McAfee, Inc.'s ePolicy Orchestrator (ePO) and other McAfee point product security applications such as Host Intrusion Prevention System (HIPS).
History
Seeing the need to supply a comprehensive, department-wide security suite of tools for DOD System Administrators,
the ESSG started to gather requirements for the formation of a host-based security system in the summer of 2005. In
March 2006, BAE Systems and McAfee were awarded a contract to supply an automated host-based security system
to the department. After the award, 22 pilot sites were identified to receive the first deployments of HBSS.[1] During
the pilot roll out, DOD System Administrators around the world were identified and trained on using the HBSS
software in preparation for software deployment across DOD.
On October 9, 2007, the Joint Task Force for Global Network Operations (JTF-GNO) released Communications
Tasking Order (CTO) 07-12 (Deployment of Host Based Security System (HBSS)) mandating the deployment of
HBSS on all Component Command, Service and Agency (CC/S/A) networks within DOD with the completion date
by the 3rd quarter of 2008.[2] The release of this CTO brought HBSS to the attention of all major department heads
and CC/S/A's, providing the ESSG with the necessary authority to enforce its deployment. Agencies not willing to
comply with the CTO now risked being disconnected from the DOD Global Information Grid (GIG) for any lack of
compliance.
Lessons learned from the pilot deployments provided valuable insight to the HBSS program, eventually leading to
the Defense Information Systems Agency (DISA) supplying both pre-loaded HBSS hardware as well as providing an
HBSS software image that could be loaded on compliant hardware platforms. This proved to be invaluable to easing
the deployment task on the newly trained HBSS System Administrators and provided a consistent department-wide
software baseline. The DISA further provided step-by-step documentation for completing an HBSS baseline creation
from a freshly installed operating system. The lessons learned from the NIPRNet deployments simplified the process
of deploying HBSS on the SIPRNet.
Significant HBSS Dates
• Summer 2005: ESSG gathered information on establishing an HBSS automated system
• March 2006: BAE Systems and McAfee awarded contract for HBSS establishment and deployment
• March 27, 2007: The ESSG approved the HBSS for full-scale deployment throughout the DoD enterprise
• October 9, 2007: The JTF-GNO releases CTO 07-12
• November 2009: The Air Force awarded Northrop Grumman, Inc. the contract for deployment of HBSS on the SIPRNet[3]
HBSS Components
Throughout its lifetime, HBSS has undergone several major baseline updates as well as minor maintenance releases.
The first major release of HBSS was known as Baseline 1.0 and contained the McAfee ePolicy Orchestrator engine,
HIPS, Software Compliance Profiler (SCP), Rogue System Detection (RSD), Asset Baseline Manager (ABM), and
Assets software. As new releases were introduced, these software products have evolved, had new products added,
and in some cases, been completely replaced for different products.
As of January, 2011, HBSS is currently at Baseline 4.5, Maintenance Release 2.0 (MR2). MR2 contains the
following software:
HBSS Baseline 4.5 MR2 Components
As of January, 2011, HBSS is currently at Baseline 4.5, Maintenance Release 2.0 (MR2). MR2 contains the following software:
Microsoft Products
Software Application Version
Microsoft Windows 2003 SP2 (5.2.3790)
Microsoft .NET Framework 1.1.4322.2433
Microsoft .NET Framework 2.2.30729
Microsoft .NET Framework 3.2.30729
Microsoft .NET Framework 3.5.30729.1
Microsoft Internet Explorer 7.0.5720.13
Microsoft SQL Management Studio SQL2005 SP3 - 9.00.4035.00
HBSS Products/Components
Software Application Version
McAfee ePolicy Orchestrator 4.5.3.937 (4.5 Patch 3)
Asset Baseline Monitor 3.5.0.190
Asset Baseline Monitor AIX, HP-UX, Linux, MAC, and Solaris 3.5.0.190
Asset Publishing Service 1.5.2
Data Loss Prevention 9.1.2.1 (Evaluation)
Data Loss Prevention Extension 9.1.2.4
Enhanced Reporting Extension 4.5.11
Host Intrusion Prevention Server 7.0.5.106
Host Intrusion Windows Hot Fix 7.0.0 Win-8 (Current)
Host Intrusion Windows Slipstream 7.0.0.1159
Host Intrusion Client (Linux) 7.1.0.227.1 (Current)
McAfee Agent Server 4.5.0.171
McAfee Agent Windows Client 4.5.0.1499 (Current)
McAfee Agent AIX, HP-UX, MAC, and Solaris Client 4.5.0.1453 (Current)
McAfee Agent Linux Client 4.5.0.1470 (Current)
Policy Auditor Server 5.2.0.166
Policy Auditor Windows Client 5.2.0.152
Policy Auditor AIX, HP-UX, Linux, MAC, and Solaris Clients 5.2.0.210
Rogue Systems Detection Server 2.0.2.105
Rogue Systems Detection Client (Windows) 2.0.0.405
Operational Attribute Module (Required for APS) 2.0.1
CertAuth Module 4.5.3.6
DoD Logon Banner 1.2
Optional Products/Components
Software Application Version
Symantec SEP/SAV Integration Extension 1.3, plugin 1.2
VirusScan Enterprise 8.7.0.570 (Evaluation)
VirusScan Enterprise 8.7 Extension 8.7.0.195
VirusScan Report Extension 1.1.0.154
SIPRNet Only Products/Components
Software Application Version
ArcSight Connector 5.0.4.5717
Rollup Extender 1.2.8
How HBSS Works
The heart of the HBSS is the McAfee ePolicy Orchestrator (ePO) management engine. The engine is responsible for:
• Providing a consistent front-end to the point products
• Consolidating point product data for analysis
• Presenting point product reports
• Managing the point product updates and communications
• Ensuring application patch compliance
McAfee Point Products
McAfee considers a point product to be an individual software application controlled by the ePO server. The HBSS point products consist of the following:
• Host Intrusion Prevention System (HIPS)
• Policy Auditor (PA)
• Assets Baseline Module (ABM)
• Rogue System Detection (RSD)
• Device Control Module (DCM)
• Asset Publishing Service (APS)
Host Intrusion Prevention System
The Host Intrusion Prevention System (HIPS) consists of a host-based firewall, HIPS, and application-level blocking
consolidated in a single product. The HIPS component is one of the most significant components of the HBSS, as it
provides for the capability to block known intrusion signatures and restrict unauthorized services and applications
running on the host machines.
Policy Auditor
Policy Auditor (PA) was introduced in HBSS Baseline 2.0. PA is responsible for ensuring compliance with mandates
such as PCI DSS, SOX, GLBA, HIPAA, and FISMA, as well as the best-practice frameworks ISO 27001 and COBIT.[4]
Assets Baseline Module
The Assets Baseline Module, released in Baseline 1.0 as a Government off-the-shelf (GOTS) product, is used to address system baseline configurations and changes in order to respond to Information Operations Condition (INFOCON) changes necessary during times of heightened security threats to the system. During the initial deployment stages of HBSS, the Assets Module was immature and lacked many of the product's intended capabilities. However, the application has since evolved into a robust and feature-packed version capable of meeting the original software's design goals. ABM was originally known as Assets 1.0. It was upgraded to Assets 2.0 in HBSS Baseline 2.0, and later to Assets 3.0 in HBSS Baseline 3.0.
Rogue System Detection
The Rogue System Detector (RSD) component of HBSS is used to provide real-time detection of new hosts
attaching to the network. RSD monitors network segments and reports all hosts seen on the network to the ePO
Server. The ePO Server then determines whether the system is connected to the ePO Server, has a McAfee Agent
installed, has been identified as an exception, or is considered rogue. The ePO Server can then take the appropriate
action(s) concerning the rogue host, as specified in the RSD policy. HBSS Baseline 1.0 introduced RSD 1.0. RSD
was updated to 2.0 in HBSS Baseline 2.0.
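The ePO server's decision about a newly seen host amounts to a small classification cascade. The sketch below is a hypothetical rendering of that logic; the attribute names and ordering of checks are invented, not McAfee's actual implementation:

    def classify_host(host: dict) -> str:
        """Classify a host reported by an RSD sensor, mirroring the
        managed / exception / rogue decision described above."""
        if host.get("has_mcafee_agent") and host.get("connected_to_epo"):
            return "managed"
        if host.get("is_exception"):       # e.g., printers, network gear
            return "exception"
        return "rogue"                     # triggers the action set in the RSD policy

    print(classify_host({"has_mcafee_agent": True, "connected_to_epo": True}))  # managed
    print(classify_host({"is_exception": True}))                               # exception
    print(classify_host({}))                                                   # rogue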
Device Control Module/Data Loss Prevention
The DCM component of HBSS was introduced in HBSS Baseline 2.0 specifically to address the use of USB devices
on DOD Networks. JTF-GNO CTO 09-xxx, Removable Flash Media Device Implementation Within and Between
Department of Defense (DOD) Networks, was released in March 2009 and allowed the use of USB removable media, provided it meets all of the conditions stated within the CTO. One of these conditions requires the use of HBSS with the DCM module installed and configured to manage the USB devices attached to the system.[5] The DCM was renamed Data Loss Prevention (DLP) in HBSS Baseline 3.0 MR3.
Assets Publishing Service
The Assets Publishing Service (APS) of HBSS was introduced in HBSS Baseline 4.0 to allow for enclaves to report
on asset information to a third-party DoD entity in a standards-compliant format. It adds contextual information to
HBSS assets and allows for improved reporting features on systems relying on HBSS data.
Obtaining HBSS
According to JTF-GNO CTO 07-12, all DOD agencies are required to deploy HBSS to their networks. The DISA
has made HBSS software available for download on their PKI-protected server.[6] Users attempting to download the
software are required to have a Common Access Card (CAC) and be on a .mil network. The DISA provides software
and updates free of charge to DOD entities.
Additionally, HBSS administrators must satisfactorily complete HBSS training and are commonly appointed in writing by the unit or section commander.
Learning HBSS
In order to receive and administer an HBSS system, system administrators must satisfactorily complete online or in-class HBSS training and be identified as HBSS administrators. Online training takes 30 hours to complete, while in-class training requires four days, excluding travel. An advanced HBSS class is also available to HBSS administrators wishing to acquire a more in-depth knowledge of the system. HBSS online and in-class training is managed by the DISA, and information pertaining to these training classes can be obtained at the DISA Information Assurance Support Environment (IASE) website.[7]
HBSS Support
The DISA Field Security Office (FSO) provides free technical support for all HBSS Administrators through their
help desk. DISA has three tiers of support, from Tier I to Tier III. Tier I and Tier II support is provided by DISA
FSO, while Tier III support is provided by McAfee. DISA FSO Support is available using one of the following
methods:[8]
Email: disa-esmost [at] csd.disa.mil
Commercial (405) 739-5600
DSN: 339-5600
Toll Free: 800-490-1643
The Future of HBSS
HBSS has been updated several times, from the original Baseline 1.0 through Baseline 3.0 MR3 to the current Baseline 4.5 MR2. Maintenance releases have been introduced every two to four months, bringing better stability and security with each release. HBSS follows McAfee ePO version updates closely and is expected to continue this trend as ePO is continuously developed.
Conclusion
HBSS has been one of the most significant and effective security improvements made to DOD networks, helping to reduce or eliminate security intrusions across the DOD.
References
[1] Host Based Security System (http://www.disa.mil/hbss/index.html), retrieved 3/13/2010
[2] Host Based Security System (HBSS) (http://www.afcea.org/events/landwarnet/08/infoexchange.asp), retrieved 3/13/2010
[3] Henry Kenyon, "Northrop Grumman Wins Air Force SIPRNET Contract" (http://www.afcea.org/signal/signalscape/index.php/2009/11/northrop-grumman-wins-air-force-siprnet-contract/), retrieved 3/13/2010
[4] McAfee Policy Auditor (http://www.mcafee.com/us/enterprise/products/system_security/clients/policy_auditor.html), retrieved 3/14/2010
[5] Tom Conway, "DOD Can Safely Use USB" (http://blogs.mcafee.com/enterprise/public-sector/dod-can-use-usb-securely), Security Insights Blog, retrieved 3/9/2010
[6] https://patches.csd.disa.mil
[7] http://iase.disa.mil
[8] IA Tools (http://iase.disa.mil/tools/index.html), retrieved 3/14/2010
External links
• End-Point Security Spreads Throughout Military (http://www.afcea.org/signal/articles/templates/200904SIGNALConnections.asp?articleid=1909&zoneid=258)
• Northrop Grumman Wins Air Force SIPRNET Contract (http://www.afcea.org/signal/signalscape/index.php/2009/11/northrop-grumman-wins-air-force-siprnet-contract/)
• Host Based Security System (HBSS) (http://www.afcea.org/events/landwarnet/08/infoexchange.asp)
• Information Assurance Support Environment (http://iase.disa.mil)
• McAfee, Inc. (http://www.mcafee.com)
• BAE Systems (http://www.baesystems.com)
Human–computer interaction (security)
HCISec is the study of interaction between humans and computers, or HCI, specifically as it pertains to information
security. Its aim, in plain terms, is to improve the usability of security features in end user applications.
Unlike HCI, which has roots in the early days of Xerox PARC during the 1970s, HCISec is a nascent field of study
by comparison. Not surprisingly, interest in this topic tracks with that of Internet security, which has become an area
of broad public concern only in very recent years.
Historically, security features have exhibited poor usability for reasons that include:
• they were added as a casual afterthought
• they were hastily patched in to address newly discovered security bugs
• they address very complex use cases without the benefit of a software wizard
• their interface designers lacked understanding of related security concepts
• their interface designers were not usability experts (often meaning they were the application developers
themselves)
Further reading
• "Design Principles and Patterns for Computer Systems That Are Simultaneously Secure and Usable"[1], by Simson Garfinkel
External links
• HCISec Bibliography[2]
• HCISec[3] Yahoo! Group
• Usable Security Blog[4]
References
[1] http://www.simson.net/thesis/
[2] http://gaudior.net/alma/biblio.html
[3] http://tech.groups.yahoo.com/group/hcisec/
[4] http://www.usablesecurity.com
Inference attack
An inference attack is a data mining technique performed by analyzing data in order to illegitimately gain knowledge about a subject or database.[1] A subject's sensitive information can be considered leaked if an adversary can infer its real value with high confidence.[2] This is an example of breached information security. An inference attack occurs when a user is able to infer from trivial information more robust information about a database without directly accessing it.[3] The object of inference attacks is to piece together information at one security level to determine a fact that should be protected at a higher security level.[4]
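A classic illustration is inferring an individual's value from two legitimate aggregate queries. In the invented example below, neither query reveals a single salary on its own, yet their difference leaks one employee's salary exactly:

    # Toy database: employee -> salary (sensitive at the individual level)
    salaries = {"alice": 81000, "bob": 74000, "carol": 98000}

    def total_salary(names):
        """A seemingly harmless aggregate query."""
        return sum(salaries[n] for n in names)

    # Two permitted aggregate queries...
    q1 = total_salary(["alice", "bob", "carol"])
    q2 = total_salary(["alice", "bob"])

    # ...whose difference discloses Carol's individual salary
    print(q1 - q2)  # 98000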
Countermeasures
Computer security inference control is the attempt to prevent users from inferring classified information from rightfully accessible chunks of information with lower classification. Computer security professionals install protocols into databases to prevent inference attacks by software, but to date there is no software or hardware, such as an anti-inference engine, that delivers this countermeasure against a human inference engine.[5]
References
[1] "Inference Attacks on Location Tracks" by John Krumm (http:// research.microsoft.com/ ~jckrumm/Publications 2007/ inference attack
refined02 distribute. pdf)
[2] http:// www. ics. uci. edu/ ~chenli/ pub/ 2007-dasfaa.pdf "Protecting Individual Information Against Inference Attacks in Data Publishing"
by Chen Li, Houtan Shirani-Mehr, and Xiaochun Yang
[3] "Detecting Inference Attacks Using Association Rules" by Sangeetha Raman, 2001 (http:// andromeda.rutgers. edu/ ~gshafer/raman.pdf)
[4] "Database Security Issues: Inference" by Mike Chapple (http:/ / databases. about.com/ od/ security/ l/ aainference.htm)
[5] "Computer Security Inference Control" by Halim. M. Khelalfa (1997) (http:// www.unesco.org/ webworld/public_domain/ tunis97/
com_54/ com_54.html)
Information assurance
[Image: U.S. Department of Defense Information Assurance emblem]
Information assurance (IA) is the practice of managing risks related
to the use, processing, storage, and transmission of information or data
and the systems and processes used for those purposes. While focused predominantly on information in digital form, the full range of IA encompasses not only digital but also analog or physical form.
Information assurance as a field has grown from the practice of
information security which in turn grew out of practices and
procedures of computer security.
There are three models used in the practice of IA to define assurance
requirements and assist in covering all necessary aspects or attributes.
The first is the classic information security model, also called the CIA
Triad, which addresses three attributes of information and information
systems, confidentiality, integrity, and availability. This C-I-A model
is extremely useful for teaching introductory and basic concepts of
information security and assurance; the initials are an easy mnemonic to remember, and when properly understood,
can prompt systems designers and users to address the most pressing aspects of assurance.
The next most widely known model is the Five Pillars of IA model, promulgated by the U.S. Department of Defense
(DoD) in a variety of publications, beginning with the National Information Assurance Glossary, Committee on
National Security Systems Instruction CNSSI-4009.[1] Here is the definition from that publication: "Measures
that protect and defend information and information systems by ensuring their availability, integrity, authentication,
confidentiality, and non-repudiation. These measures include providing for restoration of information systems by
incorporating protection, detection, and reaction capabilities." The Five Pillars model is sometimes criticized because
authentication and non-repudiation are not attributes of information or systems; rather, they are procedures or
methods useful to assure the integrity and authenticity of information, and to protect the confidentiality of the same.
A third, less widely known IA model is the Parkerian Hexad, first introduced by Donn B. Parker in 1998. Like the
Five Pillars, Parker's hexad begins with the C-I-A model but builds it out by adding authenticity, utility, and
possession (or control). It is significant to point out that the concept or attribute of authenticity, as described by
Parker, is not identical to the pillar of authentication as described by the U.S. DoD.
Overview
Information assurance is closely related to information security and the terms are sometimes used interchangeably.
However, IA’s broader connotation also includes reliability and emphasizes strategic risk management over tools and
tactics. In addition to defending against malicious hackers and code (e.g., viruses), IA includes other corporate
governance issues such as privacy, compliance, audits, business continuity, and disaster recovery. Further, while
information security draws primarily from computer science, IA is interdisciplinary and draws from multiple fields,
including accounting, fraud examination, forensic science, management science, systems engineering, security
engineering, and criminology, in addition to computer science. Therefore, IA is best thought of as a superset of information security (i.e., an umbrella term).
History
In the 1960s, IA was not as complex as it is today. IA was as simple as controlling access to the computer room by
locking the door and placing guards to protect it.
IA Concepts
[Image: Model of integrated CIA triad and Defense-in-Depth strategies]
Since the 1970s, information security has held confidentiality, integrity and availability (known as the CIA triad) as the core principles. One newer model of information assurance adds authentication and non-repudiation to create the Five Pillars of IA. In contrast, Donn B. Parker developed a model that added the three attributes of authenticity, utility, and possession to the core C-I-A. Parker introduced this model in his book Fighting Computer Crime;[2] see also [3].
Confidentiality
CNSSI-4009:[1] "Assurance that information is not disclosed to unauthorized individuals, processes, or devices."
Confidential information must only be accessed, used, copied, or disclosed by users who have been authorized, and only when there is a genuine need. A confidentiality breach occurs when information or information systems have been, or may have been, accessed, used, copied, or disclosed by someone who was not authorized to have access to the information.
For example: permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it would be a breach of confidentiality if they were not authorized to have the information. The theft of a laptop computer containing employment and benefit information about 100,000 employees from a car (or its sale on eBay) could result in a breach of confidentiality, because the information is now in the hands of someone who is not authorized to have it. Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information.
Integrity
CNSSI-4009:[1] "Quality of an IS reflecting the logical correctness and reliability of the operating system; the
logical completeness of the hardware and software implementing the protection mechanisms; and the consistency of
the data structures and occurrence of the stored data. Note that, in a formal security mode, integrity is interpreted
more narrowly to mean protection against unauthorized modification or destruction of information."
Some practitioners make the mistake of thinking of the integrity attribute as being only data integrity. While data
integrity is a major part of this attribute, it is not everything. This attribute also addresses whether the physical and
electronic systems have been maintained without breach or unauthorized change. It even refers to the people involved in handling the information: are they acting with proper motivation and integrity?
Integrity means data cannot be created, changed, or deleted without proper authorization. It also means that data
stored in one part of a database system is in agreement with other related data stored in another part of the database
system (or another system).
For example: A loss of integrity occurs when an employee accidentally, or with malicious intent, deletes important
data files. A loss of integrity can occur if a computer virus is released onto the computer. A loss of integrity can
occur when an on-line shopper is able to change the price of the product they are purchasing.
Availability
CNSSI-4009:[1] "Timely, reliable access to data and information services for authorized users."
Availability means that the information, the computing systems used to process the information, and the security
controls used to protect the information are all available and functioning correctly when the information is needed.
The opposite of availability is the lack thereof; one example is the common attack known as a denial-of-service (DoS) attack.
For example: in 2000, Amazon, CNN, eBay, and Yahoo! were victims of a DoS attack.[4]

"Yahoo Attacked. No one knows what happened except that it was inaccessible for more than 3 hours. It was also known that the attack was co-ordinated and hence the standard firewall algorithms failed to figure out what was happening."
— Techhawking[4]
Authentication
CNSSI-4009:[1] "Security measure designed to establish the validity of a transmission, message, or originator, or a
means of verifying an individual's authorization to receive specific categories of information."
An authentication breach can occur when a user's login ID and password are used by unauthorized users to send unauthorized information.
Authenticity
Authenticity is necessary to ensure that the users or objects (like documents) are genuine (they have not been forged
or fabricated).
As files are shared across multiple organizations, there can be circumstances when duplicate copies of that file may
exist. In such cases it's important to establish not only which is the master copy, but also to establish a way for those
who use the data to know where the file, and all of the tagged data sets in the file, came from. A Tagged Data Authority Engine is one way to do this.[5]
Non-repudiation
CNSSI-4009:[1] "Assurance the sender of data is provided with proof of delivery and the recipient is provided with
proof of the sender's identity, so neither can later deny having processed the data."
Non-repudiation implies that one party to a transaction cannot deny having received it, nor can the other party deny having sent it.
For example: Electronic commerce uses technology such as digital signatures to establish authenticity and
non-repudiation.
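As a sketch of how digital signatures support authenticity and non-repudiation, the example below uses the Ed25519 primitives from the third-party Python cryptography package; it is a minimal illustration, not a complete e-commerce protocol:

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # The sender holds the private key; only they can produce valid signatures.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    order = b"buy 10 widgets for $100"
    signature = private_key.sign(order)

    # Anyone with the public key can verify; the sender cannot later deny signing.
    try:
        public_key.verify(signature, order)
        print("signature valid: order is authentic")
    except InvalidSignature:
        print("signature invalid: message forged or altered")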
Utility
Utility means usefulness and usability. For example, suppose someone encrypted data on disk to prevent
unauthorized access or undetected modifications – and then lost the decryption key: that would be a breach of utility.
The data would be confidential, controlled, integral, authentic, and available – they just wouldn’t be useful in that
form. Similarly, conversion of salary data from one currency into an inappropriate currency would be a breach of
utility, as would the storage of data in a format inappropriate for a specific computer architecture; e.g., EBCDIC
instead of ASCII or 9-track magnetic tape instead of DVD-ROM. A tabular representation of data substituted for a
graph could be described as a breach of utility if the substitution made it more difficult to interpret the data. Utility is
often confused with availability because breaches such as those described in these examples may also require time to
work around the change in data format or presentation. However, the concept of usefulness is distinct from that of
availability.
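The format-mismatch example can be made concrete: the bytes below are fully intact, yet a system expecting ASCII-compatible text cannot use them until they are decoded as EBCDIC. The sample word is invented; Python's "cp500" codec is one EBCDIC variant:

    data = b"\xc8\x85\x93\x93\x96"   # the word "Hello" encoded in EBCDIC (code page 500)

    # Interpreted in an ASCII-compatible encoding, the data is gibberish:
    # the information has lost its utility...
    print(data.decode("cp1252"))     # -> nonsense characters

    # ...even though confidentiality, integrity, and availability are all intact.
    print(data.decode("cp500"))      # -> 'Hello'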
Information assurance process
The IA process typically begins with the enumeration and classification of the information assets to be protected.
Next, the IA practitioner will perform a risk assessment. This assessment considers both the probability and impact
of the undesired events. The probability component may be subdivided into threats and vulnerabilities. The impact
component is usually measured in terms of cost. The product of these values is the total risk.
Based on the risk assessment, the IA practitioner will develop a risk management plan. This plan proposes
countermeasures that involve mitigating, eliminating, accepting, or transferring the risks, and considers prevention,
detection, and response. A framework, such as Risk IT, CobiT, PCI DSS, ISO 17799 or ISO/IEC 27002, may be
utilized in designing this plan. Countermeasures may include tools such as firewalls and anti-virus software, policies
and procedures such as regular backups and configuration hardening, training such as security awareness education,
or restructuring such as forming an computer security incident response team (CSIRT) or computer emergency
response team (CERT). The cost and benefit of each countermeasure is carefully considered. Thus, the IA
practitioner does not seek to eliminate all risks, were that possible, but to manage them in the most cost-effective
way.
After the risk management plan is implemented, it is tested and evaluated, perhaps by means of formal audits. The
IA process is cyclical; the risk assessment and risk management plan are continuously revised and improved based
on data gleaned from evaluation.
Standards Organizations and Standards
There are a number of international and national bodies that issue standards in information assurance.
Education and certifications
Information security professionalism is the set of knowledge that people working in information security and similar fields (information assurance and computer security) should have and eventually demonstrate through certifications from well-respected organizations.
It also encompasses the education process required to accomplish different tasks in these fields.
Information technology adoption is always increasing and has spread to vital infrastructure for civil and military organizations. Anybody can become involved in cyberwarfare, so it is crucial that a nation have skilled professionals to defend its vital interests.
References
[1] http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf
[2] Parker, Donn B. (1998). Fighting Computer Crime. New York, NY: John Wiley & Sons. ISBN 0471163783.
[3] Parker, Donn B. (http://www.computersecurityhandbook.com/Author-Parker.html) (2002). "Toward a New Framework for Information Security" (http://www.computersecurityhandbook.com/CSH4/Chapter5.html). In Bosworth, Seymour; Kabay, M. E. The Computer Security Handbook (http://www.computersecurityhandbook.com/default.html) (4th ed.). New York, NY: John Wiley & Sons. ISBN 0471412589.
[4] Techhawking (February 2000). "Feb Attack 2000: DDOS Attack - analysis." (http://www.royans.net/rant/2000/06/06/feb-attack-2000-ddos-attack-analysis/). Retrieved 2008-04-09.
[5] Article on Tagged Data Authority Servers from Government Computer News (http://gcn.com/articles/2009/12/14/internaut-tagged-data-authority-engines.aspx)
External links
• GCHQ: Britain's Most Secret Intelligence Agency (http://www2.warwick.ac.uk/fac/soc/pais/staff/aldrich/vigilant/lectures/gchq/)
• Quantitative Risk Analysis in Information Security from Digital Threat (http://www.digitalthreat.net/2010/05/information-security-risk-analysis/)
• Verrimus 'Privacy Protected' (http://www.verrimus.com/)
Documentation
• UK Government (http://www.cabinetoffice.gov.uk/csia/ia_review)
• HMG INFOSEC STANDARD NO. 2 (http://www.cpni.gov.uk/Docs/re-20050804-00653.pdf) Risk management and accreditation of information systems (2005)
• IA References (http://www.albany.edu/acc/courses/ia/classics)
• Information Assurance XML Schema Markup Language (http://www.ism3.com/index.php?option=com_docman&task=doc_download&gid=5&Itemid=9)
• DoD Directive 8500.01 (http://www.dtic.mil/whs/directives/corres/pdf/850001p.pdf) Information Assurance
• DoD Instruction 8500.02 (http://www.dtic.mil/whs/directives/corres/pdf/850002p.pdf) Information Assurance (IA) Implementation
• DoD IA Policy Chart (http://iac.dtic.mil/iatac/ia_policychart.html)
• Archive of Information Assurance (http://iaarchive.fi)
EMSEC
• AFI 33-203 Vol 1 (http://www.e-publishing.af.mil/shared/media/epubs/AFI33-203V1.pdf), Emission Security (soon to be AFSSI 7700)
• AFI 33-203 Vol 3 (http://www.e-publishing.af.mil/shared/media/epubs/AFI33-203V3.pdf), EMSEC Countermeasures Reviews (soon to be AFSSI 7702)
• AFI 33-201 Vol 8, Protected Distributed Systems (soon to be AFSSI 7703)
Information Assurance Vulnerability Alert
An Information Assurance Vulnerability Alert (IAVA) is an announcement of a computer application software or
operating system vulnerability, issued in the form of alerts, bulletins, and technical advisories identified by
DoD-CERT, a division of the United States Cyber Command. These selected vulnerabilities are the mandated
baseline, or minimum configuration of all hosts residing on the GIG. USCYBERCOM analyzes each vulnerability
and determines if it is necessary or beneficial to the Department of Defense to release it as an IAVA. Implementation
of IAVA policy will help ensure that DoD Components take appropriate mitigating actions against vulnerabilities to
avoid serious compromises to DoD computer system assets that would potentially degrade mission performance.
Information Assurance Vulnerability Management Program
The Combatant Commands, Services, Agencies and field activities are required to implement vulnerability
notifications in the form of alerts, bulletins, and technical advisories. USSTRATCOM via its sub-unified command
USCYBERCOM has the authority to direct corrective actions, which may ultimately include disconnection of any
enclave, or affected system on the enclave, not in compliance with the IAVA program directives and vulnerability
response measures (i.e. communication tasking orders or messages). USSTRATCOM and USCYBERCOM will
coordinate with all affected organizations to determine operational impact to the DoD before instituting a
disconnection.
Background
On February 15, 1998, the Deputy Secretary of Defense issued a classified memorandum on Information Assurance
that instructed DISA, with the assistance of the Military Departments, to develop an alert system that ensured
positive control of information assurance. According to the memorandum, the alert system should:
• Identify a system administrator to be the point of contact for each relevant network system,
• Send alert notifications to each point of contact,
• Require confirmation by each point of contact acknowledging receipt of each alert notification,
• Establish a date for the corrective action to be implemented, and enable DISA to confirm whether the correction
has been implemented.
The Deputy Secretary of Defense issued an Information Assurance Vulnerability Alert (IAVA) policy memorandum
on December 30, 1999. Events of the time demonstrated that widely known vulnerabilities existed throughout
DoD networks, with the potential to severely degrade mission performance. The policy memorandum instructs the
DISA to develop and maintain an IAVA database system that would ensure a positive control mechanism for system
administrators to receive, acknowledge, and comply with system vulnerability alert notifications. The IAVA policy
requires the Component Commands, Services, and Agencies to register and report their acknowledgement of and
compliance with the IAVA database. According to the policy memorandum, the compliance data to be reported
should include the number of assets affected, the number of assets in compliance, and the number of assets with
waivers.
External links
• Office of the Inspector General, DoD Compliance with the Information Assurance Vulnerability Alert Policy, Dec 2001 (http://www.dodig.osd.mil/Audit/reports/fy01/01-013.pdf)
• Chairman of the Joint Chiefs of Staff Instruction 6510.01E, August 2007 (http://www.dtic.mil/cjcs_directives/cdata/unlimit/6510_01.pdf)
• DoD IA Policy Chart (http://iac.dtic.mil/iatac/ia_policychart.html)
Information security
[Figure: Information security components, or qualities: Confidentiality, Integrity and Availability (CIA). Information systems are decomposed into three main portions (hardware, software and communications), to which information security industry standards are applied, as mechanisms of protection and prevention, at three levels or layers: physical, personal and organizational. Essentially, procedures or policies are implemented to tell people (administrators, users and operators) how to use products to ensure information security within organizations.]
Information security means protecting
information and information systems from
unauthorized access, use, disclosure,
disruption, modification, perusal, inspection,
recording or destruction.[1]
The terms information security, computer
security and information assurance are
frequently incorrectly used interchangeably.
These fields are often interrelated and share
the common goals of protecting the
confidentiality, integrity and availability of
information; however, there are some subtle
differences between them.
These differences lie primarily in the
approach to the subject, the methodologies
used, and the areas of concentration.
Information security is concerned with the
confidentiality, integrity and availability of
data regardless of the form the data may
take: electronic, print, or other forms.
Computer security can focus on ensuring the
availability and correct operation of a
computer system without concern for the
information stored or processed by the
computer.
Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of
confidential information about their employees, customers, products, research, and financial status. Most of this
information is now collected, processed and stored on electronic computers and transmitted across networks to other
computers.
Should confidential information about a business' customers or finances or new product line fall into the hands of a
competitor, such a breach of security could lead to lost business, law suits or even bankruptcy of the business.
Protecting confidential information is a business requirement, and in many cases also an ethical and legal
requirement.
For the individual, information security has a significant effect on privacy, which is viewed very differently in
different cultures.
The field of information security has grown and evolved significantly in recent years. There are many ways of
gaining entry into the field as a career. It offers many areas for specialization including: securing network(s) and
allied infrastructure, securing applications and databases, security testing, information systems auditing, business
continuity planning and digital forensics science, etc.
This article presents a general overview of information security and its core concepts.
History
Since the early days of writing, heads of state and military commanders understood that it was necessary to provide
some mechanism to protect the confidentiality of written correspondence and to have some means of detecting
tampering.
Julius Caesar is credited with the invention of the Caesar cipher ca. 50 B.C., which was created in order to prevent
his secret messages from being read should a message fall into the wrong hands.
World War II brought about many advancements in information security and marked the beginning of the
professional field of information security.
The end of the 20th century and early years of the 21st century saw rapid advancements in telecommunications,
computing hardware and software, and data encryption. The availability of smaller, more powerful and less
expensive computing equipment made electronic data processing within the reach of small business and the home
user. These computers quickly became interconnected through a network generically called the Internet or World
Wide Web.
The rapid growth and widespread use of electronic data processing and electronic business conducted through the
Internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting
the computers and the information they store, process and transmit. The academic disciplines of computer security,
information security and information assurance emerged along with numerous professional organizations – all
sharing the common goals of ensuring the security and reliability of information systems.
Basic principles
Key concepts
For over twenty years, information security has held confidentiality, integrity and availability (known as the CIA
triad) to be its core principles.
There is continuous debate about extending this classic trio. Other principles such as Accountability have sometimes
been proposed for addition – it has been pointed out that issues such as Non-Repudiation do not fit well within the
three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations)
Legality is becoming a key consideration for practical security installations.
In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements
of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The
merits of the Parkerian hexad are a subject of debate amongst security professionals.
Confidentiality
Confidentiality refers to preventing the disclosure of information to unauthorized individuals or systems. For
example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to
the merchant and from the merchant to a transaction processing network. The system attempts to enforce
confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in
databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If
an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred.
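As a concrete illustration of enforcing confidentiality in transit, the sketch below encrypts a card number before transmission so that only a holder of the key can recover it. It uses the third-party Python cryptography package, which is an assumption of this example; the article does not prescribe any particular library, and the card number shown is a standard test value.

# Sketch: symmetric encryption of a card number in transit,
# using the (assumed) third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must be shared secretly with the receiver
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"          # hypothetical test number
token = cipher.encrypt(card_number)           # safe to transmit; unreadable without the key
assert cipher.decrypt(token) == card_number   # only key holders can recover it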
Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer
screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer
containing sensitive information about a company's employees is stolen or sold, it could result in a breach of
confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is
not authorized to have the information.
Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information
a system holds.
Integrity
In information security, integrity means that data cannot be modified undetectably. This is not the same thing as
referential integrity in databases, although it can be viewed as a special case of Consistency as understood in the
classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit.
Information security systems typically provide message integrity in addition to data confidentiality.
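One common mechanism for detecting modification in transit is a message authentication code. The sketch below uses Python's standard hmac module; the hard-coded key is a placeholder for whatever key distribution the communicating parties actually use.

# Sketch: detecting modification in transit with an HMAC
# (Python standard library; the key here is a placeholder).
import hashlib
import hmac

key = b"shared-secret-key"          # placeholder; exchange securely in practice
message = b"transfer $100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, tag):
    """Receiver recomputes the tag; any change to the message changes it."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)
assert not verify(key, b"transfer $999 to account 42", tag)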
Availability
For any information system to serve its purpose, the information must be available when it is needed. This means
that the computing systems used to store and process the information, the security controls used to protect it, and the
communication channels used to access it must be functioning correctly. High availability systems aim to remain
available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.
Ensuring availability also involves preventing denial-of-service attacks.
Authenticity
In computing, e-business, and information security, it is necessary to ensure that the data, transactions,
communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate
that both parties involved are who they claim to be.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party
of a transaction cannot deny having received a transaction nor can the other party deny having sent a transaction.
Electronic commerce uses technology such as digital signatures and encryption to establish authenticity and
non-repudiation.
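A minimal sketch of a digital signature, one of the technologies named above, follows. It uses the Ed25519 primitives from the third-party Python cryptography package (an assumed choice; any signature scheme would illustrate the point): only the private-key holder can produce the signature, so the sender cannot plausibly deny having signed the message.

# Sketch: digital signature for authenticity and non-repudiation,
# using the (assumed) third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
public_key = private_key.public_key()        # published to everyone

message = b"I agree to the contract terms."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises if message or signature was altered
    print("signature valid: sender cannot repudiate this message")
except InvalidSignature:
    print("signature invalid")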
Risk management
A comprehensive treatment of the topic of risk management is beyond the scope of this article. However, a useful
definition of risk management will be provided as well as some basic terminology and a commonly used process for
risk management.
The CISA Review Manual 2006 provides the following definition of risk management: "Risk management is the
process of identifying vulnerabilities and threats to the information resources used by an organization in achieving
business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based
on the value of the information resource to the organization."[2]
There are two things in this definition that may need some clarification. First, the process of risk management is an
ongoing iterative process. It must be repeated indefinitely. The business environment is constantly changing and new
threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to
manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of
the informational asset being protected.
Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the
asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A
threat is anything (man made or act of nature) that has the potential to cause harm.
The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a
vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of
availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It
should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining
risk is called residual risk.
A risk assessment is carried out by a team of people who have knowledge of specific areas of the business.
Membership of the team may vary over time as different parts of the business are assessed. The assessment may use
a subjective qualitative analysis based on informed opinion, or where reliable dollar figures and historical
information is available, the analysis may use quantitative analysis.
Research has shown that the most vulnerable point in most information systems is the human user, operator,
designer, or other human.[3] The ISO/IEC 27002:2005 Code of practice for information security management
recommends the following be examined during a risk assessment:
• security policy,
• organization of information security,
• asset management,
• human resources security,
• physical and environmental security,
• communications and operations management,
• access control,
• information systems acquisition, development and maintenance,
• information security incident management,
• business continuity management, and
• regulatory compliance.
In broad terms, the risk management process consists of:
1. Identify assets and estimate their value. Include: people, buildings, hardware, software, data (electronic, print,
other), supplies.
2. Conduct a threat assessment. Include: acts of nature, acts of war, accidents, malicious acts originating from
inside or outside the organization.
3. Conduct a vulnerability assessment, and for each vulnerability, estimate the probability that it will be exploited.
Evaluate policies, procedures, standards, training, physical security, quality control, technical security.
4. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis
(a quantitative sketch follows this list).
5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost
effectiveness, and value of the asset.
6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective
protection without discernible loss of productivity.
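For the quantitative analysis in step 4, a common textbook measure (not one this article itself prescribes) is the annualized loss expectancy: the single loss expectancy (asset value times exposure factor) multiplied by the annualized rate of occurrence. A Python sketch with invented figures:

# Sketch: annualized loss expectancy (ALE), a common quantitative
# risk measure. All figures below are invented for illustration.
def single_loss_expectancy(asset_value, exposure_factor):
    """Expected loss from one occurrence (exposure_factor in 0..1)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """SLE times annualized rate of occurrence (events per year)."""
    return sle * aro

sle = single_loss_expectancy(asset_value=500_000, exposure_factor=0.4)
ale = annualized_loss_expectancy(sle, aro=0.2)  # one event every 5 years
print(f"SLE = {sle:,.0f}  ALE = {ale:,.0f}")
# A control costing less per year than the ALE it removes may be worth buying.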
For any given risk, Executive Management can choose to accept the risk based upon the relatively low value of the
asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or, leadership may
choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some
cases, the risk can be transferred to another business by buying insurance or out-sourcing to another business.[4]
The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. This is itself a
potential risk.
Controls
When Management chooses to mitigate a risk, they will do so by implementing one or more of three different types
of controls.
Administrative
Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards
and guidelines. Administrative controls form the framework for running the business and managing people. They
inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and
regulations created by government bodies are also a type of administrative control because they tell the business how it must operate.
Some industry sectors have policies, procedures, standards and guidelines that must be followed – the Payment Card
Industry (PCI) Data Security Standard required by Visa and MasterCard is such an example. Other examples of
administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary
policies.
Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical
and physical controls are manifestations of administrative controls. Administrative controls are of paramount
importance.
Logical
Logical controls (also called technical controls) use software and data to monitor and control access to information
and computing systems. For example: passwords, network and host based firewalls, network intrusion detection
systems, access control lists, and data encryption are logical controls.
An important logical control that is frequently overlooked is the principle of least privilege. The principle of least
privilege requires that an individual, program or system process is not granted any more access privileges than are
necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging
into Windows as user Administrator to read Email and surf the Web. Violations of this principle can also occur when
an individual collects additional access privileges over time. This happens when employees' job duties change, or
they are promoted to a new position, or they transfer to another department. The access privileges required by their
new duties are frequently added onto their already existing access privileges which may no longer be necessary or
appropriate.
Information security
183
Physical
Physical controls monitor and control the environment of the work place and computing facilities. They also monitor
and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and
fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the
network and work place into functional areas is also a physical control.
An important physical control that is frequently overlooked is the separation of duties. Separation of duties ensures
that an individual cannot complete a critical task alone. For example: an employee who submits a request for
reimbursement should not also be able to authorize payment or print the check. An applications programmer should
not also be the server administrator or the database administrator – these roles and responsibilities must be separated
from one another.[5]
Defense in depth
Information security must protect information throughout the life span
of the information, from the initial creation of the information on
through to the final disposal of the information. The information must
be protected while in motion and while at rest. During its lifetime,
information may pass through many different information processing
systems and through many different parts of information processing
systems. There are many different ways the information and
information systems can be threatened. To fully protect the information
during its lifetime, each component of the information processing
system must have its own protection mechanisms. The building up,
layering on and overlapping of security measures is called defense in
depth. The strength of any system is no greater than its weakest link.
Using a defense-in-depth strategy, should one defensive measure fail,
there are other defensive measures in place that continue to provide protection.
Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of
controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach,
defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional
insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core
of the onion, people the next outer layer of the onion, and network security, host-based security and application
security forming the outermost layers of the onion. Both perspectives are equally valid and each provides valuable
insight into the implementation of a good defense-in-depth strategy.
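To make the layering concrete, here is a minimal, purely illustrative Python sketch in which a request must pass independent network-, host-, and application-layer checks; the three predicate functions are hypothetical stand-ins for real controls such as firewalls, host hardening, and input validation.

# Sketch: defense in depth as independent, overlapping checks.
# The three predicates are hypothetical stand-ins for real controls.
def network_allows(request):      # e.g., a firewall rule
    return request["src_ip"].startswith("10.")

def host_allows(request):         # e.g., a host-based access control list
    return request["user"] in {"alice", "bob"}

def application_allows(request):  # e.g., application-level input validation
    return request["action"] in {"read", "list"}

LAYERS = [network_allows, host_allows, application_allows]

def permit(request):
    """Every layer must independently allow the request."""
    return all(layer(request) for layer in LAYERS)

print(permit({"src_ip": "10.0.0.7", "user": "alice", "action": "read"}))    # True
print(permit({"src_ip": "10.0.0.7", "user": "mallory", "action": "read"}))  # False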
Security classification for information
An important aspect of information security and risk management is recognizing the value of information and
defining appropriate procedures and protection requirements for the information. Not all information is equal and so
not all information requires the same degree of protection. This requires information to be assigned a security
classification.
The first step in information classification is to identify a member of senior management as the owner of the
particular information to be classified. Next, develop a classification policy. The policy should describe the different
classification labels, define the criteria for information to be assigned a particular label, and list the required security
controls for each classification.
Some factors that influence which classification information should be assigned include how much value that
information has to the organization, how old the information is and whether or not the information has become
obsolete. Laws and other regulatory requirements are also important considerations when classifying information.
The type of information security classification labels selected and used will depend on the nature of the organisation,
with examples being:
• In the business sector, labels such as: Public, Sensitive, Private, Confidential.
• In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential,
Secret, Top Secret and their non-English equivalents.
• In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber and Red.
All employees in the organization, as well as business partners, must be trained on the classification schema and
understand the required security controls and handling procedures for each classification. The classification a
particular information asset has been assigned should be reviewed periodically to ensure the classification is still
appropriate for the information and to ensure the security controls required by the classification are in place.
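A classification policy of the kind described above can be expressed as a simple lookup from label to required controls. The sketch below invents a small business-sector policy purely for illustration; the labels follow the business-sector examples above, and the controls are hypothetical.

# Sketch: a toy classification policy mapping labels to required
# security controls. The specific controls are invented examples.
POLICY = {
    "Public":       {"encryption_at_rest": False, "access_logging": False},
    "Sensitive":    {"encryption_at_rest": False, "access_logging": True},
    "Private":      {"encryption_at_rest": True,  "access_logging": True},
    "Confidential": {"encryption_at_rest": True,  "access_logging": True},
}

def required_controls(label):
    """Look up the controls a classification label demands."""
    try:
        return POLICY[label]
    except KeyError:
        raise ValueError(f"unknown classification label: {label!r}")

print(required_controls("Private"))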
Access control
Access to protected information must be restricted to people who are authorized to access the information. The
computer programs, and in many cases the computers that process the information, must also be authorized. This
requires that mechanisms be in place to control the access to protected information. The sophistication of the access
control mechanisms should be in parity with the value of the information being protected – the more sensitive or
valuable the information the stronger the control mechanisms need to be. The foundation on which access control
mechanisms are built starts with identification and authentication.
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my
name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before
John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be
John Doe really is John Doe.
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he
tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller
his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the
photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then
the teller has authenticated that John Doe is who he claimed to be.
There are three different types of information that can be used for authentication: something you know, something
you have, or something you are. Examples of something you know include such things as a PIN, a password, or
your mother's maiden name. Examples of something you have include a driver's license or a magnetic swipe card.
Something you are refers to biometrics. Examples of biometrics include palm prints, finger prints, voice prints and
retina (eye) scans. Strong authentication requires providing information from two of the three different types of
authentication information. For example, something you know plus something you have. This is called two factor
authentication.
On computer systems in use today, the Username is the most common form of identification and the Password is the
most common form of authentication. Usernames and passwords have served their purpose but in our modern world
they are no longer adequate. Usernames and passwords are slowly being replaced with more sophisticated
authentication mechanisms.
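The standard defense for the password half of this picture is to store only a salted, slow hash rather than the password itself. A minimal sketch using Python's standard library follows; the iteration count is an illustrative choice, not a recommendation from this article.

# Sketch: salted password hashing with the Python standard library.
# The iteration count is an illustrative choice; tune it for your hardware.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; never store the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("guess", salt, digest))                         # False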
After a person, program or computer has successfully been identified and authenticated then it must be determined
what informational resources they are permitted to access and what actions they will be allowed to perform (run,
view, create, delete, or change). This is called authorization.
Authorization to access information and other computing services begins with administrative policies and
procedures. The policies prescribe what information and computing services can be accessed, by whom, and under
what conditions. The access control mechanisms are then configured to enforce these policies.
Different computing systems are equipped with different kinds of access control mechanisms - some may even offer
a choice of different access control mechanisms. The access control mechanism a system offers will be based upon
one of three approaches to access control or it may be derived from a combination of the three approaches.
The non-discretionary approach consolidates all access control under a centralized administration. The access to
information and other resources is usually based on the individual's function (role) in the organization or the tasks the
individual must perform. The discretionary approach gives the creator or owner of the information resource the
ability to control access to those resources. In the Mandatory access control approach, access is granted or denied
based upon the security classification assigned to the information resource.
Examples of common access control mechanisms in use today include Role-based access control available in many
advanced Database Management Systems, simple file permissions provided in the UNIX and Windows operating
systems, Group Policy Objects provided in Windows network systems, Kerberos, RADIUS, TACACS, and the
simple access lists used in many firewalls and routers.
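As an illustration of the non-discretionary, role-based approach, the sketch below grants permissions through roles rather than directly to users. The roles, permissions and user assignments are invented for the example.

# Sketch: minimal role-based access control (RBAC).
# Roles, permissions and assignments are invented examples.
ROLE_PERMISSIONS = {
    "teller":  {"account:read", "account:deposit"},
    "manager": {"account:read", "account:deposit", "account:approve"},
}
USER_ROLES = {
    "john.doe": {"teller"},
    "jane.roe": {"teller", "manager"},
}

def is_authorized(user, permission):
    """A user holds a permission iff one of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("john.doe", "account:approve"))  # False
print(is_authorized("jane.roe", "account:approve"))  # True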
To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that
people are held accountable for their actions. All failed and successful authentication attempts must be logged, and
all access to information must leave some type of audit trail.
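A hedged sketch of the audit-trail requirement: log every authentication attempt, successful or not, with Python's standard logging module. The log format, file name and severity levels are illustrative choices, not requirements from the article.

# Sketch: audit trail for authentication attempts (standard library).
# Format, destination and levels are illustrative choices.
import logging

logging.basicConfig(
    filename="auth_audit.log",
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)
audit = logging.getLogger("auth.audit")

def record_attempt(user, source_ip, success):
    """Both failed and successful attempts must be logged."""
    if success:
        audit.info("AUTH SUCCESS user=%s src=%s", user, source_ip)
    else:
        audit.warning("AUTH FAILURE user=%s src=%s", user, source_ip)

record_attempt("john.doe", "10.0.0.7", success=True)
record_attempt("mallory", "203.0.113.9", success=False)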
Cryptography
Information security uses cryptography to transform usable information into a form that renders it unusable by
anyone other than an authorized user; this process is called encryption. Information that has been encrypted
(rendered unusable) can be transformed back into its original usable form by an authorized user, who possesses the
cryptographic key, through the process of decryption. Cryptography is used in information security to protect
information from unauthorized or accidental disclosure while the information is in transit (either electronically or
physically) and while information is in storage.
Cryptography provides information security with other useful applications as well, including improved authentication
methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less
secure applications such as telnet and ftp are slowly being replaced with more secure applications such as ssh that use
encrypted network communications. Wireless communications can be encrypted using protocols such as
WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn) are secured using
AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP
can be used to encrypt data files and Email.
Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to
be implemented using industry accepted solutions that have undergone rigorous peer review by independent experts
in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak
or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the
same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and
destruction and they must be available when needed. PKI solutions address many of the problems that surround key
management.
Process
The terms reasonable and prudent person, due care and due diligence have been used in the fields of Finance,
Securities, and Law for many years. In recent years these terms have found their way into the fields of computing
and information security. U.S.A. Federal Sentencing Guidelines now make it possible to hold corporate officers
liable for failing to exercise due care and due diligence in the management of their information systems.
In the business world, stockholders, customers, business partners and governments have the expectation that
corporate officers will run the business in accordance with accepted business practices and in compliance with laws
and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent
person takes due care to ensure that everything necessary is done to operate the business by sound business
principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, and ongoing) in their
due care of the business.
In the field of Information Security, Harris[6] offers the following definitions of due care and due diligence:
"Due care are steps that are taken to show that a company has taken responsibility for the activities that
take place within the corporation and has taken the necessary steps to help protect the company, its
resources, and employees." And, [Due diligence are the] "continual activities that make sure the
protection mechanisms are continually maintained and operational."
Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show -
this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there
are continual activities - this means that people are actually doing things to monitor and maintain the protection
mechanisms, and these activities are ongoing.
Security governance
The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise
Security (GES)", defines characteristics of effective security governance. These include:
• An enterprise-wide issue
• Leaders are accountable
• Viewed as a business requirement
• Risk-based
• Roles, responsibilities, and segregation of duties defined
• Addressed and enforced in policy
• Adequate resources committed
• Staff aware and trained
• A development life cycle requirement
• Planned, managed, measurable, and measured
• Reviewed and audited
Incident response plans
An incident response plan defines how an organization prepares for, detects, and handles security incidents. Key elements include:
• Selecting team members
• Define roles, responsibilities and lines of authority
• Define a security incident
• Define a reportable incident
• Training
• Detection
• Classification
• Escalation
• Containment
• Eradication
• Documentation
Change management
Change management is a formal process for directing and controlling alterations to the information processing
environment. This includes alterations to desktop computers, the network, servers and software. The objectives of
change management are to reduce the risks posed by changes to the information processing environment and
improve the stability and reliability of the processing environment as changes are made. It is not the objective of
change management to prevent or hinder necessary changes from being implemented.
Any change to the information processing environment introduces an element of risk. Even apparently simple
changes can have unexpected effects. One of Management's many responsibilities is the management of risk. Change
management is a tool for managing the risks introduced by changes to the information processing environment. Part
of the change management process ensures that changes are not implemented at inopportune times when they may
disrupt critical business processes or interfere with other changes being implemented.
Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information
processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing
environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not
generally require change management. However, relocating user file shares, or upgrading the Email server pose a
much higher level of risk to the processing environment and are not a normal everyday activity. The critical first
steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope
of the change system.
Change management is usually overseen by a Change Review Board composed of representatives from key business
areas, security, networking, systems administrators, Database administration, applications development, desktop
support and the help desk. The tasks of the Change Review Board can be facilitated with the use of an automated
workflow application (a minimal sketch of such a workflow follows the process list below). The responsibility of the
Change Review Board is to ensure the organization's documented change
management procedures are followed. The change management process is as follows:
• Requested: Anyone can request a change. The person making the change request may or may not be the same
person that performs the analysis or implements the change. When a request for change is received, it may
undergo a preliminary review to determine if the requested change is compatible with the organization's business
model and practices, and to determine the amount of resources needed to implement the change.
• Approved: Management runs the business and controls the allocation of resources; therefore, Management must
approve requests for changes and assign a priority for every change. Management might choose to reject a change
request if the change is not compatible with the business model, industry standards or best practices. Management
might also choose to reject a change request if the change requires more resources than can be allocated for the
change.
• Planned: Planning a change involves discovering the scope and impact of the proposed change; analyzing the
complexity of the change; allocating resources; and developing, testing and documenting both implementation
and backout plans. The criteria on which a decision to back out will be made must be defined.
• Tested: Every change must be tested in a safe test environment, which closely reflects the actual production
environment, before the change is applied to the production environment. The backout plan must also be tested.
• Scheduled: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing
the proposed implementation date for potential conflicts with other scheduled changes or critical business
activities.
• Communicated: Once a change has been scheduled it must be communicated. The communication is to give
others the opportunity to remind the change review board about other changes or critical business activities that
might have been overlooked when scheduling the change. The communication also serves to make the Help Desk
and users aware that a change is about to occur. Another responsibility of the change review board is to ensure
that scheduled changes have been properly communicated to those who will be affected by the change or
otherwise have an interest in the change.
• Implemented: At the appointed date and time, the changes must be implemented. Part of the planning process
was to develop an implementation plan, testing plan and backout plan. If the implementation of the change
fails, the post-implementation testing fails, or other "drop dead" criteria have been met, the backout plan
should be implemented.
• Documented: All changes must be documented. The documentation includes the initial request for change, its
approval, the priority assigned to it, the implementation, testing and back out plans, the results of the change
review board critique, the date/time the change was implemented, who implemented it, and whether the change
was implemented successfully, failed or postponed.
• Post change review: The change review board should hold a post implementation review of changes. It is
particularly important to review failed and backed out changes. The review board should try to understand the
problems that were encountered, and look for areas for improvement.
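Below is a minimal Python sketch of the workflow just described, modeled as a simple state machine. The states mirror the steps above, and the transition table is one illustrative reading of the process, not a prescribed implementation.

# Sketch: the change management steps as a simple state machine.
# The transition table is an illustrative reading of the process above.
TRANSITIONS = {
    "requested":    {"approved", "rejected"},
    "approved":     {"planned"},
    "planned":      {"tested"},
    "tested":       {"scheduled"},
    "scheduled":    {"communicated"},
    "communicated": {"implemented"},
    "implemented":  {"documented", "backed_out"},
    "backed_out":   {"documented"},
    "documented":   {"post_change_review"},
}

class ChangeRequest:
    def __init__(self, summary):
        self.summary = summary
        self.state = "requested"

    def advance(self, new_state):
        """Refuse transitions the documented process does not allow."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

cr = ChangeRequest("upgrade the e-mail server")
for step in ["approved", "planned", "tested", "scheduled",
             "communicated", "implemented", "documented"]:
    cr.advance(step)
print(cr.state)  # documented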
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created
when changes are made to the information processing environment. Good change management procedures improve
the overall quality and success of changes as they are implemented. This is accomplished through planning, peer
review, documentation and communication.
ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps[7] (full book
summary[8]), and the Information Technology Infrastructure Library all provide valuable guidance on implementing an
efficient and effective change management program.
Business continuity
Business continuity is the mechanism by which an organization continues to operate its critical business units, during
planned or unplanned disruptions that affect normal business operations, by invoking planned and managed
procedures.
Contrary to what most people think, business continuity is not necessarily an IT system or process, simply because it is
about the business as a whole. Disasters and disruptions to business are a reality. Whether a disaster is natural or
man-made, it affects normal life and therefore business, so why is planning so important? The reality is that "all
businesses recover", whether they planned for recovery or not, simply because business is about earning money for
survival.
Planning merely means getting better prepared to face a disruption, knowing full well that the best plans may fail.
Planning helps reduce the cost of recovery and operational overheads and, most importantly, lets the business sail
through smaller disruptions effortlessly.
For businesses to create effective plans, they need to focus on the following key questions. Most of these are
common knowledge, and anyone can work through a BCP by asking them.
1. Should a disaster strike, what are the first few things I should do? Should I call people to find out if they are OK,
or call the bank to check that my money is safe? This is Emergency Response. Emergency Response services
help take the first hit when the disaster strikes, and if the disaster is serious enough, the Emergency Response
teams need to quickly get a Crisis Management team in place.
2. What parts of my business should I recover first? The one that brings me most money or the one where I spend
the most, or the one that will ensure I shall be able to get sustained future growth? The identified sections are the
critical business units. There is no magic bullet here, no one answer satisfies all. Businesses need to find answers
that meet business requirements.
3. How soon should I target to recover my critical business units? In BCP technical jargon this is called Recovery
Time Objective, or RTO. This objective will define what costs the business will need to spend to recover from a
disruption. For example, it is cheaper to recover a business in 1 day than in 1 hour.
4. What do I need to recover the business? IT, machinery, records, food, water, people: there are many aspects to
dwell upon, and the cost factor becomes clearer now. Business leaders need to drive business continuity. But what
about the DRP the IT manager spent $200,000 creating last month? A DRP (Disaster Recovery Plan) is about
continuing an IT system and is one section of a comprehensive Business Continuity Plan (see below).
5. Where do I recover my business from? Will the business center give me space to work, or would it be
flooded by many people queuing up for the same reasons that I am?
6. Once I do recover from the disaster and work at reduced production capacity, since my main operational sites
are unavailable, how long can this go on? How long can I do without my original sites, systems, and people? This
defines the amount of business resilience a business may have.
7. Now that I know how to recover my business, how do I make sure my plan works? Most BCP pundits would
recommend testing the plan at least once a year, reviewing it for adequacy, and rewriting or updating it
either annually or when businesses change.
Disaster recovery planning
While a business continuity plan (BCP) takes a broad approach to dealing with organizational-wide effects of a
disaster, a disaster recovery plan (DRP), which is a subset of the business continuity plan, is instead focused on
taking the necessary steps to resume normal business operations as quickly as possible. A disaster recovery plan is
executed immediately after the disaster occurs and details what steps are to be taken in order to recover critical
information technology infrastructure.[9]
Laws and regulations
Below is a partial listing of European, United Kingdom, Canadian and USA governmental laws and regulations that
have, or will have, a significant effect on data processing and information security. Important industry sector
regulations have also been included when they have a significant impact on information security.
• UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to
individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data
Protection Directive (EUDPD) requires that all EU members adopt national regulations to standardize the
protection of data privacy for citizens throughout the EU.
• The Computer Misuse Act 1990 is an Act of the UK Parliament making computer crime (e.g. cracking -
sometimes incorrectly referred to as hacking) a criminal offence. The Act has become a model upon which
several other countries including Canada and the Republic of Ireland have drawn inspiration when subsequently
drafting their own information security laws.
• EU data retention laws require Internet service providers and phone companies to keep data on every electronic
message sent and phone call made for between six months and two years.
• The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g[10]; 34 CFR Part 99) is a USA
Federal law that protects the privacy of student education records. The law applies to all schools that receive
funds under an applicable program of the U.S. Department of Education. Generally, schools must have written
permission from the parent or eligible student in order to release any information from a student's education
record.
• Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards
for electronic health care transactions and national identifiers for providers, health insurance plans, and
employers. And, it requires health care providers, insurance providers and employers to safeguard the security and
privacy of health data.
• Gramm-Leach-Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999,
protects the privacy and security of private financial information that financial institutions collect, hold, and
process.
• Sarbanes-Oxley Act of 2002 (SOX). Section 404 of the act requires publicly traded companies to assess the
effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each
fiscal year. Chief information officers are responsible for the security, accuracy and the reliability of the systems
that manage and report the financial data. The act also requires publicly traded companies to engage independent
auditors who must attest to, and report on, the validity of their assessments.
• Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing
payment account data security. It was developed by the founding payment brands of the PCI Security Standards
Council, including American Express, Discover Financial Services, JCB, MasterCard Worldwide and Visa
International, to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI
DSS is a multifaceted security standard that includes requirements for security management, policies, procedures,
network architecture, software design and other critical protective measures.
• State Security Breach Notification Laws (California and many others) require businesses, nonprofits, and state
institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or
stolen.
• Personal Information Protection and Electronic Documents Act (PIPEDA) – An Act to support and promote
electronic commerce by protecting personal information that is collected, used or disclosed in certain
circumstances, by providing for the use of electronic means to communicate or record information or transactions
and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.
Sources of standards
International Organization for Standardization (ISO) is a consortium of national standards institutes from 157
countries, coordinated through a secretariat in Geneva, Switzerland. ISO is the world's largest developer of
standards. ISO 15443: "Information technology - Security techniques - A framework for IT security assurance",
ISO/IEC 27002: "Information technology - Security techniques - Code of practice for information security
management", ISO-20000: "Information technology - Service management", and ISO/IEC27001: "Information
technology - Security techniques - Information security management systems - Requirements" are of particular
interest to information security professionals.
The USA National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S.
Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation
programs as well as publishes standards and guidelines to increase secure IT planning, implementation, management
and operation. NIST is also the custodian of the USA Federal Information Processing Standard publications (FIPS).
The Internet Society is a professional membership society with more than 100 organizational members and over 20,000
individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the
Internet, and is the organization home for the groups responsible for Internet infrastructure standards, including the
Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for
Comments (RFCs) which includes the Official Internet Protocol Standards and the RFC-2196 Site Security
Handbook.
The Information Security Forum is a global nonprofit organization of several hundred leading organizations in
financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It undertakes
research into information security practices and offers advice in its biannual Standard of Good Practice and more
detailed advisories for members.
The IT Baseline Protection Catalogs, or IT-Grundschutz Catalogs ("IT Baseline Protection Manual" before 2005),
are a collection of documents from the German Federal Office for Information Security (BSI), useful
for detecting and combating security-relevant weak points in the IT environment (the "IT cluster"). The collection
encompasses over 3000 pages, including the introduction and catalogs.
Professionalism
Information security professionalism is the set of knowledge that people working in Information security and
similar fields (Information Assurance and Computer security) should have and eventually demonstrate through
certifications from well respected organizations.
It also encompasses the education process required to accomplish different tasks in these fields.
Information technology adoption continues to increase and has spread to vital infrastructure for civil and military
organizations. Anyone can become involved in cyberwarfare, so it is crucial that a nation have skilled professionals to
defend its vital interests.
Conclusion
Information security is the ongoing process of exercising due care and due diligence to protect information, and
information systems, from unauthorized access, use, disclosure, disruption, modification, or destruction. The
never-ending process of information security involves ongoing training, assessment, protection, monitoring and
detection, incident response and repair, documentation, and review. This makes information security an
indispensable part of all business operations across different domains.
Scholars working in the field
• Stefan Brands
• Adam Back
• Lance Cottrell
• Ian Goldberg
• Peter Gutmann
• Bruce Schneier
• Gene Spafford
Further reading
• Anderson, K., "IT Security Professionals Must Evolve for Changing Market"[11], SC Magazine, October 12, 2006.
• Aceituno, V., "On Information Security Paradigms"[12], ISSA Journal, September 2005.
• Dhillon, G., Principles of Information Systems Security: Text and Cases, John Wiley & Sons, 2007.
• Lambo, T., "ISO/IEC 27001: The future of infosec certification"[13], ISSA Journal, November 2006.
Notes and references
[1] 44 U.S.C. § 3542(b)(1) (http://www.law.cornell.edu/uscode/44/3542.html)
[2] ISACA (2006). CISA Review Manual 2006. Information Systems Audit and Control Association (http://www.isaca.org/). p. 85. ISBN 1-933284-15-3.
[3] Kiountouzis, E. A.; Kokolakis, S. A. Information Systems Security: Facing the Information Society of the 21st Century. London: Chapman & Hall, Ltd. ISBN 0-412-78120-4.
[4] NIST SP 800-30, Risk Management Guide for Information Technology Systems (http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf)
[5] "Segregation of Duties Control matrix" (http://www.isaca.org/AMTemplate.cfm?Section=CISA1&Template=/ContentManagement/ContentDisplay.cfm&ContentID=40835). ISACA. 2008. Retrieved 2008-09-30.
[6] Harris, Shon (2003). All-in-one CISSP Certification Exam Guide (2nd ed.). Emeryville, CA: McGraw-Hill/Osborne. ISBN 0-07-222966-7.
[7] http://www.itpi.org/home/visibleops2.php
[8] http://wikisummaries.org/Visible_Ops
[9] Harris, Shon (2008). All-in-one CISSP Certification Exam Guide (4th ed.). New York, NY: McGraw-Hill. ISBN 978-0-07-149786-2.
[10] http://www.law.cornell.edu/uscode/20/1232.html
[11] http://www.scmagazineus.com/IT-security-professionals-must-evolve-for-changing-market/article/33990/
[12] http://www.issa.org/Library/Journals/2005/September/Aceituno%20Canal%20-%20On%20Information%20Security%20Paradigms.pdf
[13] https://www.issa.org/Library/Journals/2006/November/Lambo-ISO-IEC%2027001-The%20future%20of%20infosec%20certification.pdf
External links
• InfoSecNews.us (http://www.infosecnews.us/) Information Security News
• DoD IA Policy Chart (http://iac.dtic.mil/iatac/ia_policychart.html) on the DoD Information Assurance Technology Analysis Center web site
• patterns & practices Security Engineering Explained (http://msdn2.microsoft.com/en-us/library/ms998382.aspx)
• Open Security Architecture: controls and patterns to secure IT systems (http://www.opensecurityarchitecture.org)
• An Introduction to Information Security (http://security.practitioner.com/introduction/)
• IWS - Information Security Chapter (http://www.iwar.org.uk/comsec/)
Bibliography
• Allen, Julia H. (2001). The CERT Guide to System and Network Security Practices. Boston, MA: Addison-Wesley. ISBN 0-201-73723-X.
• Krutz, Ronald L.; Russell Dean Vines (2003). The CISSP Prep Guide (Gold ed.). Indianapolis, IN: Wiley. ISBN 0-471-26802-X.
• Layton, Timothy P. (2007). Information Security: Design, Implementation, Measurement, and Compliance. Boca Raton, FL: Auerbach Publications. ISBN 978-0-8493-7087-8.
• McNab, Chris (2004). Network Security Assessment. Sebastopol, CA: O'Reilly. ISBN 0-596-00611-X.
• Peltier, Thomas R. (2001). Information Security Risk Analysis. Boca Raton, FL: Auerbach Publications. ISBN 0-8493-0880-1.
• Peltier, Thomas R. (2002). Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management. Boca Raton, FL: Auerbach Publications. ISBN 0-8493-1137-3.
• White, Gregory (2003). All-in-one Security+ Certification Exam Guide. Emeryville, CA: McGraw-Hill/Osborne. ISBN 0-07-222633-1.
• Dhillon, Gurpreet (2007). Principles of Information Systems Security: Text and Cases. NY: John Wiley & Sons. ISBN 978-0471450566.
Information Security Automation Program
The Information Security Automation Program (ISAP, pronounced “I Sap”) is a U.S. government multi-agency initiative to enable automation and standardization of technical security operations. While a U.S. government initiative, its standards-based design can benefit all information technology security operations. ISAP's high-level goals include standards-based automation of security checking and remediation as well as automation of technical compliance activities (e.g. FISMA). ISAP's low-level objectives include enabling standards-based communication of vulnerability data, customizing and managing configuration baselines for various IT products, assessing information systems and reporting compliance status, using standard metrics to weight and aggregate potential vulnerability impact, and remediating identified vulnerabilities.
ISAP's technical specifications are contained in the related Security Content Automation Protocol (SCAP). ISAP's security automation content is either contained within, or referenced by, the National Vulnerability Database.
ISAP is being formalized through a trilateral memorandum of agreement (MOA) between the Defense Information Systems Agency (DISA), the National Security Agency (NSA), and the National Institute of Standards and Technology (NIST). The Office of the Secretary of Defense (OSD) also participates, and the Department of Homeland Security (DHS) funds the operational infrastructure on which ISAP relies (i.e., the National Vulnerability Database).
External links
• Information Security Automation Program web site [1]
• Security Content Automation Protocol web site [2]
• National Vulnerability Database web site [3]
This document incorporates text from Information Security Automation Program Overview (v1 beta) [4], a public domain publication of the U.S. government.
References
[1] http://nvd.nist.gov/scap.cfm
[2] http://scap.nist.gov
[3] http://nvd.nist.gov
[4] http://nvd.nist.gov/scap/docs/ISAP.doc
Information Security Forum
Industry: Information security best practice research
Founded: London, United Kingdom (1989)
Website: SecurityForum.org [1]
The Information Security Forum (ISF) is an international, independent, non-profit organization dedicated to benchmarking and identifying good practice in information security. It was established in 1989 as the European Security Forum and expanded its mission and membership in the 1990s. It now includes hundreds of members, including a large number of Fortune 500 companies, from North America, Asia, and other locations around the world. Groups of members are organized as chapters throughout Europe, Africa, Asia, the Middle East, and North America. The ISF is headquartered in London, United Kingdom, but also has staff based in New York City.[2]
The membership of the ISF is international and includes large organizations in transportation, financial services, chemical/pharmaceutical, manufacturing, government, retail, media, telecommunications, energy, professional services, and other sectors.[3]
In addition to the benchmarking program, the ISF runs regional chapter meetings, topical workshops, and a large annual conference (the "World Congress"), and develops and publishes research reports and tools addressing a wide variety of subjects. Its research agenda is driven entirely by its member organizations, which govern all ISF activities.
Primary deliverables
The ISF delivers a range of content, activities, and tools, summarized below.
The ISF is a paid membership organization, although the Standard of Good Practice is available for free to the public. From time to time, the ISF makes other research documents available for free. In the past, the ISF has given away a comprehensive checklist on Windows server security, a report entitled The Disappearance of the Network Boundary, and a briefing on information leakage. All other products and services are included in the membership fee.
The Standard of Good Practice and Meta Standard
Every two to three years, the ISF revises and publishes the Standard of Good Practice, a detailed documentation of best practices in information security, based on research and a comprehensive benchmarking program that has captured security behavior and detailed incident data for many years.[3] The most recent version was published in 2007 and the next version is expected in 2010.
The Forum has also developed a "meta standard" tool that cross-references several major information security
standards.
Research projects
Based on member input, the ISF selects a number of topics for research in a given year. The research includes
interviewing member and non-member organizations and thought leaders, academic researchers, and other key
individuals, as well as examining the range of approaches to the issue. The resulting reports typically go into depth
describing the issue generally, outlining the key information security issues to be considered, and proposing a
process to address the issue, based on best practices.
Methodologies and tools
For broad, fundamental areas, such as information risk assessment or return-on-investment calculations, the ISF develops comprehensive methodologies that formalize the approaches to these issues. Supporting each methodology, the ISF supplies Web-based and spreadsheet-based tools to automate these functions.
Benchmarking program
The ISF conducts a biannual benchmarking exercise, formerly called the "Information Security Status Survey", that comprehensively examines the information-security practices of participants in all the areas addressed by the Standard of Good Practice (although participants need not adhere to the Standard in order to take part in the benchmarking). The results include detailed information on how responses compare (anonymously) to those of other participants. The results system allows for detailed analysis, factoring in market sector, subject scope, organizational measures (such as number of employees or revenue), and other elements.
Face-to-Face Networking
Regional chapter meetings and other activities provide for face-to-face networking among individuals from ISF
member organizations. The ISF encourages direct member-to-member contact to address individual questions and to
strengthen relationships. Chapter meetings and other activities are conducted around the world and address local
issues and language/cultural dimensions.
Annual World Congress
The ISF's annual global conference, the "Annual World Congress", takes place in a different city each year. In 2008 the conference was held in Barcelona, Spain; the 2009 conference is planned for Vancouver, British Columbia, Canada. The typically 2½-day conference includes plenary sessions by leaders in information security and personal development, practical workshops conducted by member organizations, and a substantial evening social program. The program focuses on information-security practitioners; the participation of vendors is limited to an exhibition area and a few invited speakers. The conference is preceded by in-depth workshops.
Web portal (MX)
The ISF's extranet portal, "Member Exchange" (MX), allows members to directly access all ISF materials, including member presentations, and also includes messaging forums, contact information, webcasts, on-line tools, and other data for member use.
Leadership
The members of the ISF, through the regional chapters, elect a Council to develop its work program and generally to
represent member interests. The Council elects an "Executive" group that is responsible for financial and strategic
objectives.
References
[1] http://www.securityforum.org/
[2] Tom Jowitt (2008-07-31). "Security set to move beyond IT director control" (http://www.infoworld.com/news/feeds/08/07/31/Security-set-to-move-beyond-IT-director-control.html). Retrieved 2008-11-25.
[3] Computer Technology Review (2007-10-17). "ISF launches new standard of good practices (sic)" (http://www.wwpi.com/index.php?option=com_content&task=view&id=2914&Itemid=128). Retrieved 2008-11-25.
External links
• The Information Security Forum (http://www.securityforum.org)
• The Standard of Good Practice (http://www.isfsecuritystandard.com)
Information sensitivity
Information sensitivity is the control of access to information or knowledge whose disclosure to others of low or unknown trustworthiness, or with undesirable intentions, might result in the loss of an advantage or level of security.
Loss, misuse, modification, or unauthorized access to sensitive information can adversely affect the privacy or welfare of an individual, the trade secrets of a business, or even the security and internal and foreign affairs of a nation, depending on the level of sensitivity and nature of the information.
Levels
The term classified information generally refers to information that is subject to special security classification regulations imposed by many national governments. The term "unclassified", as used below, refers to information that is not subject to security classification regulations. Information can be reclassified to a different level or declassified (made available to the public) depending on changes in the situation or new intelligence.
Non-classified
Public information
This refers to information that is already a matter of public record or knowledge.
Personal information
This is information belonging to a private individual that the individual may commonly share with others for personal or business reasons. It generally includes contact information such as addresses, telephone numbers, e-mail addresses, and so on. It may be considered a breach of privacy to disclose such information, but for most people its disclosure is not considered a serious matter.
However, there are situations in which the release of personal information could have a negative effect on its owner.
For example, a person trying to avoid a stalker will be inclined to further restrict access to such personal information.
Routine business information
This includes business information that is not subject to special protection and may be routinely shared with anyone inside or outside of the business.
Private information
Information is private if it is associated with an individual and its disclosure might not be in the individual's best
interests. This would include a broad range of information that could be exploited to cause a person damage.
A person's SSN, credit card numbers, and other financial information should be considered private, since their
disclosure might lead to crimes such as identity theft or fraud.
Some types of private information, including records of a person's health care, education, and employment, may be protected by privacy laws in some cases. Disclosing private information can make the perpetrator liable for civil remedies and may in some cases subject him or her to criminal penalties.
Confidential business information
Confidential business information refers to information whose disclosure may harm the business. Such information may include trade secrets as described in the Economic Espionage Act of 1996 (18 U.S.C. §§ 1831[1]–1839[2]). In practice, it may include sales and marketing plans, new product plans, and notes associated with patentable inventions. In publicly held companies, confidential information may include "insider" financial data whose disclosure is regulated by the United States Securities and Exchange Commission.
Classified
Confidential
• Requires protection
• Unauthorized disclosure could damage national security, e.g., by compromising information that indicates the strength of the armed forces or by disclosing technical information about weapons, such as performance characteristics, test data, design, and production data.
Secret
• Requires substantial protection
• Unauthorized disclosure could seriously damage national security
• Wrongful disclosure could lead to a disruption of foreign relations, impair a program or policy directly related to
national security, reveal significant military plans or intelligence operations, or compromise significant scientific
or technological development relating to national security
• Most classified information falls into this category
• Penalty can be a large fine and/or imprisonment from 5 years to life
Top secret
• Requires the highest degree of protection
• Unauthorized disclosure could severely damage national security
• Wrongful disclosure could lead to war against a nation or its allies, disrupt vital relations, compromise vital
defense plans or cryptologic and communications intelligence systems, reveal sensitive intelligence operations, or
could jeopardize a vital advantage in an area of science or technology
• Penalty can range from 5 years' imprisonment to life, or even the death penalty if the act is considered treason
Sensitivity Indicator in the USA
In the intelligence community, the sensitivity indicator (also known as a sensitivity label) specifies the level of secrecy of a project, document, or piece of information according to its relevance to national security. Only those with appropriate security clearance can access information of a certain sensitivity, and they may face additional special access restrictions.
The indicator can also be the name of a classified project, such as "Project Blue Book" or "Ultra", further restricting access to or handling of the information.
External links
• ISOO [3]
• CIA [4]
• FBI Security Clearance FAQ [5]
References
[1] http://www.law.cornell.edu/uscode/18/1831.html
[2] http://www.law.cornell.edu/uscode/18/1839.html
[3] http://www.archives.gov/isoo/
[4] https://www.cia.gov/library/publications/cia_today/ciatoday_05.shtml
[5] http://www.fbi.gov/clearance/securityclearance.htm
Inter-Control Center Communications Protocol
The Inter-Control Center Communications Protocol (ICCP or IEC 60870-6/TASE.2)[1] is being specified by utility organizations throughout the world to provide data exchange over wide area networks (WANs) between utility control centers, utilities, power pools, regional control centers, and non-utility generators. ICCP is also an international standard: International Electrotechnical Commission (IEC) Telecontrol Application Service Element 2 (TASE.2).
Background
Inter-utility real time data exchange has become critical to the operation of interconnected systems in most parts of
the world. For example, the development of electricity markets has seen the management of electricity networks by a
functional hierarchy that is split across boundaries of commercial entities. At the top level there is typically a system
operator with co-ordination responsibilities for dispatch and overall system security. Below this are regional
transmission companies that tie together distribution companies and generating companies. In continental power
systems there is now considerable interconnection across international borders. ICCP allows the exchange of real
time and historical power system information including status and control data, measured values, scheduling data,
energy accounting data and operator messages.
Historically there has been reliance on custom or proprietary links and protocols to exchange real time data between
systems. ICCP began as an effort to develop an international standard for real-time data exchange within the electric
power utility industry. A working group was formed in 1991 to develop a protocol standard, develop a prototype to
test the specification, submit the specification to the IEC for standardisation and carry out interoperability testing
between developing vendors. The initial driver was to meet European Common Market requirements in 1992. The
official designation of the first protocol was TASE.1 (Telecontrol Application Service Element-1).[2] The second protocol, TASE.2,[3] making use of the Manufacturing Message Specification (MMS), appears to be the version that has become the most popular.
In the US, ICCP networks are widely used to tie together groups of utility companies, typically a regional system operator with transmission utilities, distribution utilities, and generators. Regional operators may also be connected together to co-ordinate import and export of power between regions across major inter-ties.
ICCP Functionality
Basic ICCP functionality is specified as “Conformance Blocks”, listed below with example data. The objects that are used to convey the data are defined in various parts of IEC 60870-6.
1. Periodic System Data: status points, analogue points, quality flags, time stamp, change-of-value counter, protection events. Association objects to control ICCP sessions.
2. Extended Data Set Condition Monitoring: provides report-by-exception capability for the data types that Block 1 is able to transfer periodically.
3. Block Data Transfer: provides a means of transferring Block 1 and Block 2 data types as block transfers instead of point by point. In some situations this may reduce bandwidth requirements.
4. Information Messages: simple text and binary files.
5. Device Control: device control requests: on/off, trip/close, raise/lower, etc., and digital setpoints. Includes mechanisms for interlocked controls and select-before-operate.
6. Program Control: allows an ICCP client to remotely control programs executing on an ICCP server.
7. Event Reporting: extended reporting to a client of error conditions and device state changes at a server.
8. Additional User Objects: scheduling, accounting, outage, and plant information.
9. Time Series Data: allows a client to request a report from a server of historical time series data between a start and end date.
Protocol Architecture
ICCP is based on client/server principles. Data transfers result from a request from a control centre (client) to another control centre (server). Control centres may be both clients and servers. ICCP is just one of the elements in a standard seven-layer OSI model; as such, any physical interfaces, transport, and network services that fit this model are supported. TCP/IP over Ethernet (802.3) seems to be the most common. ICCP may operate over a single point-to-point link between two control centres; however, the more general case is for many control centres and a routed wide area network. The logical connections, or “associations”, between control centres are completely general. A client may establish associations with more than one server, and a client may establish more than one association with the same server. Multiple associations with the same server can be established at different levels of quality of service so that high-priority real-time data is not delayed by lower priority or non-real-time data transfers.
Access Control
ICCP does not provide authentication or encryption. These services are normally provided by lower protocol layers.
ICCP uses “Bilateral Tables” to control access. A Bilateral Table represents the agreement between two control
centres connected with an ICCP link. The agreement identifies data elements and objects that can be accessed via the
link and the level of access permitted. Once an ICCP link is established, the contents of the Bilateral Tables in the
server and client provide complete control over what is accessible to each party. There must be matching entries in
the server and client tables to provide access to data and objects.
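To illustrate how Bilateral Table checking might work, the following Python sketch models access decisions for a pair of hypothetical data points. The table structure, point names, and access levels are invented for illustration and do not reflect any real ICCP implementation.

# Hypothetical sketch of Bilateral Table access control (not a real ICCP API).
# Each centre's table maps object names to the access level it permits.
SERVER_TABLE = {"BusVoltage_A": "read", "Breaker_52A": "control"}
CLIENT_TABLE = {"BusVoltage_A": "read", "Breaker_52A": "read"}

def access_permitted(obj, requested):
    """Grant access only when matching entries exist in BOTH tables and
    both permit the requested level ('control' is assumed to imply 'read')."""
    levels = {"read": 1, "control": 2}
    server, client = SERVER_TABLE.get(obj), CLIENT_TABLE.get(obj)
    if server is None or client is None:
        return False  # no matching entry in one table -> no access
    needed = levels[requested]
    return levels[server] >= needed and levels[client] >= needed

print(access_permitted("BusVoltage_A", "read"))    # True: both tables agree
print(access_permitted("Breaker_52A", "control"))  # False: client table only permits read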
Interoperability
The wide acceptance of ICCP by the utility industry has resulted in several ICCP products being on the market.
Although interoperability is not regarded as a high risk area, the standard is such that an implementation does not
have to support all conformance blocks in order to claim compliance with the standard. A minimal implementation
only requires Block 1. Only those blocks necessary to achieve the required functionality need be implemented. It is
also not necessary to support all objects defined in the standard for any particular block. Extensive interoperability
testing between products of some of the major vendors has been a feature of ICCP protocol development.
Independent reports are available, as no doubt are reports from vendors. An ICCP purchaser must define the required functionality in terms of the conformance blocks and the objects within those blocks. The application profiles of the ICCP client and server conformances must match if the link is to operate successfully.
Product Differentiation
ICCP is a real-time data exchange protocol providing features for data transfer, monitoring, and control. For a complete ICCP link, there need to be facilities to manage and configure the link and monitor its performance. The ICCP standard does not specify any interface or requirements for these features, which are necessary but nevertheless do not affect interoperability. Similarly, failover and redundancy schemes, and the way the SCADA system responds to ICCP requests, are not protocol issues and so are not specified. These non-protocol-specific features are referred to in the standard as “local implementation issues”. ICCP implementers are free to handle these issues any way they wish. Local implementation is the means by which developers differentiate their products in the market with added value. Additional money spent on a product with well-developed maintenance and diagnostic tools may well be saved many times over during the life of the product if use of the ICCP connection is expected to grow and change.
Product Configurations
Commercial ICCP products are generally available for one of three configurations:
1. As a native protocol embedded in the SCADA host.
2. As a networked server.
3. As a gateway processor.
As an embedded protocol, the ICCP management tools and interfaces are all part of the complete suite of tools for the SCADA system. This configuration offers maximum performance because of the direct access to the SCADA database without requiring any intervening buffering. This approach may not be available as an addition to a legacy system. The ICCP application may be restricted to accessing only the SCADA environment in which it is embedded.
A networked server making use of industry standard communications networking to the SCADA host may provide
performance approaching that of an embedded ICCP application. On the application interface side the ICCP is not
restricted to the SCADA environment but is open to other systems such as a separate data historian or other
databases. Security may be easier to manage with the ICCP server segregated from the operational real time systems.
The gateway processor approach is similar to the networked server except it is intended for legacy systems with
minimal communications networking capability and so has the lowest performance. In the most minimal situation
the ICCP gateway may communicate with the SCADA host via a serial port in a similar manner to the SCADA
RTUs.
External links
• ICCP [4]
• Open ICCP [5]
References
[1] http://intelligrid.ipower.com/IntelliGrid_Architecture/New_Technologies/Tech_IEC_60870-6_%28ICCP%29.htm
[2] http://webstore.iec.ch/preview/info_iec60870-6-501%7Bed1.0%7Db.pdf
[3] http://webstore.iec.ch/preview/info_iec60870-6-503%7Bed2.0%7Den.pdf
[4] http://www.compusharp.com/intercontrol.htm
[5] http://www.osii.com/pdf/scada-ui/OpenICCP_PS.pdf
Inter-protocol communication
Inter-protocol communication[1] is a security vulnerability in the fundamentals of a network communication protocol. Whilst other protocols are vulnerable, this vulnerability is commonly discussed in the context of the Hypertext Transfer Protocol (HTTP).[2] The attack exploits the potential for two different protocols to meaningfully communicate commands and data.
Inter-protocol exploitation can utilize inter-protocol communication to establish the preconditions for launching an inter-protocol exploit. For example, this process could negotiate the initial authentication communication for a vulnerability in password parsing.
Technical Details
The two protocols involved in the vulnerability are termed the carrier and target. The carrier encapsulates the
commands and/or data. The target protocol is used for communication to the intended victim service. Inter-protocol
communication will be successful if the carrier protocol can encapsulate the commands and/or data sufficiently to
meaningfully communicate to the target service.
Preconditions
Two preconditions need to be met for successful communication across protocols: encapsulation and error tolerance.
Encapsulation
The carrier protocol must encapsulate the data and commands in a manner that the target protocol can understand. It is highly likely that the resulting data stream will induce parsing errors in the target protocol.
Error Tolerance
The target protocol must be sufficiently forgiving of errors. During the inter-protocol connection, it is likely that a percentage of the communication will be invalid and cause errors. To meet this precondition, the target protocol implementation must continue processing despite these errors, as the sketch below illustrates.
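To make the two preconditions concrete, the following Python sketch constructs a hypothetical carrier message: an HTTP POST whose body contains commands for a line-oriented target protocol (SMTP syntax is used purely as an example). An error-tolerant target that reads the stream line by line would reject the HTTP framing as errors yet still recognize the encapsulated commands. All hostnames, addresses, and commands are invented, and nothing is transmitted.

# Sketch: an HTTP request (carrier) whose body encapsulates commands for a
# line-oriented target protocol. All names and addresses are hypothetical.
target_commands = [
    "HELO example.test",
    "MAIL FROM:<attacker@example.test>",
    "RCPT TO:<victim@example.test>",
]
body = "\r\n".join(target_commands) + "\r\n"

carrier = (
    "POST / HTTP/1.1\r\n"
    "Host: intranet-host.example.test\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)

# A tolerant, line-oriented target would emit an error for each HTTP line
# (error tolerance) yet still act on the encapsulated commands (encapsulation).
for line in carrier.split("\r\n"):
    status = "recognized" if line in target_commands else "error (tolerated)"
    print(f"{status:18} <- {line!r}")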
References
[1] "Inter-protocol Communication" (http:// www. ngssoftware. com/ research/ papers/ InterProtocolCommunication.pdf). 2006-08. .
[2] "HTML Form Protocol Attack" (http:/ / www.remote. org/jochen/ sec/ hfpa/index. html). .
Inter-protocol exploitation
Inter-protocol exploitation is a security vulnerability that takes advantage of interactions between two communication protocols, for example, the protocols used on the Internet. Under this name, it was popularized in 2007 and publicly described in research[1] of the same year. The general class of attacks that it refers to has been known since at least 1994 (see the Security Considerations section of RFC 1738).
Internet protocol implementations allow for the possibility of encapsulating exploit code to compromise a remote program which uses a different protocol. Inter-protocol exploitation is where one protocol attacks a service running a different protocol. This is a legacy problem, because the protocol specifications did not take attacks of this type into consideration.
Technical details
The two protocols involved in the vulnerability are the carrier and target. The carrier encapsulates the exploit code
and the target protocol is used for communication by the intended victim service. Inter-protocol exploitation will be
successful if the carrier protocol can encapsulate the exploit code which can take advantage of a target service. Also,
there may be other preconditions depending on the complexity of the vulnerability.
Current implications
One of the major points of concern is the potential for this attack vector to reach through firewalls and DMZs. Inter-protocol exploits can be transmitted over HTTP and launched from web browsers on an internal subnet. An important point is that the web browser is not exploited through any conventional means.
References
[1] "Inter-protocol Exploitation" (http:// www. ngssoftware. com/ Libraries/Documents/ 03_07_Inter-Protocol_Exploitation.sflb. ashx).
2007-03-05. .
External links
• http://www.theregister.co.uk/2007/06/27/wade_alcorn_metasploit_interview/
International Journal of Critical Computer-Based Systems
The International Journal of Critical Computer-Based Systems (IJCCBS) is a quarterly computer science research journal published by Inderscience Publishers.[1]
The journal focuses on engineering and verification of complex computer-based systems (where complex means large, distributed, and heterogeneous) in critical applications, with special emphasis on model-based approaches and industrial case studies. Critical computer-based systems include real-time control, fly/brake-by-wire, on-line transactional and web servers, biomedical apparatus, networked devices for telecommunications, environmental monitoring, infrastructure protection, etc.
References
[1] International Journal of Critical Computer-Based Systems (http://www.informatik.uni-trier.de/~ley/db/journals/ijccbs/), DBLP.
External links
• IJCCBS website (http://www.inderscience.com/ijccbs)
Internet leak
An Internet leak occurs when a party's confidential information is released to the public on the Internet. Various
types of information and data can be, and have been, "leaked" to the Internet, the most common being personal
information, computer software and source code, and artistic works such as books or albums. For example, a musical
album is leaked if it has been made available to the public on the Internet before its official release date; this musical
material is still intended to be confidential.
Source code leaks are usually caused by misconfiguration of software such as CVS or FTP servers, which allows people to obtain source files by exploiting it; by software bugs; or by employees with access to all or part of the source revealing the code in order to harm the company.
There have been many cases of source code leaks in the history of software development. For example, in 2003 a cracker exploited a security hole in Microsoft's Outlook to obtain the complete source of Half-Life 2, which was under development at the time.[1] The complete source was soon available on various file-sharing networks. This leak was rumored to be the cause of the game's delay,[2] but it was later stated not to be.
Also in 2003, source code to Diebold Election Systems Inc. voting machines was leaked. Researchers at Johns
Hopkins University and Rice University published a damning critique of Diebold's products, based on an analysis of
the software. They found, for example, that it would be easy to program a counterfeit voting card to work with the
machines and then use it to cast multiple votes inside the voting booth.
Another case involved a partial leak of the source code to Microsoft Windows 2000. Two files containing Microsoft source code were circulating on the Internet. One contains a majority of the NT4 source code and the other contains a fraction of the Windows 2000 source code, reportedly about 15% of the total. This includes some networking code, including Winsock and inet, as well as some shell code. It was feared that, because of the leak, the number of security exploits would increase due to wider scrutiny of the source code.
In 2004, partial (800 MB) proprietary source code that drives Cisco Systems' networking hardware was made available on the Internet. The site posted two files of source code written in the C programming language, which apparently enable some next-generation IPv6 functionality. News of the leak appeared on a Russian security site, SecurityLab.ru.[3]
On January 28, 2008, Nintendo's crossover fighting video game Super Smash Bros. Brawl for the Wii console suffered a major leak of unconfirmed playable characters. The leak was unintentionally started by the Japanese-language www.wii.com website, which released a video that included small images of not-yet-confirmed characters in the game. The website fixed this mistake, but the leak continued. Websites like YouTube contain screenshots and gameplay video of the unconfirmed characters.
Sometimes, game developers who post blogs can accidentally leak information.
Recently, several high-profile books have been leaked on the Internet before their official release date, including If I
Did It, Harry Potter and the Deathly Hallows, and an early draft of the first twelve chapters of Midnight Sun. The
leak of the latter prompted the author Stephenie Meyer to suspend work on the novel.
High-profile Internet leaks
• 3 October 2003[4]: Half-Life 2 source code
• 13 February 2004[5]: Microsoft Windows 2000/NT source code
• November 2009: Climatic Research Unit email leak, aka Climategate
References
[1] "Playable Version of Half-Life 2 Stolen" (http:// money. cnn.com/ 2003/ 10/ 07/ commentary/ game_over/column_gaming/ ). CNN Money.
2003-10-07. . Retrieved February 14, 2007.
[2] "Half Life 2 Source-Code Leak Delays Debut" (http:/ / www.technewsworld.com/ story/ 31783. html). TechNewsWorld. . Retrieved
February 14, 2007.
[3] http:/ / www. SecurityLab.ru
[4] http:/ / news. bbc. co. uk/ 2/ hi/ technology/ 3162074. stm
[5] http:/ / news. bbc. co. uk/ 2/ hi/ technology/ 3485545. stm
Internet Security Awareness Training
Internet Security Awareness Training (ISAT) consists of the training of members of an organization regarding the protection of various information assets of that organization. Organizations that need to comply with government regulations (e.g., GLBA, PCI, HIPAA, Sarbanes-Oxley) normally require formal ISAT for all employees, usually once or twice a year. Many small and medium enterprises (SMEs) do not require ISAT for regulatory compliance, but train their employees to prevent a cyberheist. Internet Security Awareness Training is currently usually provided via online courses. ISAT is a subset of general security awareness training.
Topics covered in ISAT include:
• Appropriate methods for protecting sensitive information on personal computer systems, including password
policy
• Various computer security concerns, including spam, malware, phishing, social engineering, etc.
• Consequences of failure to properly protect information, including potential job loss, economic consequences to
the firm, damage to individuals whose private records are divulged, and possible civil and criminal law penalties.
Being Internet security aware means understanding that there are people actively trying to steal data that is stored within an organization's computers. (This often focuses on user names and passwords, so that criminal elements can ultimately get access to bank accounts.) That is why it is important to protect the assets of the organization and stop that from happening.
According to Microsoft,
• End User Internet Security Awareness Training resides in the Policies, Procedures, and Awareness layer of the
Defense in Depth security model.
• User security awareness can affect every aspect of an organization’s security profile.
• End User Security awareness is a significant part of a comprehensive security profile because many attack types
rely on human intervention (Social Engineering) to succeed.
The focus of ISAT is to achieve an immediate and lasting change in the attitude of employees towards Internet security, by making it clear that security policies are vital for the survival of the organization, not rules that restrict the employee's efficiency at work.
External links
One of the more successful methods of ISAT is to test employees before and after the training with simulated phishing attacks. Several companies provide this service:
• KnowBe4 [1]
• Wombat Security Technologies [2]
• Phishme [3]
References
[1] http://www.KnowBe4.com/
[2] http://www.wombatsecurity.com/
[3] http://www.phishme.com/
Intrusion detection system evasion techniques
Intrusion Detection System evasion techniques are modifications made to attacks in order to prevent detection by an Intrusion Detection System (IDS). Almost all published evasion techniques modify network attacks. The 1998 paper Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection[1] popularized IDS evasion, and discussed both evasion techniques and areas where the correct interpretation was ambiguous depending on the targeted computer system. The 'fragroute' and 'fragrouter' programs implement evasion techniques discussed in the paper. Many web vulnerability scanners, such as 'Nikto', 'whisker' and 'Sandcat', also incorporate IDS evasion techniques.
Most IDSs have been modified to detect or even reverse basic evasion techniques, but IDS evasion (and countering IDS evasion) is still an active field.
Obfuscating attack payload
An IDS can be evaded by obfuscating or encoding the attack payload in a way that the target computer will reverse but the IDS will not. In the past, an adversary could use Unicode encoding to craft attack packets that an IDS would not recognize but that an IIS web server would decode, allowing the attack to succeed.
Polymorphic code is another means to circumvent signature-based IDSs by creating unique attack patterns, so that the attack does not have a single detectable signature.
Attacks carried over encrypted protocols such as HTTPS are obfuscated simply because the attack traffic is encrypted.
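As a minimal sketch of this encoding problem, the Python below shows a naive byte-signature match failing on a percent-encoded request even though the target's decoder recovers the attack string. The signature and request strings are illustrative assumptions, not real IDS rules.

from urllib.parse import unquote

SIGNATURE = "../.."  # naive signature for directory traversal

raw_request     = "GET /scripts/../../winnt/system32/cmd.exe"
encoded_request = "GET /scripts/%2e%2e%2f%2e%2e/winnt/system32/cmd.exe"

def naive_ids_match(data):
    # Match on the wire bytes, without decoding -- as a weak IDS might.
    return SIGNATURE in data

print(naive_ids_match(raw_request))              # True: signature fires
print(naive_ids_match(encoded_request))          # False: evaded
print(naive_ids_match(unquote(encoded_request))) # True: the target's decoder reveals it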
Fragmentation and Small Packets
One basic technique is to split the attack payload into multiple small packets, so that the IDS must reassemble the
packet stream to detect the attack. A simple way of splitting packets is by fragmenting them, but an adversary can
also simply craft packets with small payloads. The 'whisker' evasion tool calls crafting packets with small payloads
'session splicing'.
By themselves, small packets will not evade any IDS that reassembles packet streams. However, small packets can be further modified in order to complicate reassembly and detection. One evasion technique is to pause between sending parts of the attack, hoping that the IDS will time out before the target computer does. A second evasion technique is to send the packets out of order, confusing simple packet reassemblers but not the target computer.
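The following sketch uses the Scapy packet library to split a payload across many small IP fragments; the address and payload are placeholders, and actually transmitting such traffic requires both privileges and authorization.

# Sketch: splitting a payload across small IP fragments with Scapy.
from scapy.all import IP, TCP, Raw, fragment, send

payload = b"GET /vulnerable-cgi?exploit-string HTTP/1.0\r\n\r\n"  # placeholder
pkt = IP(dst="192.0.2.10") / TCP(dport=80, flags="PA", seq=1000) / Raw(payload)

# Chop the datagram into 8-byte fragments; an IDS must reassemble all of
# them, in order and before timing out, to see the full signature.
frags = fragment(pkt, fragsize=8)
for f in frags:
    print(f.summary())
# send(frags)  # transmission requires privileges and authorization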
Overlapping Fragments
An IDS evasion technique is to craft a series of packets with TCP sequence numbers configured to overlap. For example, the first packet will include 80 bytes of payload, but the second packet's sequence number will be 76 bytes after the start of the first packet. When the target computer reassembles the TCP stream, it must decide how to handle the four overlapping bytes. Some operating systems will take the older data, and some will take the newer data.
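A Scapy sketch of the 80-byte/76-byte example from this section follows; the address is a placeholder and nothing is transmitted.

# Sketch: two crafted TCP segments whose sequence numbers overlap by 4 bytes.
from scapy.all import IP, TCP, Raw

base_seq = 1000
first  = IP(dst="192.0.2.10") / TCP(dport=80, flags="PA", seq=base_seq) / Raw(b"A" * 80)
# The second segment starts 76 bytes into the first, so bytes 76-79 overlap.
second = IP(dst="192.0.2.10") / TCP(dport=80, flags="PA", seq=base_seq + 76) / Raw(b"B" * 40)

overlap = (base_seq + 80) - (base_seq + 76)
print(f"overlapping bytes: {overlap}")  # 4 -- resolved differently per operating system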
Protocol Violations
Some IDS evasion techniques involve deliberately violating the TCP or IP protocols in a way the target computer
will handle differently than the IDS. For example, the TCP Urgent Pointer is handled differently on different
operating systems and may not be handled correctly by the IDS.
Intrusion detection system evasion techniques
208
Inserting Traffic at the IDS
An adversary can send packets that the IDS will see but the target computer will not. For example, the attacker could send packets whose time-to-live (TTL) fields have been crafted to reach the IDS but not the target computers it protects. This technique will leave the IDS with a different state than the target.
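The Scapy sketch below illustrates the TTL trick; the hop counts and address are assumptions chosen for the example, and nothing is sent.

# Sketch: a packet crafted to expire between the IDS and the target.
from scapy.all import IP, TCP, Raw, send

HOPS_TO_IDS = 4     # assumed: the IDS sits 4 router hops away
HOPS_TO_TARGET = 6  # assumed: the target lies 2 hops beyond the IDS

# A TTL of 5 reaches the IDS (which records the bogus data in its stream
# state) but is decremented to zero before the target ever sees the packet.
bogus = (IP(dst="192.0.2.10", ttl=HOPS_TO_IDS + 1)
         / TCP(dport=80, flags="PA", seq=2000)
         / Raw(b"data that desynchronizes the IDS"))
print(bogus.summary())
# send(bogus)  # transmission requires privileges and authorization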
Denial of Service
An adversary can evade detection by disabling or overwhelming the IDS. This can be accomplished by exploiting a
bug in the IDS, using up computational resources on the IDS, or deliberately triggering a large number of alerts to
disguise the actual attack. The tools 'stick' and 'snot' were designed to generate a large number of IDS alerts by
sending attack signatures across the network, but will not trigger alerts in IDSs that maintain application protocol
context.
References
[1] http://citeseer.ist.psu.edu/ptacek98insertion.html
External links
1. Evasions in IDS/IPS (http://www.virusbtn.com/virusbulletin/archive/2010/04/vb201004-evasions-in-IPS-IDS), Abhishek Singh, Scott Lambert, Jeff Williams, Virus Bulletin, April 2010.
2. Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection (http://citeseer.ist.psu.edu/ptacek98insertion.html), Thomas Ptacek, Timothy Newsham. Technical Report, Secure Networks, Inc., January 1998.
3. IDS evasion with Unicode (http://www.securityfocus.com/infocus/1232), Eric Packer. Last updated January 3, 2001.
4. Fragroute home page (http://monkey.org/~dugsong/fragroute/)
5. Fragrouter source code (http://www.freshports.org/security/fragrouter)
6. Nikto home page (http://www.cirt.net/code/nikto.shtml)
7. Phrack 57 phile 0x03 (http://www.phrack.org/archives/57/p57-0x03), mentioning the TCP Urgent pointer
8. Whisker home page (http://www.wiretrip.net/rfp/)
9. Sandcat home page (http://www.syhunt.com/sandcat)
10. Snort's stream4 preprocessor (http://www.snort.org/docs/faq/1Q05/node47.html#stream4) for stateful packet reassembly
Intrusion prevention system
Intrusion Prevention Systems (IPS), also known as Intrusion Detection and Prevention Systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about this activity, attempt to block/stop it, and report it.[1]
Intrusion prevention systems are considered extensions of intrusion detection systems because both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent/block intrusions that are detected.[2][3] More specifically, an IPS can take such actions as sending an alarm, dropping the malicious packets, resetting the connection, and/or blocking the traffic from the offending IP address.[4] An IPS can also correct Cyclic Redundancy Check (CRC) errors, defragment packet streams, prevent TCP sequencing issues, and clean up unwanted transport and network layer options.[2][5]
Classifications
Intrusion prevention systems can be classified into four different types:[6][7]
Network-based Intrusion Prevention (NIPS): monitors the entire network for suspicious traffic by analyzing protocol activity.
Wireless Intrusion Prevention Systems (WIPS): monitor a wireless network for suspicious traffic by analyzing wireless networking protocols.
Network Behavior Analysis (NBA): examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial-of-service (DDoS) attacks, certain forms of malware, and policy violations.
Host-based Intrusion Prevention (HIPS): an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host.
Detection methods
The majority of intrusion prevention systems utilize one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis.[3][8]
Signature-based Detection: This method of detection utilizes signatures, which are preconfigured and predetermined attack patterns. A signature-based intrusion prevention system monitors the network traffic for matches to these signatures. Once a match is found, the intrusion prevention system takes the appropriate action. Signatures can be exploit-based or vulnerability-based. Exploit-based signatures analyze patterns appearing in the exploits being protected against, while vulnerability-based signatures analyze vulnerabilities in a program, its execution, and the conditions needed to exploit the vulnerability.
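A minimal sketch of the signature-matching idea follows; the signatures and payloads are invented examples, and real systems use far richer rule languages and protocol decoding.

# Sketch: scan traffic for preconfigured byte patterns (invented signatures).
SIGNATURES = {
    "directory-traversal": b"../..",
    "nop-sled": b"\x90" * 16,
}

def inspect(packet_payload):
    """Return the names of all signatures matched by this payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in packet_payload]

for payload in (b"GET /a/../../etc/passwd", b"\x90" * 32 + b"\xcc", b"benign"):
    hits = inspect(payload)
    print(f"BLOCK ({', '.join(hits)})" if hits else "pass")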
Statistical Anomaly-based Detection: This method of detection establishes a baseline of average network traffic conditions. After a baseline is created, the system intermittently samples network traffic and uses statistical analysis to compare the sample to the established baseline. If the activity is outside the baseline parameters, the intrusion prevention system takes the appropriate action.
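A minimal sketch of the anomaly-based approach, using an invented packets-per-second baseline and a simple standard-deviation threshold (production systems model many more traffic features):

# Sketch: learn a traffic-rate baseline, then flag samples that deviate.
from statistics import mean, stdev

baseline_pps = [120, 130, 118, 125, 122, 128, 131, 119]  # invented samples
mu, sigma = mean(baseline_pps), stdev(baseline_pps)
THRESHOLD = 3.0  # flag samples more than 3 standard deviations from the mean

def is_anomalous(sample_pps):
    return abs(sample_pps - mu) / sigma > THRESHOLD

for sample in (127, 340):
    print(sample, "anomalous" if is_anomalous(sample) else "within baseline")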
Stateful Protocol Analysis Detection: This method identifies deviations of protocol states by comparing observed events with “predetermined profiles of generally accepted definitions of benign activity.”[3]
References
[1] "NIST - Guide to Intrusion Detection and Prevention Systems (IDPS)" (http:// csrc. nist.gov/ publications/ nistpubs/ 800-94/SP800-94. pdf).
2007-02. . Retrieved 2010-06-25.
[2] Robert C. Newman (19 February 2009). Computer Security: Protecting Digital Resources (http:// books.google.com/
books?id=RgSBGXKXuzsC& pg=PA273). Jones & Bartlett Learning. pp. 273–. ISBN 9780763759940. . Retrieved 25 June 2010.
[3] Michael E. Whitman; Herbert J. Mattord (2009). Principles of Information Security (http:// books. google.com/ books?id=gPonBssSm0kC&
pg=PA289). Cengage Learning EMEA. pp. 289–. ISBN 9781423901778. . Retrieved 25 June 2010.
[4] Tim Boyles (2010). CCNA Security Study Guide: Exam 640-553 (http:// books. google.com/ books?id=AHzAcvHWbx4C& pg=PA249).
John Wiley and Sons. pp. 249–. ISBN 9780470527672. . Retrieved 29 June 2010.
[5] Harold F. Tipton; Micki Krause (2007). Information Security Management Handbook (http:// books. google. com/
books?id=B0Lwc6ZEQhcC& pg=PA1000). CRC Press. pp. 1000–. ISBN 9781420013580. . Retrieved 29 June 2010.
[6] "NIST - Guide to Intrusion Detection and Prevention Systems (IDPS)" (http:// csrc. nist.gov/ publications/ nistpubs/ 800-94/SP800-94. pdf).
2007-02. . Retrieved 2010-06-25.
[7] John R. Vacca (2010). Managing Information Security (http:// books. google. com/ books?id=uwKkb-kpmksC& pg=PA137). Syngress.
pp. 137–. ISBN 9781597495332. . Retrieved 29 June 2010.
[8] Engin Kirda; Somesh Jha; Davide Balzarotti (2009). Recent Advances in Intrusion Detection: 12th International Symposium, RAID 2009,
Saint-Malo, France, September 23-25, 2009, Proceedings (http:/ / books. google.com/ books?id=DVuQbKQM3UwC& pg=PA162). Springer.
pp. 162–. ISBN 9783642043413. . Retrieved 29 June 2010.
External links
• Common Vulnerabilities and Exposures (CVE) by Product (http://www.cve.mitre.org/compatible/product.html)
• NIST SP 800-83, Guide to Malware Incident Prevention and Handling (http://csrc.nist.gov/publications/nistpubs/index.html)
• NIST SP 800-31, Intrusion Detection Systems (http://csrc.nist.gov/publications/nistpubs/index.html)
• Study by Gartner "Magic Quadrant for Network Intrusion Prevention System Appliances" (http://www.sourcefire.com/resources/downloads/secured/Sourcefire3047.pdf?a=1&b=2#go)
Intrusion tolerance
Intrusion tolerance is a fault-tolerant design approach to defending information systems against malicious attack. Abandoning the conventional aim of preventing all intrusions, intrusion tolerance instead calls for triggering mechanisms that prevent intrusions from leading to a system security failure.
Projects in this area include the MAFTIA project (Malicious- and Accidental-Fault Tolerance for Internet Applications), which developed concepts and prototyped architectures, and the OASIS[1] program, which implemented several intrusion-tolerant systems.
External links
• Article "Intrusion Tolerance: Concepts and Design Principles. A Tutorial.
[2]
" by Paulo Veríssimo
[3]
References
[1] http://www.tolerantsystems.org/
[2] http://hdl.handle.net/10455/2988
[3] http://www.di.fc.ul.pt/~pjv/
IT baseline protection
IT baseline protection signifies standard security measures for typical IT systems.
Overview
The term baseline security is used in various contexts with somewhat different meanings. For example:
• Microsoft Baseline Security Analyzer: software tool focused on Microsoft operating system and services security
• Cisco security baseline[1]: vendor recommendation focused on network and network device security controls
• Nortel baseline security[2]: set of requirements and best practices with a focus on network operators
• ISO/IEC 13335-3 defines a baseline approach to risk management. This standard has been replaced by ISO/IEC 27005, but the baseline approach has not yet been carried over into the 2700x series.
• There are numerous internal baseline security policies within organizations.[3][4]
• The German BSI has a comprehensive baseline security standard that is evolving towards ISO 27000.[5]
BSI Concept
The foundation of an IT baseline protection concept is initially not a detailed risk analysis. It proceeds from overall
hazards. Consequently, sophisticated classification according to damage extent and probability of occurrence is
ignored. Three protection needs categories are established. With their help, the protection needs of the object under
investigation can be determined. Based on these, appropriate personnel, technical, organizational and infrastructural
security measures are selected from the IT Baseline Protection Catalogs.
The Federal Office for Security in Information Technology's IT Baseline Protection Catalogs offer a "cookbook
recipe" for a normal level of protection. Besides probability of occurrence and potential damage extents,
implementation costs are also considered. By using the Baseline Protection Catalogs, costly security analyses
requiring expert knowledge are dispensed with, since overall hazards are worked with in the beginning. It is possible
for the relative layman to identify measures to be taken and to implement them in cooperation with professionals.
The BSI grants a baseline protection certificate as confirmation of the successful implementation of baseline protection. In stages 1 and 2, this is based on self-declaration; in stage 3, an independent, BSI-licensed auditor completes an audit. Internationalization of the certification process has been possible since 2006: ISO 27001 certification can occur simultaneously with IT baseline protection certification. (The ISO 27001 standard is the successor of BS 7799-2.) This process is based on the new BSI security standards and reflects a development that has prevailed for some time: corporations certified under the BS 7799-2 standard are obliged to carry out a risk assessment, and, to make this easier, most fall back on the protection needs analysis pursuant to the IT Baseline Protection Catalogs. The advantage is not only conformity with the strict requirements of the BSI, but also attainment of BS 7799-2 certification. Beyond this, the BSI offers a few aids, such as a policy template and the GSTOOL.
One data protection component is available, which was produced in cooperation with the German Federal Commissioner for Data Protection and Freedom of Information and the state data protection authorities, and integrated into the IT Baseline Protection Catalogs. This component is not, however, considered in the certification process.
Baseline protection process
The following steps are taken pursuant to the baseline protection process during structure analysis and protection
needs analysis:
• The IT network is defined.
• IT structure analysis is carried out.
• Protection needs determination is carried out.
• A baseline security check is carried out.
• IT baseline protection measures are implemented.
Creation occurs in the following steps:
• IT structure analysis (survey)
• Assessment of protection needs
• Selection of actions
• Ongoing comparison of nominal and actual states.
IT structure analysis
An IT network includes the totality of infrastructural, organizational, personnel, and technical components serving the fulfillment of a task in a particular information-processing application area. An IT network can thereby encompass the entire IT of an institution, or an individual division partitioned by organizational structures (for example, a departmental network) or shared IT applications (for example, a personnel information system). It is necessary to analyze and document the information-technological structure in question in order to generate an IT security concept and especially to apply the IT Baseline Protection Catalogs. Due to today's usually heavily networked IT systems, a network topology plan offers a starting point for the analysis. The following aspects must be taken into consideration:
• the available infrastructure,
• the organizational and personnel framework for the IT network,
• networked and non-networked IT systems employed in the IT network,
• the communications connections between IT systems and externally,
• IT applications run within the IT network.
Protection needs determination
The purpose of the protection needs determination is to investigate what protection is sufficient and appropriate for the information and information technology in use. In this connection, the damage to each application and to the processed information that could result from a breach of confidentiality, integrity, or availability is considered. Important in this context is a realistic assessment of the possible follow-on damages. A division into the three protection needs categories "low to medium", "high", and "very high" has proven its value. "Public", "internal", and "secret" are often used for confidentiality.
Modelling
Heavily networked IT systems typically characterize information technology in government and business these days. As a rule, therefore, it is advantageous to consider the entire IT system, and not just individual systems, within the scope of an IT security analysis and concept. To be able to manage this task, it makes sense to logically partition the entire IT system into parts and to separately consider each part, or even an IT network. Detailed documentation about its structure is a prerequisite for the use of the IT Baseline Protection Catalogs on an IT network. This can be achieved, for example, via the IT structure analysis described above. The IT Baseline Protection Catalogs' components must ultimately be mapped onto the components of the IT network in question in a modelling step.
Baseline security check
The baseline security check is an organisational instrument offering a quick overview of the prevailing IT security level. With the help of interviews, the status quo of an existing IT network (as modelled by IT baseline protection) relative to the security measures implemented from the IT Baseline Protection Catalogs is investigated. The result is a catalog in which the implementation status "dispensable", "yes", "partly", or "no" is entered for each relevant measure. By identifying measures that are not yet, or only partially, implemented, improvement options for the security of the information technology in question are highlighted, as the sketch below illustrates.
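A small Python sketch of such a result catalog follows; the measure names are invented, while the four statuses are those used by the baseline security check.

# Sketch: implementation-status catalog from a baseline security check.
catalog = {
    "Access control to server room": "yes",
    "Backup policy": "partly",
    "Hardening of network services": "no",
    "Modem security": "dispensable",
}

# Nominal-vs-actual comparison: anything not fully implemented (and not
# dispensable) is an open item on the way to baseline protection.
open_items = [m for m, status in catalog.items() if status in ("partly", "no")]
for m in open_items:
    print("still to do:", m)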
The baseline security check gives information about measures which are still missing (nominal vs. actual comparison). From this follows what remains to be done to achieve baseline protection. Not all measures suggested by this baseline check need to be implemented; special cases must be taken into account. It could be that several more or less unimportant applications are running on a server that have lesser protection needs; in their totality, however, these applications are to be provided with a higher level of protection. This is called the cumulation effect.
The applications running on a server determine its need for protection. In this connection, it is to be noted that
several IT applications can run on an IT system. When this occurs, the application with the greatest need for
protection determines the IT system’s protection category.
Conversely, it is conceivable that an IT application with great protection needs does not automatically transfer this to the IT system. This may happen because the IT system is configured redundantly, or because only an inconsequential part is running on it. This is called the distribution effect. This is the case, for example, with clusters.
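The maximum principle described above can be sketched in a few lines of Python; the category names follow the text, while the applications are invented.

# Sketch: the application with the greatest protection need determines the
# protection category of the IT system hosting it (maximum principle).
CATEGORIES = ["low to medium", "high", "very high"]

def system_protection_need(app_needs):
    return max(app_needs, key=CATEGORIES.index)

server_apps = ["low to medium", "low to medium", "high"]
print(system_protection_need(server_apps))  # "high"
# The cumulation and distribution effects adjust this result up or down.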
The baseline security check maps baseline protection measures. This level suffices for low to medium protection needs, which according to BSI estimates covers about 80% of all IT systems. For systems with high to very high protection needs, information security concepts based on risk analysis, such as ISO/IEC 27001, are usually used.
IT Baseline Protection Catalog and standards
During its 2005 restructuring and expansion of the IT Baseline Protection Catalogs, the BSI separated methodology from the IT Baseline Protection Catalogs. The BSI 100-1, BSI 100-2, and BSI 100-3 standards contain information about the construction of an information security management system (ISMS), the methodology or basic protection approach, and the creation of a security analysis for elevated and very elevated protection needs building on a completed baseline protection investigation.
BSI 100-4, the "Emergency Management" standard, is currently in preparation. It contains elements from BS 25999 and ITIL Service Continuity Management, combined with the relevant IT Baseline Protection Catalog components and essential aspects of appropriate business continuity management (BCM). Implementing these standards makes certification pursuant to BS 25999-2 possible. The BSI has submitted the design of the BSI 100-4 standard for online commentary.[6] In this way, the BSI brings its standards into line with international norms.
Literature
• BSI: IT Baseline Protection Guidelines (pdf, 420 kB)
[7]
• BSI: IT Baseline Protection Catalogs 2007
[8]
(pdf)
• BSI: BSI IT Security Management and IT Baseline Protection Standards
[9]
• Frederik Humpert: IT-Grundschutz umsetzen mit GSTOOL. Anleitungen und Praxistipps für den erfolgreichen Einsatz des BSI-Standards, Carl Hanser Verlag, München, 2005
[10]
(ISBN 3-446-22984-1)
• Norbert Pohlmann, Hartmut Blumberg: Der IT-Sicherheitsleitfaden. Das Pflichtenheft zur Implementierung von IT-Sicherheitsstandards im Unternehmen, ISBN 3-8266-0940-9
References
[1] http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/Baseline_Security/securebasebook.html
[2] http://www.nortel.com/corporate/news/collateral/ntj3_baseline_04.pdf
[3] "Department Baseline Security Policy and End User Agreement" (http://www.ag.purdue.edu/biochem/department/Documents/Baseline Security Policy and End User Agreement.pdf). Purdue University. Retrieved 17 December 2009.
[4] "D16 Baseline Security Requirements for Information Systems" (http://www.kent.police.uk/About Kent Police/policies/d/d16.html). Kent Police. Retrieved 17 December 2009.
[5] "Mapping ISO 27000 to baseline security" (https://www.bsi.bund.de/cae/servlet/contentblob/471598/publicationFile/31081/Vergleich_ISO27001_GS_pdf.pdf). BSI. Retrieved 17 December 2009.
[6] Entwurf BSI 100-4 (http://www.bsi.de/literat/bsi_standard/bsi-standard_100-4_v090.pdf) (pdf)
[7] http://www.bsi.bund.de/gshb/Leitfaden/GS-Leitfaden.pdf
[8] http://www.bsi.bund.de/gshb/deutsch/download/it-grundschutz-kataloge_2006_de.pdf
[9] http://www.bsi.bund.de/literat/bsi_standard/index.htm
[10] http://www.humpert-partner.de/conpresso/_rubric/index.php?rubric=6
External links
• Federal Office for Security in Information Technology (http://www.bsi.bund.de/english/index.htm)
• IT Security Yellow Pages (http://www.branchenbuch-it-sicherheit.de/)
• IT Baseline Protection tools (http://www.bsi.bund.de/english/gstool/index.htm)
• Open Security Architecture - Controls and patterns to secure IT systems (http://www.opensecurityarchitecture.org)
IT Baseline Protection Catalogs
The IT Baseline Protection Catalogs, or IT-Grundschutz-Kataloge, ("IT Baseline Protection Manual" before
2005) are a collection of documents from the German Federal Office for Security in Information Technology (BSI)
that provide useful information for detecting weaknesses and combating attacks in the information technology (IT)
environment (IT cluster). The collection encompasses over 3000 pages, including the introduction and catalogs. It
serves as the basis for the IT baseline protection certification of an enterprise.
Basic protection
IT baseline protection encompasses standard security measures for typical IT systems with normal protection needs.
[1]
The detection and assessment of weak points in IT systems often occurs by way of a risk assessment, wherein a
threat potential is assessed, and the costs of damage to the system (or group of similar systems) are investigated
individually. This approach is very time-intensive and very expensive.
Baseline protection may instead proceed from typical threats, which apply in 80% of cases, and recommend adequate countermeasures against them. In this way, a security level can be achieved that is viewed as adequate in most cases and consequently replaces the more expensive risk assessment. In cases in which security needs are greater, such protection can be used as a basis for further action.
The IT Baseline Protection Catalogs' layout
To familiarize the user with the manual itself, it contains an
introduction with explanations, the approach to IT baseline protection,
a series of concept and role definitions, and a glossary. The component
catalogs, threat catalogs, and the measures catalogs follow these
introductory sections. Forms and cross-reference tables supplement the
collection available on the Federal Office for Security in Information
Technology's (BSI) Internet platform. Here you can also find the
Baseline Protection Guide, containing support functions for
implementing IT baseline protection in procedural detail.
Each catalog element is identified by an individual mnemonic laid out according to the following scheme (the catalog group is named first): C stands for component, M for measure, and T for threat. This is followed by the layer number affected by the element. Finally, a serial number within the layer identifies the element.
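As an illustration of this scheme (the helper below and the sample mnemonic are hypothetical, not taken from the catalogs), such an identifier can be decomposed mechanically:

    import re

    # Catalog group letters as described above.
    GROUPS = {"C": "component", "M": "measure", "T": "threat"}

    def parse_mnemonic(mnemonic):
        """Split a catalog mnemonic such as 'M 2.11' into group, layer and serial number."""
        match = re.fullmatch(r"([CMT])\s*(\d+)\.(\d+)", mnemonic)
        if not match:
            raise ValueError("not a catalog mnemonic: %r" % mnemonic)
        group, layer, serial = match.groups()
        return GROUPS[group], int(layer), int(serial)

    # Hypothetical example: 'M 2.11' would denote measure 11 in layer 2.
    print(parse_mnemonic("M 2.11"))  # ('measure', 2, 11)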
Component catalog
Assignment of individual components to
personnel groups within the respective
organization
The component catalog is the central element, and contains the
following five layers: overall aspects, infrastructure, IT systems,
networks and IT applications.
Partitioning into layers clearly isolates personnel groups impacted by a
given layer from the layer in question. The first layer is addressed to
management, including personnel and outsourcing. The second is
addressed to in-house technicians, regarding structural aspects in the
infrastructure layer. System administrators cover the third layer,
looking at the characteristics of IT systems, including clients, servers
and private branch exchanges or fax machines. The fourth layer falls within the network administrator's task area. The fifth falls within that of the applications administrator and the IT user, and concerns software such as database management systems, e-mail and web servers.
Component lifecycle elements
Each individual component follows the same layout. The component
number is composed of the layer number in which the component is
located and a unique number within the layer. The given threat
situation is depicted after a short description of the component
examining the facts. An itemization of individual threat sources
ultimately follows. These present supplementary information. It is not
necessary to work through them to establish baseline protection.
The necessary measures are presented in a text with short illustrations. The text follows the life cycle in question and includes planning and design, acquisition (if necessary), realization, operation, selection (if necessary), and preventive measures. After a complete depiction, individual measures are once again collected into a list, which is arranged according to the measures catalog's structure rather than that of the life cycle. In the process, measures are classified into the categories A, B, C, and Z. Category A measures form the entry point into the subject, B measures expand on this, and category C is ultimately necessary for baseline protection certification. Category Z comprises additional measures that have proven themselves in practice.
Networking of the catalogs
To keep each component as compact as possible, global aspects are
collected in one component, while more specific information is
collected into a second. In the example of an Apache web server, the general B 5.4 web server component, in which measures and threats for every web server are depicted, applies, as well as the B 5.11 component, which deals specifically with the Apache web server. Both components must be successfully implemented to guarantee the system's security.
The respective measures or threats, which are introduced in the
component, can also be relevant for other components. In this way, a
network of individual components arises in the baseline protection
catalogs.
Threat catalogs
The threat catalogs, in connection with the component catalogs, offer more detail about potential threats to IT
systems. These threat catalogs follow the general layout in layers. "Force majeure", "organizational deficiencies",
"spurious human action", "technical failure", and "premeditated acts" are distinguished. According to the BSI, the
knowledge collected in these catalogs is not necessary to establish baseline protection. Baseline protection does,
however, demand an understanding of the measures, as well as the vigilance of management. Individual threat
sources are described briefly. Finally, examples of damages that can be triggered by these threat sources are given.
Measures catalogs
The measures catalogs summarize the actions necessary to achieve baseline protection; measures appropriate for
several system components are described centrally. In the process, layers are used for structuring individual
measures groups. The following layers are formed: infrastructure, organization, personnel, hardware and software,
communication, and preventive measures.
In the respective measure description, those responsible for initiating and realizing the measure are named first. A detailed description of the measure follows. Finally, control questions regarding correct realization are given. During realization of measures, personnel should verify whether adaptation to the operation in question is necessary; any deviations from the initial measures should be documented for future reference.
Supplementary material
Besides the information summarized in the IT Baseline Protection Manual, the Federal Office for Security in Information Technology provides further material on the Internet.
[2]
The forms provided serve to determine the protection needs of certain IT system components. A table summarizes the measures to be applied to individual components in this regard. Each measure is named and its degree of realization determined. Degrees of realization of "dispensable", "yes", "partial", and "no" are distinguished. Finally, a deadline for realization is set and a person responsible is named. If realization of a measure is not possible, the reasons are entered in the adjacent field for later traceability. The conclusion consists of a cost assessment.
Besides the forms, the cross-reference tables are another useful supplement. They summarize the measures and the most important threats for individual components. Measures, as well as threats, are cited with mnemonics; measures are also cited with a priority and a classification. The table contains correlations between measures and the threats they address. However, the cross-reference tables only cite the most important threats; even if a measure cited for a given threat is not applicable to the individual IT system, it is not superfluous. Baseline protection can only be ensured if all measures are realized.
References
[1] IT Basic Protection Manual, sec. 1.1
[2] BSI Download (http://www.bsi.de/gshb/deutsch/download/index.htm)
Further reading
• IT Baseline Protection Handbook. Germany. Federal Office for Security in Information Technology.
Bundesanzeiger, Cologne 2003-2005.
• Baseline Protection Guide. Germany. Federal Office for Security in Information Technology, 2006 version.
External links
• The BSI's web site (http://www.bsi.de/)
• IT Baseline Protection homepage (http://www.bsi.de/gshb/index.htm)
• Download page with IT Baseline Protection Catalogs, forms and supplementary information (http://www.bsi.de/gshb/downloads/index.htm)
IT risk
Information technology risk, IT risk, or IT-related risk is any risk related to information technology. The term is relatively new, and is due to an increasing awareness that information security is only one facet of a multitude of risks that are relevant to IT and the real-world processes it supports.
Because risk is strictly tied to uncertainty, decision theory should be applied to manage risk as a science, i.e. by rationally making choices under uncertainty.
Generally speaking, risk is the product of likelihood times impact (Risk = Likelihood × Impact).
[1]
The measure of an IT risk can be determined as a product of threat, vulnerability and asset values:
[2]
Risk = Threat × Vulnerability × Asset
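Both products can be computed directly; in the minimal sketch below, the 1-to-5 scales and the sample values are illustrative assumptions, not prescribed by the cited sources:

    def risk_likelihood_impact(likelihood, impact):
        """Risk = Likelihood * Impact."""
        return likelihood * impact

    def risk_threat_vuln_asset(threat, vulnerability, asset):
        """Risk = Threat * Vulnerability * Asset."""
        return threat * vulnerability * asset

    # Example on an assumed 1-5 scale: a moderately likely, high-impact event.
    print(risk_likelihood_impact(3, 4))      # 12
    # Example: high threat (4), medium vulnerability (3), valuable asset (5).
    print(risk_threat_vuln_asset(4, 3, 5))   # 60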
Definitions
Definitions of IT risk come from different but authoritative sources.
ISO
IT risk: the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause
harm to the organization. It is measured in terms of a combination of the probability of an event and its
consequence.
[3]
Committee on National Security Systems
The Committee on National Security Systems of the United States of America has defined risk in several documents:
• From CNSS Instruction No. 4009 dated 26 April 2010
[4]
the basic and more technical focused definition:
Risk - Possibility that a particular threat will adversely impact an IS by exploiting a particular vulnerability.
• National Security Telecommunications and Information Systems Security Instruction (NSTISSI) No. 1000,
[5]
introduces a probability aspect, quite similar to that of NIST SP 800-30:
Risk - A combination of the likelihood that a threat will occur, the likelihood that a threat occurrence will
result in an adverse impact, and the severity of the resulting impact
The National Information Assurance Training and Education Center defines risk in the IT field as:
[6]
1. The loss potential that exists as the result of threat-vulnerability pairs. Reducing either the threat or the
vulnerability reduces the risk.
2. The uncertainty of loss expressed in terms of probability of such loss.
3. The probability that a hostile entity will successfully exploit a particular telecommunications or COMSEC system
for intelligence purposes; its factors are threat and vulnerability.
4. A combination of the likelihood that a threat shall occur, the likelihood that a threat occurrence shall result in an
adverse impact, and the severity of the resulting adverse impact.
5. The probability that a particular threat will exploit a particular vulnerability of the system.
NIST
Many NIST publications define risk in an IT context; the FISMApedia
[7]
term entry
[8]
provides a list. Among them:
• According to NIST SP 800-30:
[9]
Risk is a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability,
and the resulting impact of that adverse event on the organization.
• From NIST FIPS 200
[10]
Risk - The level of impact on organizational operations (including mission, functions, image, or reputation),
organizational assets, or individuals resulting from the operation of an information system given the potential
impact of a threat and the likelihood of that threat occurring.
NIST SP 800-30
[9]
defines:
IT-related risk
The net mission impact considering:
1. the probability that a particular threat-source will exercise (accidentally trigger or intentionally exploit) a
particular information system vulnerability and
2. the resulting impact if this should occur. IT-related risks arise from legal liability or mission loss due to:
1. Unauthorized (malicious or accidental) disclosure, modification, or destruction of information
2. Unintentional errors and omissions
3. IT disruptions due to natural or man-made disasters
4. Failure to exercise due care and diligence in the implementation and operation of the IT system.
Risk management insight
IT risk is the probable frequency and probable magnitude of future loss.
[11]
ISACA
ISACA published the Risk IT Framework in order to provide an end-to-end, comprehensive view of all risks related to the use of IT. There,
[12]
IT risk is defined as:
The business risk associated with the use, ownership, operation, involvement, influence and adoption of IT
within an enterprise
According to Risk IT,
[12]
IT risk has a broader meaning: it encompasses not only the negative impact of operations and service delivery, which can bring destruction or reduction of the organization's value, but also the benefit- or value-enabling risk associated with missing opportunities to use technology to enable or enhance business, and aspects of IT project management such as overspending or late delivery with adverse business impact.
Measuring IT risk
You can't effectively and consistently manage what you can't measure, and you can't measure what you haven't
defined.
[11] [13]
To measure IT risk properly, it is useful to introduce some related terms.
Information security event
An identified occurrence of a system, service or network state indicating a possible breach of information
security policy or failure of safeguards, or a previously unknown situation that may be security relevant.
[3]
Occurrence of a particular set of circumstances
[14]
• The event can be certain or uncertain.
• The event can be a single occurrence or a series of occurrences. (ISO/IEC Guide 73)
Information security incident
is indicated by a single or a series of unwanted information security events that have a significant probability
of compromising business operations and threatening information security
[3]
An event [G.11] that has been assessed as having an actual or potentially adverse effect on the security or
performance of a system.
[15]
Impact
[16]
The result of an unwanted incident [G.17]. (ISO/IEC PDTR 13335-1)
Consequence
[17]
Outcome of an event [G.11]
• There can be more than one consequence from one event.
• Consequences can range from positive to negative.
• Consequences can be expressed qualitatively or quantitatively (ISO/IEC Guide 73)
The risk R is the product of the likelihood L of a security incident occurring and the impact I that the incident will have on the organization, that is:
[18]
R = L × I
The likelihood of a security incident occurring is a function of the likelihood that a threat appears and of the likelihood that the threat can successfully exploit the relevant system vulnerabilities.
The consequences of a security incident are a function of the likely impact that the incident will have on the organization as a result of the harm that the organization's assets will sustain. Harm is related to the value of the assets to the organization; the same asset can have different values to different organizations.
So R can be expressed as a function of four factors:
• A = the value of the assets
• T = the likelihood of the threat
• V = the nature of the vulnerability, i.e. the likelihood that it can be exploited (proportional to the potential benefit for the attacker and inversely proportional to the cost of exploitation)
• I = the likely impact, i.e. the extent of the harm
If numerical values are assigned (money for impact and probabilities for the other factors), the risk can be expressed in monetary terms and compared to the cost of countermeasures and to the residual risk after applying the security control. It is not always practical to express these values, so in the first step of risk evaluation, risks are graded on dimensionless three- or five-step scales.
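As a hedged illustration of the monetary form (all probabilities and amounts below are invented for the example), risk expressed in money can be compared directly with the cost of a countermeasure and the residual risk:

    # All figures are illustrative assumptions.
    annual_likelihood = 0.05      # assumed 5% chance of the incident per year
    impact_eur = 200_000          # assumed loss if the incident occurs

    risk_eur = annual_likelihood * impact_eur            # R = L * I = 10,000 per year

    countermeasure_cost_eur = 4_000                      # assumed yearly cost of a control
    residual_likelihood = 0.01                           # assumed likelihood after the control
    residual_risk_eur = residual_likelihood * impact_eur # 2,000 per year

    # The control pays off if the risk reduction exceeds its cost: 8,000 > 4,000.
    print(risk_eur - residual_risk_eur > countermeasure_cost_eur)  # True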
OWASP proposes a practical risk measurement guideline
[18]
based on:
• Estimation of Likelihood as the mean of different factors on a 0 to 9 scale:
• Threat agent factors
• Skill level: How technically skilled is this group of threat agents? No technical skills (1), some technical
skills (3), advanced computer user (4), network and programming skills (6), security penetration skills (9)
• Motive: How motivated is this group of threat agents to find and exploit this vulnerability? Low or no
reward (1), possible reward (4), high reward (9)
• Opportunity: What resources and opportunity are required for this group of threat agents to find and exploit
this vulnerability? full access or expensive resources required (0), special access or resources required (4),
some access or resources required (7), no access or resources required (9)
• Size: How large is this group of threat agents? Developers (2), system administrators (2), intranet users (4),
partners (5), authenticated users (6), anonymous Internet users (9)
• Vulnerability Factors: the next set of factors are related to the vulnerability involved. The goal here is to
estimate the likelihood of the particular vulnerability involved being discovered and exploited. Assume the
threat agent selected above.
• Ease of discovery: How easy is it for this group of threat agents to discover this vulnerability? Practically
impossible (1), difficult (3), easy (7), automated tools available (9)
• Ease of exploit: How easy is it for this group of threat agents to actually exploit this vulnerability?
Theoretical (1), difficult (3), easy (5), automated tools available (9)
• Awareness: How well known is this vulnerability to this group of threat agents? Unknown (1), hidden (4),
obvious (6), public knowledge (9)
• Intrusion detection: How likely is an exploit to be detected? Active detection in application (1), logged and
reviewed (3), logged without review (8), not logged (9)
• Estimation of Impact as the mean of different factors on a 0 to 9 scale:
• Technical Impact Factors; technical impact can be broken down into factors aligned with the traditional
security areas of concern: confidentiality, integrity, availability, and accountability. The goal is to estimate the
magnitude of the impact on the system if the vulnerability were to be exploited.
• Loss of confidentiality: How much data could be disclosed and how sensitive is it? Minimal non-sensitive
data disclosed (2), minimal critical data disclosed (6), extensive non-sensitive data disclosed (6), extensive
critical data disclosed (7), all data disclosed (9)
• Loss of integrity: How much data could be corrupted and how damaged is it? Minimal slightly corrupt data
(1), minimal seriously corrupt data (3), extensive slightly corrupt data (5), extensive seriously corrupt data
(7), all data totally corrupt (9)
• Loss of availability: How much service could be lost and how vital is it? Minimal secondary services
interrupted (1), minimal primary services interrupted (5), extensive secondary services interrupted (5),
extensive primary services interrupted (7), all services completely lost (9)
• Loss of accountability: Are the threat agents' actions traceable to an individual? Fully traceable (1), possibly
traceable (7), completely anonymous (9)
• Business Impact Factors: The business impact stems from the technical impact, but requires a deep
understanding of what is important to the company running the application. In general, you should be aiming
to support your risks with business impact, particularly if your audience is executive level. The business risk is
what justifies investment in fixing security problems.
• Financial damage: How much financial damage will result from an exploit? Less than the cost to fix the
vulnerability (1), minor effect on annual profit (3), significant effect on annual profit (7), bankruptcy (9)
• Reputation damage: Would an exploit result in reputation damage that would harm the business? Minimal
damage (1), Loss of major accounts (4), loss of goodwill (5), brand damage (9)
• Non-compliance: How much exposure does non-compliance introduce? Minor violation (2), clear violation
(5), high profile violation (7)
• Privacy violation: How much personally identifiable information could be disclosed? One individual (3),
hundreds of people (5), thousands of people (7), millions of people (9)
• If the business impact can be calculated accurately, use it in the following; otherwise use the technical impact.
• Rate likelihood and impact on a LOW, MEDIUM, HIGH scale, assuming that less than 3 is LOW, 3 to less than 6 is MEDIUM, and 6 to 9 is HIGH.
• Calculate the risk using the following table:
Overall Risk Severity
                       Likelihood
               LOW        MEDIUM     HIGH
Impact  HIGH   Medium     High       Critical
        MEDIUM Low        Medium     High
        LOW    Note       Low        Medium
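The whole procedure can be sketched in a few lines of Python (the helper names are mine, the equal weighting of factors reflects the "mean of different factors" wording above, and the sample scores are invented):

    # Sketch of the OWASP risk rating steps described above.
    def level(score):
        """Map a 0-9 score to LOW / MEDIUM / HIGH as defined above."""
        if score < 3:
            return "LOW"
        if score < 6:
            return "MEDIUM"
        return "HIGH"

    # Overall severity matrix: SEVERITY[impact][likelihood], as in the table above.
    SEVERITY = {
        "HIGH":   {"LOW": "Medium", "MEDIUM": "High",   "HIGH": "Critical"},
        "MEDIUM": {"LOW": "Low",    "MEDIUM": "Medium", "HIGH": "High"},
        "LOW":    {"LOW": "Note",   "MEDIUM": "Low",    "HIGH": "Medium"},
    }

    def owasp_risk(likelihood_factors, impact_factors):
        likelihood = sum(likelihood_factors) / len(likelihood_factors)
        impact = sum(impact_factors) / len(impact_factors)
        return SEVERITY[level(impact)][level(likelihood)]

    # Example: skilled attacker (6), high reward (9), easy access (7), large group (9);
    # easy to discover (7), harder to exploit (5), public knowledge (9), logged w/o review (8).
    likelihood_factors = [6, 9, 7, 9, 7, 5, 9, 8]          # mean = 7.5 -> HIGH
    impact_factors = [7, 5, 5, 7]                          # mean = 6.0 -> HIGH
    print(owasp_risk(likelihood_factors, impact_factors))  # Critical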
IT risk management
Risk Management Elements
IT risk management can be considered a component of a wider
Enterprise risk management system.
[19]
The establishment, maintenance and continuous update of an ISMS
provide a strong indication that a company is using a systematic
approach for the identification, assessment and management of
information security risks.
[20]
Different methodologies have been proposed to manage IT risks, each of them divided into processes and steps.
[21]
The CISA Review Manual 2006 provides the following definition of risk management: "Risk management is the
process of identifying vulnerabilities and threats to the information resources used by an organization in achieving
business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based
on the value of the information resource to the organization."
[22]
IT Risk Laws and Regulations
In the following, a brief description of applicable rules, organized by source, is given.
[23]
United Nations
The United Nations issued the following:
• UN Guidelines concerning computerized personal data files of 14 December 1990
[24]
Topic: Generic data processing activities using digital processing methods. Scope: A non-binding guideline to UN member states, calling for national regulation in this field.
IT risk
223
OECD
OECD issued the following:
• Organisation for Economic Co-operation and Development (OECD) Recommendation of the Council concerning
guidelines governing the protection of privacy and trans-border flows of personal data
[25]
(23 September 1980)
• OECD Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security
[26]
(25
July 2002). Topic: General information security. Scope: Non-binding guidelines for any OECD entities
(governments, businesses, other organisations and individual users who develop, own, provide, manage, service,
and use information systems and networks). The OECD Guidelines state the basic principles underpinning risk
management and information security practices. While no part of the text is binding as such, non-compliance with
any of the principles is indicative of a serious breach of RM/RA good practices that can potentially incur liability.
European Union
The European Union issued the following, divided by topic:
• Privacy
• Regulation (EC) No 45/2001
[27]
on the protection of individuals with regard to the processing of personal data
by the Community institutions and bodies and on the free movement of such data provides an internal regulation which is a practical application of the principles of the Privacy Directive described below.
Furthermore, article 35 of the Regulation requires the Community institutions and bodies to take similar
precautions with regard to their telecommunications infrastructure, and to properly inform the users of any
specific risks of security breaches.
• Directive 95/46/EC
[28]
on the protection of individuals with regard to the processing of personal data and on the free movement of such data requires that any personal data processing activity undergoes a prior risk analysis in order to determine the privacy implications of the activity and the appropriate legal, technical and organisational measures to protect such activities; that the activity is effectively protected by such measures, which must be state of the art, taking into account the sensitivity and privacy implications of the activity (including when a third party is charged with the processing task); and that it is notified to a national data protection authority, including the measures taken to ensure the security of the activity. Furthermore, article 25 and following of the
Directive requires Member States to ban the transfer of personal data to non-Member States, unless such
countries have provided adequate legal protection for such personal data, or barring certain other exceptions.
• Commission Decision 2001/497/EC of 15 June 2001
[29]
on standard contractual clauses for the transfer of
personal data to third countries, under Directive 95/46/EC; and Commission Decision 2004/915/EC
[30]
of 27
December 2004 amending Decision 2001/497/EC as regards the introduction of an alternative set of standard
contractual clauses for the transfer of personal data to third countries. Topic: Export of personal data to third
countries, specifically non-E.U. countries which have not been recognised as having a data protection level
that is adequate (i.e. equivalent to that of the E.U.). Both Commission Decisions provide a set of voluntary
model clauses which can be used to export personal data from a data controller (who is subject to E.U. data
protection rules) to a data processor outside the E.U. who is not subject to these rules or to a similar set of
adequate rules.
• International Safe Harbor Privacy Principles (see below USA and International Safe Harbor Privacy Principles
)
• Directive 2002/58/EC
[31]
of 12 July 2002 concerning the processing of personal data and the protection of
privacy in the electronic communications sector
• National Security
• Directive 2006/24/EC
[32]
of 15 March 2006 on the retention of data generated or processed in connection with
the provision of publicly available electronic communications services or of public communications networks
and amending Directive 2002/58/EC (‘Data Retention Directive’). Topic: Requirement for the providers of
publicly available electronic communications services to retain certain information for the purposes of the investigation, detection and prosecution of serious crime.
• Council Directive 2008/114/EC
[33]
of 8 December 2008 on the identification and designation of European
critical infrastructures and the assessment of the need to improve their protection. Topic: Identification and
protection of European Critical Infrastructures. Scope: Applicable to Member States and to the operators of
European Critical Infrastructure (defined by the draft directive as ‘critical infrastructures the disruption or
destruction of which would significantly affect two or more Member States, or a single Member State if the
critical infrastructure is located in another Member State. This includes effects resulting from cross-sector
dependencies on other types of infrastructure’). Requires Member States to identify critical infrastructures on
their territories, and to designate them as ECIs. Following this designation, the owners/operators of ECIs are
required to create Operator Security Plans (OSPs), which should establish relevant security solutions for their
protection.
• Civil and Penal law
• Council Framework Decision 2005/222/JHA
[34]
of 24 February 2005 on attacks against information systems.
Topic: General decision aiming to harmonise national provisions in the field of cyber crime, encompassing
material criminal law (i.e. definitions of specific crimes), procedural criminal law (including investigative
measures and international cooperation) and liability issues. Scope: Requires Member States to implement the
provisions of the Framework Decision in their national legal frameworks. The Framework Decision is relevant to
RM/RA because it contains the conditions under which legal liability can be imposed on legal entities for
conduct of certain natural persons of authority within the legal entity. Thus, the Framework decision requires
that the conduct of such figures within an organisation is adequately monitored, also because the Decision
states that a legal entity can be held liable for acts of omission in this regard.
Council of Europe
• Council of Europe Convention on Cybercrime, Budapest, 23.XI.2001
[35]
European Treaty Series No. 185.
Topic: General treaty aiming to harmonise national provisions in the field of cyber crime, encompassing material
criminal law (i.e. definitions of specific crimes), procedural criminal law (including investigative measures and
international cooperation), liability issues and data retention. Apart from the definitions of a series of criminal
offences in articles 2 to 10, the Convention is relevant to RM/RA because it states the conditions under which
legal liability can be imposed on legal entities for conduct of certain natural persons of authority within the legal
entity. Thus, the Convention requires that the conduct of such figures within an organisation is adequately
monitored, also because the Convention states that a legal entity can be held liable for acts of omission in this
regard.
USA
The United States issued the following, divided by topic:
• Civil and Penal law
• Amendments to the Federal Rules of Civil Procedure with regard to electronic discovery
[36]
. Topic: U.S.
Federal rules with regard to the production of electronic documents in civil proceedings. The discovery rules
allow a party in civil proceedings to demand that the opposing party produce all relevant documentation (to be
defined by the requesting party) in its possession, so as to allow the parties and the court to correctly assess the
matter. Through the e-discovery amendment, which entered into force on 1 December 2006, such information
may now include electronic information. This implies that any party being brought before a U.S. court in civil
proceedings can be asked to produce such documents, which includes finalised reports, working documents,
internal memos and e-mails with regard to a specific subject, which may or may not be specifically delineated.
Any party whose activities imply a risk of being involved in such proceedings must therefore take adequate
IT risk
225
precautions for the management of such information, including secure storage. Specifically: the party must
be capable of initiating a ‘litigation hold’, a technical/organisational measure which must ensure that no
relevant information can be modified any longer in any way. Storage policies must be responsible: while
deletion of specific information of course remains allowed when this is a part of general information
management policies (‘routine, good-faith operation of the information system’, Rule 37 (f)), the wilful
destruction of potentially relevant information can be punished by extremely high fines (in one specific case of
1.6 billion US$). Thus, in practice, any business that risks civil litigation before U.S. courts must implement
adequate information management policies, and must implement the necessary measures to initiate a litigation
hold.
• Privacy
• Gramm–Leach–Bliley Act (GLBA)
• USA PATRIOT Act, Title III
• Health Insurance Portability and Accountability Act (HIPAA) From an RM/RA perspective, the Act is
particularly known for its provisions with regard to Administrative Simplification (Title II of HIPAA). This
title required the U.S. Department of Health and Human Services (HHS) to draft specific rule sets, each of
which would provide specific standards which would improve the efficiency of the health care system and
prevent abuse. As a result, the HHS has adopted five principal rules: the Privacy Rule, the Transactions and
Code Sets Rule, the Unique Identifiers Rule, the Enforcement Rule, and the Security Rule. The latter,
published in the Federal Register on 20 February 2003 (see: http://www.cms.hhs.gov/SecurityStandard/Downloads/securityfinalrule.pdf), is specifically relevant, as it specifies a series of administrative, technical,
and physical security procedures to assure the confidentiality of electronic protected health information. These
aspects have been further outlined in a set of Security Standards on Administrative, Physical, Organisational
and Technical Safeguards, all of which have been published, along with a guidance document on the basics of
HIPAA risk management and risk assessment (see http://www.cms.hhs.gov/EducationMaterials/04_SecurityMaterials.asp). Health care service providers in Europe or other countries will generally not be affected by HIPAA obligations if they are not active on the U.S. market. However, since their data processing
activities are subject to similar obligations under general European law (including the Privacy Directive), and
since the underlying trends of modernisation and evolution towards electronic health files are the same, the
HHS safeguards can be useful as an initial yardstick for measuring RM/RA strategies put in place by European
health care service providers, specifically with regard to the processing of electronic health information.
HIPAA security standards include the following:
• Administrative safeguards:
• Security Management Process
• Assigned Security Responsibility
• Workforce Security
• Information Access Management
• Security Awareness and Training
• Security Incident Procedures
• Contingency Plan
• Evaluation
• Business Associate Contracts and Other Arrangements
• Physical safeguards
• Facility Access Controls
• Workstation Use
• Workstation Security
• Device and Media Controls
IT risk
226
• Technical safeguards
• Access Control
• Audit Controls
• Integrity
• Person or Entity Authentication
• Transmission Security
• Organisational requirements
• Business Associate Contracts & Other Arrangements
• Requirements for Group Health Plans
• International Safe Harbor Privacy Principles, issued by the US Department of Commerce on July 21, 2000. Topic: Export of personal data from a data controller who is subject to E.U. privacy regulations to a U.S.-based
destination; before personal data may be exported from an entity subject to E.U. privacy regulations to a
destination subject to U.S. law, the European entity must ensure that the receiving entity provides adequate
safeguards to protect such data against a number of mishaps. One way of complying with this obligation is to
require the receiving entity to join the Safe Harbor, by requiring that the entity self-certifies its compliance
with the so-called Safe Harbor Principles. If this road is chosen, the data controller exporting the data must
verify that the U.S. destination is indeed on the Safe Harbor list (see safe harbor list
[37]
)
• Sarbanes–Oxley Act
• FISMA
Standards Organizations and Standards
• International standard bodies:
• International Organization for Standardization - ISO
• Payment Card Industry Security Standards Council
• Information Security Forum
• The Open Group
• USA standard bodies:
• National Institute of Standards and Technology - NIST
• Federal Information Processing Standards - FIPS by NIST devoted to Federal Government and Agencies
• UK standard bodies
• British Standards Institution
Short description of standards
The list is chiefly based on:
[23]
ISO
• ISO/IEC 13335-1:2004 - Information technology—Security techniques—Management of information and
communications technology security—Part 1: Concepts and models for information and communications
technology security management. Reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39066. Standard containing generally accepted descriptions of concepts and
models for information and communications technology security management. The standard is a commonly used
code of practice, and serves as a resource for the implementation of security management practices and as a
yardstick for auditing such practices. (See also http://csrc.nist.gov/publications/secpubs/otherpubs/reviso-faq.pdf)
IT risk
227
• ISO/IEC TR 15443-1:2005 – Information technology—Security techniques—A framework for IT security assurance reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39733 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Security assurance – the Technical Report (TR) contains generally accepted guidelines which can be used to determine an appropriate assurance method for assessing a security service, product or environmental factor (a deliverable). Following this TR, it can be determined which level of security assurance a deliverable is intended to meet, and whether this threshold is actually met by the deliverable.
• ISO/IEC 15816:2002 - Information technology—Security techniques—Security information objects for access
control reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=29139
(Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of
charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic:
Security management – Access control. The standard allows security professionals to rely on a specific set of
syntactic definitions and explanations with regard to SIOs, thus avoiding duplication or divergence in other
standardisation efforts.
• ISO/IEC TR 15947:2002 - Information technology—Security techniques—IT intrusion detection framework
reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=29580 (Note: this
is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge,
and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic:
Security management – Intrusion detection in IT systems. The standard allows security professionals to rely on a
specific set of concepts and methodologies for describing and assessing security risks with regard to potential
intrusions in IT systems. It does not contain any RM/RA obligations as such, but it is rather a tool for facilitating
RM/RA activities in the affected field.
• ISO/IEC 15408-1/2/3:2005 - Information technology — Security techniques — Evaluation criteria for IT security
— Part 1: Introduction and general model (15408-1) Part 2: Security functional requirements (15408-2) Part 3:
Security assurance requirements (15408-3) reference: http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm Topic: Standard containing a common set of requirements for the
security functions of IT products and systems and for assurance measures applied to them during a security
evaluation. Scope: Publicly available ISO standard, which can be voluntarily implemented. The text is a resource
for the evaluation of the security of IT products and systems, and can thus be used as a tool for RM/RA. The
standard is commonly used as a resource for the evaluation of the security of IT products and systems; including
(if not specifically) for procurement decisions with regard to such products. The standard can thus be used as an
RM/RA tool to determine the security of an IT product or system during its design, manufacturing or marketing,
or before procuring it.
• ISO/IEC 17799:2005 - Information technology—Security techniques—Code of practice for information security
management. reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39612&ICS1=35&ICS2=40&ICS3= (Note: this is a reference to the ISO page
where the standard can be acquired. However, the standard is not free of charge, and its provisions are not
publicly available. For this reason, specific provisions cannot be quoted). Topic: Standard containing generally
accepted guidelines and general principles for initiating, implementing, maintaining, and improving information
security management in an organization, including business continuity management. The standard is a commonly
used code of practice, and serves as a resource for the implementation of information security management
practices and as a yardstick for auditing such practices. (See also ISO/IEC 17799)
• ISO/IEC TR 15446:2004 – Information technology—Security techniques—Guide for the production of
Protection Profiles and Security Targets. reference: http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm Topic: Technical Report (TR) containing guidelines for the
construction of Protection Profiles (PPs) and Security Targets (STs) that are intended to be compliant with
ISO/IEC 15408 (the "Common Criteria"). The standard is predominantly used as a tool for security professionals
to develop PPs and STs, but can also be used to assess the validity of the same (by using the TR as a yardstick to
IT risk
228
determine if its standards have been obeyed). Thus, it is a (nonbinding) normative tool for the creation and
assessment of RM/RA practices.
• ISO/IEC 18028:2006 - Information technology—Security techniques—IT network security reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=40008 (Note: this is a reference to
the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions
are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Five part standard
(ISO/IEC 18028-1 to 18028-5) containing generally accepted guidelines on the security aspects of the
management, operation and use of information technology networks. The standard is considered an extension of
the guidelines provided in ISO/IEC 13335 and ISO/IEC 17799 focusing specifically on network security risks.
The standard is a commonly used code of practice, and serves as a resource for the implementation of security
management practices and as a yardstick for auditing such practices.
• ISO/IEC 27001:2005 - Information technology—Security techniques—Information security management
systems—Requirements reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=42103 (Note: this is a reference to the ISO page where the standard can be
acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this
reason, specific provisions cannot be quoted). Topic: Standard containing generally accepted guidelines for the
implementation of an Information Security Management System within any given organisation. Scope: Not
publicly available ISO standard, which can be voluntarily implemented. While not legally binding, the text contains direct guidelines for the creation of sound information security practices. The standard is a very commonly used code of practice, and serves as a resource for the implementation of information security management systems and as a yardstick for auditing such systems and/or the surrounding practices. (See also ISO/IEC 27001). Its application in practice is often combined with related standards, such as BS 7799-3:2006, which provides additional guidance to support the requirements given in ISO/IEC 27001:2005 (see http://www.bsi-global.com/en/Shop/Publication-Detail/?pid=000000000030125022&recid=2491)
• ISO/IEC TR 18044:2004 – Information technology—Security techniques—Information security incident
management reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=35396 (Note: this is a reference to the ISO page where the standard can be
acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this
reason, specific provisions cannot be quoted). Topic: Technical Report (TR) containing generally accepted
guidelines and general principles for information security incident management in an organization. Scope: Not publicly available ISO TR, which can be voluntarily used. While not legally binding, the text contains direct
guidelines for incident management. The standard is a high level resource introducing basic concepts and
considerations in the field of incident response. As such, it is mostly useful as a catalyst to awareness raising
initiatives in this regard.
• ISO/IEC 18045:2005 - Information technology—Security techniques—Methodology for IT security evaluation
reference: http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm
Topic: Standard containing auditing guidelines for assessment of compliance with ISO/IEC 15408 (Information
technology—Security techniques—Evaluation criteria for IT security) Scope Publicly available ISO standard, to
be followed when evaluating compliance with ISO/IEC 15408 (Information technology—Security
techniques—Evaluation criteria for IT security). The standard is a ‘companion document’, which is thus primarily of use for security professionals involved in evaluating compliance with ISO/IEC 15408 (Information
technology—Security techniques—Evaluation criteria for IT security). Since it describes minimum actions to be
performed by such auditors, compliance with ISO/IEC 15408 is impossible if ISO/IEC 18045 has been
disregarded.
• ISO/TR 13569:2005 - Financial services—Information security guidelines reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=37245 (Note: this is a reference to the ISO page where
the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly
IT risk
229
available. For this reason, specific provisions cannot be quoted). Topic: Standard containing guidelines for the
implementation and assessment of information security policies in financial services institutions. The standard is a
commonly referenced guideline, and serves as a resource for the implementation of information security
management programmes in institutions of the financial sector, and as a yardstick for auditing such programmes.
(See also http://csrc.nist.gov/publications/secpubs/otherpubs/reviso-faq.pdf)
• ISO/IEC 21827:2008 - Information technology—Security techniques—Systems Security
Engineering—Capability Maturity Model® (SSE-CMM®): ISO/IEC 21827:2008 specifies the Systems Security
Engineering - Capability Maturity Model® (SSE-CMM®), which describes the essential characteristics of an
organization's security engineering process that must exist to ensure good security engineering. ISO/IEC
21827:2008 does not prescribe a particular process or sequence, but captures practices generally observed in
industry. The model is a standard metric for security engineering practices.
BSI
• BS 25999-1:2006 - Business continuity management Part 1: Code of practice. Note: this is only part one of BS 25999, which was published in November 2006. Part two (which should contain more specific criteria with a view to possible accreditation) is yet to appear. reference: http://www.bsi-global.com/en/Shop/Publication-Detail/?pid=000000000030157563. Topic: Standard containing a business continuity code of practice. The standard is intended as a code of practice for business continuity management, and will be extended by a second part that should permit accreditation for adherence to the standard. Given its relative newness, the potential impact of the standard is difficult to assess, although it could be very influential for RM/RA practices, given the general lack of universally applicable standards in this regard and the increasing attention to business continuity and contingency planning in regulatory initiatives. Application of this standard can be complemented by other norms, in particular PAS 77:2006 - IT Service Continuity Management Code of Practice (see http://www.bsi-global.com/en/Shop/Publication-Detail/?pid=000000000030141858).
• BS 7799-3:2006 - Information security management systems—Guidelines for information security risk
management reference: http://www.bsi-global.com/en/Shop/Publication-Detail/?pid=000000000030125022&recid=2491 (Note: this is a reference to the BSI page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Standard containing general guidelines for information security risk management. Scope: Not publicly available BSI standard, which can be voluntarily implemented. While not legally binding, the text contains direct guidelines for the creation of sound information security practices. The standard is mostly intended as a guiding complementary document to the application of the aforementioned ISO 27001:2005, and is therefore typically applied in conjunction with this standard in risk assessment practices.
Information Security Forum
• Standard of Good Practice
Professionalism
Information security professionalism is the body of knowledge that people working in information security and similar fields (information assurance and computer security) should have and eventually demonstrate through certifications from well-respected organizations.
It also encompasses the education process required to accomplish different tasks in these fields.
Information technology adoption keeps increasing and has spread to vital infrastructure of civil and military organizations, and anybody can become involved in cyberwarfare. It is therefore crucial that a nation have skilled professionals to defend its vital interests.
References
[1] "Risk is a combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury or ill health that can
be caused by the event or exposure(s)" (OHSAS 18001:2007).
[2] Caballero, Albert. (2009) "14" Computer and Information Security Handbook Morgan Kaufmann Pubblications Elsevier Inc p. 232
ISBN 978-0-12-374354-1
[3] ISO/IEC, "Information technology -- Security techniques-Information security risk management" ISO/IEC FIDIS 27005:2008
[4] CNSS Instruction No. 4009 (http:// www.cnss. gov/ Assets/ pdf/ cnssi_4009. pdf) dated 26 April 2010
[5] National Information Assurance Certification and Accreditation Process (NIACAP) by National Security Telecommunications and
Information Systems Security Committee (http:// niatec. info/GetFile. aspx?pid=567)
[6] NIATEC Glossary of terms (http:// niatec. info/ Glossary.aspx?term=4253& alpha=R)
[7] a wiki project (http:// fismapedia. org/index. php) devoted to FISMA
[8] FISMApedia Risk term (http:// fismapedia. org/index. php?title=Term:Risk)
[9] NIST SP 800-30 Risk Management Guide for Information Technology Systems (http:// csrc.nist. gov/ publications/ nistpubs/ 800-30/
sp800-30. pdf)
[10] FIPS Publication 200 Minimum Security Requirements for Federal Information and Information Systems (http:// csrc.nist. gov/
publications/ fips/ fips200/ FIPS-200-final-march.pdf)
[11] FAIR: Factor Analysis for Information Risks (http:/ / www.riskmanagementinsight. com/ media/ docs/ FAIR_introduction.pdf)
[12] ISACA THE RISK IT FRAMEWORK (http:// www.isaca. org/Knowledge-Center/ Research/ Documents/
RiskIT-FW-18Nov09-Research. pdf) ISBN 978-1-60420-111-6 (registration required)
[13] Technical Standard Risk Taxonomy ISBN 1-931624-77-1 Document Number: C081 Published by The Open Group, January 2009.
[14] ENISA Glossary event (http:// www.enisa. europa. eu/ act/ rm/cr/ risk-management-inventory/glossary#G11)
[15] ENISA Glossary Incident (http:// www.enisa. europa.eu/ act/ rm/ cr/risk-management-inventory/glossary#G17)
[16] ENISA Glossary Impact (http:// www.enisa. europa.eu/ act/ rm/ cr/risk-management-inventory/glossary#G21)
[17] ENISA Glossary Consequence (http:// www.enisa. europa.eu/ act/ rm/ cr/risk-management-inventory/glossary#G4)
[18] OWASP risk rating Methodology (http:// www.owasp. org/index. php/ OWASP_Risk_Rating_Methodology)
[19] ISACA THE RISK IT FRAMEWORK (registration required) (http:/ / www. isaca. org/Knowledge-Center/ Research/ Documents/
RiskIT-FW-18Nov09-Research. pdf)
[20] Enisa Risk management, Risk assessment inventory, page 46 (http:// www. enisa.europa.eu/ act/ rm/cr/ risk-management-inventory/files/
deliverables/ risk-management-principles-and-inventories-for-risk-management-risk-assessment-methods-and-tools/at_download/ fullReport)
[21] Katsicas, Sokratis K. (2009) "35" Computer and Information Security Handbook Morgan Kaufmann Pubblications Elsevier Inc p. 605
ISBN 978-0-12-374354-1
[22] ISACA (2006). CISA Review Manual 2006. Information Systems Audit and Control Association (http:/ / www.isaca.org/ ). pp. 85.
ISBN 1-933284-15-3.
[23] Risk Management / Risk Assessment in European regulation, international guidelines and codes of practice (http:// www.enisa.europa. eu/
act/rm/cr/ laws-regulation/downloads/
risk-management-risk-assessment-in-european-regulation-international-guidelines-and-codes-of-practice/at_download/ fullReport) Conducted
by the Technical Department of ENISA Section Risk Management in cooperation with: Prof. J. Dumortier and Hans Graux www.lawfort.be
June 2007
[24] http:// www. unhchr.ch/ html/ menu3/ b/ 71. htm
[25] http:// www. oecd. org/document/ 18/ 0,2340,en_2649_34255_1815186_1_1_1_1,00. html
[26] http:/ / www. oecd. org/dataoecd/ 16/22/ 15582260. pdf
[27] http:// eur-lex.europa.eu/ LexUriServ/LexUriServ.do?uri=CELEX:32001R0045:EN:NOT
[28] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:EN:NOT
[29] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32001D0497:EN:NOT
[30] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32004D0915:EN:NOT
[31] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32002L0058:EN:NOT
[32] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32006L0024:EN:NOT
[33] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32008L0114:EN:NOT
[34] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32005F0222:EN:NOT
[35] http://conventions.coe.int/Treaty/EN/Treaties/Html/185.htm
[36] http://www.law.cornell.edu/rules/frcp/
[37] http://web.ita.doc.gov/safeharbor/shlist.nsf/webPages/safe+harbor+list
IT risk management
Risk management elements
Relationships between IT security entities
IT risk management is the application of risk management to an information technology context in order to manage IT risk, i.e. the business risk associated with the use, ownership, operation, involvement, influence and adoption of IT within an enterprise.
IT risk management can be considered a component of a wider enterprise risk management system.[1]
The establishment, maintenance and continuous update of an ISMS provide a strong indication that a company is using a systematic approach to the identification, assessment and management of information security risks.[2]
Different methodologies have been proposed to manage IT risks, each of them divided into processes and steps.[3]
According to Risk IT,[1] IT risk encompasses not only the negative impact of operations and service delivery, which can destroy or reduce the value of the organization, but also the benefit- or value-enabling risk associated with missed opportunities to use technology to enable or enhance the business, and IT project management risks such as overspending or late delivery with adverse business impact.
Because risk is strictly tied to uncertainty, decision theory should be applied to manage risk as a science, i.e. to make choices rationally under uncertainty.
Generally speaking, risk is the product of likelihood and impact:[4]
Risk = Likelihood × Impact
The measure of an IT risk can be determined as a product of threat, vulnerability and asset values:[5]
Risk = Threat × Vulnerability × Asset
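As a minimal illustration, the following sketch computes both scores in Python; the 1-to-5 scales and the example values are assumptions chosen for the example, not part of any cited standard:

```python
# Minimal sketch of the two risk formulas above.
# The 1-5 scales and example values are illustrative assumptions.

def risk_from_likelihood(likelihood: float, impact: float) -> float:
    """Risk = Likelihood x Impact."""
    return likelihood * impact

def risk_from_factors(threat: float, vulnerability: float, asset_value: float) -> float:
    """Risk = Threat x Vulnerability x Asset value."""
    return threat * vulnerability * asset_value

if __name__ == "__main__":
    print(risk_from_likelihood(likelihood=4, impact=5))                  # 20
    print(risk_from_factors(threat=3, vulnerability=4, asset_value=5))  # 60
```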
Definitions
The CISA Review Manual 2006 provides the following definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[6]
There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process: it must be repeated indefinitely, because the business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, the effectiveness of the countermeasure, and the value of the informational asset being protected.
Risk management is the process that allows IT managers to balance the operational and economic costs of protective measures and achieve gains in mission capability by protecting the IT systems and data that support their organizations' missions. This process is not unique to the IT environment; indeed, it pervades decision-making in all areas of our daily lives.[7]
The head of an organizational unit must ensure that the organization has the capabilities needed to accomplish its mission. These mission owners must determine the security capabilities that their IT systems must have to provide the desired level of mission support in the face of real-world threats. Most organizations have tight budgets for IT security; therefore, IT security spending must be reviewed as thoroughly as other management decisions. A well-structured risk management methodology, when used effectively, can help management identify appropriate controls for providing the mission-essential security capabilities.[7]
Risk management in the IT world is quite a complex, multifaceted activity, with many relations to other complex activities. The figure shows the relationships between the different related terms.
The National Information Assurance Training and Education Center defines risk management in the IT field as:[8]
1. The total process to identify, control, and minimize the impact of uncertain events. The objective of the risk
management program is to reduce risk and obtain and maintain DAA approval. The process facilitates the
management of security risks by each level of management throughout the system life cycle. The approval process
consists of three elements: risk analysis, certification, and approval.
2. An element of managerial science concerned with the identification, measurement, control, and minimization of
uncertain events. An effective risk management program encompasses the following four phases:
1. Risk assessment, derived from an evaluation of threats and vulnerabilities.
2. Management decision.
3. Control implementation.
4. Effectiveness review.
3. The total process of identifying, measuring, and minimizing uncertain events affecting AIS resources. It includes
risk analysis, cost benefit analysis, safeguard selection, security test and evaluation, safeguard implementation,
and systems review.
4. The total process of identifying, controlling, and eliminating or minimizing uncertain events that may affect system resources. It includes risk analysis, cost-benefit analysis, safeguard selection, implementation and testing, security evaluation of safeguards, and overall security review.
Risk management as part of enterprise risk management
Some organizations have, and many others should have, a comprehensive enterprise risk management (ERM) process in place. According to COSO, ERM addresses four categories of objectives:
• Strategy - high-level goals, aligned with and supporting the organization's mission
• Operations - effective and efficient use of resources
• Financial reporting - reliability of operational and financial reporting
• Compliance - compliance with applicable laws and regulations
According to the Risk IT framework by ISACA,[9] IT risk is transversal to all four categories. IT risk should be managed within the enterprise risk management framework: the risk appetite and risk sensitivity of the whole enterprise should guide the IT risk management process, and ERM should provide the context and business objectives to IT risk management.
Risk management methodology
ENISA: The Risk Management Process, according to ISO Standard 13335
The term methodology means an organized set of principles and rules that drive action in a particular field of knowledge.[3] A methodology does not describe specific methods; nevertheless, it does specify several processes that need to be followed. These processes constitute a generic framework. They may be broken down into sub-processes, they may be combined, or their sequence may change. However, any risk management exercise must carry out these processes in one form or another. The following table compares the processes foreseen by three leading standards.[3] The ISACA Risk IT framework is more recent; the Risk IT Practitioner Guide[10] compares Risk IT and ISO 27005, and the overall comparison is illustrated in the table.
Risk management constituent processes, comparing ISO/IEC 27005:2008, BS 7799-3:2006, NIST SP 800-30 and Risk IT:
• Context establishment (ISO/IEC 27005) / Organizational context (BS 7799-3) — in Risk IT, the RG and RE domains, more precisely:
• RG1.2 Propose IT risk tolerance
• RG2.1 Establish and maintain accountability for IT risk management
• RG2.3 Adapt IT risk practices to enterprise risk practices
• RG2.4 Provide adequate resources for IT risk management
• RE2.1 Define IT risk analysis scope
• Risk assessment (ISO/IEC 27005, BS 7799-3 and SP 800-30) — in Risk IT, the RE2 process:
• RE2.1 Define IT risk analysis scope
• RE2.2 Estimate IT risk
• RE2.3 Identify risk response options
• RE2.4 Perform a peer review of IT risk analysis
In general, the elements described in the ISO 27005 process are all included in Risk IT; however, some are structured and named differently.
• Risk treatment (ISO/IEC 27005) / Risk treatment and management decision making (BS 7799-3) / Risk mitigation (SP 800-30) — in Risk IT:
• RE2.3 Identify risk response options
• RR2.3 Respond to discovered risk exposure and opportunity
• Risk acceptance (ISO/IEC 27005) — in Risk IT: RG3.4 Accept IT risk
• Risk communication (ISO/IEC 27005) / Ongoing risk management activities (BS 7799-3) — in Risk IT:
• RG1.5 Promote an IT risk-aware culture
• RG1.6 Encourage effective communication of IT risk
• RE3.6 Develop IT risk indicators
• Risk monitoring and review (ISO/IEC 27005) / Evaluation and assessment (SP 800-30) — in Risk IT:
• RG2 Integrate with ERM
• RE2.4 Perform a peer review of IT risk analysis
• RG2.5 Provide independent assurance over IT risk management
Because of its probabilistic nature and the need for cost-benefit analysis, IT risk is managed following a process that, according to NIST SP 800-30, can be divided into the following steps:[7]
1. risk assessment,
2. risk mitigation, and
3. evaluation and assessment.
Effective risk management must be totally integrated into the systems development life cycle.[7]
Information risk analysis conducted on applications, computer installations, networks and systems under development should be undertaken using structured methodologies.[11]
Context establishment
This step is the first step in the ISO/IEC 27005 framework; most of its elementary activities are foreseen as the first sub-process of risk assessment according to NIST SP 800-30. It implies the acquisition of all relevant information about the organization and the determination of the basic criteria, purpose, scope and boundaries of the risk management activities, and of the organization in charge of them. The purpose is usually compliance with legal requirements and the provision of evidence of the due diligence supporting an ISMS that can be certified. The scope can be an incident reporting plan or a business continuity plan.
Another area of application can be the certification of a product.
Criteria include the risk evaluation, risk acceptance and impact evaluation criteria. These are conditioned by:[12]
• legal and regulatory requirements
• the strategic value of the information processes for the business
• stakeholder expectations
• negative consequences for the reputation of the organization
In establishing the scope and boundaries, the organization should be studied: its mission, its values, its structure, its strategy, its locations and its cultural environment. The constraints (budgetary, cultural, political, technical) of the organization are to be collected and documented as a guide for the next steps.
Organization for security management
The set-up of the organization in charge of risk management is foreseen as partially fulfilling the requirement to provide the resources needed to establish, implement, operate, monitor, review, maintain and improve an ISMS.[13] The main roles inside this organization are:[7]
• Senior management
• Chief information officer (CIO)
• System and information owners
• Business and functional managers
• Information System Security Officer (ISSO) or Chief information security officer (CISO)
• IT security practitioners
• Security awareness trainers
Risk assessment
ENISA: Risk assessment inside risk management
Risk management is a recurrent activity that deals with the analysis, planning, implementation, control and monitoring of implemented measures and the enforced security policy. By contrast, risk assessment is executed at discrete points in time (e.g. once a year, on demand, etc.) and, until the performance of the next assessment, provides a temporary view of the assessed risks while parameterizing the entire risk management process. This view of the relationship of risk management to risk assessment is depicted in the figure, as adopted from OCTAVE.[2]
Risk assessment is often conducted in more than one iteration, the first being a high-level assessment to identify high risks, while subsequent iterations analyse the major risks and the other risks in detail.
According to the National Information Assurance Training and Education Center, risk assessment in the IT field is:[8]
1. A study of the vulnerabilities, threats, likelihood, loss or impact, and theoretical effectiveness of security
measures. Managers use the results of a risk assessment to develop security requirements and specifications.
2. The process of evaluating threats and vulnerabilities, known and postulated, to determine expected loss and
establish the degree of acceptability to system operations.
3. An identification of a specific ADP facility's assets, the threats to these assets, and the ADP facility's vulnerability
to those threats.
4. An analysis of system assets and vulnerabilities to establish an expected loss from certain events based on
estimated probabilities of the occurrence of those events. The purpose of a risk assessment is to determine if
countermeasures are adequate to reduce the probability of loss or the impact of loss to an acceptable level.
5. A management tool which provides a systematic approach for determining the relative value and sensitivity of computer installation assets, assessing vulnerabilities, assessing loss expectancy or perceived risk exposure levels, assessing existing protection features and additional protection alternatives or acceptance of risks, and documenting management decisions. Decisions on implementing additional protection features are normally based on the existence of a reasonable ratio between the cost/benefit of the safeguard and the sensitivity/value of the assets to be protected. Risk assessments may vary from an informal review of a small-scale microcomputer installation to a more formal and fully documented analysis (i.e., risk analysis) of a large-scale computer installation. Risk assessment methodologies may vary from qualitative or quantitative approaches to any combination of these two approaches.
ISO 27005 framework
Risk assessment receives as input the output of the previous step, context establishment; the output is the list of assessed risks, prioritized according to risk evaluation criteria. The process can be divided into the following steps:[12]
• Risk analysis, further divided into:
• Risk identification
• Risk estimation
• Risk evaluation
The following table compares these ISO 27005 processes with the Risk IT framework processes:[10]
Risk assessment constituent processes, comparing ISO 27005 and Risk IT:
• Risk analysis (ISO 27005) — in Risk IT:
• RE2 Analyse risk comprises more than what is described by the ISO 27005 process step. RE2 has as its objective developing useful information to support risk decisions that take into account the business relevance of risk factors.
• RE1 Collect data serves as input to the analysis of risk (e.g., identifying risk factors, collecting data on the external environment).
• Risk identification (ISO 27005) — included in RE2.2 Estimate IT risk. The identification of risk comprises the following elements:
• Risk scenarios
• Risk factors
• Risk estimation (ISO 27005) — RE2.2 Estimate IT risk
• Risk evaluation (ISO 27005) — RE2.2 Estimate IT risk
The ISO/IEC 27002:2005 Code of practice for information security management recommends that the following be examined during a risk assessment:
• security policy,
• organization of information security,
• asset management,
• human resources security,
• physical and environmental security,
• communications and operations management,
• access control,
• information systems acquisition, development and maintenance, (see Systems Development Life Cycle)
• information security incident management,
• business continuity management, and
• regulatory compliance.
Risk identification
OWASP: relationship between threat agent and business impact
Risk identification states what could cause a potential loss; the following are to be identified:[12]
• assets, both primary (i.e. business processes and related information) and supporting (i.e. hardware, software, personnel, sites, organization structure)
• threats
• existing and planned security measures
• vulnerabilities
• consequences
• related business processes
The output of this sub-process is made up of:[12]
• a list of assets and related business processes to be risk-managed, with an associated list of threats and of existing and planned security measures
• a list of vulnerabilities unrelated to any identified threats
• a list of incident scenarios with their consequences
Risk estimation
There are two methods of risk assessment in the information security field: qualitative and quantitative.[14]
Purely quantitative risk assessment is a mathematical calculation based on security metrics for the asset (system or application). For each risk scenario, taking into consideration the different risk factors, a single loss expectancy (SLE) is determined. Then, considering the probability of occurrence over a given period, for example the annualized rate of occurrence (ARO), the annualized loss expectancy (ALE) is determined as the product ALE = ARO × SLE.[5] It is important to point out that the values of the assets to be considered are those of all involved assets, not only the value of the directly affected resource.
For example, if you consider the risk scenario of a laptop theft, you should consider the value of the data (a related asset) contained in the computer and the reputation and liability of the company (other assets) deriving from the loss of availability and confidentiality of the data involved. It is easy to understand that intangible assets (data, reputation, liability) can be worth much more than the physical resources at risk (the laptop hardware in the example).[15] Intangible asset value can be huge, but it is not easy to evaluate: this can be an argument against a purely quantitative approach.[16]
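A minimal sketch of this calculation for the laptop-theft scenario; every figure below (asset values, exposure factor, ARO) is an invented assumption for illustration:

```python
# Annualized Loss Expectancy: ALE = ARO x SLE.
# All figures are illustrative assumptions for a laptop-theft scenario.

asset_values = {
    "laptop_hardware": 1_500,          # directly affected resource
    "data_on_disk": 50_000,            # related intangible asset
    "reputation_liability": 200_000,   # other involved assets
}

exposure_factor = 0.4   # assumed fraction of asset value lost per incident
aro = 0.05              # assumed annualized rate of occurrence (1 theft / 20 years)

sle = sum(asset_values.values()) * exposure_factor  # single loss expectancy
ale = aro * sle                                     # annualized loss expectancy

print(f"SLE = {sle:,.0f}, ALE = {ale:,.0f}")  # SLE = 100,600, ALE = 5,030
```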
Qualitative risk assessment (a three-to-five-step evaluation, from Very High to Low) is performed when the organization requires a risk assessment to be performed in a relatively short time or on a small budget, when a significant quantity of relevant data is not available, or when the persons performing the assessment lack the sophisticated mathematical, financial and risk assessment expertise required.[14] Qualitative risk assessment can be performed in a shorter period of time and with less data. Qualitative risk assessments are typically performed through interviews with a sample of personnel from all relevant groups within an organization charged with the security of the asset being assessed. Qualitative risk assessments are descriptive rather than measurable. Usually a qualitative classification is done first, followed by a quantitative evaluation of the highest risks, to be compared with the costs of the security measures.
Risk estimation takes as input the output of risk analysis and can be split into the following steps:
• assessment of the consequences through the valuation of assets
• assessment of the likelihood of the incident (through threat and vulnerability valuation)
• assignment of values to the likelihood and consequences of the risks
The output is the list of risks with value levels assigned. It can be documented in a risk register; a minimal sketch of such a register follows below. During risk estimation there are generally three values for a given asset, one for the loss of each of the CIA properties: confidentiality, integrity, availability.[17]
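The following sketch illustrates such a register as a plain Python structure; the entry, the qualitative levels and the likelihood-times-impact scoring rule are assumptions for illustration, not prescribed by ISO 27005:

```python
# Sketch of a qualitative risk register entry with per-CIA asset values.
# Levels, entries and the likelihood x impact scoring rule are illustrative.

LEVELS = {"Low": 1, "Medium": 2, "High": 3, "Very High": 4}

risk_register = [
    {
        "asset": "customer database",
        "threat": "SQL injection",
        "asset_value": {"confidentiality": "Very High",
                        "integrity": "High",
                        "availability": "Medium"},
        "likelihood": "Medium",
    },
]

for entry in risk_register:
    # Use the worst affected CIA property as the impact level.
    impact = max(LEVELS[v] for v in entry["asset_value"].values())
    entry["risk_score"] = LEVELS[entry["likelihood"]] * impact
    print(entry["asset"], "/", entry["threat"], "score:", entry["risk_score"])  # score: 8
```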
Risk evaluation
The risk evaluation process receives as input the output of the risk analysis process. It compares each risk level against the risk acceptance criteria and prioritizes the risk list with risk treatment indications.
NIST SP 800-30 framework
Risk assessment according to NIST SP 800-30, Figure 3-1
To determine the likelihood of a future adverse event, threats to an IT system must be analysed in conjunction with the potential vulnerabilities and the controls in place for the IT system. Impact refers to the magnitude of harm that could be caused by a threat's exercise of a vulnerability. The level of impact is governed by the potential mission impacts and produces a relative value for the IT assets and resources affected (e.g., the criticality and sensitivity of the IT system components and data). The risk assessment methodology encompasses nine primary steps (a sketch of the flow follows the list):[7]
• Step 1 System Characterization
• Step 2 Threat Identification
• Step 3 Vulnerability Identification
• Step 4 Control Analysis
• Step 5 Likelihood Determination
• Step 6 Impact Analysis
• Step 7 Risk Determination
• Step 8 Control Recommendations
• Step 9 Results Documentation
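As a rough illustration of how these nine steps feed one another, the sketch below chains them as functions over a shared assessment record. The step functions are stubs, and every field name and value is an assumption made for the example; SP 800-30 prescribes no such data structure:

```python
# Illustrative pipeline for the nine SP 800-30 steps; each stub would be
# replaced by real analysis. Field names and values are assumptions.

def system_characterization(a): a["system"] = "payroll service"; return a
def threat_identification(a): a["threats"] = ["credential theft"]; return a
def vulnerability_identification(a): a["vulns"] = ["weak passwords"]; return a
def control_analysis(a): a["controls"] = ["password policy"]; return a
def likelihood_determination(a): a["likelihood"] = "Medium"; return a
def impact_analysis(a): a["impact"] = "High"; return a
def risk_determination(a): a["risk"] = (a["likelihood"], a["impact"]); return a
def control_recommendations(a): a["recommended"] = ["multi-factor auth"]; return a
def results_documentation(a): print(a); return a

steps = [system_characterization, threat_identification,
         vulnerability_identification, control_analysis,
         likelihood_determination, impact_analysis,
         risk_determination, control_recommendations, results_documentation]

assessment = {}
for step in steps:   # Steps 1-9, executed in order
    assessment = step(assessment)
```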
Risk mitigation
Risk mitigation, the second process of risk management according to SP 800-30 and the third according to ISO 27005, involves prioritizing, evaluating, and implementing the appropriate risk-reducing controls recommended by the risk assessment process. Because the elimination of all risk is usually impractical or close to impossible, it is the responsibility of senior management and of functional and business managers to use the least-cost approach and implement the most appropriate controls to decrease mission risk to an acceptable level, with minimal adverse impact on the organization's resources and mission.
ISO 27005 framework
The risk treatment process aims at selecting security measures to:
• reduce,
• retain,
• avoid, or
• transfer
the risk, and at producing a risk treatment plan, that is, the output of the process, with the residual risks subject to the acceptance of management.
There are lists from which appropriate security measures can be selected,[13] but it is up to the single organization to choose the most appropriate ones according to its business strategy, the constraints of the environment and the circumstances. The choice should be rational and documented. The importance of accepting a risk that is too costly to reduce is very high, and has led to risk acceptance being considered a separate process.[12]
Risk transfer applies where the risk has a very high impact but it is not easy to reduce the likelihood significantly by means of security controls: the insurance premium should be compared against the mitigation costs, eventually evaluating some mixed strategy to partially treat the risk (a minimal comparison sketch follows). Another option is to outsource the risk to somebody more efficient at managing it.[18]
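A minimal sketch of that premium-versus-mitigation comparison, with all annual figures invented for illustration:

```python
# Compare transferring a risk (insurance) against mitigating it in-house.
# All figures are illustrative assumptions, on an annual basis.

ale_untreated = 80_000         # annualized loss expectancy, no treatment
insurance_premium = 30_000     # annual cost of transferring the risk
mitigation_cost = 25_000       # annual cost of security controls...
ale_after_mitigation = 20_000  # ...and the residual ALE they leave

transfer_total = insurance_premium                   # insurer absorbs the loss
mitigate_total = mitigation_cost + ale_after_mitigation

best = "transfer" if transfer_total < mitigate_total else "mitigate"
print(best, transfer_total, mitigate_total)  # transfer 30000 45000
```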
Risk avoidance describes any action where ways of conducting business are changed to avoid risk occurrence. For example, choosing not to store sensitive information about customers can be an avoidance of the risk that customer data will be stolen.
The residual risks, i.e. the risks remaining after the risk treatment decisions have been taken, should be estimated to ensure that sufficient protection is achieved. If the residual risk is unacceptable, the risk treatment process should be iterated.
NIST SP 800-30 framework
Risk mitigation methodology flow chart from NIST SP 800-30, Figure 4-2
Risk mitigation is a systematic methodology used by senior management to reduce mission risk.[7]
Risk mitigation can be achieved through any of the following risk mitigation options:
• Risk Assumption. To accept the potential risk and continue
operating the IT system or to implement controls to lower the risk to
an acceptable level
• Risk Avoidance. To avoid the risk by eliminating the risk cause
and/or consequence (e.g., forgo certain functions of the system or
shut down the system when risks are identified)
• Risk Limitation. To limit the risk by implementing controls that
minimize the adverse impact of a threat’s exercising a vulnerability
(e.g., use of supporting, preventive, detective controls)
• Risk Planning. To manage risk by developing a risk mitigation plan that prioritizes, implements, and maintains controls
• Research and Acknowledgement. To lower the risk of loss by
acknowledging the vulnerability or flaw and researching controls to
correct the vulnerability
• Risk Transference. To transfer the risk by using other options to
compensate for the loss, such as purchasing insurance.
Address the greatest risks and strive for sufficient risk mitigation at the lowest cost, with minimal impact on other mission capabilities: this is the suggestion contained in NIST SP 800-30.[7]
Risk communication
Risk communication is a horizontal process that interacts bidirectionally with all other processes of risk management. Its purpose is to establish a common understanding of all aspects of risk among all of the organization's stakeholders. Establishing a common understanding is important, since it influences the decisions to be taken.
Risk mitigation action points according to NIST SP 800-30, Figure 4-1
Risk monitoring and review
Risk management is an ongoing, never-ending process. Within this process, implemented security measures are regularly monitored and reviewed to ensure that they work as planned and that changes in the environment have not rendered them ineffective. Business requirements, vulnerabilities and threats can change over time.
Regular audits should be scheduled and should be conducted by an independent party, i.e. somebody not under the control of those responsible for the implementation or daily management of the ISMS.
IT evaluation and assessment
Security controls should be validated. Technical controls are possibly complex systems that have to be tested and verified. The hardest things to validate are people's knowledge of procedural controls and the effectiveness of the real application of the security procedures in daily business.[7]
Vulnerability assessment, both internal and external, and penetration testing are instruments for verifying the status of security controls.
An information technology security audit is an organizational and procedural control with the aim of evaluating security.
The IT systems of most organizations evolve quite rapidly. Risk management should cope with these changes through change authorization after risk re-evaluation of the affected systems and processes, and should periodically review the risks and mitigation actions.[5]
Monitoring system events according to a security monitoring strategy, an incident response plan, and security validation and metrics are fundamental activities to assure that an optimal level of security is obtained.
It is important to monitor new vulnerabilities, apply procedural and technical security controls such as regularly updating software, and evaluate other kinds of controls to deal with zero-day attacks.
The readiness of the people involved to benchmark against best practice and to follow the seminars of professional associations in the sector are factors in assuring the state of the art of an organization's IT risk management practice.
Integrating risk management into system development life cycle
Effective risk management must be totally integrated into the SDLC. An IT system's SDLC has five phases: initiation, development or acquisition, implementation, operation or maintenance, and disposal. The risk management methodology is the same regardless of the SDLC phase for which the assessment is being conducted. Risk management is an iterative process that can be performed during each major phase of the SDLC.[7]
Table 2-1: Integration of risk management into the SDLC[7]
• Phase 1: Initiation. Phase characteristics: the need for an IT system is expressed, and the purpose and scope of the IT system is documented. Support from risk management activities: identified risks are used to support the development of the system requirements, including security requirements, and a security concept of operations (strategy).
• Phase 2: Development or Acquisition. Phase characteristics: the IT system is designed, purchased, programmed, developed, or otherwise constructed. Support from risk management activities: the risks identified during this phase can be used to support the security analyses of the IT system, which may lead to architecture and design trade-offs during system development.
• Phase 3: Implementation. Phase characteristics: the system security features should be configured, enabled, tested, and verified. Support from risk management activities: the risk management process supports the assessment of the system implementation against its requirements and within its modeled operational environment. Decisions regarding identified risks must be made prior to system operation.
• Phase 4: Operation or Maintenance. Phase characteristics: the system performs its functions. Typically the system is being modified on an ongoing basis through the addition of hardware and software and by changes to organizational processes, policies, and procedures. Support from risk management activities: risk management activities are performed for periodic system reauthorization (or reaccreditation) or whenever major changes are made to an IT system in its operational, production environment (e.g., new system interfaces).
• Phase 5: Disposal. Phase characteristics: this phase may involve the disposition of information, hardware, and software. Activities may include moving, archiving, discarding, or destroying information and sanitizing the hardware and software. Support from risk management activities: risk management activities are performed for system components that will be disposed of or replaced, to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner.
NIST SP 800-64[19] is devoted to this topic.
Early integration of security in the SDLC enables agencies to maximize the return on investment in their security programs, through:[19]
• Early identification and mitigation of security vulnerabilities and misconfigurations, resulting in lower cost of
security control implementation and vulnerability mitigation;
• Awareness of potential engineering challenges caused by mandatory security controls;
• Identification of shared security services and reuse of security strategies and tools to reduce development cost and
schedule while improving security posture through proven methods and techniques; and
• Facilitation of informed executive decision making through comprehensive risk management in a timely manner.
This guide[19] focuses on the information security components of the SDLC. First, it describes the key security roles and responsibilities that are needed in most information system developments. Second, it provides sufficient information about the SDLC to allow a person who is unfamiliar with the SDLC process to understand the relationship between information security and the SDLC. The document integrates the security steps into the linear, sequential (a.k.a. waterfall) SDLC. The five-step SDLC cited in the document is an example of one method of development and is not intended to mandate this methodology. Lastly, SP 800-64 provides insight into IT projects and initiatives that are not as clearly defined as SDLC-based developments, such as service-oriented architectures, cross-organization projects, and IT facility developments.
Security can be incorporated into information systems acquisition, development and maintenance by implementing effective security practices in the following areas:[20]
• Security requirements for information systems
• Correct processing in applications
• Cryptographic controls
• Security of system files
• Security in development and support processes
• Technical vulnerability management
Information systems security begins with incorporating security into the requirements process for any new
application or system enhancement. Security should be designed into the system from the beginning. Security
requirements are presented to the vendor during the requirements phase of a product purchase. Formal testing should
be done to determine whether the product meets the required security specifications prior to purchasing the product.
Correct processing in applications is essential in order to prevent errors and to mitigate loss, unauthorized modification or misuse of information. Effective coding techniques include validating input and output data, protecting message integrity using encryption, checking for processing errors, and creating activity logs.
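As one hedged example of input validation combined with an activity log, assuming an invented username policy and log format:

```python
# Validate untrusted input and log the attempt.
# The username pattern and log format are illustrative assumptions.
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")  # assumed policy

def validate_username(raw: str) -> str:
    if not USERNAME_RE.fullmatch(raw):
        logging.warning("rejected username %r", raw)  # activity log entry
        raise ValueError("invalid username")
    logging.info("accepted username %r", raw)
    return raw

validate_username("alice_01")
```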
Applied properly, cryptographic controls provide effective mechanisms for protecting the confidentiality, authenticity and integrity of information. An institution should develop policies on the use of encryption, including proper key management. Disk encryption is one way to protect data at rest. Data in transit can be protected from alteration and unauthorized viewing using SSL certificates issued through a certificate authority that has implemented a public key infrastructure.
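As one illustration of protecting data at rest, the sketch below uses the Fernet recipe (authenticated symmetric encryption) from the third-party Python cryptography package; in a real system the key would be held in a key-management facility rather than generated beside the data:

```python
# Encrypt/decrypt a record at rest with Fernet (authenticated encryption).
# Key handling here is deliberately simplistic; real systems use a KMS/HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # would normally come from key management
cipher = Fernet(key)

token = cipher.encrypt(b"customer record 42")  # ciphertext, safe to store
plain = cipher.decrypt(token)                  # integrity-checked decryption
assert plain == b"customer record 42"
```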
System files used by applications must be protected in order to ensure the integrity and stability of the application.
Using source code repositories with version control, extensive testing, production back-off plans, and appropriate
access to program code are some effective measures that can be used to protect an application's files.
Security in development and support processes is an essential part of a comprehensive quality assurance and
production control process, and would usually involve training and continuous oversight by the most experienced
staff.
Applications need to be monitored and patched for technical vulnerabilities. Procedures for applying patches should
include evaluating the patches to determine their appropriateness, and whether or not they can be successfully
removed in case of a negative impact.
Critique of risk management as a methodology
Risk management as a scientific methodology has been criticized as being shallow.[3] Major programs that imply risk management applied to the IT systems of large organizations, such as FISMA, have been criticized.
Although the risk management methodology is nominally based on the scientific foundations of statistical decision making, by avoiding the complexity that accompanies a formal probabilistic model of risks and uncertainty it looks more like a process that attempts to guess, rather than formally predict, the future on the basis of statistical evidence. It is highly subjective in assessing the value of assets, the likelihood of the occurrence of threats and the significance of the impact.
These criticisms notwithstanding, risk management is a very important instrument for designing, implementing and operating secure information systems, because it systematically classifies risks and drives the process of deciding how to treat them. Its usage is foreseen by legislative rules in many countries. A better way to deal with the subject has not emerged.[3]
Risk management methods
It is quite hard to list most of the methods that at least partially support the IT risk management process. Efforts in this direction have been made by:
• NIST, with the Description of Automated Risk Management Packages That NIST/NCSC Risk Management Research Laboratory Has Examined, updated 1991
• ENISA[21] in 2006; a list of methods and tools is available online, with a comparison engine.[22]
Among them, the most widely used are:[3]
• CRAMM, developed by the British government, is compliant with ISO/IEC 17799, the Gramm–Leach–Bliley Act (GLBA) and the Health Insurance Portability and Accountability Act (HIPAA)
• EBIOS, developed by the French government, is compliant with the major security standards ISO/IEC 27001, ISO/IEC 13335, ISO/IEC 15408, ISO/IEC 17799 and ISO/IEC 21287
• the Standard of Good Practice, developed by the Information Security Forum (ISF)
• Mehari, developed by Clusif, the Club de la Sécurité de l'Information Français[23]
• OCTAVE, developed by Carnegie Mellon University's Software Engineering Institute (SEI); the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) approach defines a risk-based strategic assessment and planning technique for security
• IT-Grundschutz (IT Baseline Protection Manual), developed by the Federal Office for Information Security (BSI) (Germany); IT-Grundschutz provides a method for an organization to establish an information security management system (ISMS). It comprises both generic IT security recommendations for establishing an applicable IT security process and detailed technical recommendations to achieve the necessary IT security level for a specific domain
An ENISA report[2] classified the different methods with regard to completeness, free availability and tool support; the results are that:
• EBIOS, the ISF methods and IT-Grundschutz cover all the aspects in depth (risk identification, risk analysis, risk evaluation, risk assessment, risk treatment, risk acceptance, risk communication),
• EBIOS and IT-Grundschutz are the only ones freely available, and
• only EBIOS has an open source tool to support it.
The main Factor Analysis of Information Risk (FAIR) document, "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006,[16] outlines that most of the methods above lack a rigorous definition of risk and its factors. FAIR is not another methodology to deal with risk management; rather, it complements existing methodologies.[24]
FAIR has had a good acceptance, mainly by The Open Group and ISACA.
ISACA developed a methodology, called Risk IT, to address various kinds of IT-related risks, chiefly security-related risks. It is integrated with COBIT, a general framework for managing IT. Risk IT has a broader concept of IT risk than other methodologies: it encompasses not only the negative impact of operations and service delivery, which can destroy or reduce the value of the organization, but also the benefit- or value-enabling risk associated with missed opportunities to use technology to enable or enhance the business, and IT project management risks such as overspending or late delivery with adverse business impact.[1]
The "Build Security In" initiative of Homeland Security Department of USA, cites FAIR.
[25]
The initiative Build
Security In is a collaborative effort that provides practices, tools, guidelines, rules, principles, and other resources
that software developers, architects, and security practitioners can use to build security into software in every phase
of its development. So it chiefly address Secure coding.
Standards
There are a number of standards about IT risk and IT risk management. For a description see the main article.
References
[1] ISACA, The Risk IT Framework (registration required) (http://www.isaca.org/Knowledge-Center/Research/Documents/RiskIT-FW-18Nov09-Research.pdf)
[2] ENISA, Risk Management / Risk Assessment inventory, page 46 (http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/files/deliverables/risk-management-principles-and-inventories-for-risk-management-risk-assessment-methods-and-tools/at_download/fullReport)
[3] Katsicas, Sokratis K. (2009), chapter 35, Computer and Information Security Handbook, Morgan Kaufmann Publications, Elsevier Inc., p. 605. ISBN 978-0-12-374354-1
[4] "Risk is a combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury or ill health that can be caused by the event or exposure(s)" (OHSAS 18001:2007).
[5] Caballero, Albert (2009), chapter 14, Computer and Information Security Handbook, Morgan Kaufmann Publications, Elsevier Inc., p. 232. ISBN 978-0-12-374354-1
[6] ISACA (2006). CISA Review Manual 2006. Information Systems Audit and Control Association (http://www.isaca.org/). p. 85. ISBN 1-933284-15-3.
[7] NIST SP 800-30, Risk Management Guide for Information Technology Systems (http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf)
[8] NIATEC Glossary of terms (http://niatec.info/Glossary.aspx?term=4253&alpha=R)
[9] The Risk IT Framework by ISACA, ISBN 978-1-60420-111-6
[10] The Risk IT Practitioner Guide, Appendix 3, ISACA, ISBN 978-1-60420-116-1 (registration required) (http://www.isaca.org/Knowledge-Center/Research/ResearchDeliverables/Pages/The-Risk-IT-Practitioner-Guide.aspx)
[11] Standard of Good Practice by the Information Security Forum (ISF), Section SM3.4, Information risk analysis methodologies (https://www.isfsecuritystandard.com)
[12] ISO/IEC, "Information technology -- Security techniques -- Information security risk management", ISO/IEC FIDIS 27005:2008
[13] ISO/IEC 27001
[14] Official (ISC)2 Guide to the CISSP CBK, Risk Management. Auerbach Publications, 2007, p. 1065.
[15] CNN article about a class action settlement for a Veterans Affairs stolen laptop (http://articles.cnn.com/2009-01-27/politics/va.data.theft_1_laptop-personal-data-single-veteran?_s=PM:POLITICS)
[16] "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006 (http://www.riskmanagementinsight.com/media/docs/FAIR_introduction.pdf)
[17] British Standards Institution, "ISMSs - Part 3: Guidelines for information security risk management", BS 7799-3:2006
[18] Costas Lambrinoudakis, Stefanos Gritzalis, Petros Hatzopoulos, Athanasios N. Yannacopoulos, Sokratis Katsikas, "A formal model for pricing information systems insurance contracts", Computer Standards & Interfaces, Volume 27, Issue 5, June 2005, pp. 521-532. doi:10.1016/j.csi.2005.01.010
[19] NIST SP 800-64, Security Considerations in the Information System Development Life Cycle (http://csrc.nist.gov/publications/nistpubs/800-64-Rev2/SP800-64-Revision2.pdf)
[20] EDUCAUSE Dashboard ISO 12 (https://wiki.internet2.edu/confluence/display/itsg2/Information+Systems+Acquisition,+Development,+and+Maintenance+(ISO+12))
[21] ENISA, Inventory of Risk Management / Risk Assessment Methods (http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/rm-ra-methods)
[22] Inventory of Risk Management / Risk Assessment Methods (http://rm-inv.enisa.europa.eu/rm_ra_methods.html)
[23] https://www.clusif.asso.fr/
[24] Technical Standard Risk Taxonomy, ISBN 1-931624-77-1, Document Number C081, published by The Open Group, January 2009.
[25] https://buildsecurityin.us-cert.gov/bsi/articles/best-practices/deployment/583-BSI.html
External links
• The Institute of Risk Management (IRM) (http://www.theirm.org/index.html), risk management's leading international professional education and training body
• Internet2 Information Security Guide: Effective Practices and Solutions for Higher Education (https://wiki.internet2.edu/confluence/display/itsg2/Home)
• Risk Management - Principles and Inventories for Risk Management / Risk Assessment methods and tools (http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/files/deliverables/risk-management-principles-and-inventories-for-risk-management-risk-assessment-methods-and-tools/at_download/fullReport), publication date: Jun 01, 2006; conducted by the Technical Department of ENISA, Section Risk Management
• Clusif, Club de la Sécurité de l'Information Français (https://www.clusif.asso.fr/)
• NIST SP 800-30, Risk Management Guide (http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf)
• NIST SP 800-39 DRAFT, Managing Risk from Information Systems: An Organizational Perspective (http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-39)
• FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems (http://csrc.nist.gov/publications/fips/fips199/FIPS-PUB-199-final.pdf)
• FIPS Publication 200, Minimum Security Requirements for Federal Information and Information Systems (http://csrc.nist.gov/publications/fips/fips200/FIPS-200-final-march.pdf)
• NIST SP 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach (http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf)
• FISMApedia, a collection of documents and discussions focused on US federal IT security (http://fismapedia.org/index.php?title=Main_Page)
• Anderson, K., "Intelligence-Based Threat Assessments for Information Networks and Infrastructures: A White Paper" (http://www.aracnet.com/~kea/Papers/threat_white_paper.pdf), 2005
• Danny Lieberman, "Using a Practical Threat Modeling Quantitative Approach for data security" (http://www.software.co.il/case-studies/254-data-security-threat-assessment.html), 2009
ITHC
An ITHC, or IT Health Check, is an IT security assessment required, as part of an accreditation process, for many government computer systems in the UK.[1][2]
An ITHC is generally performed by an external service provider, although CESG personnel may perform ITHCs on especially sensitive systems. It can touch on both applications and infrastructure, and involves an element of penetration testing.[3]
CHECK is a scheme for ITHC providers, run by CESG.[1]
External links
• CESG [4]
• TIGER scheme [5]
References
[1] "CHECK - Fundamental Principles of the CHECK Service" (http:/ / www.cesg.gov.uk/ products_services/ iacs/ check/ fundmental.shtml). .
Retrieved 2010-10-13.
[2] "CHECK - What is CHECK?" (http:/ / www.cesg. gov. uk/ products_services/ iacs/ check/ index. shtml). . Retrieved 2010-10-13. "CHECK"
[3] "About the TIGER Scheme" (http:/ / www.tigerscheme. org/qualifications. php?ID=5). . Retrieved 2010-10-13.
[4] http:// www. cesg. gov. uk/
[5] http:// www. tigerscheme. org/ qualifications.php?ID=5
Joe-E
• Paradigm: object-capability
• Appeared in: 2004[1]
• Designed by: David A. Wagner, Adrian Mettler, Chip Morningstar, Mark S. Miller
• Stable release: 2.2.0a
• Influenced by: Java, E
• Influenced: Caja project
Joe-E is a subset of the Java programming language intended to support programming according to object-capability discipline.[2]
The language is notable for being an early object-capability subset language. It has influenced later subset languages, such as ADsafe and Caja/Cajita, subsets of JavaScript.
It is also notable for allowing methods to be verified as functionally pure, based on their method signatures.[3]
The restrictions imposed by the Joe-E verifier include:
• Classes may not have mutable static fields, because these create global state.
• Catching out-of-memory exceptions is prohibited, because doing so allows non-deterministic execution. For the
same reason, finally clauses are not allowed.
• Methods in the standard library may be blocked if they are deemed unsafe according to taming rules. For
example, the constructor new File(filename) is blocked because it allows unrestricted access to the
filesystem.
"Cup of joe"[4] is slang for coffee, and so serves as a trademark-avoiding reference to Java. The name Joe-E is thus intended to suggest an adaptation of ideas from the E programming language to create a variant of the Java language.
The Waterken Server[5] is written in Joe-E.
References
[1] An early reference to Joe-E (http://www.eros-os.org/pipermail/cap-talk/2004-November/002180.html) on the cap-talk mailing list, Mark S. Miller, 2004-11-01, retrieved 2009-11-21.
[2] Joe-E: A Security-Oriented Subset of Java (http://www.cs.berkeley.edu/~daw/papers/joe-e-ndss10.pdf), Adrian Mettler, David Wagner, and Tyler Close; January 2010.
[3] Verifiable Functional Purity in Java (http://www.cs.berkeley.edu/~daw/papers/pure-ccs08.pdf), Matthew Finifter, Adrian Mettler, Naveen Sastry, David Wagner; October 2008, Conference on Computer and Communications Security.
[4] http://en.wiktionary.org/wiki/cup_of_joe
[5] http://waterken.sourceforge.net/
External links
• The Joe-E project (http://code.google.com/p/joe-e/) on Google Code
• Joe-E language specification (http://www.cs.berkeley.edu/~daw/joe-e/spec-20090918.pdf)
Kill Pill
Kill pill is a term for the mechanisms and technologies that allow a source computer system to communicate with other systems, usually satellite devices such as mobile devices and laptops, instructing them to render themselves useless according to preset instructions. "Kill pill" technologies are generally used for security purposes, to disable lost or stolen devices, as well as for the enforcement of contractual agreements between organizations.
LAIM Working Group
The LAIM (Log Anonymization and Information Management) Working Group is an NSF- and ONR-funded research group at the National Center for Supercomputing Applications under the direction of Adam Slagell.[1] Work from this group focuses upon log anonymization and Internet privacy. The LAIM group, established in 2005, has released three different log anonymization tools: CANINE,[2] Scrub-PA,[3] and FLAIM. FLAIM is their only tool still under active development.
External links
• LAIM Working Group Official Home [4]
• CANINE Home Page [2]
• Scrub-PA Home Page [3]
• Official FLAIM Home Page [5]
• CRAWDAD entry on FLAIM at Dartmouth [6]
References
[1] http://www.slagell.org/
[2] http://security.ncsa.uiuc.edu/distribution/CanineDownLoad.html
[3] http://security.ncsa.uiuc.edu/distribution/Scrub-PADownLoad.html
[4] http://laim.ncsa.uiuc.edu/
[5] http://flaim.ncsa.uiuc.edu/
[6] http://crawdad.cs.dartmouth.edu/meta.php?name=tools/sanitize/generic/FLAIM
Layered security
Layered security, also known as layered defense, is a term used by IT security professionals, information protection experts, and security software vendors to describe the practice of leveraging several different point security solutions, filtering systems, and monitoring strategies to protect information technology resources and data.
The term bears some similarity to defense in depth (computing), a term adopted from a military strategy involving multiple layers of defense that resist rapid penetration by an attacker but yield rather than exhaust themselves through too-rigid tactics. As the incursion progresses, resources are consumed and progress is slowed until it is halted and turned back. The information assurance use of the term "defense in depth" assumes more than merely the deployment of technical security tools; it also implies policy and operations planning, user training, physical access security measures, and the direct involvement of information assurance personnel in dealing with attempts to gain unauthorized access to information resources. Within a defense-in-depth security strategy, layered security is regarded by some as merely a delaying tactic used to buy time to bring security resources to bear against a malicious security cracker's activities.
Philosophy
Commercial
Security vendors will sometimes cite differing solutions, but most can be grouped into consumer or enterprise categories:
Consumer Layered Security Strategy
• Extended validation (EV) SSL certificates
• Multifactor authentication (also sometimes known as versatile or two-factor authentication)
• Single sign-on (SSO)
• Fraud detection and risk-based authentication
• Transaction signing and encryption
• Secure Web and e-mail
• Open fraud intelligence network
Enterprise Layered Security Strategy
• Workstation application whitelisting
• Workstation system restore solution
• Workstation and network authentication
• File, disk and removable media encryption
• Remote access authentication
• Network folder encryption
• Secure boundary and end-to-end messaging
• Content control and policy-based encryption
Integrated Solutions
An argument may be made that an "ad hoc" security strategy, with numerous vendors and an abundance of different, sometimes incompatible, security solutions and products, can leave gaps in protection, whereas a vertically integrated vendor stack could provide a more comprehensive defense. Single-vendor solutions improve interoperability between the components of a complete security strategy, and may offer performance and price benefits over a multi-vendor approach.
Best Of Breed Solutions
The contrasting commercial security product argument is that a "best of breed" approach provides more effective protection. While a single vendor's vertically integrated product stack may be the offering of choice for vendors who want to monopolize a client's or customer's business, it can also be argued that each component of a comprehensive security strategy should be evaluated both for its performance within its niche and for its open compatibility with the other, non-integrated components of the whole.
Likejacking
Likejacking, a form of clickjacking, is a malicious technique that tricks users of a website into posting a Facebook status update for a site they did not intentionally mean to "like".[1] The initial concept and code for likejacking were created by a Black Hat World user who goes by the handle thefish2010.
The term "likejacking" came from a comment posted by Corey Ballou[2] on the article "How to 'Like' Anything on the Web (Safely)", one of the first documented postings explaining the possibility of malicious activity regarding Facebook's "like" button.[3]
References
[1] Cohen, Richard (2010-05-31). "Facebook Worm - "Likejacking"" (http://www.sophos.com/blogs/sophoslabs/?p=9783). Sophos. Retrieved 2010-06-05.
[2] Ballou, Corey (2010-06-02). ""Likejacking" Term Catches On" (http://www.jqueryin.com/2010/06/02/likejacking-term-catches-on/). jqueryin.com. Retrieved 2010-06-08.
[3] Perez, Sarah (2010-06-02). ""Likejacking" Takes Off on Facebook" (http://www.readwriteweb.com/archives/likejacking_takes_off_on_facebook.php). readwriteweb.com. Retrieved 2010-06-05.
External links
• "Facebook users warned of 'likejacking' scam" (http:// www.france24.com/ en/
20100601-facebook-users-warned-likejacking-scam). AFP. Jun 1, 2010. Retrieved 2011-03-12.
Linked Timestamping
Linking-based time-stamping is a type of trusted timestamping where issued time-stamps are related to each other.
Description
Linking-based time-stamping creates time-stamp tokens which are dependent on each other, entangled into some authenticated data structure. Later modification of the issued time-stamps would invalidate this structure. The temporal order of issued time-stamps is also protected by this data structure, making backdating of issued time-stamps impossible, even by the issuing server itself.
The top of the authenticated data structure is generally published in some hard-to-modify and widely witnessed medium, such as a printed newspaper. There are no (long-term) private keys in use, avoiding PKI-related risks.
Suitable candidates for the authenticated data structure include:
• a linear hash chain,
• a hash tree (Merkle tree),
• a skip list.
The simplest, linear hash chain based time-stamping is illustrated in the following drawing (a small sketch in code follows):
Linear hash-chain based linking scheme
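A minimal sketch of such a linear hash chain using SHA-256; the genesis value and token format are assumed conventions for illustration, not part of any published scheme:

```python
# Linear hash-chain time-stamping sketch: each link commits to the previous
# one, so reordering or backdating any token breaks every later link.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

chain = [b"\x00" * 32]   # genesis link (assumed convention)

def issue_timestamp(document_hash: bytes) -> bytes:
    link = h(chain[-1] + document_hash)   # L_i = H(L_{i-1} || doc_i)
    chain.append(link)
    return link

t1 = issue_timestamp(h(b"contract v1"))
t2 = issue_timestamp(h(b"contract v2"))
assert chain[2] == h(t1 + h(b"contract v2"))  # verifiable ordered link
```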
The linking-based time-stamping authority (TSA) usually performs the following distinct functions:
Aggregation
For increased scalability, the TSA may group time-stamping requests arriving within a short timeframe. These requests are aggregated together without retaining their temporal order and are then assigned the same time value. Aggregation creates a cryptographic connection between all involved requests; the authenticating aggregate value is then used as input for the linking operation (see the sketch below).
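A minimal Merkle-root aggregation sketch over one round of requests; the duplicate-last padding convention is an assumption for illustration:

```python
# Aggregate a round of requests into one Merkle root; only the root enters
# the link chain. Duplicate-last padding is an illustrative convention.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # pad odd-sized levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

requests = [b"req-1", b"req-2", b"req-3"]   # one aggregation round
print(merkle_root(requests).hex())          # aggregate value for linking
```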
Linking
Linking creates a verifiable and ordered cryptographic link between the current and the already issued time-stamp tokens.
Example newspaper publication of hash-linked time-stamping service
Publishing
The TSA periodically publishes some links, so that all previously issued time-stamp tokens depend on the published
link and so that it is practically impossible to forge the published values. By publishing widely witnessed links, the
TSA creates unforgeable verification points for validating all previously issued time-stamps.
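The aggregation and publishing steps together can be sketched as one Merkle-tree round (a hedged illustration in
Python, again assuming SHA-256; build_round and the proof format are invented for the example). The round's root
is the value the TSA links and periodically publishes; each requester keeps only a short membership proof.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_round(requests):
    """Aggregate one round; return (root, one membership proof per request)."""
    level = [h(r) for r in requests]
    proofs = [[] for _ in requests]            # sibling path for each leaf
    pos = list(range(len(requests)))           # leaf -> position in `level`
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node if odd
            level.append(level[-1])
        for leaf, p in enumerate(pos):
            proofs[leaf].append((p % 2, level[p ^ 1]))   # (right child?, sibling)
            pos[leaf] = p // 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], proofs

def verify(request, proof, root):
    node = h(request)
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

root, proofs = build_round([b"a", b"b", b"c", b"d"])   # one aggregation round
assert verify(b"c", proofs[2], root)                   # request "c" is in it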
Security
Linking-based time-stamping is inherently more secure than the usual public-key signature based time-stamping. All
subsequent time-stamps "seal" previously issued ones - the hash chain (or whatever authenticated dictionary is in
use) can be built in only one way; modifying issued time-stamps is nearly as hard as finding a preimage for the
cryptographic hash function used. Continuity of operation is observable by users; periodic publications in
widely witnessed media provide extra transparency.
Tampering with absolute time values can be detected by users, whose time-stamps are relatively comparable by
system design.
The absence of secret keys increases system trustworthiness. There are no keys to leak, and hash algorithms are
considered more future-proof[1] than modular arithmetic based algorithms, e.g. RSA.
Linking-based time-stamping scales well - hashing is much faster than public key cryptography. There is no need for
specific cryptographic hardware with its limitations.
The common technology[2] for guaranteeing the long-term attestation value of issued time-stamps (and of digitally
signed data[3]) is periodic over-time-stamping of the time-stamp token. Because there are no key-related risks and
because of the plausible safety margin of a reasonably chosen hash function, this over-time-stamping period for a
hash-linked token can be an order of magnitude longer than for a public-key signed token.
Research
Foundations
Haber and Stornetta proposed[4] in 1990 to link issued time-stamps together into a linear hash chain, using a
collision-resistant hash function. The main rationale was to diminish TSA trust requirements.
Tree-like schemes, operating in rounds, were proposed by Benaloh and de Mare in 1991[5] and by Bayer, Haber
and Stornetta in 1992[6].
Benaloh and de Mare constructed a one-way accumulator[7] in 1994 and proposed its use in time-stamping. When
used for aggregation, a one-way accumulator requires only one constant-time computation for round membership
verification.
Surety[8] started the first commercial linking-based time-stamping service in January 1995. Its linking scheme is
described, and its security analyzed, in a subsequent article[9] by Haber and Stornetta.
Buldas et al. continued with further optimization[10] and formal analysis of binary tree and threaded tree[11] based
schemes.
A skip-list based time-stamping system was implemented in 2005;[12] the related algorithms are quite efficient.[13]
Provable security
A security proof for hash-function based time-stamping schemes was presented by Buldas and Saarepera[14] in
2004.
There is an explicit upper bound on the number of time stamps issued during the aggregation period; it is
suggested that it is probably impossible to prove security without this explicit bound - so-called black-box
reductions will fail at this task. Considering that all known practically relevant and efficient security proofs are
black-box, this negative result is quite strong.
Next, in 2005 it was shown[15] that bounded time-stamping schemes with a trusted audit party (who periodically
reviews the list of all time-stamps issued during an aggregation period) can be made universally composable - they
remain secure in arbitrary environments (compositions with other protocols and with other instances of the
time-stamping protocol itself).
Buldas and Laur showed[16] in 2007 that bounded time-stamping schemes are secure in a very strong sense - they
satisfy the so-called "knowledge-binding" condition. The security guarantee offered by Buldas and Saarepera in
2004 is improved by diminishing the security loss coefficient from N to √N.
The hash functions used in secure time-stamping schemes do not necessarily have to be collision-resistant[17] or
even one-way[18]; secure time-stamping schemes are probably possible even in the presence of a universal
collision-finding algorithm (i.e. a universal attacking program able to find collisions for any hash function).
This suggests that it is possible to find even stronger proofs based on some other properties of the hash functions.
Hash tree based linking scheme
In the illustration above, the hash tree based time-stamping system works in rounds (t1, t2, t3, ...), with one
aggregation tree per round. The capacity of the system (N) is determined by the tree size (N = 2^l, where l
denotes the binary tree depth); for example, a tree of depth l = 20 can aggregate about a million (2^20) requests
per round. Current security proofs work on the assumption that there is a hard limit on the aggregation tree size,
possibly enforced by the subtree length restriction.
Standards
ISO 18014 part 3 covers 'Mechanisms producing linked tokens'.
American National Standard for Financial Services, "Trusted Timestamp Management and Security" (ANSI ASC
X9.95 Standard) from June 2005 covers linking-based and hybrid time-stamping schemes.
There is no IETF RFC or standard draft about linking-based time-stamping. RFC 4998 (Evidence Record Syntax)
encompasses hash trees and time-stamps as an integrity guarantee for long-term archiving.
References
[1] Buchmann, J.; Dahmen, E.; Szydlo, M. (2009). Hash-based Digital Signature Schemes. pp. 35. doi:10.1007/978-3-540-88702-7_3.
[2] See ISO/IEC 18014-1:2002 Chapter 4.2.
[3] For example see XAdES-A.
[4] Haber, S.; Stornetta, W. S. (1991). "How to time-stamp a digital document" (http://citeseer.ist.psu.edu/old/haber91how.html). Journal of Cryptology 3. doi:10.1007/BF00196791.
[5] Benaloh, Josh; de Mare, Michael (1991). Efficient Broadcast Time-Stamping (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.9199). Technical Report 1. Clarkson University Department of Mathematics and Computer Science.
[6] Bayer, Dave; Haber, Stuart A.; Stornetta, W. Scott (1992). "Improving the Efficiency And Reliability of Digital Time-Stamping" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.5923). Sequences II: Methods in Communication, Security and Computer Science (Springer-Verlag): 329-334.
[7] Benaloh, J.; de Mare, M. (1994). One-Way Accumulators: A Decentralized Alternative to Digital Signatures (http://citeseer.ist.psu.edu/old/benaloh94oneway.html). 765. pp. 274. doi:10.1007/3-540-48285-7_24.
[8] http://www.surety.com/
[9] Haber, S.; Stornetta, W. S. (1997). Secure names for bit-strings (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.7776). pp. 28. doi:10.1145/266420.266430.
[10] Buldas, A.; Laud, P.; Lipmaa, H.; Villemson, J. (1998). "Time-stamping with binary linking schemes" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.9724). LNCS 1462: 486. doi:10.1007/BFb0055749.
[11] Buldas, Ahto; Lipmaa, Helger; Schoenmakers, Berry (2000). "Optimally Efficient Accountable Time-Stamping" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.9332). LNCS 1751: 293-305. doi:10.1007/b75033.
[12] http://chronos.univ-pau.fr/
[13] Blibech, K.; Gabillon, A. (2006). A New Timestamping Scheme Based on Skip Lists (http://www.upf.pf/~gabillon/articles/blibech_final.pdf). 3982. pp. 395. doi:10.1007/11751595_43.
[14] Buldas, Ahto; Saarepera, Märt (2004). "On Provably Secure Time-Stamping Schemes" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.8638). LNCS 3329: 500-514. doi:10.1007/b104116.
[15] Buldas, A.; Laud, P.; Saarepera, M.; Willemson, J. (2005). "Universally Composable Time-Stamping Schemes with Audit" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.59.2070). LNCS 3650: 359-373. doi:10.1007/11556992_26.
[16] Buldas, A.; Laur, S. (2007). "Knowledge-Binding Commitments with Applications in Time-Stamping" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.2680). LNCS 4450: 150-165. doi:10.1007/978-3-540-71677-8_11.
[17] Buldas, A.; Jürgenson, A. (2007). "Does Secure Time-Stamping Imply Collision-Free Hash Functions?" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.4564). LNCS 4784: 138-150. doi:10.1007/978-3-540-75670-5_9.
[18] Buldas, A.; Laur, S. (2006). "Do Broken Hash Functions Affect the Security of Time-Stamping Schemes?" (http://www.cs.ut.ee/~swen/publications/articles/buldas-laur-2006.pdf). LNCS 3989: 50-65. doi:10.1007/11767480_4.
Lock-Keeper
Lock-Keeper Appearance
Lock-Keeper is a high-level security solution based on the idea of
"Physical Separation". It is a hardware-based device that works like a
sluice to provide secure data exchange between physically separated
networks. Based on the simple principle that "the ultimate method to
secure a network is to disconnect it", the Lock-Keeper can entirely
prevent session-based and protocol-based network attacks (so-called
"online attacks") by physically isolating the sensitive network from
outside intruders.
Lock-Keeper is not intended to replace the functionality of a
conventional firewall; it is generally used in combination with a firewall to enhance the security of the protected
network. Moreover, other content scanning mechanisms, e.g. anti-virus software, can also be flexibly integrated with
Lock-Keeper to prevent application-level attacks, also referred to as "offline attacks".
The strengths of the Lock-Keeper solution can be summarized as:
• the simplicity of the architecture
• the scalability of the integrated content layer scanning and checking
• the high level security of internal network
Lock-Keeper can meet the security needs of different scenarios, such as public authorities, national defence
institutions, or companies with a highly sensitive IT infrastructure.
Lock-Keeper Technology
A research group led by Prof. Dr. Christoph Meinel at the Hasso Plattner Institute (HPI) is conducting R&D work on
Lock-Keeper:
• Formalization of the "Physical Separation" concept.
• Lock-Keeper Hardware&Software Optimization.
• Implementing Lock-Keeper SDE Using Virtual Machine.
• Design of new Lock-Keeper applications, e.g. Lock-Keeper Web Services Module.
• Secure Database Replication Module through a WS-Based Messaging Framework.
• Lock-Keeper-based Online Police Station.
• Deployment of Lock-Keeper in Service-Oriented-Architecture.
• Research and Development of intelligent gateway device using the Lock-Keeper technology.
• Development of the Lock-Keeper Cluster System.
• Authentication and access control based on the Lock-Keeper technology.
• Performance measurement and comparison between the Lock-Keeper and other similar security solutions.
Lock-Keeper
257
External links
• Lock-Keeper Project Portal [1]
• Research work around Lock-Keeper [2] at HPI
• Actisis GmbH [3]: Consulting about Lock-Keeper
References
[1] http://www.lock-keeper.de
[2] http://www.hpi-web.de/~meinel/projects/lock-keeper
[3] http://www.actisis.com/de/Lock-Keeper.html
MAGEN (security)
MAGEN (Masking Gateway for Enterprises) is information security technology designed by IBM's Haifa Research
Lab. MAGEN is designed to keep users from viewing discrete chunks of secret or sensitive data on their screens that
they are not authorized to see.
MAGEN applies a sort of inverse highlighting on the data in question in real time as it is rendered on the screen.
This allows "eyes only" business logic to be implemented once, at the screen, rather than within each affected
application.
MAGEN leverages a combination of optical character recognition and screen scraping techniques.
External links
• MAGEN - the big cover up [1]
References
[1] http://www.haifa.ibm.com/info/200904_MAGEN.shtml
Mandatory Integrity Control
In the context of the Microsoft Windows range of operating systems, Mandatory Integrity Control (MIC) or
Integrity Levels (or Protected Mode in the context of applications like Internet Explorer, Google Chrome and
Adobe Reader)[1] is a core security feature, introduced in Windows Vista and Windows Server 2008, that adds
Integrity Levels (IL) to processes running in a login session. (See also Security features new to Windows Vista.) This
mechanism can selectively restrict the access permissions of certain programs or software components in contexts
that are considered potentially less trustworthy, compared with other contexts running under the same user account
that are more trusted. Windows Vista defines four integrity levels: Low (SID: S-1-16-4096), Medium
(SID: S-1-16-8192), High (SID: S-1-16-12288), and System (SID: S-1-16-16384).[1] By default, processes started by
a regular user gain a Medium IL and elevated processes have High IL.[2] Processes must be configured explicitly to
run with Low IL; such processes are called low-integrity processes. While processes inherit the integrity level of the
process that spawned them, the integrity level can also be customized on a per-process basis. For example,
executables originating from the Internet are marked for, and executed with, Low IL. Windows controls access to
objects based on ILs, and also uses them to define the boundary for window messages via User Interface Privilege
Isolation.
Operation
Named objects, including files, registry keys and even other processes and threads, have an entry in the ACL
governing access to them that defines the minimum integrity level of the process that can use the object. Windows
ensures that a process can write to or delete an object only when its integrity level is equal to or higher than the
requested integrity level specified by the object.[2] Additionally, process objects with a higher IL are out-of-bounds
for even read access.[3]
Consequently, a process cannot interact with another process that has a higher IL. So a process cannot perform
functions such as injecting a DLL into a higher IL process by using the CreateRemoteThread()[4] API function or
sending data to a different process by using the WriteProcessMemory()[5] function. The higher IL process can,
however, execute such functions against the lower IL process.[1] Lower and higher IL processes can still
communicate by using files, named pipes, LPC or other shared objects. The shared object must have an integrity
level as low as that of the low IL process and should be shared by both the low-IL and high-IL processes.[3]
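The write-access rule above can be captured in a few lines. The following is a simplified model of the no-write-up
check, not the actual Windows implementation; the numeric values simply mirror the relative ordering of the
integrity SIDs listed earlier.

from enum import IntEnum

class IL(IntEnum):
    LOW = 0x1000       # S-1-16-4096
    MEDIUM = 0x2000    # S-1-16-8192
    HIGH = 0x3000      # S-1-16-12288
    SYSTEM = 0x4000    # S-1-16-16384

def write_allowed(subject: IL, obj: IL) -> bool:
    """No-write-up: writing or deleting requires subject IL >= object IL."""
    return subject >= obj

# A Medium-IL process (regular user) cannot modify a High-IL object,
# while an elevated (High-IL) process can modify a Medium-IL object.
assert not write_allowed(IL.MEDIUM, IL.HIGH)
assert write_allowed(IL.HIGH, IL.MEDIUM)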
Security
Access control lists (ACLs) are limited to granting access rights (read, write, and execute permissions) and privileges
to users or groups. MIC allows classes of applications to be isolated, enabling scenarios like sandboxing
potentially vulnerable applications (such as Internet-facing applications).
However, since MIC does not prevent a low IL process from sharing objects with a higher IL process, the low IL
process can trigger flaws in the higher IL process and have it work on its behalf, causing a squatting attack.[3]
Shatter attacks, however, can be prevented by using another feature, User Interface Privilege Isolation, in
conjunction with MIC.
Mandatory Integrity Control is defined using a new access control entry (ACE) type to represent the object's IL in its
security descriptor. A subject IL is also assigned to the security access token when it is initialized. The integrity level
in the access token is compared against the integrity level in the security descriptor when the security reference
monitor performs authorization before granting access to objects. Windows restricts the allowed access rights
depending on whether the subject's integrity level is higher or lower than the object's, and depending on the integrity
policy flags in the new access control ACE. The security subsystem implements the integrity level as a mandatory
label to distinguish it from the discretionary access under user control that ACLs provide.
Usage
One of the most common applications for integrity controls in Windows is with Internet Explorer 7 and Internet
Explorer 8, which can run in "Protected Mode" on Windows Vista and later operating systems. In this configuration,
the iexplore.exe process runs with a Low integrity level to limit its access to the underlying system, and thereby
prevent some classes of security vulnerabilities; since Internet Explorer in this case runs as a Low-IL process, it
cannot modify system level objects—file and registry operations are instead virtualized. Adobe Reader 10 and
Google Chrome are two other notable applications that are introducing the technology in order to limit their
vulnerability to malware.[6]
References
[1] Matthew Conover. "Analysis of the Windows Vista Security Model" (http://www.symantec.com/avcenter/reference/Windows_Vista_Security_Model_Analysis.pdf). Symantec Corporation. Retrieved 2007-10-08.
[2] Steve Riley. "Mandatory Integrity Control in Windows Vista" (http://blogs.technet.com/steriley/archive/2006/07/21/442870.aspx). Retrieved 2007-10-08.
[3] Mark Russinovich. "PsExec, User Account Control and Security Boundaries" (http://blogs.technet.com/markrussinovich/archive/2007/02/12/638372.aspx). Retrieved 2007-10-08.
[4] "CreateRemoteThread Function (Windows)" (http://msdn2.microsoft.com/en-us/library/ms682437.aspx). MSDN. Retrieved 2007-10-08.
[5] "WriteProcessMemory Function" (http://msdn2.microsoft.com/en-us/library/ms681674.aspx). MSDN. Retrieved 2007-10-08.
[6] Brad Arkin (2010-07-10). "Introducing Adobe Reader Protected Mode" (http://blogs.adobe.com/asset/2010/07/introducing-adobe-reader-protected-mode.html). Adobe Systems. Retrieved 2010-09-10.
External links
• Introduction to the Protected Mode API (http://msdn.microsoft.com/en-us/library/ms537319(VS.85).aspx)
• Windows Vista Integrity Mechanism technical reference on MSDN (http://msdn2.microsoft.com/en-us/library/bb625964.aspx)
• Introduction to Windows Integrity Control: Security Focus article (http://www.securityfocus.com/print/infocus/1887)
• Escaping from Microsoft's Protected Mode Internet Explorer (http://www.verizonbusiness.com/resources/whitepapers/wp_escapingmicrosoftprotectedmodeinternetexplorer_en_xg.pdf)
Mayfield's Paradox
Mayfield's Paradox states that to keep everyone out of an information system requires an infinite amount of money,
and to get everyone onto an information system also requires infinite money, while costs between these extremes are
relatively low.[1]
The paradox is depicted as a U-curve, where the cost of a system is on the vertical axis, and the percentage of
humanity that can access the system is on the horizontal axis. Acceptance of this paradox by the information security
community was immediate, because it was consistent with the professional experiences of this group. Mayfield’s
Paradox points out that, at some point of the curve, additional security becomes unrealistically expensive.
Conversely, at some point of the curve, it becomes unrealistically expensive to add additional users.
Based on the paradox, the Menz brothers developed the "Menz Theorems of Information and Physical Security". The
theorems present two formulas covering access and security of both information systems and physical facilities.
They are used to help determine allocation of resources and response levels.
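The U-curve can be illustrated with a toy cost function. The formula below is purely hypothetical - the paradox
itself prescribes no particular formula - but it shows the stated behavior: cost grows without bound as access
approaches either 0% (keep everyone out) or 100% (let everyone in) of humanity.

def cost(p, a=1.0, b=1.0):
    """Hypothetical system cost for access fraction 0 < p < 1."""
    return a / p + b / (1 - p)

for p in (0.001, 0.1, 0.5, 0.9, 0.999):
    print(f"access {p:6.1%}  relative cost {cost(p):10.1f}")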
Notes
[1] Mayfield; Cvitanic (2000). "Mathematical Proofs of Mayfield's Paradox: A Fundamental Principle of Information Security" (http://www.isaca.org/Journal/Past-Issues/2001/Volume-2/Pages/Mathematical-Proofs-of-Mayfields-Paradox-A-Fundamental-Principle-of-Information-Security.aspx). Information Systems Control Journal 2. Retrieved 2010-07-12.
National Cyber Security Awareness Month
National Cyber Security Awareness Month has been observed each October since its inception in 2001 in the United
States of America.[1] Sponsored by the National Cyber Security Division (NCSD) within the Department of
Homeland Security and the National Cyber Security Alliance (NCSA, a non-profit organization), Cyber Security
Awareness Month encourages vigilance and protection by all computer users.[2]
During the month of October, the Department and the NCSA reach out to all Americans, public- and private-sector
partners, and the international community about cyber threats, and offer tips and best practices on how to stay safe
online.[3]
The 2009 theme was Our Shared Responsibility, reflecting the notion that cyberspace cannot be secured without the
help of all users.
In line with President Obama's 60-day review of cyber security, Awareness Month builds on existing programs
within the Department of Homeland Security. NCSD and NCSA continue to encourage participation in the Cyber
Security Awareness Volunteer Education (C-SAVE) Program, which advocates for cyber security professionals to
visit local schools to educate students on cyber security threats and the importance of staying safe online.[4]
References
[1] http://www.staysafeonline.info/content/about-ncsam
[2] http://www.dhs.gov/files/programs/gc_1158611596104.shtm
[3] http://www.staysafeonline.org/
[4] http://www.staysafeonline.org/content/c-save
National Vulnerability Database
The National Vulnerability Database is the U.S. government repository of standards based vulnerability
management data represented using the Security Content Automation Protocol (SCAP). This data enables
automation of vulnerability management, security measurement, and compliance. NVD includes databases of
security checklists, security related software flaws, misconfigurations, product names, and impact metrics. NVD
supports the Information Security Automation Program (ISAP).
External links
• National Vulnerability Database web site
• Security Content Automation Protocol web site
• Packet Storm [1]
• milw0rm [2]
References
[1] http://packetstormsecurity.org
[2] http://www.milw0rm.com
Neurosecurity
Neurosecurity has been defined as "a version of computer science security principles and methods applied to neural
engineering," or, more fully, as "the protection of the confidentiality, integrity, and availability of neural devices
from malicious parties with the goal of preserving the safety of a person's neural mechanisms, neural computation,
and free will."[1] Neurosecurity is a distinct concept from neuroethics; neurosecurity is effectively a way of
enforcing a set of neuroethical principles for a neural device. Neurosecurity is also distinct from the application of
neuroscience to national security, a topic addressed in Mind Wars: Brain Research and National Defense.[2]
Popular culture
• The anime series Ghost in the Shell: Stand Alone Complex (2002–2003) prominently features hackers
manipulating neural implants. One example is the Laughing Man's use of hacking to interfere with the reports of
eye witnesses. In another example, Major Kusanagi makes a point by taking control of some of Batou's implants
and forcing him to punch himself.
• Neal Stephenson's book The Diamond Age (1995) briefly refers to corporations hacking neural implants in order
to superimpose advertisements onto a user's field of vision.
References
[1] Denning, Tamara; Matsuoka, Yoky; Kohno, Tadayoshi (July 1, 2009). "Neurosecurity: security and privacy for neural devices" (http://thejns.org/doi/abs/10.3171/2009.4.FOCUS0985). Neurosurgical Focus 27 (1): E7. doi:10.3171/2009.4.FOCUS0985. PMID 19569895.
[2] Moreno, Jonathan D. (November 17, 2006). Mind Wars: Brain Research and National Defense. Dana Press. ISBN 978-1932594164.
nobody (username)
In many Unix variants, "nobody" is the conventional name of a user account that owns no files, is in no privileged
groups, and has no abilities except those which every other user has.
It is common to run daemons as nobody, especially servers, in order to limit the damage that could be done by a
malicious user who gained control of them. However, the usefulness of this technique is reduced if more than one
daemon is run like this, because gaining control of one daemon would then provide control of them all. The reason is
that nobody-owned processes have the ability to send signals to each other and even (on Linux) to ptrace each other,
which means that one process can read and write the memory of another process. Creating one account for each
daemon, as recommended by the Linux Standard Base,[1] provides for a tighter security policy.
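The privilege-drop pattern this implies can be sketched in Python (assumptions: a Unix system, a process that starts
as root, and a hypothetical per-daemon account name):

import os
import pwd

def drop_privileges(username: str) -> None:
    entry = pwd.getpwnam(username)
    os.setgroups([])            # shed supplementary groups first
    os.setgid(entry.pw_gid)     # set group before user, or setuid blocks it
    os.setuid(entry.pw_uid)     # irreversible drop to the unprivileged user

# e.g. an account created just for this daemon (hypothetical name):
# drop_privileges("mydaemon")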
References
[1] Linux Standard Base, Core Specification 3.1 section 21.2: User & Group Names (http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/usernames.html), freestandards.org
Non-repudiation
Non-repudiation refers to a state of affairs where the purported maker of a statement will not be able to successfully
challenge the validity of the statement or contract. The term is often seen in a legal setting wherein the authenticity
of a signature is being challenged. In such an instance the authenticity is being "repudiated".
Non-repudiation in digital security
Regarding digital security, the cryptological meaning and application of non-repudiation shifts to mean:[1]
• A service that provides proof of the integrity and origin of data.
• An authentication that with high assurance can be asserted to be genuine.
Proof of data integrity is typically the easiest of these requirements to accomplish. A data hash such as SHA-2 is
usually sufficient to establish that the likelihood of data being undetectably changed is extremely low. Even with this
safeguard, it is still possible to tamper with data in transit, either through a man-in-the-middle attack or phishing.
Because of this, data integrity is best asserted when the recipient already possesses the necessary verification
information.
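A minimal sketch of such a check, assuming the recipient obtained the expected SHA-256 digest out of band:

import hashlib

def integrity_ok(data: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_hex

data = b"wire transfer: $100 to account 42"
known_digest = hashlib.sha256(data).hexdigest()   # shared in advance
assert integrity_ok(data, known_digest)
assert not integrity_ok(b"wire transfer: $999 to account 13", known_digest)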
The most common method of asserting the digital origin of data is through digital certificates, a form of public key
infrastructure, to which digital signatures belong. They can also be used for encryption. Digital origin only means
that the certified/signed data can, with reasonable certainty, be trusted to be from somebody who possesses the
private key corresponding to the signing certificate. If the key is not properly safeguarded by the original owner,
digital forgery can become a major concern.
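Proof of origin with a digital signature can be sketched as follows, using the third-party Python `cryptography`
package; the choice of Ed25519 is an assumption made for brevity, not something the article prescribes.
Verification succeeds only for the holder of the private key - which is also why safeguarding that key matters.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I, Alice, authorize payment of $100."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if forged or altered
    print("origin verified: signer holds the private key")
except InvalidSignature:
    print("verification failed")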
Trusted third parties (TTPs)
The ways in which a party may attempt to repudiate a signature present a challenge to the trustworthiness of the
signatures themselves. The standard approach to mitigating these risks is to involve a trusted third party.
The two most common TTPs are forensic analysts and notaries. A forensic analyst specializing in handwriting can
look at a signature, compare it to a known valid signature, and make a reasonable assessment of the legitimacy of the
first signature. A notary provides a witness whose job is to verify the identity of an individual by checking other
credentials and affixing their certification that the party signing is who they claim to be. Further, a notary provides
the extra benefit of maintaining independent logs of their transactions, complete with the type of credential checked
and another signature that can independently be verified by the preceding forensic analyst. For this double security,
notaries are the preferred form of verification.
On the digital side, the only TTP is the repository for public key certificates. This provides the recipient with the
ability to verify the origin of an item even if no direct exchange of the public information has ever been made. The
digital signature, however, is forensically identical in both legitimate and forged uses - if someone possesses the
private key they can create a "real" signature. The protection of the private key is the idea behind the United States
Department of Defense's Common Access Card (CAC), which never allows the key to leave the card and therefore
necessitates the possession of the card in addition to the personal identification number (PIN) code necessary to
unlock the card for permission to use it for encryption and digital signatures. No practical solution yet exists to the
digital equivalent of the problem that notaries address with physical signatures.
References
[1] Non-Repudiation in the Digital Environment (Adrian McCullagh) (http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/778/687)
External links
• "Non-repudiation in Electronic Commerce" (Jianying Zhou), Artech House, 2001 (http://www.artechhouse.com/Detail.aspx?strBookId=396)
Novell Cloud Security Service
Novell Cloud Security Service
Developer(s) Novell
Initial release early 2010
Type Web application
Website: Novell Cloud Security Service [1]
Novell Cloud Security Service (NCSS) is a Web-based (SaaS) identity and access management solution, currently
in private beta but scheduled for release in early 2010.[2] NCSS allows SaaS, PaaS, and IaaS providers to offer their
enterprise customers the ability to deploy their existing identity infrastructure in the cloud.[3]
Core Functionality
At the core of NCSS is the Cloud Security Broker, a collection of cloud elements that work together to provide a
secure place for cloud workloads and cloud storage. SaaS and PaaS platforms access the Security Broker via identity
and event connectors provided by NCSS, while the enterprise accesses the broker via an on-premise secure bridge
run from the data center. This secure bridge, which is firewall friendly, provides a protocol proxy, policy agent, audit
agent, secure communication manager and key agent. The broker ensures that sensitive information always remains
behind the firewall.
How It Works
When an enterprise engages a SaaS provider that uses Novell Cloud Security Service, a user at that enterprise will
either log on to the service directly or via the enterprise’s existing identity systems. A "Cloud Security Broker" will
then verify the identity of the user. If the user is valid, the broker generates and passes an identity token in the format
requested by the cloud provider. NCSS supports multiple industry standards and identity management systems,
enabling different SaaS vendors to connect to different enterprise identity systems easily. NCSS also provides
connectors on the SaaS provider side that supply deep audit tracking logs that enterprises can use for compliance
purposes.
Additional Features
NCSS features a graphical dashboard interface for providers and their customers to easily manage all their
connections via a single unified interface. It also includes a key management functionality that maintains the
cryptographic keys necessary for communication between the various components.
References
[1] http://www.novell.com/products/cloud-security-service/
[2] "Novell aims to tighten cloud security" (http://news.zdnet.com/2100-9595_22-326544.html). ZDNet. 2009-07-30. Retrieved 2010-01-25.
[3] "Novell To Unveil Strategy For Cloud Computing, Virtualization" (http://www.crn.com/software/222000865). CRN. 2009-12-07. Retrieved 2010-01-25.
External links
• "Annexing the Cloud" (http://www.novell.com/connectionmagazine/2010/01/novell_cloud_security_service.html)
• NCSS Web Page (http://www.novell.com/products/cloud-security-service/)
One-time authorization code
One-time authorization code as used in Yammer's desktop client
A one-time authorization code (OTAC) allows desktop clients for web applications to authenticate securely to the
web application. The web application generates a unique code (a PIN) that the user enters into the desktop client;
the desktop client in turn uses that code to authenticate itself to the web application. This method of authenticating
desktop clients has two benefits:
1. The user's actual username/password are never transmitted from the desktop client application over the network;
2. The client never has to cache or store the username/password.
Passwords stored on the desktop can easily be deciphered and compromised. Use of OTAC removes the need for
storing or caching the user's actual passwords on the client computer.
This form of authentication is particularly useful in web applications that do not have an internal username/password
store but instead use SAML for authentication. Since SAML only works within the browser, a desktop-based web
application client cannot successfully authenticate using SAML. Instead, the client application can use a one-time
authorization code to authenticate itself to the web application.
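The flow can be sketched as follows; everything here (the six-digit PIN format, the five-minute lifetime, the function
names) is an illustrative assumption rather than any particular product's behavior. The point is that the client redeems
a short-lived single-use code for a session token, so no password ever reaches the client.

import secrets
import time

CODES = {}     # one-time code -> (user, expiry); server-side store
TOKENS = {}    # session token -> user

def issue_code(user: str, ttl: int = 300) -> str:
    code = f"{secrets.randbelow(10**6):06d}"   # e.g. a 6-digit PIN
    CODES[code] = (user, time.time() + ttl)
    return code          # shown to the user in the browser after SAML login

def redeem_code(code: str):
    entry = CODES.pop(code, None)              # single use: pop removes it
    if entry is None or time.time() > entry[1]:
        return None
    token = secrets.token_urlsafe(32)
    TOKENS[token] = entry[0]
    return token         # the desktop client stores this, never a password

pin = issue_code("alice")                  # user reads this from the web app
token = redeem_code(pin)                   # desktop client authenticates
assert token and redeem_code(pin) is None  # the code cannot be reused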
Web Applications that utilize One Time Authorization Codes
• Yammer
Opal Storage Specification
The Opal Storage Specification from the Trusted Computing Group is a set of Storage Workgroup specifications
that provides a comprehensive architecture for putting storage devices under policy control, as determined by the
trusted platform host, the capabilities of the storage device to conform with the policies of the trusted platform, and
the lifecycle state of the storage device as a trusted peripheral.
OPAL SSC Overview
The Opal SSC is an implementation profile for Storage Devices built to:
• Protect the confidentiality of stored user data against unauthorized access once it leaves the owner's control
(involving a power cycle and subsequent deauthentication).
• Enable interoperability between multiple SD vendors.
OPAL SSC Functionalities
The Opal SSC has a wide set of functionalities:
• Security Provider Support
• Interface Communication Protocol
• Cryptographic Features
• Authentication
• Table Management
• Access Control & Personalization
• Issuance
• SSC Discovery
OPAL SSC Features
• Security Protocol 1 Support
• Security Protocol 2 Support
• Communications
• Protocol Stack Reset Commands
List of Hardware Companies with Support for OPAL SSC
• Toshiba [1] [2]
• Hitachi [3] [4]
• Samsung [5]
• SandForce [6]
• Seagate Technology [7] [8]
List of Software Companies with Support for OPAL SSC
• Wave Systems [9]
References
[1] Toshiba/Fujitsu Develops HDD Security Technology based on Opal SSC Standards (http://www.fujitsu.com/global/news/pr/archives/month/2009/20090128-01.html)
[2] Works with SECUDE on Premier Full Disk Encryption Technology (http://Fujitsu)
[3] Hitachi and SECUDE Collaborate on Data Encryption Solution for Improved Notebook Security and Performance (http://biz.yahoo.com/bw/090106/20090106006344.html?.v=1)
[4] Hitachi and SECUDE Collaborate on Data Encryption Solution for Improved Notebook Security and Performance (http://www.secude.com)
[5] Samsung Solid State Drive Datasheet (http://www.samsung.com/global/business/semiconductor/support/brochures/downloads/flash_ssd/ssd_datasheet_200906.pdf)
[6] Sandforce Industrial Processors (http://www.sandforce.com/index.php?id=177&parentId=2)
[7] Seagate's Groundbreaking Self-Encrypting Laptop Hard Drive First To Win Key U.S. Government Certification (http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=momentus-FDE-self-encrypting,FIPS-seagate-pr&vgnextoid=f0ea53279dc0b210VgnVCM1000001a48090aRCRD)
[8] Self-Encrypting Hard Drives in the Enterprise (http://www.winmagic.com/solutions/self-encrypting-hard-drives)
[9] Wave Hails Industry Standard, Declares Hardware Encryption "Ready for Prime Time" (http://www.wave.com/news/press_archive/09/090127_FDE.asp)
Open security
Open security is an initiative to tackle mounting application security challenges. It is based on the premise that all
malware - viruses, spamware, trojans, rootkits, worms, etc. - shares the same fundamental characteristic: it conceals
its identity and intention (also known as security through obscurity). Legitimate software and service providers, on
the other hand, have every intention of telling the world that their software will not harm customers in any way.
Outbound content security
Outbound Content Compliance (also outbound content security) is a new segment of the computer security field
which aims to detect and prevent outbound content that violates the organization's policy and/or government
regulations. It deals with internal threats, as opposed to more traditional security solutions (firewall, anti-virus,
anti-spam, etc.) that deal with external threats. It is therefore sometimes called inside-out security.
These systems are designed to prevent and detect the unauthorized use and transmission of confidential information.
Parasitic computing
Parasitic computing is a programming technique in which a program, in normal authorized interactions with another
program, manages to get the other program to perform computations of a complex nature. It is, in a sense, a security
exploit in that the program implementing the parasitic computing has no authority to consume the resources made
available to the other program.
The example given by the original paper involved two computers communicating over the Internet, under the
disguise of a standard communications session. The first computer attempts to solve a large and extremely difficult
3-SAT problem; it has decomposed the original 3-SAT problem into a considerable number of smaller problems.
Each of these smaller problems is then encoded as a relation between a checksum and a packet, such that whether
the checksum is accurate or not is also the answer to that smaller problem. The packet/checksum is then sent to
another computer. This computer will, as part of receiving the packet and deciding whether it is valid and
well-formed, create a checksum of the packet and see whether it is identical to the provided checksum. If the
checksum is invalid, it will then request a new packet from the original computer. The original computer now knows
the answer to that smaller problem based on the second computer's response, and can transmit a fresh packet
embodying a different sub-problem. Eventually, all the sub-problems will be answered and the final answer easily
calculated.
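The checksum mechanism can be illustrated with a deliberately simplified toy (this is not the paper's full 3-SAT
encoding): here the parasite asks the target to test whether two 16-bit words sum to a chosen value. The parasite
never performs the addition itself; the target's routine checksum validation does the arithmetic.

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)   # fold carries
    return ~total & 0xFFFF

def target_receives(payload: bytes, checksum_field: int) -> bool:
    # The unwitting host merely validates the packet, as any TCP stack
    # would; invalid packets are rejected, valid ones acknowledged.
    return inet_checksum(payload) == checksum_field

S = 0x1234                  # parasite wants word pairs summing to S
field = ~S & 0xFFFF         # checksum verifies  <=>  the words sum to S
for w1, w2 in [(0x1000, 0x0234), (0x0042, 0x0042), (0x1233, 0x0001)]:
    payload = w1.to_bytes(2, "big") + w2.to_bytes(2, "big")
    if target_receives(payload, field):
        print(f"target confirmed: 0x{w1:04x} + 0x{w2:04x} = 0x{S:04x}")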
So in the end, the target computer(s) is unaware that it has performed computation for the benefit of the other
computer, or even done anything besides have a normal TCP/IP session.
The proof-of-concept is obviously extremely inefficient as the amount of computation necessary to merely send the
packets in the first place easily exceeds the computations leeched from the other program; and the 3-SAT problem
would be solved much more quickly if just analyzed locally. In addition, in practice packets would probably have to
be retransmitted occasionally when real checksum errors and network problems occur. However, parasitic computing
on the level of checksums is a demonstration of the concept. The authors suggest that as one moves up the
application stack, there might come a point where there is a net computational gain to the parasite - perhaps one
could break down interesting problems into queries of complex cryptographic protocols using public keys. If there
were a net gain, one could in theory use a number of control nodes through which many hosts on the Internet would
unknowingly form a distributed computing network.
References
1. Parasitic computing, Barabasi et al., Nature, 412: 894-897 (2001).
External links
• http://www.nd.edu/~parasite
• http://www.szene.ch/parasit/
Parkerian Hexad
The Parkerian hexad is a set of six elements of information security proposed by Donn B. Parker in 2002. The term
was coined by M. E. Kabay. The Parkerian hexad adds three additional attributes to the three classic security
attributes of the CIA triad (confidentiality, integrity, availability).
The Parkerian Hexad attributes are the following:
• Confidentiality
• Possession or Control
• Integrity
• Authenticity
• Availability
• Utility
These attributes of information are atomic in that they are not broken down into further constituents; they are
non-overlapping in that they refer to unique aspects of information. Any information security breach can be
described as affecting one or more of these fundamental attributes of information.
Confidentiality
Confidentiality refers to limits on who can get what kind of information. For example, executives concerned about
protecting their enterprise’s strategic plans from competitors; individuals are concerned about unauthorized access to
their financial records.
Possession or Control
Possession or Control: Suppose a thief were to steal a sealed envelope containing a bank debit card and (foolishly) its
personal identification number. Even if the thief did not open that envelope, the victim of the theft would
legitimately be concerned that (s)he could do so at any time without the control of the owner. That situation
illustrates a loss of control or possession of information but does not involve the breach of confidentiality.
Integrity
Integrity refers to being correct or consistent with the intended state of information. Any unauthorized modification
of data, whether deliberate or accidental, is a breach of data integrity. For example, data stored on disk are expected
to be stable - they are not supposed to be changed at random by problems with the disk controllers. Similarly,
application programs are supposed to record information correctly and not introduce deviations from the intended
values.
From Donn Parker: "My definition of information integrity comes from the dictionaries. Integrity means that the
information is whole, sound, and unimpaired (not necessarily correct). It means nothing is missing from the
information; it is complete and in intended good order." The author's statement comes close in saying that the
information is in a correct...state. Information may be incorrect or not authentic but have integrity, or correct and
authentic but lacking in integrity.[1]
Authenticity
Authenticity refers to the veracity of the claim of origin or authorship of the information. For example, one method
for verifying the authorship of a hand written document is to compare the handwriting characteristics of the
document to a sampling of others which have already been verified. For electronic information, a digital signature
could be used to verify the authorship of a digital document using public-key cryptography (could also be used to
verify the integrity of the document).
Availability
Availability means having timely access to information. For example, a disk crash and denial-of-service attacks both
cause a breach of availability. Any delay that exceeds the expected service levels for a system can be described as a
breach of availability.
Utility
Utility means usefulness. For example, suppose someone encrypted data on disk to prevent unauthorized access or
undetected modifications – and then lost the decryption key: that would be a breach of utility. The data would be
confidential, controlled, integral, authentic, and available – they just wouldn’t be useful in that form. Similarly,
conversion of salary data from one currency into an inappropriate currency would be a breach of utility, as would the
storage of data in a format inappropriate for a specific computer architecture; e.g., EBCDIC instead of ASCII or
9-track magnetic tape instead of DVD-ROM. A tabular representation of data substituted for a graph could be
described as a breach of utility if the substitution made it more difficult to interpret the data. Utility is often confused
with availability because breaches such as those described in these examples may also require time to work around
the change in data format or presentation. However, the concept of usefulness is distinct from that of availability.
References
[1] Hintzbergen, Jule; Hintzbergen, Kees; Baars, Hans; Smulders, André (2010). Foundations of Information Security Based on Iso27001 and
Iso27002. Best Practice. Van Haren Publishing. p. 13. ISBN 9087535686.
External links
• Admissibility, Authentication, Authorization, Availability, Authenticity model (http://www.schneier.com/blog/archives/2006/08/updating_the_tr.html)
Further reading
• Kabay, M. E. (http://www.mekabay.com/overviews/index.htm). "Parkerian Hexad - Narrated PowerPoint Show" (http://www.mekabay.com/overviews/hexad.ppt).
• Parker, Donn B. (1998). Fighting Computer Crime. New York, NY: John Wiley & Sons. ISBN 0471163783. The work in which Parker introduced this model.
• Parker, Donn B. (http://www.computersecurityhandbook.com/Author-Parker.html) (2002). "Toward a New Framework for Information Security" (http://www.computersecurityhandbook.com/CSH4/Chapter5.html). In Bosworth, Seymour; Kabay, M. E. The Computer Security Handbook (http://www.computersecurityhandbook.com/default.html) (4th ed.). New York, NY: John Wiley & Sons. ISBN 0471412589.
Phoraging
In the field of computer security, phoraging (pronounced "foraging") is the process of collecting data from many
different online sources to build up the identity of someone, with the ultimate aim of committing identity theft.
Along with phishing and pharming, it has been called the "third P" of cybercrime.
Phoraging is a concept similar in many ways to phishing, pharming, and information diving, and is similar to
mosaic theory in finance.
Criminals phorage for information from a variety of different sources, including social networking sites, public
records, phishing attacks, confidential information submitted in an unsecured way to websites, and viruses and
spyware. They put this data together to guess passwords and the answers to security questions, with the ultimate aim
of stealing money. PC Pro referred to this concept in their January 2009 issue.[1] Security vendor VeriSign [2] also
refers to this term,[3] while the UK Office of Fair Trading defines phoraging as a tactic used by "fraudsters who
aggregate personal information from multiple sources with the intent of misusing an individual's identity; a tactic
known as 'phoraging'."[4]
References
[1] http://www.pcpro.co.uk/features/242967/your-private-life-exposed-online.html
[2] http://www.verisign.co.uk/
[3] http://blogs.verisign.com/identity-emea/2008/04/social_networking_and_fraud_ph.php
[4] http://www.oft.gov.uk/shared_oft/reports/consumer_protection/oft921a.pdf
Physical access
Physical access is a term in computer security that refers to the ability of people to physically gain access to a
computer system. According to Gregory White, "Given physical access to an office, the knowledgeable attacker will
quickly be able to find the information needed to gain access to the organization's computer systems and
network."[1]
Attacks and countermeasures
Attacks
Physical access opens up a variety of avenues for hacking.[2] Michael Meyers' Network+ Certification All-in-One
Exam Guide notes that "the best network software security measures can be rendered useless if you fail to physically
protect your systems," since an intruder could simply walk off with a server and crack the password at his
leisure.[3]
Physical access also allows hardware keyloggers to be installed. An intruder may be able to boot from a CD or other
external media and then read unencrypted data on the hard drive.[4] They may also exploit a lack of access control in
the boot loader; for instance, pressing F8 while certain versions of Microsoft Windows are booting, specifying
'init=/bin/sh' as a boot parameter to Linux (usually done by editing the command line in GRUB), etc. One could also
use a rogue device to access a poorly secured wireless network; if the signal were sufficiently strong, one might not
even need to breach the perimeter.[5]
Countermeasures
IT security standards in the United States typically call for physical access to be limited by locked server rooms,
sign-in sheets, etc. Physical access systems and IT security systems have historically been administered by separate
departments of organizations, but are increasingly being seen as having interdependent functions needing a single,
converged security policy
[6]
. An IT department could, for instance, check security log entries for suspicious logons
occurring after business hours, and then use keycard swipe records from a building access control system to narrow
down the list of suspects to those who were in the building at that time. Surveillance cameras might also be used to
deter or detect unauthorized access
[5]
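That correlation step can be sketched with purely hypothetical data: given a suspicious after-hours logon, return
the badge holders whose swipes place them in the building around that time.

from datetime import datetime, timedelta

suspicious_logon = datetime(2011, 3, 5, 2, 14)             # 2:14 am logon
badge_swipes = [("jdoe", datetime(2011, 3, 5, 2, 1)),
                ("asmith", datetime(2011, 3, 4, 17, 40))]  # left previous day

def present_at(t, swipes, window=timedelta(hours=1)):
    """Badge holders whose swipes fall within `window` of time t."""
    return sorted({user for user, ts in swipes if abs(ts - t) <= window})

print(present_at(suspicious_logon, badge_swipes))          # ['jdoe']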
References
[1] White, Gregory: Security+ Certification All-in-One Exam Guide, McGraw-Hill, 2003, p. 388.
[2] An attacker with physical access to a computer may be able to access files and other data (http://support.microsoft.com/kb/818200), Microsoft.
[3] Network+ Certification All-in-One Exam Guide, Michael Meyers, Third Edition, Chapter 17, p. 551, McGraw-Hill Companies, 2004.
[4] Cracking Windows 2000 And XP Passwords With Only Physical Access (http://www.irongeek.com/i.php?page=security/localsamcrack), Irongeek.
[5] Threats to Physical Security (http://searchsecurity.techtarget.com/generic/0,295582,sid14_gci1238092,00.html)
[6] Bridging Physical Access Systems and IT Networks (http://www.technewsworld.com/story/54176.html), David Ting, TechNewsWorld, November 10, 2006.
Polyinstantiation
Polyinstantiation in computer science is the concept of a type (class, database row or otherwise) being instantiated
into multiple independent instances (objects, copies). It may also indicate, as in the case of database
polyinstantiation, that two different instances have the same name (identifier, primary key).
Operating system security
In operating system security, polyinstantiation is the concept of creating a user- or process-specific view of a shared
resource. That is, process A cannot affect process B by writing malicious code to a shared resource, such as the
UNIX directory /tmp.[1] [2]
Polyinstantiation of shared resources has goals similar to those of process isolation, an application of virtual
memory, where processes are assigned their own isolated virtual address space to prevent process A from writing
into the memory space of process B.
Database
In databases, polyinstantiation is database-related SQL (structured query language) terminology. It allows a relation
to contain multiple rows with the same primary key; the multiple instances are distinguished by their security
levels.[3] It occurs because of a mandatory access control policy. Depending on the security level established, one
record contains sensitive information and the other one does not; that is, a user will see the record's information
depending on his/her level of confidentiality, previously dictated by the company's policy.[4]
Consider the following table, where the primary key is Name and λ(x) is the security level:
Polyinstantiation
274
Name    λ(Name)   Age   λ(Age)   λ
Alice   S         18    TS       TS
Blob    S         22    S        S
Blob    S         33    TS       TS
Trudy   TS        15    TS       TS
Although useful from a security standpoint, polyinstantiation raises several problems:
• moral scrutiny, since it involves lying;
• providing consistent views;
• an explosion in the number of rows.
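A small model of the table above (an illustrative Python sketch, not actual SQL) shows how the row returned for
the duplicated primary key depends on the reader's clearance:

LEVELS = {"S": 1, "TS": 2}     # Secret < Top Secret

ROWS = [    # (name, λ(name), age, λ(age), λ)
    ("Alice", "S", 18, "TS", "TS"),
    ("Blob",  "S", 22, "S",  "S"),
    ("Blob",  "S", 33, "TS", "TS"),
    ("Trudy", "TS", 15, "TS", "TS"),
]

def select(name, clearance):
    """Return the highest-level instance of `name` visible at `clearance`."""
    visible = [r for r in ROWS
               if r[0] == name and LEVELS[r[4]] <= LEVELS[clearance]]
    return max(visible, key=lambda r: LEVELS[r[4]], default=None)

print(select("Blob", "S"))     # ('Blob', 'S', 22, 'S', 'S')    cover story
print(select("Blob", "TS"))    # ('Blob', 'S', 33, 'TS', 'TS')  real record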
Cryptography
In cryptography, polyinstantiation is the existence of a cryptographic key in more than one secure physical location.
References
[1] Improve security with polyinstantiation, Using a Pluggable Authentication Module to protect private data. Robb R. Romans, IBM, 26 Feb 2008 (http://www.ibm.com/developerworks/linux/library/l-polyinstantiation/index.html)
[2] Polyinstantiation of directories in an SE Linux system. Russell Coker, System Administrators Guild of Australia 2006 (http://www.coker.com.au/selinux/talks/sage-2006/PolyInstantiatedDirectories.html)
[3] Solutions to the Polyinstantiation Problem. Sushil Jajodia, Ravi S. Sandhu, and Barbara T. Blaustein (http://www.acsac.org/secshelf/book001/21.pdf)
[4] Security in Computing by Charles P. Pfleeger, Shari Lawrence Pfleeger.
Portable Executable Automatic Protection
This article describes an automated technique for protecting Portable Executable files used on the Windows NT
platform. The proposed technique mainly works on the Portable Executable format for 32-bit applications. The
article describes the PE format, illustrating its main structures, followed by an overview of existing protection
techniques; it then illustrates a new technique for packing the PE file in order to protect it against disassembling and
reverse engineering. The protection technique involves a static operation on the file, reversed by a dynamic one at
run time. The static and dynamic operations together provide a combined solution for software protection against
static (automatic) and dynamic reverse engineering. The article studies the effect of the protection on performance
and provides a solution to enhance the results.
Introduction
The Portable Executable format[1] is used to represent executable files on all Windows NT platforms. The PE file
format describes file headers, sections and structures. The file format is important for understanding the loading
process performed by the operating system and the linking mechanism to other existing libraries. A PE file is
composed of a group of headers followed by a set of sections holding code, data and other useful information. The
headers include the DOS Header, NT Headers and Section Headers.[1] [2] Each section header holds information
like the size, the starting position, and the characteristics of the section. For example, the ".text" section (code
section) header provides the RVA (Relative Virtual Address) value, which is used to determine the starting position
of the section in memory; this value is important if the executable file is loaded at a location other than the preferred
location (re-location).[1] [2] Another example is the NT Headers, which provide general information like the PE
signature, target machine type, file characteristics, Image Base, Image Size and Entry Point. The Image Base value
determines the preferred starting address of the image when loaded into memory. The Entry Point determines the
address of the first instruction.
Overview
Normal PE protection uses a packing procedure, which alters the file structures, encrypts and compresses the file
sections, and injects an executable section to perform the unpacking. During run time, the injected section unpacks
the file into memory, corrects the structures and finally jumps to the original entry point in order to start normal
execution. There are many publicly available packers, which are useful to protect the code and data in the file,
though their security on disk and in memory can be compromised.[3] [4] [5] There has been a number of studies
regarding software protection and DRM systems which discuss the problem of securing software and the
intellectual property of a software vendor against reverse engineering and piracy. These studies propose different
techniques, which represent possible methods for securing software. Code obfuscation[6] is considered a common
method for software protection. In[7] Diego et al. present a model for software protection using code obfuscation
and fingerprinting combined with license enforcement. The presented model provides a semi-automated process for
protection that involves the direct interaction of the software developer to modify his written code in order to make
use of the protection process, unlike the fully automated model proposed by this paper. Code obfuscation was
presented in[8] by modifying a simple compiler (tcc) to turn certain unconditional branching instructions into
conditional branching, aiming to mislead automated reverse engineering tools[5] so that they cannot recover the
original code. Another powerful technique for protecting software is self-modifying code, which means that the
code modifies itself during run time, meanwhile keeping it hard to reverse statically.[9] Another useful property of
self-modifying code, presented in,[10] is adding additional strength to software self-checksumming, maintaining
software integrity. White-box cryptography is another method for adding security to software; however, it suffers
from static or dynamic key recovery attacks.[6]
Static protection operations
This section describes the steps required to apply the static protection to executable files. The steps are performed
in the post-build phase; they modify the PE file structures, change its characteristics, alter contents, and inject new
sections. The following sections describe the static modifications applied to the PE file in order to apply the
protection.
Modifying PE structure
This step modifies the values in the PE file headers that describe the properties and characteristics of the file. The
changed values are the Number of Sections, Address of Entry Point, Size of Image and the Data Directories' RVAs
and Sizes (Import, Relocation, COM, etc.). The protection process mainly involves adding a new section (the
Security Section) to the PE file. This section is supplied with the necessary information used to perform the
unpacking (reversing the modifications). The supplied information includes the original PE file headers and
structures, which are removed or altered while applying the protection. Fig1 illustrates the overall structure of the
PE file after the modifications, with the added Security Section.
PE Headers (DOS, NT and Section Headers)
Code
Data
Other Sections
Security Section (new code stub; information used in unpacking)
Fig1. Structure of the PE file after applying the modifications
Modifying import table structure
The protection operation depends mainly on changing the Import Address Table, which lists the imported DLLs and the functions upon which the PE file depends at run time. The protection works by modifying the Import Address Table so that the new table depends on the Unpacking DLL, which works with the Security Section to perform the unpacking operation.
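A hedged sketch of the idea follows; it assumes the replacement import table (with the Unpacking DLL as its first descriptor) has already been written into the Security Section, and the function name and parameters are illustrative:

    #include <windows.h>

    /* Point the PE's import data directory at a replacement import table
       whose first descriptor names the Unpacking DLL, so the Windows
       loader maps that DLL before anything else. `new_import_rva` and
       `new_import_size` describe the table placed in the Security
       Section at protection time. */
    void point_imports_at_unpacker(IMAGE_NT_HEADERS *nt,
                                   DWORD new_import_rva,
                                   DWORD new_import_size)
    {
        IMAGE_DATA_DIRECTORY *dir =
            &nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
        dir->VirtualAddress = new_import_rva;
        dir->Size           = new_import_size;
    }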
Static code redirection
The Static Code Redirection process is very important for providing extra security and protection to the PE file. The operation is similar to the method used in [8]; it aims to redirect certain JMP/CALL instructions in the PE original code towards an Interception Jump Table (IJT) that depends on the Unpacking DLL. Fig2 illustrates the IJT structure, while Fig3 illustrates the instructions used for the redirection in each IJT entry. The static code redirection process works by disassembling the PE code, selecting certain far JMP or CALL instructions, and modifying their target locations to point at the corresponding IJT entries. The Unpacking DLL is responsible for correcting each IJT entry's code stub at run-time (Dynamic Code Redirection) in order to redirect the execution flow towards the correct location.
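The low-level rewrite can be illustrated as follows; this sketch assumes the instruction has already been identified by a disassembler, and the names are illustrative:

    #include <stdint.h>

    /* Rewrite one already-identified 5-byte rel32 CALL (opcode E8) or
       near JMP (E9) so that it targets its IJT entry instead of the
       original destination. A real implementation must disassemble
       first: scanning raw bytes for E8/E9 would also hit data that
       merely looks like an opcode. */
    void redirect_rel32(uint8_t *image, uint32_t instr_rva,
                        uint32_t ijt_entry_rva)
    {
        uint8_t *p = image + instr_rva;   /* assumes RVA == image offset */
        /* The displacement is relative to the end of the instruction. */
        int32_t disp = (int32_t)(ijt_entry_rva - (instr_rva + 5));
        p[1] = (uint8_t)disp;
        p[2] = (uint8_t)(disp >> 8);
        p[3] = (uint8_t)(disp >> 16);
        p[4] = (uint8_t)(disp >> 24);     /* opcode byte p[0] is kept */
    }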
PE file encryption
The protection process should encrypt certain parts of the PE file in order to protect it against disassembly and reverse engineering of the code, whether the file is on disk or in memory during execution. The process encrypts the code and data sections, the original import table and the IJT. The protection process should hide the key somewhere in the PE file, or it can derive the key from parts of the PE file using a key derivation algorithm.[11][12] White-box cryptography[6] is considered a solution to this problem, since it proposes a software-only defence against key recovery. We propose using a double encryption process, combining the derived key with a hardware-based key (stored on, or derived from, a hardware device such as a hardware token, or even a machine identifier). Combining several protection processes on the PE file makes it harder for automated reverse engineering and disassembly tools to reverse the protected code. The code section encryption yields a false program flow under direct disassembly. Encrypting the IJT moves the battle from easy static reverse engineering to the harder dynamic kind. Moreover, linking the protection process to the Unpacking DLL increases the effort required for dynamic reverse engineering.
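The proposed key combination might look like the following illustrative sketch; XOR stands in for whatever combiner a real design would use, and the function name is hypothetical:

    #include <stddef.h>
    #include <stdint.h>

    /* Combine a key derived from the PE file (e.g., via PBKDF2 [11])
       with a hardware-bound value (a token secret or machine
       identifier). XOR is used purely for illustration; a real design
       would feed both inputs into a key derivation function. */
    void combine_keys(const uint8_t *derived, const uint8_t *hw_bound,
                      uint8_t *out, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            out[i] = derived[i] ^ hw_bound[i];
    }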
PUSHFD
PUSHAD
MOV EAX,imm_Redirect
PUSH imm_ImgBase
PUSH imm_Entryndx
CALL EAX
POP EAX
POP EAX
POPAD
POPFD
JMP [TrueRVA]
Fig3. The code stub in each IJT entry; the final JMP instruction redirects execution towards the original instruction
Dynamic unpacking operation
The Dynamic Unpacking operation is responsible for unpacking the protected PE file in memory. The Unpacking DLL is the object that provides the dynamic protection. When the protected application is executed, the Windows Loader automatically loads the Unpacking DLL, because it resides in the modified Import Address Table. The purpose of loading the Unpacking DLL while loading the protected application is to reverse the static protection modifications before the application starts execution. The following steps explain the reversing (unpacking) operations:
1. Go through the debugger detection procedure,[13] and stop the unpacking operation if debugging behaviour is detected.
2. Extract the special PE information from the injected Security Section.
3. Decrypt the code section, the data section, the original import table and the IJT stub located in the Security Section, using a derived (combined) encryption key.
4. Perform the base relocation process[1] if the load address of the PE file in memory differs from the preferred address (Image Base).
5. Load all imported DLLs found in the original Import Address Table located in the injected Security Section, retrieve the real memory addresses of all their imported functions, and update the Import Address Table (IAT).[1][2] This step is associated with an operation called Dynamic PE Infection, described later.
6. Extract all IJT header information and descriptors, and retrieve the correct jump address for every IJT entry.
7. Correct, if necessary, or corrupt the PE file headers in memory.[13]
The Unpacking DLL performs the above operations while loading, though in some cases (such as protected DLLs) it performs them upon receiving a call from the Unpacking Code Stub that resides in the Security Section of the PE file. The static operation changes the original Entry Point to reference the Unpacking Code Stub instead of the original Entry Point. The Unpacking Code Stub calls an exported API from the Unpacking DLL, which performs the unpacking operations described above, and the code stub then ends with a JMP instruction to the OEP (original entry point).[1] Transforming the code stub into self-modifying code[9] can add extra strength to it. One drawback of self-modifying code is that it is used by many computer viruses, which raises the risk of commercial antivirus software flagging the protected application as a virus.
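Step 5 above can be sketched in C against the standard PE import structures; error handling and ordinal imports are omitted, and the function name is illustrative:

    #include <windows.h>

    /* Walk the original import descriptors preserved in the Security
       Section, load each DLL, and write the resolved function addresses
       into the IAT (the FirstThunk array). */
    void restore_iat(BYTE *base, IMAGE_IMPORT_DESCRIPTOR *imp)
    {
        for (; imp->Name != 0; imp++) {
            HMODULE dll = LoadLibraryA((char *)(base + imp->Name));
            IMAGE_THUNK_DATA *lookup =
                (IMAGE_THUNK_DATA *)(base + imp->OriginalFirstThunk);
            IMAGE_THUNK_DATA *iat =
                (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
            for (; lookup->u1.AddressOfData != 0; lookup++, iat++) {
                IMAGE_IMPORT_BY_NAME *byname =
                    (IMAGE_IMPORT_BY_NAME *)(base + lookup->u1.AddressOfData);
                iat->u1.Function =
                    (ULONG_PTR)GetProcAddress(dll, (char *)byname->Name);
            }
        }
    }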
Dynamic PE infection
The purpose of Dynamic PE Infection is to hand the Unpacking DLL control over all loaded modules in the protected application's memory space, and mainly to keep the protected application attached to the Unpacking DLL throughout the entire execution time. The operation works by API interception, whose main concept is to intercept selected system APIs and perform additional operations alongside the regular operation of each API. The dynamic infection works by changing the value of the target API address in the infected module's IAT (Import Address Table) in memory. With this address changed, any CALL instruction that depends on the IAT values will instead call another function (with the same interface and parameters as the original API) exported by the Unpacking DLL. These exported interception APIs perform extra functionality on top of the main functionality of the intercepted API. Fig4 illustrates the interception process resulting from infecting a certain module in memory.
In Fig4, the Unpacking DLL modifies the IAT of the protected PE file in memory by replacing the IAT entry corresponding to a specific API (e.g., the API_1 address) with an address in the Unpacking DLL. When the original PE code starts execution, it makes a call to API_1, as illustrated by the CALL instruction, which references the API address in the IAT. The Windows Loader is normally responsible for updating the IAT with the real addresses of all APIs imported by the PE file. With the address replaced, the CALL instruction executes the interception API_1 in the Unpacking DLL instead of the real API. The interception API is responsible for calling the real API in its system DLL, making the calling process transparent to the protected PE. This dynamic infection operation is applied recursively over all loaded modules in the application's memory space.
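The Fig4 interception can be sketched as follows; the IAT slot is assumed to have been located by walking the module's import descriptors as in the previous sketch, and the API signature and names are illustrative:

    #include <windows.h>

    typedef int (WINAPI *REAL_API)(void *);
    static REAL_API g_real_api;               /* saved genuine address */

    /* Exported by the Unpacking DLL with the same interface as the
       original API; it forwards the call to stay transparent. */
    static int WINAPI intercept_api(void *arg)
    {
        /* ...extra protection work goes here (integrity checks, etc.)... */
        return g_real_api(arg);               /* call the real API */
    }

    void hook_iat_slot(ULONG_PTR *iat_slot)
    {
        DWORD old;
        VirtualProtect(iat_slot, sizeof *iat_slot, PAGE_READWRITE, &old);
        g_real_api = (REAL_API)*iat_slot;     /* remember the real API */
        *iat_slot  = (ULONG_PTR)intercept_api;
        VirtualProtect(iat_slot, sizeof *iat_slot, old, &old);
    }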
Dynamic code redirection
As mentioned under Static Code Redirection, the redirection operation is responsible for redirecting the code execution to its correct sequence; Fig5 illustrates the redirection operation. Code redirection increases the security and protection of the PE file against reverse engineering, cracking and memory dumping.[13] The process keeps the PE file attached to the Unpacking DLL at all times in order to perform the execution correctly, as any attempt to unload the Unpacking DLL, or to remove it from the Import Table of the PE file, leads to false execution of the original code. Although this operation adds more security to the PE file, it also adds overhead to the execution process, especially in the case of huge loops, as in graphics applications. The overhead is high because the operation adds the extra instructions of the IJT code stub and a redirection operation for every modified JMP/CALL instruction. To solve this overhead problem, the Dynamic Code Redirection should provide an algorithm that decreases the overhead on execution while not compromising the security of the PE file. The main target of the redirection algorithm is to decrease the redirection overhead during execution of the original code while maintaining security. The algorithm treats each IJT entry as a stand-alone entity and monitors the number of executions of each entity. At the same time, it monitors the global number of executions (Global Number of Hits) of all entities over the execution time. These two counters are the keys to balancing the speed, the performance and the security of the protected application.
Each entity holds information that describes it: the Entity State, which is Encrypted, Decrypted or Corrected, and the Number of Hits, which records how many times the entity has been executed. The states are as follows:
• Encrypted: The destination RVA in the last JMP instruction of the IJT entry (see Fig3) is encrypted and requires decryption by the Redirect function in order for execution to proceed correctly.
• Decrypted: The destination RVA in the last JMP instruction of the IJT entry is decrypted and requires no decryption, though it may affect the redirection algorithm depending on the number of hits.
• Corrected: The original instruction in the original code has been corrected and will no longer jump to the IJT entry, as its destination RVA has been fixed.
The algorithm modifies each IJT entry according to its Entity State, its Entity Number of Hits, the Global Number of Hits for all entities, the number of Corrected/Decrypted entities per module, and four predefined thresholds:
• Entity Hit Threshold (EHT): implies that the state of an entity should change to Corrected if its Number of Hits exceeds this threshold. The algorithm compares this threshold to the Number of Hits of the executed entity.
• Global Hit Threshold (GHT): implies that the state of the decrypted IJT entry with the minimum Number of Hits should change to Encrypted. The algorithm compares this threshold to the Global Number of Hits.
• Correction Threshold (CT): implies that the total number of corrected instructions in memory is high and the algorithm should encrypt the entry with the minimum Number of Hits. This threshold can be a percentage of the total number of entries (e.g., 25%).
• Decryption Threshold (DT): implies that the total number of decrypted entries is high and the algorithm should encrypt the entry with the minimum Number of Hits. This threshold can be a percentage of the total number of entries (e.g., 50%).
The Entity Hit Threshold mainly aims to speed up execution of the original code by correcting an instruction's destination address, so that the instruction no longer jumps to the IJT entry, on the assumption that exceeding this threshold indicates continuous execution of that specific instruction. Meanwhile, the Global Hit Threshold, Correction Threshold and Decryption Threshold aim to maintain security by re-encrypting decrypted or corrected entries. Fig6 illustrates the state diagram of an IJT entry.
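A sketch of the bookkeeping implied by the four thresholds follows; the per-entry encrypt/decrypt/correct helpers and the concrete threshold values are assumptions for illustration:

    #include <stddef.h>

    typedef enum { ENCRYPTED, DECRYPTED, CORRECTED } EntryState;
    typedef struct { EntryState state; unsigned hits; } IJTEntry;

    void encrypt_entry(IJTEntry *e);   /* re-encrypt target RVA    */
    void decrypt_entry(IJTEntry *e);   /* decrypt target RVA       */
    void correct_entry(IJTEntry *e);   /* fix original instruction */

    enum { EHT = 50, GHT = 1000 };     /* entity / global hit thresholds  */
    static const double CT = 0.8;      /* correction threshold (fraction) */
    static const double DT = 0.9;      /* decryption threshold (fraction) */

    void on_entry_hit(IJTEntry *e, IJTEntry *all, size_t n, unsigned *global)
    {
        e->hits++; (*global)++;
        if (e->state == ENCRYPTED) { decrypt_entry(e); e->state = DECRYPTED; }
        if (e->state == DECRYPTED && e->hits > EHT) {
            correct_entry(e); e->state = CORRECTED;  /* skip the IJT now */
        }

        /* Security pressure: when too much is decrypted/corrected, or
           the global hit count grows, re-encrypt the least-used entry. */
        size_t dec = 0, cor = 0, coldest = 0;
        for (size_t i = 0; i < n; i++) {
            if (all[i].state == DECRYPTED) dec++;
            if (all[i].state == CORRECTED) cor++;
            if (all[i].hits < all[coldest].hits) coldest = i;
        }
        if (*global > GHT || dec > DT * n || cor > CT * n) {
            encrypt_entry(&all[coldest]);
            all[coldest].state = ENCRYPTED;
            all[coldest].hits  = 0;
            *global = 0;
        }
    }

Raising EHT trades speed for security (hot entries are corrected later), while lowering CT and DT re-encrypts entries more aggressively; this is exactly the balance the tests below explore.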
Protected application performance
This section presents test results for protected applications, comparing their performance with their performance before protection was applied. It also introduces test results that assess the redirection algorithm and its effect on the performance and security of the protected application. The first performance test assessed the executable loading overhead. Since the protected application is statically linked to the Unpacking DLL, loading the Unpacking DLL adds a certain overhead to the loading time of the executable. This overhead adds a delay to the original loading time of all DLLs in the Import Table, due to the Unpacking, PE Infection and Redirection operations performed by the Unpacking DLL; this delay is defined as the Unpack Delay Duration, and the first performance test measures it. The test works by protecting an executable file and measuring the duration of the load operations performed by the Unpacking DLL. The measurements are applied to different numbers of modules in one application: for example, if executable A depends on DLLs B and C, then the measurements are applied to protected A only, then to A and B, then to A, B and C, and so forth. Fig7 shows a graph of the average Unpack Delay Duration per number of protected modules. The test was applied to different applications and different numbers of their dependencies. It must be stated that there cannot be a fixed Unpack Delay Duration for all applications and modules, as the complexity and the number of dependencies differ from one module to another; the graph can therefore differ depending on the applications being protected. The test results show that the maximum average Unpack Delay Duration may reach 300 milliseconds for a set of four protected modules.
The second performance test assesses the redirection algorithm. A custom application written in x86 assembly language was used to provide a more specific test bench for the effect of the algorithm. The custom application contains massive loops with 24 function calls (CALL instructions) that are modified by the Static Code Redirection process. Before protection, the custom application takes 9,500 milliseconds to execute completely. The test assessed the performance and security of the protected application after execution finished, with the thresholds adjusted to different values. The performance test checks the total execution time of the application, while the security test checks the state of all IJT entries after execution finishes. A software cracker using dynamic reverse engineering could simply wait until the application finishes execution and then dump it from memory to disk; the code redirection security should make this operation useless, as the dumped code section would give false execution. Table1 presents the final test results for the protected application with different threshold values. The results show that changing the threshold values affects both the performance and the security of the protected application. The first entry in Table1 shows the result of adjusting the thresholds to maximise security: by the end of the execution 30 IJT entries remained encrypted, but the performance was weak, as the execution duration increased by 177% over the original time. The second entry shows the result of slightly decreasing security for the sake of performance, which yielded 31 decrypted IJT entries with improved performance of about one extra second over the original time. Though this performance result is considered acceptable, the application's security is at stake, as all the IJT entries are decrypted and none remain encrypted. The third entry shows the result of adjusting the threshold values to increase security without severely affecting performance, which increased the number of encrypted IJT entries to four. The fifth entry is the actual test of the redirection algorithm: all thresholds were adjusted to maintain security while enhancing performance. The performance was enhanced by setting the Entity Hit Threshold to 50, which caused entries' states to become Corrected, while setting the Decryption and Correction thresholds to 0.9 and 0.8 respectively maintained security by keeping 8 IJT entries encrypted and 3 decrypted. This adjustment added only 600 milliseconds (a 6.5% increase) over the original time, giving the fastest performance; it is better than the second entry's thresholds while maintaining more security.
The Redirection Algorithm poses two problems that can be targets of future work. The first is how to improve the selection of the JMP/CALL instructions to be redirected during Static Code Redirection, aiming to provide the highest security with optimal performance. The second is to find a mathematical model for the Dynamic Code Redirection operation that can predict the behavior of the protected application with respect to both security and performance.
References
[1] Microsoft Portable Executable and Common Object File Format (COFF) Specification, Microsoft
[2] Matt Pietrek, “An In-Depth Look into the Win32 Portable Executable File Format”, MSDN Magazine, February 2002, http://msdn.microsoft.com/en-us/magazine/cc301805.aspx
[3] Li Sun, Tim Ebringer, Serdar Boztas, “Hump-and-dump: efficient generic unpacking using an ordered address execution histogram”, Witham Laboratories, Australia, http://www.datasecurity-event.com/uploads/hump_dump.pdf
[4] Matias Madou, Bertrand Anckaert, Bjorn De Sutter, Koen De Bosschere, “Hybrid Static-Dynamic Attacks against Software Protection Mechanisms”, Proceedings of the 5th ACM Workshop on Digital Rights Management, November 2005, Alexandria, VA, USA
[5] Christopher Kruegel, William Robertson, Fredrik Valeur, Giovanni Vigna, “Static disassembly of obfuscated binaries”, Proceedings of the 13th USENIX Security Symposium, p. 18, August 09-13, 2004, San Diego, CA
[6] P.C. van Oorschot, “Revisiting Software Protection”, Proc. 6th Int'l Conf. Information Security (ISC 03), LNCS 2851, Springer-Verlag, 2003, pages 1–13
[7] Diego Bendersky, Ariel Futoransky, Luciano Notarfrancesco, Carlos Sarraute, and Ariel Waissbein, “Advanced Software Protection Now”, CoreLabs Technical Report, 2003
[8] Chris Coakley, Jay Freeman, Robert Dick, “Next-Generation Protection against Reverse Engineering”, Anacapa Sciences Inc., http://www.anacapasciences.com/publications/protecting_software2005.02.09.pdf, 2005
[9] Bertrand Anckaert, Matias Madou, and Koen De Bosschere, “A Model for Self-Modifying Code”, Lecture Notes in Computer Science, Springer Berlin/Heidelberg, Volume 4437/2007, pages 232–248, 2007
[10] Jonathon T. Giffin, Mihai Christodorescu, Louis Kruger, “Strengthening Software Self-Checksumming via Self-Modifying Code”, Proceedings of the 21st Annual Computer Security Applications Conference, pp. 23–32, December 05-09, 2005
[11] B. Kaliski, RFC 2898 - PKCS #5: “Password-Based Cryptography Specification Version 2.0”, RSA Laboratories
[12] Shakir M. Hussain and Hussein Al-Bahadili, “A Password-Based Key Derivation Algorithm Using the KBRP Method”, American Journal of Applied Sciences, pp. 777–782, 2008
[13] Peter Ferrie, “Anti-unpacker tricks”, Microsoft, USA, http://www.datasecurity-event.com/uploads/unpackers.pdf
Pre-boot authentication
Pre-Boot Authentication (PBA) or Power-On Authentication (POA)[1] serves as an extension of the BIOS or boot firmware and guarantees a secure, tamper-proof environment external to the operating system as a trusted authentication layer. The PBA prevents anything, including the operating system, from being read from the hard disk until the user has confirmed that he or she has the correct password or other credentials.[2]
Benefits of Pre-Boot Authentication
• Full disk encryption outside of the operating system level[2]
• Encryption of temporary files
• Data-at-rest protection
How Pre-Boot Authentication Works
Generic Boot Sequence
1. Basic Input/Output System (BIOS)
2. Master boot record (MBR) partition table
3. Pre-boot authentication (PBA)
4. Operating system (OS) boots
The PBA environment prevents Windows or any other operating system from loading until the user has confirmed that he or she has the correct credentials. That trusted layer eliminates the possibility that one of the millions of lines of OS code could compromise the privacy of personal or company data.
Pre-Boot Authentication Technologies
Combinations with Full Disk Encryption
Pre-Boot Authentication is generally provided by a variety of full disk encryption vendors, but can be installed
separately. Some FDE solutions can function without Pre-Boot Authentication, such as hardware-based full disk
encryption. However, without some form of authentication, encryption provides little protection.
Authentication Methods
The standard complement of authentication methods exists for Pre-Boot Authentication, including:
1. Something you know (e.g., username/password)
2. Something you have (e.g., a smart card or other token)
3. Something you are (e.g., biometric data)
References
[1] "Sophos brings enterprise-level encryption to the Mac" (http:/ / www.networkworld.com/ news/ 2010/
080210-sophos-brings-enterprise-level-encryption-to.html?source=nww_rss). Network World. August 2, 2010. . Retrieved 2010-08-03.
[2] "Pre-Boot Authentication" (http:// www.secude. com/ html/ ?id=1376). SECUDE. February 21, 2008. . Retrieved 2008-02-22.
Presumed security
Presumed security is a principle in security engineering: a system is safe from attack because attackers assume, on the basis of probability, that it is secure. Presumed security is the opposite of security through obscurity. A system relying on security through obscurity may have actual security vulnerabilities, but its owners or designers deliberately make the system more complex in the hope that attackers are unable to find a flaw. Conversely, a system relying on presumed security makes no attempt to address its security flaws, which may be publicly known, but instead relies upon potential attackers simply assuming that the target is not worth attacking. The reasons for an attacker to make this assumption may range from personal risk (the attacker believes the system owners can easily identify, capture and prosecute them) to technological knowledge (the attacker believes the system owners have sufficient knowledge of security techniques to ensure no flaws exist, rendering an attack moot).
Although this approach to security is implicitly understood by security professionals, it is rarely discussed or documented. The phrase "presumed security" appears to have been coined by the security commentary website Zero Flaws.[1] The article uses the Royal Military Academy Sandhurst as an example, focusing on the apparent lack of entry security and contrasting it with the presumed security a military installation enjoys. The article also details the flaws inherent in a trust seal such as the VeriSign Secure Site seal, and explains why this presumed-security approach is actually detrimental to an overall security posture.
References & notes
[1] Zero Flaws: Presumed Security (http://www.zeroflaws.net/presumedsecurity)
Principle of least privilege
In information security, computer science, and other fields, the principle of least privilege, also known as the principle of minimal privilege or just least privilege, requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user or a program, depending on the layer under consideration) must be able to access only the information and resources that are necessary for its legitimate purpose.[1][2]
In other words, this means giving a user only those privileges which are absolutely essential to do his or her work. For example, a backup user does not need to install software; hence the backup user has rights only to run backup and backup-related applications, and any other privileges, such as installing software, are blocked. The principle also applies to a home PC user who always works in a normal user account and opens a password-protected administrator account, which has greater access, only when the situation absolutely demands it.
When applied to users, the terms least user access or least-privileged user account (LUA) are also used, referring
to the concept that all users at all times should run with as few privileges as possible, and also launch applications
with as few privileges as possible. LUA bugs occur when applications do not work correctly without elevated
privileges.
Usage
The principle of least privilege is widely recognized as an important design consideration in enhancing the protection
of data and functionality from faults (fault tolerance) and malicious behavior (computer security).
The principle of least privilege is also known as the principle of least authority (POLA).
The kernel always runs with maximum privileges, since it is the operating system core and has hardware access. One of the principal responsibilities of an operating system, particularly a multi-user operating system, is management of the hardware's availability and of requests to access it from running processes. When the kernel crashes, the mechanisms by which it maintains state also fail. Even if there is a way for the CPU to recover without a hard reset, the code that resumes execution is not always what it should be. Security continues to be enforced, but the operating system cannot respond to the failure properly, because detection of the failure was not possible: kernel execution either halted, or the program counter resumed execution from somewhere in an endless and usually non-functional loop.
If execution picks up after the crash by loading and running trojan code, the author of the trojan code can usurp control of all processes. The principle of least privilege forces code to run with the lowest privilege/permission level possible so that, in the event this occurs, or even if execution picks up from an unexpected location, whatever resumes execution does not have the ability to do bad things. One method of accomplishing this is implemented in the microprocessor hardware: in the x86 architecture, the manufacturer designed four running "modes", ring 0 through ring 3. (This terminology can be confusing, because in certain OS variants "mode" refers cumulatively to the state of the set of bits associated with a given resource.)
Least privilege is widely misunderstood and, in particular, is almost always confused with the Trusted Computer
System Evaluation Criteria (TCSEC) concept of trusted computing base (TCB) minimization. Minimization is a far
more stringent requirement that is only applicable to the functionally strongest assurance classes, viz., B3 and A1
(which are evidentiarily different but functionally identical).
Least privilege is often associated with privilege bracketing, that is, assuming necessary privileges at the last
possible moment and dismissing them as soon as no longer strictly necessary, therefore ostensibly avoiding fallout
from erroneous code that unintentionally exploits more privilege than is merited. Least privilege has also been
interpreted in the context of distribution of discretionary access control (DAC) permissions, for example asserting
that giving user U read/write access to file F violates least privilege if U can complete his authorized tasks with only
read permission.
Principle of least privilege
284
As implemented in some operating systems, processes execute with a potential privilege set and an active privilege set. Such privilege sets are inherited from the parent, as determined by the semantics of fork(). An executable file that performs a privileged function, thereby technically constituting a component of the TCB and concomitantly termed a trusted program or trusted process, may also be marked with a set of privileges, a logical extension of the notions of set-user-ID and set-group-ID. The inheritance of file privileges by a process is determined by the semantics of the exec() family of system calls. The precise manner in which potential process privileges, actual process privileges, and file privileges interact can become complex. In practice, least privilege is applied by forcing a process to run with only those privileges required by its task. Adherence to this model is quite complex as well as error-prone.
Historically, the oldest instance of least privilege is probably the source code of login.c, which begins execution with super-user permissions and, the instant they are no longer necessary, dismisses them via setuid() with a non-zero argument.
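The bracketing pattern can be sketched as follows; this is a minimal illustration of the idea, not the historical login.c source:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* A setuid-root program holds root only for the privileged step,
       then permanently drops to the invoking user's (non-zero) uid. */
    int main(void)
    {
        FILE *f = fopen("/etc/shadow", "r"); /* privileged step: needs root */

        if (setuid(getuid()) != 0) {         /* drop root as soon as possible */
            perror("setuid");
            return 1;
        }
        /* From here on the process runs with the invoking user's
           privileges and cannot regain root. */
        if (f) fclose(f);
        return 0;
    }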
Benefits
• Better system stability. When code is limited in the scope of changes it can make to a system, it is easier to test its possible actions and interactions with other applications. In practice, for example, applications running with restricted rights will not have access to perform operations that could crash a machine or adversely affect other applications running on the same system.
• Better system security. When code is limited in the system-wide actions it may perform, vulnerabilities in one
application cannot be used to exploit the rest of the machine. For example, Microsoft states “Running in standard
user mode gives customers increased protection against inadvertent system-level damage caused by "shatter
attacks" and malware, such as root kits, spyware, and undetectable viruses”.
• Ease of deployment. In general, the fewer privileges an application requires, the easier it is to deploy within a larger environment. This usually results from the first two benefits; applications that install device drivers or require elevated security privileges typically have additional deployment steps. For example, on Windows a solution with no device drivers can be run directly with no installation, while device drivers must be installed separately using the Windows installer service in order to grant the driver elevated privileges.[3]
Limitations
In practice, true least privilege is neither definable nor possible to enforce. Currently, there is no method of evaluating a process to determine the least amount of privilege it will need to perform its function, because it is not possible to know in advance all the values of the variables it may process, the addresses it will need, or the precise times at which such things will be required. Currently, the closest practical approach is to eliminate those privileges that can be manually evaluated as unnecessary; the resulting set of privileges still exceeds the true minimum required by the process.
Another limitation is the granularity of control that the operating environment has over privileges for an individual process.[4] In practice, it is rarely possible to control a process' access to memory, processing time, I/O device addresses or modes with the precision needed to facilitate only the precise set of privileges a process will require.
History
The original formulation is from Jerome Saltzer:
Every program and every privileged user of the system should operate using the least amount of privilege necessary
to complete the job. (Protection and the Control of Information Sharing in Multics, CACM 1974, volume 17, issue 7,
page 389)
Peter J. Denning, in his paper "Fault Tolerant Operating Systems", set it in a broader perspective among four fundamental principles of fault tolerance.
Dynamic assignment of privileges was discussed earlier by Roger Needham in 1972.[5][6]
References
[1] Saltzer 75
[2] Denning 76
[3] Aaron Margosis (August 2006). "Problems of Privilege: Find and Fix LUA Bugs" (http://technet.microsoft.com/en-us/magazine/cc160944.aspx). Microsoft.
[4] Matt Bishop, Computer Security: Art and Science (https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html), Boston, MA: Addison-Wesley, 2003. pp. 343–344, cited in Barnum & Gegick 2005
[5] Roger Needham, "Protection systems and protection implementations", Proc. 1972 Fall Joint Computer Conference, AFIPS Conf. Proc., vol. 41, pt. 1, pp. 571–578
[6] Schroeder, Least Privilege and More (http://www.cs.cornell.edu/fbs/publications/leastPrivNeedham.pdf)
• Ben Mankin, The Formalisation of Protection Systems, Ph.D. thesis, University of Bath, 2004
• P. J. Denning (December 1976). "Fault tolerant operating systems" (http://portal.acm.org/citation.cfm?id=356680). ACM Computing Surveys 8 (4): 359–389. doi:10.1145/356678.356680. ISSN 0360-0300.
• Jerry H. Saltzer, Mike D. Schroeder (September 1975). "The protection of information in computer systems" (http://web.mit.edu/Saltzer/www/publications/protection/). Proceedings of the IEEE 63 (9): 1278–1308. doi:10.1109/PROC.1975.9939.
• Deitel, Harvey M. An Introduction to Operating Systems (http://portal.acm.org/citation.cfm?id=79046) (revised first ed.). Addison-Wesley. p. 673. ISBN 0-201-14502-2. Page 31.
External links
• The Saltzer and Schroeder paper cited in the references (http://web.mit.edu/Saltzer/www/publications/protection/)
• NSA (the implementers of SELinux) on the principle of least privilege (http://cyberforge.com/weblog/aniltj/archive/2004/05/26/544.aspx)
• A discussion of the implementation of the principle of least privilege in Solaris (http://www.sun.com/bigadmin/features/articles/least_privilege.html)
• "Proof that LUA makes you safer" by Dana Epp (http://silverstr.ufies.org/blog/archives/000913.html)
• Applying the Principle of Least Privilege to User Accounts on Windows XP, Microsoft (http://technet.microsoft.com/en-us/library/bb456992.aspx)
• Privilege Bracketing in the Solaris 10 Operating System, Sun Microsystems (http://wikis.sun.com/display/BluePrints/Privilege+Bracketing+in+the+Solaris+10+Operating+System)
Privilege Management Infrastructure
Privilege Management is the process of managing user authorisations based on the ITU-T Recommendation X.509.
The 2001 edition of X.509[1] specifies most (but not all) of the components of a Privilege Management Infrastructure (PMI), based on X.509 attribute certificates (ACs). Later editions of X.509 (2005 and 2009) have added further components to the PMI, including a delegation service (in 2005[2]) and interdomain authorisation (in the 2009 edition[3]).
Privilege Management Infrastructures (PMIs) are to authorisation what Public Key Infrastructures (PKIs) are to
authentication. PMIs use attribute certificates (ACs) to hold user privileges, in the form of attributes, instead of
public key certificates (PKCs) to hold public keys. PMIs have Sources of Authority (SoAs) and Attribute Authorities
(AAs) that issue ACs to users, instead of Certification Authorities (CAs) that issue PKCs to users. Usually PMIs rely
on an underlying PKI, since ACs have to be digitally signed by the issuing AA, and the PKI is used to validate the
AA's signature.
An X.509 AC is a generalisation of the well known X.509 public key certificate (PKC), in which the public key of
the PKC has been replaced by any set of attributes of the certificate holder (or subject). Therefore one could in
theory use X.509 ACs to hold a user's public key as well as any other attribute of the user. (In a similar vein, X.509
PKCs can also be used to hold privilege attributes of the subject, by adding them to the subject directory attributes
extension of an X.509 PKC). However, the life cycles of public keys and user privileges are usually very different, and therefore it is not usually a good idea to combine both in the same certificate. Similarly, the authority that
assigns a privilege to someone is usually different from the authority that certifies someone's public key. Therefore it
isn't usually a good idea to combine the functions of the SoA/AA and the CA in the same trusted authority. PMIs
allow privileges and authorisations to be managed separately from keys and authentication.
The first open source implementation of an X.509 PMI was built with funding from the EC PERMIS project, and the software is available from [4]. A description of the implementation can be found in [5].
X.509 ACs and PMIs are used today in Grids (see Grid computing) to assign privileges to users and to carry those privileges around the Grid. In the most popular Grid privilege management system today, called VOMS,[6] user privileges, in the shape of VO memberships and roles, are placed inside an X.509 AC by the VOMS server, signed by the VOMS server, and then embedded in the user's X.509 proxy certificate for carrying around the Grid.
Because of the rise in popularity of XML SOAP based services, SAML attribute assertions are now more popular
than X.509 ACs for transporting user attributes. However, they both have similar functionality, which is to strongly
bind a set of privilege attributes to a user.
References
[1] ISO 9594-8/ITU-T Rec. X.509 (2001): The Directory: Public-key and attribute certificate frameworks
[2] ISO 9594-8/ITU-T Rec. X.509 (2005): The Directory: Public-key and attribute certificate frameworks
[3] FPDAM 2 text of Enhancements to Support Recognition of Authority Between PMIs
[4] http://sec.cs.kent.ac.uk/permis/
[5] D. W. Chadwick, A. Otenko, “The PERMIS X.509 Role Based Privilege Management Infrastructure”, Future Generation Computer Systems, 936 (2002) 1–13, December 2002. Elsevier Science BV.
[6] Alfieri, R., Cecchini, R., Ciaschini, V., Dell'Agnello, L., Frohner, A., Lorentey, K., Spataro, F., “From gridmap-file to VOMS: managing authorization in a Grid environment”, Future Generation Computer Systems, Vol. 21, No. 4, pp. 549–558, April 2005
Privileged Identity Management
Privileged Identity Management (PIM) is a domain within Identity Management focused on the special
requirements of powerful accounts within the IT infrastructure of an enterprise. It is frequently used as an
Information Security and governance tool to help companies in meeting compliance regulations and to prevent
internal data breaches through the use of privileged accounts. The management of privileged identities can be
automated to follow pre-determined or customized policies and requirements for an organization or industry.
See also privileged password management: the usual strategy for securing privileged identities is to periodically scramble their passwords, securely store the current password values, and control disclosure of those passwords.
Types of Privileged Identities
The term “Privileged Identities” refers to any type of user or account that holds special or extra permissions within
the enterprise systems. Privileged identities are usually categorized into the following types:
• Generic/Shared Administrative Accounts – the non-personal accounts that exist in virtually every device or
software application. These accounts hold “super user” privileges and are often shared among IT staff. Some
examples are: Windows Administrator user, UNIX root user, and Oracle SYS account.
• Privileged Personal Accounts – the powerful accounts that are used by business users and IT personnel. These
accounts have a high level of privilege and their use (or misuse) can significantly affect the organization’s
business. Some examples are: the CFO’s user, DBA user.
• Application Accounts – the accounts used by applications to access databases and other applications. These
accounts typically have broad access to underlying business information in databases.
• Emergency Accounts – special generic accounts used by the enterprise when elevated privileges are required to
fix urgent problems, such as in cases of business continuity or disaster recovery. Access to these accounts
frequently requires managerial approval. Also called: fire-call IDs, break-glass users, etc.
Special Requirements of Privileged Identities
A Privileged Identity Management technology needs to accommodate the special needs of privileged accounts, including their provisioning and life cycle management, authentication, authorization, password management and monitoring.
• Provisioning and life cycle management – Handles the access permissions of a personal user to shared/generic
privileged accounts based on roles and policies.
• Authentication – controls the strong authentication of privileged identities; in particular, it provides applications with a secure alternative to static passwords.
• Authorization – manages powerful permissions and the workflow of providing them, sometimes on-demand, to
privileged identities.
• Password Management – enforces password policies on Privileged Identities, which unlike regular identities may
not be associated with a single person or may be shared among a few.
• Auditing – provides the detailed auditing for actions taken by privileged users. This may include recording of the
user’s session as well as creating correlation between a generic/shared account and a person.
Privileged Identity Management
288
Risks of Unmanaged Privileged Identities
A 2009 report prepared for a US congressional committee by Northrop Grumman Corporation[1] details how US
corporate and government networks are compromised by overseas attackers who exploit unsecured privileged
identities. According to the report, "US government and private sector information, once unreachable or requiring
years of expensive technological or human asset preparation to obtain, can now be accessed, inventoried, and stolen
with comparative ease using computer network operations tools."
The intruders profiled in the report combine zero-day vulnerabilities developed in-house with clever social exploits
to gain access to individual computers inside targeted networks. Once a single computer is compromised, the
attackers exploit "highly privileged administrative accounts" throughout the organization until the infrastructure is
mapped and sensitive information can be extracted quickly enough to circumvent conventional safeguards.
Privileged account passwords that are secured by a privileged identity management framework so as to be cryptographically complex, frequently changed, and not shared among independent systems and applications offer a means to mitigate the threat to other computers that arises when a single system on a network is compromised.[2]
Privileged Identity Management Software
Because common identity access management frameworks do not manage or control privileged identities,[2] privileged identity management software began to emerge after the year 2000.
Privileged identity management software frameworks manage each of the special requirements outlined above, including discovery, authentication, authorization, password management with scheduled changes, auditing and compliance reporting. The frameworks generally require administrators to check out privileged account passwords before each use, prompting requesters to document the reason for each access and re-randomizing the password promptly after use.
In doing so, privileged identity management software can guard against undocumented access to configuration settings and private data, enforce the provisions of IT service management practices such as ITIL,[3] and provide definitive audit trails to prove compliance with standards such as HIPAA 45 § 164.308(1)(D) and PCI DSS 10.2. In addition, the more advanced frameworks also perform discovery of interdependent services, synchronizing password changes among interdependent accounts to avoid service disruptions that would otherwise result.
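The check-out cycle described above can be sketched as a toy program; every name here is hypothetical, and rand() merely stands in for the cryptographic generator a real product would use:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Generate a new random password; use a CSPRNG in practice. */
    static void randomize_password(char *pw, size_t len)
    {
        static const char cs[] =
            "ABCDEFGHJKMNPQRSTUVWXYZabcdefghkmnpqrstuvwxyz23456789!#%+";
        for (size_t i = 0; i + 1 < len; i++)
            pw[i] = cs[rand() % (sizeof cs - 1)];
        pw[len - 1] = '\0';
    }

    int main(void)
    {
        char pw[17];
        srand((unsigned)time(NULL));
        randomize_password(pw, sizeof pw);

        /* Check-out: record who is taking the password and why. */
        printf("AUDIT: alice checked out 'root' (reason: patch web01)\n");
        printf("disclosed: %s\n", pw);

        /* Check-in: re-randomize promptly so the disclosed value expires. */
        randomize_password(pw, sizeof pw);
        printf("AUDIT: 'root' password re-randomized after use\n");
        return 0;
    }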
References
1. Capability of the People's Republic of China to Conduct Cyber Warfare and Computer Network Exploitation, Northrop Grumman Corporation (http://www.uscc.gov/researchpapers/2009/NorthropGrumman_PRC_Cyber_Paper_FINAL_Approved Report_16Oct2009.pdf)
2. Mismanaged Privileged Accounts: A New Threat To Your Sensitive Data, Chris Stoneff (http://tek-tips.nethawk.net/blog/mismanaged-privileged-accounts-a-new-threat-to-your-sensitive-data)
3. Behr, Kim and Spafford, The Visible Ops Handbook, p. 28.
External links
• "Privileged Identities Explained" Video (http:// www.youtube. com/ watch?v=5oyRwRDWGgY) (slide
presentation with voice-over from Lieberman Software)
• “Take Control of Your Business with Privilege Centric Risk Assessment” Whitepaper (http:/ / www.cyber-ark.
com/wikipdf/got-privilege-pdf.htm) (white paper plus sales pitch from Cyber-Ark)
• “Secure Your Cloud and Outsourced Business with Privileged Identity Management” Whitepaper (http:/ / www.
cyber-ark.com/ wikipdf/secure-your-cloud-pdf.htm) (white paper plus sales from Cyber-Ark)
• "Best Practices for Managing Privileged Passwords" Whitepaper (http:/ / privileged-password-manager.
hitachi-id.com/ docs/ privileged-password-management-best-practices.html) (white paper; no sales pitch from
Hitachi ID Systems)
Proof-carrying code
Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an
application via a formal proof that accompanies the application's executable code. The host system can quickly
verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to
determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety, i.e.
preventing buffer overflows and other vulnerabilities common in some programming languages.
Proof-carrying code was originally described in 1996 by George Necula and Peter Lee.
Packet filter example
The original publication on proof-carrying code in 1996[1] used packet filters as an example: a user-mode application hands a function written in machine code to the kernel that determines whether or not the application is interested in processing a particular network packet. Because the packet filter runs in kernel mode, it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access (software fault isolation), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches all have severe performance disadvantages for code run as frequently as a packet filter.
With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, that it will not access memory outside of the packet and its scratch memory area. A theorem prover or
certifying compiler is used to show that the machine code satisfies this policy. The steps of this proof are recorded
and attached to the machine code which is given to the kernel. The kernel can then rapidly validate the proof,
allowing it to thereafter run the machine code without any additional checks. If a malicious party modifies either the
machine code or the proof, the resulting proof-carrying code is either invalid or harmless (still satisfies the security
policy).
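The proof machinery is beyond a short example, but the kernel-side idea of mechanically checking received code against the published policy can be sketched for a hypothetical, deliberately tiny filter bytecode:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Toy check for a made-up filter bytecode whose only memory
       instruction is a packet load at a fixed offset: a filter that
       passes can run without per-access runtime checks. Real PCC ships
       a formal proof and a generic proof checker instead of this
       fixed-form verifier. */
    typedef enum { OP_LOAD, OP_ACCEPT, OP_REJECT } Op;
    typedef struct { Op op; uint32_t offset; } Insn;

    bool filter_obeys_policy(const Insn *code, size_t n, size_t packet_len)
    {
        for (size_t i = 0; i < n; i++)
            if (code[i].op == OP_LOAD && code[i].offset >= packet_len)
                return false;      /* would read outside the packet */
        return true;
    }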
References
[1] Necula, G. C. and Lee, P. 1996. Safe kernel extensions without run-time checking. SIGOPS Operating Systems Review 30, SI (Oct. 1996), 229–243.
• George C. Necula and Peter Lee. Proof-Carrying Code (http://www.eecs.berkeley.edu/~necula/Papers/tr96-165.ps.gz). Technical Report CMU-CS-96-165, November 1996. (62 pages)
• George C. Necula and Peter Lee. Safe, Untrusted Agents Using Proof-Carrying Code (http://www.cs.berkeley.edu/~necula/Papers/pcc_lncs98.ps). Mobile Agents and Security, Giovanni Vigna (Ed.), Lecture Notes in Computer Science, Vol. 1419, Springer-Verlag, Berlin, ISBN 3-540-64792-9, 1998.
• George C. Necula. Compiling with Proofs (http://www.cs.berkeley.edu/~necula/Papers/thesis.pdf). PhD thesis, School of Computer Science, Carnegie Mellon University, September 1998.
Public computer
A public computer (or public access computer) is any of various computers available in public areas. Some places
where public computers may be available are libraries, schools, or facilities run by government.
Public computers share hardware and software components similar to personal computers; however, the role and function of a public access computer is entirely different. A public access computer is used by many different untrusted individuals throughout the course of the day. The computer must be locked down and secured against both intentional and unintentional abuse. Users typically do not have authority to install software or change settings. A personal computer, in contrast, is typically used by a single responsible user, who can customize the machine's behavior to their preferences.
Public access computers are often provided with tools such as a PC reservation system to regulate access.
The world's first public access computer center was the Marin Computer Center in California, co-founded by David and Annie Fox in 1977.[1]
Public computers in the United States
Library computers
In the United States and Canada, almost all public libraries have computers available for the use of patrons, though some libraries impose time limits so that other patrons can take turns and the library stays less busy. Users are often allowed to print documents they have created on these computers, though sometimes for a small fee. When using these computers, it is wise to bring a USB flash drive for taking digital notes or carrying files home if the need arises.
School computers
The U.S. government has given money to many school boards to purchase computers for educational applications. There is usually Internet access on these machines, but some schools deploy blocking services to limit the websites students can access to educational resources, such as Wikipedia or Google. In addition to controlling the content students view, such blocks can help keep the computers safe by preventing students from downloading malware and other threats. However, the effectiveness of such content filtering is questionable, since it can easily be circumvented using proxy websites or virtual private networks; on some weakly secured systems, merely knowing the IP address of the intended website is enough to get through.
References
[1] Fox, David (2007-08-18), About Us (http://www.electriceggplant.com/about.shtml), retrieved 2008-04-19
Pwnie Awards
Pwnie Award, resembling a My Little Pony toy
The Pwnie Awards recognize both extreme excellence and
incompetence in the field of information security. Winners are selected
by a committee of security industry luminaries from nominations
collected from the information security community. The awards are
presented yearly at the Black Hat Security Conference.
Origins
The name Pwnie Award is based on the word "pwn", hacker slang meaning "to compromise" or "to control", derived from the earlier usage of the word "own" (and pronounced similarly). The name "The Pwnie Awards" is meant to sound like the Tony Awards, an awards ceremony for Broadway theater in New York City.
History
The Pwnie Awards were founded in 2007 by Alexander Sotirov and Dino Dai Zovi following discussions regarding Dino's discovery of a cross-platform QuickTime vulnerability[1] and Alexander's discovery of an ANI file processing vulnerability[2] in Internet Explorer.
Categories
As of 2010, Pwnies are awarded in the following categories:
• Pwnie for Best Server-Side Bug
• Pwnie for Best Client-Side Bug
• Pwnie for Best Privilege Escalation Bug
• Pwnie for Most Innovative Research
• Pwnie for Lamest Vendor Response
• Pwnie for Best Song
• Pwnie for Most Epic FAIL
Pwnie Awards
292
Previous winners
2010
• Best Server-Side Bug: Apache Struts2 framework remote code execution (CVE-2010-1870[3]), Meder Kydyraliev
• Best Client-Side Bug: Java Trusted Method Chaining (CVE-2010-0840[4]), Sami Koivu
• Best Privilege Escalation Bug: Windows NT #GP Trap Handler (CVE-2010-0232[5]), Tavis Ormandy
• Most Innovative Research: Flash Pointer Inference and JIT Spraying, Dionysus Blazakis
• Lamest Vendor Response: LANrev remote code execution, Absolute Software
• Best Song: "Pwned - 1337 edition"[6], Dr. Raid and Heavy Pennies
• Most Epic FAIL: Microsoft Internet Explorer 8 XSS filter
2009
• Best Server-Side Bug: Linux SCTP FWD Chunk Memory Corruption (CVE-2009-0065), David 'DK2' Kim
• Best Privilege Escalation Bug: Linux udev Netlink Message Privilege Escalation (CVE-2009-1185), Sebastian Krahmer
• Best Client-Side Bug: msvidctl.dll MPEG2TuneRequest stack buffer overflow (CVE-2008-0015), Ryan Smith and Alex Wheeler
• Mass 0wnage: Red Hat Networks Backdoored OpenSSH Packages (CVE-2008-3844), anonymous
• Best Research: From 0 to 0day on Symbian, Bernhard Mueller
• Lamest Vendor Response: Linux, "Continually assuming that all kernel memory corruption bugs are only Denial-of-Service", Linux Project
• Most Overhyped Bug: MS08-067 Server Service NetpwPathCanonicalize() Stack Overflow (CVE-2008-4250), anonymous
• Best Song: Nice Report, Doctor Raid
• Most Epic Fail: Twitter Gets Hacked and the "Cloud Crisis", Twitter
• Lifetime Achievement Award: Solar Designer
2008
• Best Server-Side Bug: Windows IGMP Kernel Vulnerability (CVE-2008-0069[7]), Alex Wheeler and Ryan Smith
• Best Client-Side Bug: Multiple URL protocol handling flaws, Nate McFeters, Rob Carter, and Billy Rios
• Mass 0wnage: An unbelievable number of WordPress vulnerabilities
• Most Innovative Research: Lest We Remember: Cold Boot Attacks on Encryption Keys (honorable mention to Rolf Rolles for work on virtualization obfuscators), J. Alex Halderman, Seth Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph Calandrino, Ariel Feldman, Jacob Appelbaum, Edward Felten
• Lamest Vendor Response: McAfee's "Hacker Safe" certification program
• Most Overhyped Bug: Dan Kaminsky's DNS Cache Poisoning Vulnerability (CVE-2008-1447[8])
• Best Song: Packin' the K![9] by Kaspersky Labs
• Most Epic Fail: Debian's flawed OpenSSL implementation (CVE-2008-0166[10])
• Lifetime Achievement Award: Tim Newsham
Pwnie Awards
293
2007
• Best Server-Side Bug: Solaris in.telnetd remote root exploit (CVE-2007-0882[11]), Kingcope
• Best Client-Side Bug: Unhandled exception filter chaining vulnerability (CVE-2006-3648[12]), skape & skywing
• Mass 0wnage: WMF SetAbortProc remote code execution (CVE-2005-4560[13]), anonymous
• Most Innovative Research: Temporal Return Addresses, skape
• Lamest Vendor Response: OpenBSD IPv6 mbuf kernel buffer overflow (CVE-2007-1365[14])
• Most Overhyped Bug: MacBook Wi-Fi Vulnerabilities, David Maynor
• Best Song: Symantec Revolution, Symantec
References
[1] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-2175
[2] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-0038
[3] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-1870
[4] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-0840
[5] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-0232
[6] http://www.sophsec.com/pwned.mp3
[7] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-0069
[8] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-1447
[9] http://www.youtube.com/watch?v=bHxyHlFZ778
[10] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-0166
[11] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-0882
[12] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2006-3648
[13] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2005-4560
[14] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-1365
External links
• The Pwnie Awards (http://pwnies.com)
Real-time adaptive security
Real-time adaptive security is a network security model designed to accommodate the emergence of multiple perimeters and moving parts on the network, along with increasingly advanced threats targeting enterprises. Adaptive security can watch a network for malicious traffic and behavioral anomalies, ferret out endpoint vulnerabilities, identify real-time changes to systems, automatically enforce endpoint protections and access rules, block malicious traffic, maintain a compliance dashboard while providing audit data, and more. [1]
Among the key features of an adaptive security infrastructure are security platforms that share and correlate information rather than acting as isolated point solutions, so that, for example, a heuristics system can communicate its suspicions to the firewall. Other features include finer-grained controls, automation (in addition to human intervention), on-demand security services, security as a service, and integration of security and management data. Rather than adding security to custom applications after they become operational, security models would be created at the design phase of an application.
A major change with this model of real-time adaptive security is the shift of authorization management and policy to an on-demand service whose policy details and enforcement match compliance requirements and can adapt to the user's situation, for instance when he or she is trying to access an application. [2]
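As an illustration of the platform-correlation idea, the following minimal Python sketch shows an anomaly detector that shares scored alerts with a firewall, which then enforces a blocking policy on demand. Every class name, threshold, and parameter here is hypothetical, invented for illustration rather than taken from any real product.

from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    score: float  # 0.0 (benign) .. 1.0 (almost certainly malicious)

@dataclass
class Firewall:
    blocked: set = field(default_factory=set)

    def handle(self, alert: Alert) -> None:
        # Enforcement is driven by correlated data rather than a static rule set.
        if alert.score >= 0.8:
            self.blocked.add(alert.source_ip)
            print(f"blocking {alert.source_ip} (score {alert.score:.2f})")

class AnomalyDetector:
    """Toy heuristics engine that shares its suspicions with subscribers."""

    def __init__(self, subscribers):
        self.subscribers = subscribers  # e.g. firewalls, compliance dashboards

    def observe(self, source_ip: str, requests_per_sec: float) -> None:
        # Crude behavioral heuristic: unusually high request rates look abusive.
        score = min(requests_per_sec / 1000.0, 1.0)
        for subscriber in self.subscribers:
            subscriber.handle(Alert(source_ip, score))

firewall = Firewall()
detector = AnomalyDetector(subscribers=[firewall])
detector.observe("203.0.113.7", requests_per_sec=50)     # benign, no action
detector.observe("198.51.100.9", requests_per_sec=2500)  # correlated into a block

The point of the sketch is the wiring, not the heuristic: the detector publishes to whatever components subscribe, so enforcement, auditing, and dashboards all consume the same correlated data.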
References
[1] "Special Webcast: Real-Time Adaptive Security: Proactively Mitigating Risks" (https:/ / www.sans. org/ webcasts/ show.
php?webcastid=91853). . Retrieved 06 January 2009.
[2] "Gartner Details Real-Time 'Adaptive' Security Infrastructure" (http:/ / www.darkreading.com/ security/ perimeter/showArticle.
jhtml?articleID=211201107). . Retrieved 06 January 2009.
External links
• Gartner webcast (http://www.accelacomm.com/jaw/source/0/50197886/) — Gartner analyst Neil MacDonald and Sourcefire founder, CTO, and Snort creator Martin Roesch dive into "Building a Real-Time Adaptive Security Infrastructure"
RED/BLACK concept
Figure: Red/Black box
The RED/BLACK concept refers to the careful segregation in cryptographic systems of signals that contain sensitive or classified plaintext information (RED signals) from those that carry encrypted information, or ciphertext (BLACK signals).
In NSA jargon, encryption devices are often called blackers, because they convert RED signals to BLACK. TEMPEST standards spelled out in NSTISSAM TEMPEST/2-95 specify shielding or a minimum physical distance between wires or equipment carrying or processing RED and BLACK signals. [1]
Different organizations have differing requirements for the separation of RED and BLACK fiber optic cable.
RED/BLACK terminology is also applied to keys: a BLACK key has itself been encrypted with a key-encryption key (KEK) and can therefore be handled as benign, while a RED key is not encrypted and must be treated as highly sensitive material. [2]
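The RED-to-BLACK transformation of a key can be made concrete with standard key wrapping. The sketch below uses AES key wrap (RFC 3394) as provided by the third-party Python "cryptography" package; it illustrates the general concept only and is not a description of NSA key-management practice.

import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)      # key-encryption key, guarded by the key-management system
red_key = os.urandom(32)  # unencrypted traffic key: RED, highly sensitive

# Wrapping turns RED material into BLACK material that is benign to handle.
black_key = aes_key_wrap(kek, red_key)

# Only a holder of the KEK can recover the RED key from the BLACK key.
assert aes_key_unwrap(kek, black_key) == red_key

print("BLACK (wrapped) key:", black_key.hex())

Because the BLACK key is unusable without the KEK, it can be stored or distributed over less trusted channels, which is exactly why it may be treated as benign.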
References
[1] McConnell, J. M. (12 December 1995). "NSTISSAM TEMPEST/2-95" (http://web.archive.org/web/20070408221244/cryptome.org/tempest-2-95.htm). Retrieved 2007-12-02.
[2] Clark, Tom (2003). Designing Storage Area Networks (http://books.google.com/books?vid=ISBN0321136500&id=xKikTYXkXZEC&pg=PA483). Addison-Wesley Professional. ISBN 0321136500.
Reverse engineering
Reverse engineering is the process of discovering the technological principles of a human-made device, object or system through analysis of its structure, function and operation. It often involves taking something (e.g., a mechanical device, electronic component, or software program) apart and analyzing its workings in detail, either for maintenance purposes or to make a new device or program that does the same thing without using or simply duplicating (without understanding) the original.
Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. [1] The purpose is to deduce design decisions from end products with little or no additional knowledge about the procedures involved in the original production. The same techniques have subsequently been researched for application to legacy software systems, not for industrial or defence ends, but rather to replace incorrect, incomplete, or otherwise unavailable documentation. [2]
Motivation
Reasons for reverse engineering:
• Interoperability.
• Lost documentation: Reverse engineering often is done because the documentation of a particular device has been
lost (or was never written), and the person who built it is no longer available. Integrated circuits often seem to
have been designed on obsolete, proprietary systems, which means that the only way to incorporate the
functionality into new technology is to reverse-engineer the existing chip and then re-design it.
• Product analysis. To examine how a product works, what components it consists of, estimate costs, and identify
potential patent infringement.
• Digital update/correction. To update the digital version (e.g. CAD model) of an object to match an "as-built"
condition.
• Security auditing.
• Acquiring sensitive data by disassembling and analysing the design of a system component. [3]
• Military or commercial espionage. Learning about an enemy's or competitor's latest research by stealing or
capturing a prototype and dismantling it.
• Removal of copy protection, circumvention of access restrictions.
• Creation of unlicensed/unapproved duplicates.
• Materials harvesting, sorting, or scrapping. [4]
• Academic/learning purposes.
• Curiosity.
• Competitive technical intelligence (understand what your competitor is actually doing, versus what they say they
are doing).
• Learning: learn from others' mistakes. Do not make the same mistakes that others have already made and
subsequently corrected.
Reverse engineering of machines
As computer-aided design (CAD) has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD, CAM, CAE or other software. [5] The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. The physical object can be measured using 3D scanning technologies like CMMs, laser scanners, structured-light digitizers or industrial CT scanning (computed tomography). The measured data alone, usually represented as a point cloud, lacks topological information and is therefore often processed and modeled into a more usable format such as a triangular-faced mesh, a set of NURBS surfaces, or a CAD model.
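As a simplified illustration of that last step, the sketch below triangulates a synthetic point cloud into a triangular-faced mesh. It assumes the scanned surface is a height field with a single z value per (x, y) location; production pipelines additionally handle full 3D topology, noise filtering, and surface fitting. The surface formula and all sizes are invented for the example.

import numpy as np
from scipy.spatial import Delaunay

# Hypothetical "scanner output": noisy point samples of a curved surface.
rng = np.random.default_rng(seed=0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 1.0 - (xy ** 2).sum(axis=1)          # the surface z = 1 - x^2 - y^2
point_cloud = np.column_stack([xy, z])   # raw points: no topology yet

# Triangulating the XY projection attaches topology: each simplex indexes
# three measured points, yielding a triangular-faced mesh over the surface.
triangulation = Delaunay(point_cloud[:, :2])
print(f"{len(point_cloud)} points -> {len(triangulation.simplices)} triangles")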
Reverse engineering is also used by businesses to bring existing physical geometry into digital product development environments, to make a digital 3D record of their own products, or to assess competitors' products. It is used to analyse, for instance, how a product works, what it does and what components it consists of, as well as to estimate costs and identify potential patent infringement.
Value engineering is a related activity also used by businesses. It involves de-constructing and analysing products,
but the objective is to find opportunities for cost cutting.
Reverse engineering of software
The term reverse engineering as applied to software means different things to different people, prompting Chikofsky and Cross to write a paper researching the various uses and defining a taxonomy. From their paper, they state, "Reverse engineering is the process of analyzing a subject system to create representations of the system at a higher level of abstraction." [6] It can also be seen as "going backwards through the development cycle". [7] In this model, the output of the implementation phase (in source code form) is reverse-engineered back to the analysis phase, in an inversion of the traditional waterfall model. Reverse engineering is a process of examination only: the software system under consideration is not modified (which would make it re-engineering). Software anti-tamper technology is used to deter both reverse engineering and re-engineering of proprietary software and software-powered systems.
In practice, two main types of reverse engineering emerge. In the first case, source code is already available for the
software, but higher-level aspects of the program, perhaps poorly documented or documented but no longer valid,
are discovered. In the second case, there is no source code available for the software, and any efforts towards
discovering one possible source code for the software are regarded as reverse engineering. This second usage of the
term is the one most people are familiar with. Reverse engineering of software can make use of the clean room
design technique to avoid copyright infringement.
On a related note, black box testing in software engineering has a lot in common with reverse engineering. The tester
usually has the API, but their goals are to find bugs and undocumented features by bashing the product from outside.
Other purposes of reverse engineering include security auditing, removal of copy protection ("cracking"),
circumvention of access restrictions often present in consumer electronics, customization of embedded systems (such
as engine management systems), in-house repairs or retrofits, enabling of additional features on low-cost "crippled"
hardware (such as some graphics card chip-sets), or even mere satisfaction of curiosity.
Binary software
This process is sometimes termed Reverse Code Engineering, or RCE.
[8]
As an example, decompilation of binaries
for the Java platform can be accomplished using Jad. One famous case of reverse engineering was the first non-IBM
implementation of the PC BIOS which launched the historic IBM PC compatible industry that has been the
overwhelmingly dominant computer hardware platform for many years. An example of a group that
reverse-engineers software for enjoyment (and to distribute registration cracks) is CORE which stands for
"Challenge Of Reverse Engineering". Reverse engineering of software is protected in the U.S. by the fair use
exception in copyright law.
[9]
The Samba software, which allows systems that are not running Microsoft Windows
systems to share files with systems that are, is a classic example of software reverse engineering,
[10]
since the Samba
project had to reverse-engineer unpublished information about how Windows file sharing worked, so that
non-Windows computers could emulate it. The Wine project does the same thing for the Windows API, and
OpenOffice.org is one party doing this for the Microsoft Office file formats. The ReactOS project is even more
ambitious in its goals, as it strives to provide binary (ABI and API) compatibility with the current Windows OSes of
the NT branch, allowing software and drivers written for Windows to run on a clean-room reverse-engineered GPL
free software or open-source counterpart.
Binary software techniques
Reverse engineering of software can be accomplished by various methods. The three main groups of software
reverse engineering are
1. Analysis through observation of information exchange, most prevalent in protocol reverse engineering, which
involves using bus analyzers and packet sniffers, for example, for accessing a computer bus or computer network
connection and revealing the traffic data thereon. Bus or network behavior can then be analyzed to produce a
stand-alone implementation that mimics that behavior. This is especially useful for reverse engineering device
drivers. Sometimes, reverse engineering on embedded systems is greatly assisted by tools deliberately introduced
by the manufacturer, such as JTAG ports or other debugging means. In Microsoft Windows, low-level debuggers
such as SoftICE are popular.
2. Disassembly using a disassembler, meaning the raw machine language of the program is read and understood in its own terms, only with the aid of machine-language mnemonics (a toy analogue is sketched after this list). This works on any computer program but can take quite some time, especially for someone not used to machine code. The Interactive Disassembler is a particularly popular tool.
3. Decompilation using a decompiler, a process that tries, with varying results, to recreate the source code in some
high-level language for a program only available in machine code or bytecode.
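As a toy illustration of technique 2, Python's standard dis module can play the role of the disassembler, with CPython bytecode standing in for raw machine code. The example function is invented, and the exact mnemonics printed will vary by interpreter version; the experience of reading one mnemonic per operation is what carries over to native disassembly.

import dis

def checksum(data: bytes) -> int:
    """Tiny stand-in for a routine found inside a compiled program."""
    total = 0
    for byte in data:
        total = (total + byte) & 0xFF
    return total

# Prints one mnemonic per operation (e.g. LOAD_FAST, BINARY_OP, RETURN_VALUE),
# to be read "in its own terms", just as with a native disassembler's output.
dis.dis(checksum)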
Source code
A number of UML tools refer to the process of importing and analysing source code to generate UML diagrams as
"reverse engineering". See List of UML tools.
Reverse engineering of protocols
Protocols are sets of rules that describe message formats and how messages are exchanged (i.e., the protocol
state-machine). Accordingly, the problem of protocol reverse-engineering can be partitioned into two subproblems: message format and state-machine reverse-engineering.
The message formats have traditionally been reverse-engineered through a tedious manual process, which involved analysis of how protocol implementations process messages, but recent research has proposed a number of automatic solutions. [11] [12] [13] Typically, these automatic approaches either group observed messages into clusters using various clustering analyses, or emulate the protocol implementation tracing the message processing.
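A toy version of the clustering approach can be sketched in a few lines: observed messages are grouped by byte-level similarity, so that members of the same format family tend to fall into the same cluster. The function names, sample messages, and the 0.5 threshold below are arbitrary choices for illustration; the cited research uses far more sophisticated sequence alignment and clustering.

from difflib import SequenceMatcher

def similarity(a: bytes, b: bytes) -> float:
    # Byte-level similarity ratio in [0, 1].
    return SequenceMatcher(None, a, b).ratio()

def cluster_messages(messages, threshold=0.5):
    clusters = []
    for message in messages:
        for members in clusters:
            # Compare against the first member as a cheap cluster representative.
            if similarity(message, members[0]) >= threshold:
                members.append(message)
                break
        else:
            clusters.append([message])
    return clusters

observed = [b"LOGIN alice", b"LOGIN bob", b"GET /index", b"GET /data", b"QUIT"]
for i, members in enumerate(cluster_messages(observed)):
    print(f"cluster {i}: {members}")

Fields shared by a format family (here the LOGIN and GET verbs) dominate the similarity score, which is why even this crude grouping separates the message types.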
There has been less work on reverse-engineering of state-machines of protocols. In general, the protocol state-machines can be learned either through a process of offline learning, which passively observes communication and attempts to build the most general state-machine accepting all observed sequences of messages, or online learning, which allo