
PAPER NO.

CT 61
SECTION 6

CERTIFIED
INFORMATION COMMUNICATION
TECHNOLOGISTS
(CICT)

SYSTEM SECURITY

STUDY TEXT



KASNEB SYLLABUS

PAPER NO. 16 SYSTEMS SECURITY

GENERAL OBJECTIVE

This paper is intended to equip the candidate with the knowledge, skills and attitude that will enable
him/her to secure ICT systems in an organization.

LEARNING OUTCOMES

A candidate who passes this paper should be able to:


• Identify types of threats to ICT systems
• Adopt different security mechanisms
• Prepare business continuity planning (BCP) strategies
• Develop and implement a systems security policy
• Undertake basic computer forensic audits
• Demonstrate social-ethical and professional values in computing.

CONTENT

1. Introduction to systems security


 Overview of systems security
 Goals of system security
 Security core concepts
 Security mechanisms

2. Security threats and controls


 Sources of threats
 Types of threats
 Crimes against ICT and computer criminals
 Controlling security threats
 Ethical hacking

3. Systems security
 Classification
 People errors
 Procedural errors
 Software errors
 Electromechanical problems
 Dirty data

4. Physical and logical security


 Physical security
 Logical security (authentication, access rights, others)

5. Data/software security
 Use of the normal security systems
 Vulnerability assessment
 Employing virus security precautions
 Employing Internet security precautions
 Vetting of ICT employees

6. Transmission security
 Symmetric encryption
 Asymmetric encryption
 Duplicate and alternate routing
 Firewall types and configuration
 Secure socket layer and transport layer security
 IPv4 and IPv6 security
 Wireless network security
 Mobile device security
 Wireless protected access

7. ICT risk management


 Risk management concepts
 Risk analysis
 Risk assessment framework
 Countermeasures
 Corporate risk document

8. Business continuity planning (BCP)


 BCP scope, teams and roles
 Backup types and strategies
 Hot and cold sites
 Disaster recovery plans

9. System security policy implementation


 Components of systems security policy
 Systems security policy development
 System security policy implementation
 Systems security strategies
 Audit

10. Introduction to computer forensics


 Computer forensics concepts
 Incidence handling
 Investigating desktop incidents
 Investigating network incidents
 Securing and preserving evidence

11. Professional values and ethics in computing


 Intellectual property and fraud
 Information systems ethical and social concerns
 Telecommuting and ethical issues of the worker
 Codes of ethics for IT professionals
 Professional ethics and values on the web and Internet
 Objectivity and integrity in computing
 The role of professional Societies in enforcing professional standards in
Computing

12. Emerging Issues and trends

CONTENTS

Topic 1: Introduction to systems security
Topic 2: Security threats and controls
Topic 3: Systems security
Topic 4: Physical and logical security
Topic 5: Data/software security
Topic 6: Transmission security
Topic 7: ICT risk management
Topic 8: Business continuity planning (BCP)
Topic 9: System security policy implementation
Topic 10: Introduction to computer forensics
Topic 11: Professional values and ethics in computing
Topic 12: Emerging issues and trends



TOPIC 1

INTRODUCTION TO SYSTEMS SECURITY

Overview of systems security

Information security, sometimes shortened to InfoSec, is the practice of defending information


from unauthorized access, use, disclosure, disruption, modification, perusal, inspection,
recording or destruction. It is a general term that can be used regardless of the form the data may
take (e.g. electronic, physical).

Overview
IT security
Sometimes referred to as computer security, Information Technology security is information
security applied to technology (most often some form of computer system). It is worthwhile to
note that a computer does not necessarily mean a home desktop. A computer is any device with a
processor and some memory. Such devices can range from non-networked standalone devices as
simple as calculators, to networked mobile computing devices such as smartphones and tablet
computers. IT security specialists are almost always found in any major enterprise/establishment
due to the nature and value of the data within larger businesses. They are responsible for keeping
all of the technology within the company secure from malicious cyber-attacks that often attempt
to breach critical private information or gain control of the internal systems.

Information assurance
The act of ensuring that data is not lost when critical issues arise. These issues include, but are
not limited to: natural disasters, computer/server malfunction, physical theft, or any other
instance where data has the potential of being lost. Since most information is stored on
computers in our modern era, information assurance is typically dealt with by IT security
specialists. One of the most common methods of providing information assurance is to have an
off-site backup of the data in case one of the mentioned issues arises.

Threats

Computer system threats come in many different forms. Some of the most common threats today
are software attacks, theft of intellectual property, identity theft, theft of equipment or
information, sabotage, and information extortion. Most people have experienced software attacks
of some sort. Viruses, worms, phishing attacks, and Trojan horses are a few common examples
of software attacks. The theft of intellectual property has also been an extensive issue for many
businesses in the IT field. Intellectual property is intangible property, such as software or designs,
whose ownership is usually protected by law. Theft of software is probably the most common form
in IT businesses today.
Identity theft is the attempt to act as someone else usually to obtain that person's personal
information or to take advantage of their access to vital information. Theft of equipment or
information is becoming more prevalent today due to the fact that most devices today are mobile.
Cell phones are prone to theft and have also become far more desirable as the amount of data
capacity increases. Sabotage usually consists of the destruction of an organization's website in an
attempt to cause a loss of confidence on the part of its customers. Information extortion consists of
theft of a company's property or information as an attempt to receive a payment in exchange for
returning the information or property back to its owner. There are many ways to help protect
yourself from some of these attacks, but one of the most effective precautions is user carefulness.

Governments, military, corporations, financial institutions, hospitals and private businesses
amass a great deal of confidential information about their employees, customers, products,
research and financial status. Most of this information is now collected, processed and stored on
electronic computers and transmitted across networks to other computers.

Should confidential information about a business' customers or finances or new product line fall
into the hands of a competitor or a black hat hacker, a business and its customers could suffer
widespread, irreparable financial loss, as well as damage to the company's reputation. Protecting
confidential information is a business requirement and in many cases also an ethical and legal
requirement. Hence a key concern for organizations today is to derive the optimal information
security investment. The renowned Gordon-Loeb Model provides a powerful mathematical
economic approach for addressing this critical concern.

For the individual, information security has a significant effect on privacy, which is viewed very
differently in different cultures.

The field of information security has grown and evolved significantly in recent years. There are
many ways of gaining entry into the field as a career. It offers many areas for specialization
including securing network(s) and allied infrastructure, securing applications and databases,
security testing, information systems auditing, business continuity planning and digital forensics.

Definitions

Information Security Attributes: or qualities, i.e., Confidentiality, Integrity and Availability
(CIA). Information systems are composed of three main portions: hardware, software and
communications, with the purpose to help identify and apply information security industry
standards, as mechanisms of protection and prevention, at three levels or layers: physical,
personal and organizational. Essentially, procedures or policies are implemented to tell people
(administrators, users and operators) how to use products to ensure information security within
the organizations.

The definitions of InfoSec suggested in different sources are summarized below.

1. "Preservation of confidentiality, integrity and availability of information. Note: In addition,


other properties, such as authenticity, accountability, non-repudiation and reliability can also be
involved." (ISO/IEC 27000:2009)

2. "The protection of information and information systems from unauthorized access, use,
disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity,
and availability." (CNSS, 2010)

3. "Ensures that only authorized users (confidentiality) have access to accurate and complete
information (integrity) when required (availability)." (ISACA, 2008)

4. "Information Security is the process of protecting the intellectual property of an organisation."


(Pipkin, 2000)

5. "...information security is a risk management discipline, whose job is to manage the cost of
information risk to the business." (McDermott and Geer, 2001)

6. "A well-informed sense of assurance that information risks and controls are in balance."
(Anderson, J., 2003)

7. "Information security is the protection of information and minimizes the risk of exposing
information to unauthorized parties." (Venter and Eloff, 2003)

8. "Information Security is a multidisciplinary area of study and professional activity which is


concerned with the development and implementation of security mechanisms of all available
types (technical, organisational, human-oriented and legal) in order to keep information in all its
locations (within and outside the organization’s perimeter) and, consequently, information
systems, where information is created, processed, stored, transmitted and destroyed, free from
threats.

Threats to information and information systems may be categorized and a corresponding security
goal may be defined for each category of threats. A set of security goals, identified as a result of
a threat analysis, should be revised periodically to ensure its adequacy and conformance with the
evolving environment. The currently relevant set of security goals may include: confidentiality,
integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability
and auditability." (Cherdantseva and Hilton, 2013)



Profession

Information security is a stable and growing profession. Information security professionals are
very stable in their employment; more than 80 percent had no change in employer or
employment in the past year, and the number of professionals is projected to continuously grow
more than 11 percent annually from 2014 to 2015.

 Goals of system security


The basic goals of information security are:

1. Confidentiality
2. Integrity
3. Availability
4. Non-repudiation

Accomplishing these is a management issue before it's a technical one, as they are essentially
business objectives.

Confidentiality is about controlling access to files either in storage or in transit. This requires
systems configuration or products (a technical job). But the critical definition of the parameters
(who should be able to access what) is a business-related process.

Ensuring integrity is a matter of version control - making sure only the right people can change
documents. It also requires an audit trail of the changes, and a fallback position in case changes
prove detrimental. This meshes with non-repudiation (the change record must include who as
well as what and when).

Availability is the Cinderella of information security as it is rarely discussed. But however safe
from hackers your information is, it is no use if you can't get at it when you need to. So you need
to think about data back-ups, bandwidth and standby facilities, which many people still leave out
of their security planning.

 Security core concepts

Key concepts

The CIA triad of confidentiality, integrity, and availability is at the heart of information
security. (The members of the classic InfoSec triad — confidentiality, integrity and availability
are interchangeably referred to in the literature as security attributes, properties, security
goals, fundamental aspects, information criteria, critical information characteristics and
basic building blocks.) There is continuous debate about extending this classic trio. Other
principles such as Accountability have sometimes been proposed for addition. It has been
pointed out that issues such as Non-Repudiation do not fit well within the three core concepts.

The OECD's Guidelines for the Security of Information Systems and Networks proposed the nine generally accepted principles:
Awareness, Responsibility, Response, Ethics, Democracy, Risk Assessment, Security Design
and Implementation, Security Management, and Reassessment. Building upon those, in 2004
the NIST's Engineering Principles for Information Technology Security proposed 33 principles.
From each of these, guidelines and practices have been derived.

In 2013, based on a thorough analysis of Information Assurance and Security (IAS) literature,
the IAS-octave was proposed as an extension of the CIA-triad. The IAS-octave includes
Confidentiality, Integrity, Availability, Accountability, Auditability,
Authenticity/Trustworthiness, Non-repudiation and Privacy. The completeness and accuracy of
the IAS-octave was evaluated via a series of interviews with IAS academics and experts. The
IAS-octave is one of the dimensions of a Reference Model of Information Assurance and
Security (RMIAS), which summarizes the IAS knowledge in one all-encompassing model.

Confidentiality

In information security, confidentiality "is the property, that information is not made available or
disclosed to unauthorized individuals, entities, or processes"

Integrity

In information security, data integrity means maintaining and assuring the accuracy and
completeness of data over its entire life-cycle. This means that data cannot be modified in an
unauthorized or undetected manner. This is not the same thing as referential integrity in
databases, although it can be viewed as a special case of consistency as understood in the classic
ACID model of transaction processing. Information security systems typically provide message
integrity in addition to data confidentiality.

Availability

For any information system to serve its purpose, the information must be available when it is
needed. This means that the computing systems used to store and process the information, the
security controls used to protect it, and the communication channels used to access it must be
functioning correctly. High availability systems aim to remain available at all times, preventing
service disruptions due to power outages, hardware failures, and system upgrades. Ensuring
availability also involves preventing denial-of-service attacks, such as a flood of incoming
messages to the target system essentially forcing it to shut down.

Non-repudiation

In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also
implies that one party of a transaction cannot deny having received a transaction nor can the
other party deny having sent a transaction. Note: This is also regarded as part of Integrity.

It is important to note that while technology such as cryptographic systems can assist in non-
repudiation efforts, the concept is at its core a legal concept transcending the realm of
technology. It is not, for instance, sufficient to show that the message matches a digital signature
signed with the sender's private key, and thus only the sender could have sent the message and
nobody else could have altered it in transit. The alleged sender could in return demonstrate that
the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has
been compromised. The fault for these violations may or may not lie with the sender himself, and
such assertions may or may not relieve the sender of liability, but the assertion would invalidate
the claim that the signature necessarily proves authenticity and integrity and thus prevents
repudiation.

The 6 Security Core Concepts


1. Risk Assessment (RA) & Business Impact Analysis (BIA) – If you can’t (in some form)
qualify/quantify your business risks related to your sensitive data, and then determine an
estimated cost-of-loss related to data theft or unavailability, how will you know how much to
spend on security? Put simply; if the cost of security outweighs the value of the data, don’t
do it (this includes compliance). This does NOT mean you should do nothing at all, it just
means you need to re-evaluate how you perform some of your business functions. The first
question is not "How do I protect it?" It’s "Do I need it?"
2. Security Control Selection & Implementation – The RA, done correctly, will show you
where you can make improvements in your security posture. This does not necessarily
involve capital expenditure – which should always be the LAST resort – it can be something
as simple as destroying every instance of redundant data. Regardless, at some point you will
probably purchase technology, but even here you should be careful – now I have to write
another blog – and ensure that this new technology meets all of the business needs defined in
the RA.
3. Security Management Systems – There’s not much point putting security controls in place
if you don’t manage them properly to keep them in place. This is where standards like ISO
2700X come into play. This is the day-to-day procedures used to maintain the operational
aspect of your security infrastructure. Obviously this will vary dramatically by organisation;
from a simple check-list for your corner sandwich shop, to a full time job for larger more
complex organisations. The trick is doing only what’s appropriate, without going overboard.
4. Governance & Change Control – Ask 100 people what Governance is, and you’ll get 105
different answers. I believe governance provides a function that trumps all others; it allows
the business side of an organisation to talk to the IT side in the same language. Business: “I
want this new functionality." IT: "Sure, but do it this way." is the perfect conversation. IT,
and especially IT security, are typically seen as roadblocks, but this is just a symptom of
immature Governance processes. As for change control, that’s just common sense. If things
don’t change, the only increase in security risk is from external sources. The threat landscape
changes almost daily, why make things worse by screwing up internally as well?
5. Incident Response (IR) & Disaster Recovery (DR) – Fairly self-explanatory; what’s the
point of being in business if you don’t intend staying in business? For example; if you are an
e-commerce company, you should know from the RA what your maximum downtime is, and
both your security controls and IR & DR processes need to fit accordingly.
6. Business Continuity Management (BCM) & Business As Usual (BAU) – You may ask
why this is broken out from IR & DR. This is because BCM and BAU are more related to the
business side of the table, and IR & DR are on the IT side. IT never leads, IT enables, it’s the
business side that needs to lay down the plans for staying in business, as well as how to do so
efficiently, and cost effectively.



It’s a bold statement, but if you follow these core concepts, it won’t matter the compliance
regime, the data type, or even the type/location of business you’re in, you’ll be covered …mostly.

Yes, this is a lot of work, and the up-front costs in both capital and resource terms can be
significant, but it’s a damned sight cheaper than the cost of non-compliance, fines, and
particularly, being breached. In the extreme, what if it’s the difference between you being in
business or not?
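To make the Risk Assessment / Business Impact Analysis idea in concept 1 concrete, the following minimal Python sketch uses the standard annualised loss expectancy formula (ALE = SLE x ARO), a common quantitative risk-analysis convention that the text above does not itself spell out. All figures and scenario names are invented for illustration.

# Compare the yearly expected loss from a risk against the yearly cost of a control.
def annualised_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    # ALE = SLE x ARO: expected loss per incident times expected incidents per year
    return single_loss_expectancy * annual_rate_of_occurrence

def control_is_worthwhile(ale_before, ale_after, annual_control_cost):
    # A control pays for itself if the risk reduction it buys exceeds what it costs.
    return (ale_before - ale_after) > annual_control_cost

# Illustrative example: theft of a laptop holding customer data.
ale_before = annualised_loss_expectancy(50_000, 0.4)   # no control in place
ale_after = annualised_loss_expectancy(5_000, 0.4)     # after full-disk encryption
print(control_is_worthwhile(ale_before, ale_after, 3_000))   # True: spend on the control

If the answer came out False, the advice above applies: rather than buying technology, re-evaluate whether the data needs to be held at all.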

 Security mechanisms
We use several layers of proven security technologies and processes to provide you with secure
online access to your accounts and information. These are continuously evaluated and updated
by experts to ensure that we protect you and your information. These include:

 Secure Socket Layer (SSL) Encryption


 Authentication
 Firewalls
 Computer Anti-Virus Protection
 Data Integrity
 Ensuring Your Online Safety

Secure Socket Layer (SSL) Encryption


When you successfully login to Online Banking or another secure website using an authentic
user ID and password, servers will establish a secure socket layer (SSL) connection with your
computer. This allows you to communicate privately and prevents other computers from seeing
anything that you are transacting – so you can conduct online business safely. SSL provides 128-
bit encrypted security so that sensitive information sent over the Internet during online
transactions remains confidential.
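As an illustration of the handshake described above, the short sketch below uses Python's standard-library ssl module to open a TLS-protected connection. The host example.com is a placeholder rather than any real banking server, and modern servers negotiate TLS (the successor to SSL) with stronger keys than the 128-bit figure quoted.

import socket
import ssl

context = ssl.create_default_context()              # trusted CA certificates, secure defaults
with socket.create_connection(("example.com", 443)) as raw_sock:
    # The handshake happens here: the server proves its identity with a certificate
    # and both sides agree on session keys used to encrypt everything that follows.
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())                    # negotiated protocol, e.g. 'TLSv1.3'
        print(tls_sock.getpeercert()["subject"])     # identity asserted by the certificate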

Authentication
To protect our users, we provide secure private websites for any business that users conduct with
us. Users login to these sites using a valid client number or username and a password. Users are
required to create their own passwords, which should be kept strictly confidential so that no one
else can login to their accounts.

Firewalls
We use a multi-layered infrastructure of firewalls to block unauthorized access by individuals or
networks to our information servers.
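The sketch below is a simplified, hypothetical illustration of the filtering decision each firewall layer makes; it is not the configuration of any real product. Traffic is compared against an ordered rule list, and anything that matches no rule is denied by default.

from dataclasses import dataclass

@dataclass
class Rule:
    action: str       # "allow" or "deny"
    src_prefix: str   # crude source-address prefix match, for illustration only
    dst_port: int

RULES = [
    Rule("allow", "10.0.", 443),    # internal clients may reach the HTTPS service
    Rule("deny", "", 3306),         # nobody may reach the database port directly
]

def decide(src_ip, dst_port):
    for rule in RULES:
        if src_ip.startswith(rule.src_prefix) and dst_port == rule.dst_port:
            return rule.action
    return "deny"                   # default deny for anything not explicitly allowed

print(decide("10.0.4.7", 443))      # allow
print(decide("203.0.113.9", 3306))  # deny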



Computer Anti-Virus Protection

We are continuously updating our anti-virus protection. This ensures we maintain the latest in
anti-virus software to detect and prevent viruses from entering our computer network systems.

Data Integrity
The information you send to one of our secure private websites is automatically verified to
ensure it is not altered during information transfers. Our systems detect if data was added or
deleted after you send information. If any tampering has occurred, the connection is dropped and
the invalid information transfer is not processed.
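One common way such tampering is detected is a keyed hash (HMAC) sent along with the data; the receiver recomputes it and drops the connection if it does not match. The minimal sketch below uses only Python's standard library; the key and messages are examples, not the actual mechanism of any particular site.

import hashlib
import hmac

SECRET_KEY = b"shared-secret-key"          # in practice agreed or stored securely

def sign(message):
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message, tag):
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(message), tag)

message = b"transfer 100.00 to account 12345"
tag = sign(message)
print(verify(message, tag))                                # True: data unchanged
print(verify(b"transfer 900.00 to account 12345", tag))    # False: drop the connection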

Ensuring Your Online Safety

Used together, the security mechanisms above work in layers to safeguard your communication
and protect your information whenever you transact online.



TOPIC 2

SECURITY THREATS AND CONTROLS

Threats classification
Threats can be classified according to their type and origin:
 Type of threat
o Physical damage
 fire
 water
 pollution
o natural events
 climatic
 seismic
 volcanic
o loss of essential services
 electrical power
 air conditioning
 telecommunication
o compromise of information
 eavesdropping,
 theft of media
 retrieval of discarded materials
o technical failures
 equipment
 software
 capacity saturation
o compromise of functions
 error in use
 abuse of rights
 denial of actions
 Origin of threats
o Deliberate: aiming at information asset
 spying
 illegal processing of data
o accidental
 equipment failure
 software failure
o environmental
 natural event
 loss of power supply
o Negligence: Known but neglected factors, compromising the network safety and
sustainability.

Note that a threat type can have multiple origins.


Threat model

A threat model considers all possible threats that can:

 affect an asset,
 affect a software system, or
 be brought about by a threat agent

Threat classification
Microsoft has proposed a threat classification called STRIDE, from the initials of threat
categories:

 Spoofing of user identity


 Tampering
 Repudiation
 Information disclosure (privacy breach or Data leak)
 Denial of Service (D.o.S.)
 Elevation of privilege

Microsoft previously rated the risk of security threats using five categories in a classification called
DREAD: Risk assessment model. The model is now considered obsolete by Microsoft. The
categories were:

 Damage – how bad would an attack be?


 Reproducibility – how easy is it to reproduce the attack?
 Exploitability – how much work is it to launch the attack?
 Affected users – how many people will be impacted?
 Discoverability – how easy is it to discover the threat?

The DREAD name comes from the initials of the five categories listed.
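A common convention (not mandated by the now obsolete model itself) is to rate each DREAD category on a scale of 1 to 10 and average the ratings so that threats can be ranked. The sketch below illustrates this; the example threat and its ratings are invented.

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    ratings = [damage, reproducibility, exploitability, affected_users, discoverability]
    assert all(1 <= r <= 10 for r in ratings), "rate each category from 1 to 10"
    return sum(ratings) / len(ratings)

# Example: SQL injection in a public login form
score = dread_score(damage=8, reproducibility=9, exploitability=7,
                    affected_users=9, discoverability=6)
print(round(score, 1))    # 7.8 - higher-scoring threats are treated first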

Associated terms

Threat agents or actors

Threat agents
Individuals within a threat population; practically anyone and anything can, under the right
circumstances, be a threat agent – the well-intentioned, but inept, computer operator who
trashes a daily batch job by typing the wrong command, the regulator performing an audit, or
the squirrel that chews through a data cable.



Threat agents can take one or more of the following actions against an asset

 Access – simple unauthorized access


 Misuse – unauthorized use of assets (e.g., identity theft, setting up a porn distribution
service on a compromised server, etc.)
 Disclose – the threat agent illicitly discloses sensitive information
 Modify – unauthorized changes to an asset
 Deny access – includes destruction, theft of a non-data asset, etc.

It’s important to recognize that each of these actions affects different assets differently, which
drives the degree and nature of loss. For example, the potential for productivity loss resulting
from a destroyed or stolen asset depends upon how critical that asset is to the organization’s
productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss.
Similarly, the destruction of a highly sensitive asset that doesn’t play a critical role in
productivity won’t directly result in a significant productivity loss. Yet that same asset, if
disclosed, can result in significant loss of competitive advantage or reputation, and generate legal
costs. The point is that it’s the combination of the asset and type of action against the asset that
determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will
be driven primarily by that agent’s motive (e.g., financial gain, revenge, recreation, etc.) and the
nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a
critical server than they are to steal an easily pawned asset like a laptop.

It is important to separate the concept of the event in which a threat agent comes into contact with
the asset (even virtually, i.e. through the network) from the event in which a threat agent acts
against the asset.

The term Threat Agent is used to indicate an individual or group that can manifest a threat. It is
fundamental to identify who would want to exploit the assets of a company, and how they might
use them against the company.

Threat Agent = Capabilities + Intentions + Past Activities

These individuals and groups can be classified as follows:

 Non-Target Specific: Non-Target Specific Threat Agents are computer viruses, worms,
Trojans and logic bombs.
 Employees: Staff, contractors, operational/maintenance personnel, or security guards who
are annoyed with the company.
 Organized Crime and Criminals: Criminals target information that is of value to them,
such as bank accounts, credit cards or intellectual property that can be converted into
money. Criminals will often make use of insiders to help them.
 Corporations: Corporations are engaged in offensive information warfare or competitive
intelligence. Partners and competitors come under this category.
 Human, Unintentional: Accidents, carelessness.
 Human, Intentional: Insider, outsider.
 Natural: Flood, fire, lightning, meteor, earthquakes.



 Sources of threats
A threat source is someone who wishes a compromise to occur. The term is used to distinguish
threat sources from threat agents/actors, who are those who actually carry out the attack and who
may be commissioned or persuaded by the threat source to knowingly or unknowingly carry out
the attack.

Threat communities

The following threat communities are examples of the human malicious threat landscape many
organizations face:

 Internal
o Employees
o Contractors (and vendors)
o Partners
 External
o Cyber-criminals (professional hackers)
o Spies
o Non-professional hackers
o Activists
o Nation-state intelligence services (e.g., counterparts to the CIA, etc.)
o Malware (virus/worm/etc.) authors

Threat action

Threat action is an assault on system security.


A complete security architecture deals with both intentional acts (i.e. attacks) and accidental
events. Various kinds of threat actions are defined as subentries under "threat consequence".

Threat analysis

Threat analysis is the analysis of the probability of occurrences and consequences of damaging
actions to a system. It is the basis of risk analysis.

Threat consequence is a security violation that results from a threat action. It includes
disclosure, deception, disruption, and usurpation. The following subentries describe four kinds of
threat consequences, and also list and describe the kinds of threat actions that cause each
consequence. Threat actions that are accidental events are marked by "*".

1 Unauthorized disclosure (a threat consequence)


A circumstance or event whereby an entity gains access to data for which the entity is not
authorized. (See: data confidentiality.) The following threat actions can cause unauthorized
disclosure:
Exposure: A threat action whereby sensitive data is directly released to an unauthorized entity.
This includes:
Deliberate Exposure: Intentional release of sensitive data to an unauthorized entity.
Scavenging: Searching through data residue in a system to gain unauthorized knowledge of
sensitive data.
* Human error
Human action or inaction that unintentionally results in an entity gaining unauthorized
knowledge of sensitive data
* Hardware/software error
System failure that results in an entity gaining unauthorized knowledge of sensitive data
Interception: A threat action whereby an unauthorized entity directly accesses sensitive data
travelling between authorized sources and destinations. This includes:
Theft: Gaining access to sensitive data by stealing a shipment of a physical medium, such as a
magnetic tape or disk, that holds the data.
Wiretapping (passive): Monitoring and recording data that is flowing between two points in a
communication system (See: wiretapping.)
Emanations analysis
Gaining direct knowledge of communicated data by monitoring and resolving a signal that is
emitted by a system and that contains the data but is not intended to communicate the data. (See:
Emanation.)
Inference: A threat action whereby an unauthorized entity indirectly accesses sensitive data (but
not necessarily the data contained in the communication) by reasoning from characteristics or
byproducts of communications. This includes:
Traffic analysis: Gaining knowledge of data by observing the characteristics of communications
that carry the data.
Signals analysis: Gaining indirect knowledge of communicated data by monitoring and
analyzing a signal that is emitted by a system and that contains the data but is not intended to
communicate the data. (See: Emanation.)
Intrusion: A threat action whereby an unauthorized entity gains access to sensitive data by
circumventing a system's security protections. This includes:
Trespass: Gaining unauthorized physical access to sensitive data by circumventing a system's
protections.
Penetration: Gaining unauthorized logical access to sensitive data by circumventing a system's
protections.
Reverse engineering: Acquiring sensitive data by disassembling and analyzing the design of a
system component
Cryptanalysis: Transforming encrypted data into plain text without having prior knowledge of
encryption parameters or processes.

2 Deception (a threat consequence)


A circumstance or event that may result in an authorized entity receiving false data and believing
it to be true. The following threat actions can cause deception:
Masquerade
A threat action whereby an unauthorized entity gains access to a system or performs a malicious
act by posing as an authorized entity
Spoof: Attempt by an unauthorized entity to gain access to a system by posing as an authorized
user.
Malicious logic
In context of masquerade, any hardware, firmware, or software (e.g., Trojan horse) that appears
to perform a useful or desirable function, but actually gains unauthorized access to system
resources or tricks a user into executing other malicious logic.
Falsification
A threat action whereby false data deceives an authorized entity. (See: active wiretapping.)
Substitution
Altering or replacing valid data with false data that serves to deceive an authorized entity.
Insertion
Introducing false data that serves to deceive an authorized entity
Repudiation
A threat action whereby an entity deceives another by falsely denying responsibility for an act
False denial of origin
Action whereby the originator of data denies responsibility for its generation
False denial of receipt
Action whereby the recipient of data denies receiving and possessing the data

3 Disruption (a threat consequence)


A circumstance or event that interrupts or prevents the correct operation of system services and
functions (See: denial of service.) The following threat actions can cause disruption:
Incapacitation
A threat action that prevents or interrupts system operation by disabling a system component
Malicious logic
In context of incapacitation, any hardware, firmware, or software (e.g., logic bomb) intentionally
introduced into a system to destroy system functions or resources.
Physical destruction
Deliberate destruction of a system component to interrupt or prevent system operation
* Human error
Action or inaction that unintentionally disables a system component
* Hardware or software error
Error that causes failure of a system component and leads to disruption of system operation
* Natural disaster
Any "act of God" (e.g., fire, flood, earthquake, lightning, or wind) that disables a system
component
Corruption
A threat action that undesirably alters system operation by adversely modifying system functions
or data
Tamper
In context of corruption, deliberate alteration of a system's logic, data, or control information to
interrupt or prevent correct operation of system functions.
Malicious logic
In context of corruption, any hardware, firmware, or software (e.g., a computer virus)
intentionally introduced into a system to modify system functions or data.
* Human error
Human action or inaction that unintentionally results in the alteration of system functions or data
* Hardware or software error
Error that results in the alteration of system functions or data
* Natural disaster
Any "act of God" (e.g., power surge caused by lightning) that alters system functions or data
Obstruction
A threat action that interrupts delivery of system services by hindering system operations.
Interference
Disruption of system operations by blocking communications or user data or control information
Overload
Hindrance of system operation by placing excess burden on the performance capabilities of a
system component (See: flooding.)

4 Usurpation (a threat consequence)


A circumstance or event that results in control of system services or functions by an
unauthorized entity. The following threat actions can cause usurpation:
Misappropriation
A threat action whereby an entity assumes unauthorized logical or physical control of a system
resource
Theft of service
Unauthorized use of service by an entity
Theft of functionality
Unauthorized acquisition of actual hardware, software, or firmware of a system component
Theft of data
Unauthorized acquisition and use of data
Misuse
A threat action that causes a system component to perform a function or service that is
detrimental to system security.
Tamper
In context of misuse, deliberate alteration of a system's logic, data, or control information to
cause the system to perform unauthorized functions or services.
Violation of permissions
Action by an entity that exceeds the entity's system privileges by executing an unauthorized
function.

 Types of threats
External

 Strategic: like competition and customer demand...


 Operational: Regulation, suppliers, contracts
 Financial: FX, credit
 Hazard: Natural disaster, cyber, external criminal act
 Compliance: new regulatory or legal requirements are introduced, or existing ones are
changed, exposing the organisation to a non-compliance risk if measures are not taken to
ensure compliance



Internal

 Strategic: R&D
 Operational: Systems and processes (HR, payroll)
 Financial: Liquidity, cash flow
 Hazard: Safety and security; employees and equipment
 Compliance: Actual or potential changes in the organization’s systems, processes,
suppliers, etc. may create exposure to a legal or regulatory non-compliance.

 Crimes against ICT and computer criminals


Most cybercrimes are committed by individuals or small groups. However, large organized crime
groups also take advantage of the Internet. These "professional" criminals find new ways to
commit old crimes, treating cybercrime like a business and forming global criminal
communities.

Criminal communities share strategies and tools and can combine forces to launch coordinated
attacks. They even have an underground marketplace where cyber criminals can buy and sell
stolen information and identities.

It's very difficult to crack down on cyber criminals because the Internet makes it easier for
people to do things anonymously and from any location on the globe. Many computers used in
cyber-attacks have actually been hacked and are being controlled by someone far away. Crime
laws are different in every country too, which can make things really complicated when a
criminal launches an attack in another country.

Attack Techniques

Here are a few types of attacks cyber criminals use to commit crimes. You may recognize a few
of them:

 Botnet - a network of software robots, or bots, that automatically spread malware


 Fast Flux - moving data quickly among the computers in a botnet to make it difficult to
trace the source of malware or phishing websites
 Zombie Computer - a computer that has been hacked into and is used to launch malicious
attacks or to become part of a botnet
 Social Engineering - using lies and manipulation to trick people into revealing their
personal information. Phishing is a form of social engineering
 Denial-of-Service attacks - flooding a network or server with traffic in order to make it
unavailable to its users
 Skimmers - Devices that steal credit card information when the card is swiped through
them. This can happen in stores or restaurants when the card is out of the owner's view,
and frequently the credit card information is then sold online through a criminal
community.



Some identity thieves target organizations that store people's personal information, like schools
or credit card companies. But most cyber criminals will target home computers rather than trying
to break into a big institution's network because it's much easier.

By taking measures to secure your own computer and protect your personal information, you are
not only preventing cyber criminals from stealing your identity, but also protecting others by
preventing your computer from becoming part of a botnet.

Social Engineering

Social engineering is a tactic used by cyber criminals that uses lies and manipulation to trick
people into revealing their personal information. Social engineering attacks frequently involve
very convincing fake stories to lure victims into their trap. Common social engineering attacks
include:

 Sending victims an email that claims there's a problem with their account and has a link
to a fake website. Entering their account information into the site sends it straight to the
cyber-criminal (phishing)
 Trying to convince victims to open email attachments that contain malware by claiming it
is something they might enjoy (like a game) or need (like anti-malware software)
 Pretending to be a network or account administrator and asking for the victim's password
to perform maintenance
 Claiming that the victim has won a prize but must give their credit card information in
order to receive it
 Asking for a victim's password for an Internet service and then using the same password
to access other accounts and services since many people re-use the same password
 Promising the victim they will receive millions of dollars, if they will help out the sender
by giving them money or their bank account information

Like other hacking techniques, social engineering is illegal in the United States and other
countries. To protect yourself from social engineering, don't trust any emails or messages you
receive that request any sort of personal information. Most companies will never ask you for
personal information through email. Let a trusted adult know when you receive an email or
message that might be a social engineering attack, and don't believe everything you read.

Reformed Criminals: Grey Hat Hackers

For a hacker who wants to come clean and turn away from crime, one option is to work for the
people they used to torment, by becoming a security consultant. These hackers-turned-good-guys
are called Grey Hat Hackers.

In the past, they were Black Hat Hackers, who used their computer expertise to break into
systems and steal information illegally, but now they are acting as White Hat Hackers, who
specialize in testing the security of their clients' information systems. For a fee, they will attempt
to hack into a company's network and then present the company with a report detailing the
existing security holes and how those holes can be fixed.



The advantage of this is that they can use their skills for a good cause and help stop other cyber
criminals. Keeping up with security and cyber criminals is a full-time job, and many companies
can't afford to have someone completely dedicated to it. Grey Hat Hackers have real-world
hacking experience and know more methods of infiltrating networks than most computer security
professionals. However, since they used to be criminals there's always going to be a question of
trust.

 Controlling security threats

Controls

Selecting proper controls and implementing them will initially help an organization bring risk
down to acceptable levels. Control selection should follow, and should be based on, the risk
assessment. Controls can vary in nature but fundamentally they are ways of protecting the
confidentiality, integrity or availability of information.

1. Administrative

Administrative controls (also called procedural controls) consist of approved written policies,
procedures, standards and guidelines. Administrative controls form the framework for running
the business and managing people. They inform people on how the business is to be run and how
day-to-day operations are to be conducted. Laws and regulations created by government bodies
are also a type of administrative control because they inform the business. Some industry sectors
have policies, procedures, standards and guidelines that must be followed – the Payment Card
Industry Data Security Standard (PCI DSS) required by Visa and MasterCard is such an
example. Other examples of administrative controls include the corporate security policy,
password policy, hiring policies, and disciplinary policies.

Administrative controls form the basis for the selection and implementation of logical and
physical controls. Logical and physical controls are manifestations of administrative controls.
Administrative controls are of paramount importance.

2. Logical

Logical controls (also called technical controls) use software and data to monitor and control
access to information and computing systems. For example: passwords, network and host-based
firewalls, network intrusion detection systems, access control lists, and data encryption are
logical controls.

An important logical control that is frequently overlooked is the principle of least privilege. The
principle of least privilege requires that an individual, program or system process is not granted
any more access privileges than are necessary to perform the task. A blatant example of the
failure to adhere to the principle of least privilege is logging into Windows as user Administrator
to read email and surf the web. Violations of this principle can also occur when an individual
collects additional access privileges over time. This happens when employees' job duties change,
or they are promoted to a new position, or they transfer to another department. The access
privileges required by their new duties are frequently added onto their already existing access
privileges which may no longer be necessary or appropriate.
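The privilege-creep problem described above can be avoided by always recomputing a user's permissions from their current role rather than adding to the old set. The sketch below is a hypothetical illustration; the role names and permissions are invented.

ROLE_PERMISSIONS = {
    "payroll_clerk": {"read_payroll", "submit_payroll_run"},
    "help_desk": {"reset_passwords", "read_tickets"},
}

def reassign_role(new_role):
    # Least privilege: the user's permissions are replaced, not accumulated.
    return set(ROLE_PERMISSIONS[new_role])

permissions = set(ROLE_PERMISSIONS["payroll_clerk"])   # original job
permissions = reassign_role("help_desk")               # transfer to another department
print(permissions)   # only help-desk permissions remain; payroll access has been revoked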

3. Physical

Physical controls monitor and control the environment of the work place and computing
facilities. They also monitor and control access to and from such facilities. For example: doors,
locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras,
barricades, fencing, security guards, cable locks, etc. Separating the network and workplace into
functional areas is also a physical control.

An important physical control that is frequently overlooked is the separation of duties.
Separation of duties ensures that an individual cannot complete a critical task by himself. For
example: an employee who submits a request for reimbursement should not also be able to
authorize payment or print the check. An applications programmer should not also be the server
administrator or the database administrator – these roles and responsibilities must be separated
from one another.

Defense in depth

The onion model of defense in depth

Information security must protect information throughout the life span of the information, from
the initial creation of the information on through to the final disposal of the information. The
information must be protected while in motion and while at rest. During its lifetime, information
may pass through many different information processing systems and through many different
parts of information processing systems. There are many different ways the information and
information systems can be threatened. To fully protect the information during its lifetime, each
component of the information processing system must have its own protection mechanisms. The
building up, layering on and overlapping of security measures is called defense in depth. The
strength of any system is no greater than its weakest link. Using a defense in depth strategy,
should one defensive measure fail there are other defensive measures in place that continue to
provide protection.



Recall the earlier discussion about administrative controls, logical controls, and physical
controls. The three types of controls can be used to form the basis upon which to build a defense-
in-depth strategy. With this approach, defense-in-depth can be conceptualized as three distinct
layers or planes laid one on top of the other. Additional insight into defense-in- depth can be
gained by thinking of it as forming the layers of an onion, with data at the core of the onion,
people the next outer layer of the onion, and network security, host-based security and
application security forming the outermost layers of the onion. Both perspectives are equally
valid and each provides valuable insight into the implementation of a good defense-in-depth
strategy.

Security classification for information

An important aspect of information security and risk management is recognizing the value of
information and defining appropriate procedures and protection requirements for the
information. Not all information is equal and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.

The first step in information classification is to identify a member of senior management as the
owner of the particular information to be classified. Next, develop a classification policy. The
policy should describe the different classification labels, define the criteria for information to be
assigned a particular label, and list the required security controls for each classification.

Some factors that influence which classification information should be assigned include how
much value that information has to the organization, how old the information is and whether or
not the information has become obsolete. Laws and other regulatory requirements are also
important considerations when classifying information.

The Business Model for Information Security enables security professionals to examine security
from systems perspective, creating an environment where security can be managed holistically,
allowing actual risks to be addressed.

The type of information security classification labels selected and used will depend on the nature
of the organization, with examples being:

 In the business sector, labels such as: Public, Sensitive, Private, and Confidential.
 In the government sector, labels such as: Unclassified, Unofficial, Protected,
Confidential, Secret, Top Secret and their non-English equivalents.
 In cross-sectorial formations, the Traffic Light Protocol, that consists of: White, Green,
Amber, and Red.

All employees in the organization, as well as business partners, must be trained on the
classification schema and understand the required security controls and handling procedures for
each classification. The classification assigned to a particular information asset should be
reviewed periodically to ensure the classification is still appropriate for the information and to
ensure the security controls required by the classification are in place and are being followed.
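A classification policy is often easiest to apply when the labels and their required controls are written down as a simple table. The sketch below expresses the business-sector labels mentioned above as data; the specific controls attached to each label are illustrative assumptions, not a policy taken from this text.

CLASSIFICATION_CONTROLS = {
    "Public":       {"encrypt_at_rest": False, "who_may_access": "anyone"},
    "Sensitive":    {"encrypt_at_rest": True,  "who_may_access": "employees"},
    "Private":      {"encrypt_at_rest": True,  "who_may_access": "data owner approval"},
    "Confidential": {"encrypt_at_rest": True,  "who_may_access": "named individuals only"},
}

def required_controls(label):
    # Look up the handling requirements for an asset's classification label.
    return CLASSIFICATION_CONTROLS[label]

print(required_controls("Confidential"))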



Access control

Access to protected information must be restricted to people who are authorized to access the
information. The computer programs, and in many cases the computers that process the
information, must also be authorized. This requires that mechanisms be in place to control the
access to protected information. The sophistication of the access control mechanisms should be
in parity with the value of the information being protected – the more sensitive or valuable the
information the stronger the control mechanisms need to be. The foundation on which access
control mechanisms are built start with identification and authentication.

Access control is generally considered in three steps: Identification, Authentication, and


Authorization.

Identification

Identification is an assertion of who someone is or what something is. If a person makes the
statement "Hello, my name is John Doe" they are making a claim of who they are. However,
their claim may or may not be true. Before John Doe can be granted access to protected
information it will be necessary to verify that the person claiming to be John Doe really is John
Doe. Typically the claim is in the form of a username. By entering that username you are
claiming "I am the person the username belongs to".

Authentication

Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to
make a withdrawal, he tells the bank teller he is John Doe—a claim of identity. The bank teller
asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the
license to make sure it has John Doe printed on it and compares the photograph on the license
against the person claiming to be John Doe. If the photo and name match the person, then the
teller has authenticated that John Doe is who he claimed to be. Similarly by entering the correct
password, the user is providing evidence that they are the person the username belongs to.

There are three different types of information that can be used for authentication:

 Something you know: things such as a PIN, a password, or your mother's maiden name.
 Something you have: a driver's license or a magnetic swipe card.
 Something you are: biometrics, including palm prints, fingerprints, voice prints and retina
(eye) scans.

Strong authentication requires providing more than one type of authentication information (two-
factor authentication). The username is the most common form of identification on computer
systems today and the password is the most common form of authentication. Usernames and
passwords have served their purpose but in our modern world they are no longer adequate.
Usernames and passwords are slowly being replaced with more sophisticated authentication
mechanisms.
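As an illustration of verifying "something you know", the sketch below stores a salted, slow hash of the password (PBKDF2 from Python's standard library is used here as one reasonable choice) rather than the password itself, and recomputes it at login. The password strings are examples only.

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                 # a random salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True
print(verify_password("guess123", salt, stored))                       # False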



Authorization

After a person, program or computer has successfully been identified and authenticated then it
must be determined what informational resources they are permitted to access and what actions
they will be allowed to perform (run, view, create, delete, or change). This is called
authorization. Authorization to access information and other computing services begins with
administrative policies and procedures. The policies prescribe what information and computing
services can be accessed, by whom, and under what conditions. The access control mechanisms
are then configured to enforce these policies. Different computing systems are equipped with
different kinds of access control mechanisms—some may even offer a choice of different access
control mechanisms. The access control mechanism a system offers will be based upon one of
three approaches to access control or it may be derived from a combination of the three
approaches.

The non-discretionary approach consolidates all access control under a centralized


administration. The access to information and other resources is usually based on the individuals
function (role) in the organization or the tasks the individual must perform. The discretionary
approach gives the creator or owner of the information resource the ability to control access to
those resources. In the Mandatory access control approach, access is granted or denied based
upon the security classification assigned to the information resource.
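The non-discretionary (role-based) approach above can be pictured with a few lines of code: permissions hang off roles, and an authorization check looks up the authenticated user's role. The users, roles and actions below are invented for illustration.

ROLES = {
    "teller": {"view_account", "post_deposit"},
    "auditor": {"view_account", "view_audit_trail"},
}
USERS = {"jdoe": "teller", "asmith": "auditor"}

def is_authorized(username, action):
    # Authorization happens only after identification and authentication.
    role = USERS.get(username)
    return role is not None and action in ROLES[role]

print(is_authorized("jdoe", "post_deposit"))       # True
print(is_authorized("jdoe", "view_audit_trail"))   # False: not part of the teller role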

Cryptography

Information security uses cryptography to transform usable information into a form that renders
it unusable by anyone other than an authorized user; this process is called encryption.
Information that has been encrypted (rendered unusable) can be transformed back into its
original usable form by an authorized user, who possesses the cryptographic key, through the
process of decryption. Cryptography is used in information security to protect information from
unauthorized or accidental disclosure while the information is in transit (either electronically or
physically) and while information is in storage.

Cryptography provides information security with other useful applications as well including
improved authentication methods, message digests, digital signatures, non-repudiation, and
encrypted network communications. Older less secure applications such as telnet and ftp are
slowly being replaced with more secure applications such as ssh that use encrypted network
communications. Wireless communications can be encrypted using protocols such as
WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn)
are secured using AES for encryption and X.1035 for authentication and key exchange. Software
applications such as GnuPG or PGP can be used to encrypt data files and Email.
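To make the encrypt/decrypt cycle described above concrete, here is a minimal Python sketch (purely illustrative; it assumes the third-party "cryptography" package, and the key handling is deliberately simplified):

    # pip install cryptography  (assumed third-party package)
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # the cryptographic key that must be protected
    cipher = Fernet(key)

    plaintext = b"Confidential payroll figures"
    ciphertext = cipher.encrypt(plaintext)   # rendered unusable to anyone without the key
    recovered = cipher.decrypt(ciphertext)   # only a holder of the key can do this
    assert recovered == plaintext

In practice the key would be generated, stored and exchanged under the key-management controls discussed below.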

Cryptography can introduce security problems when it is not implemented correctly.


Cryptographic solutions need to be implemented using industry accepted solutions that have
undergone rigorous peer review by independent experts in cryptography. The length and strength
of the encryption key is also an important consideration. A key that is weak or too short will
produce weak encryption. The keys used for encryption and decryption must be protected with
the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction and they must be available when needed. Public key
infrastructure (PKI) solutions address many of the problems that surround key management.

Access controls are security features that control how users and systems communicate and
interact with other systems and resources.
Access is the flow of information between a subject and an object.
A subject is an active entity that requests access to an object or the data within an object. E.g.:
user, program, process etc.
An object is a passive entity that contains the information. E.g.: Computer, Database, File,
Program etc.
Access controls give organizations the ability to control, restrict, monitor, and protect resource availability, integrity and confidentiality.

Access Control Challenges

 Various types of users need different levels of access - Internal users, contractors,
outsiders, partners, etc.
 Resources have different classification levels- Confidential, internal use only, private,
public, etc.
 Diverse identity data must be kept on different types of users - Credentials, personal data,
contact information, work-related data, digital certificates, cognitive passwords, etc.
 The corporate environment is continually changing- Business environment needs,
resource access needs, employee roles, actual employees, etc.

Access Control Principles

 Principle of Least Privilege: States that if nothing has been specifically configured for an individual or the groups he/she belongs to, the user should not be able to access that resource, i.e. default no access.
 Separation of Duties: Separating any conflicting areas of responsibility so as to reduce opportunities for unauthorized or unintentional modification or misuse of organizational assets and/or information.
 Need to Know: It is based on the concept that individuals should be given access only to the information that they absolutely require in order to perform their job duties.

Access Control Criteria

The criteria for providing access to an object include

 Roles
 Groups
 Location
 Time
 Transaction Type



Access Control Practices

 Deny access to systems by undefined users or anonymous accounts.


 Limit and monitor the usage of administrator and other powerful accounts.
 Suspend or delay access capability after a specific number of unsuccessful logon
attempts.
 Remove obsolete user accounts as soon as the user leaves the company.
 Suspend inactive accounts after 30 to 60 days.
 Enforce strict access criteria.
 Enforce the need-to-know and least-privilege practices.
 Disable unneeded system features, services, and ports.
 Replace default password settings on accounts.
 Limit and monitor global access rules.
 Ensure that logon IDs are non-descriptive of job function.
 Remove redundant resource rules from accounts and group memberships.
 Remove redundant user IDs, accounts, and role-based accounts from resource access
lists.
 Enforce password rotation.
 Enforce password requirements (length, contents, lifetime, distribution, storage, and
transmission).
 Audit system and user events and actions and review reports periodically.
 Protect audit logs.

Security Principles
 Fundamental Principles (CIA)
 Identification
 Authentication
 Authorization
 Non Repudiation

Identification Authentication and Authorization

Identification describes a method of ensuring that a subject is the entity it claims to be. E.g.: A user name or an account number.

Authentication is the method of proving the subject's identity. E.g.: Password, Passphrase, and PIN.

Authorization is the method of controlling the access of objects by the subject. E.g.: A user
cannot delete a particular file after logging into the system

Note: There must be a three step process of Identification, Authentication and Authorization in
order for a subject to access an object
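As a hedged illustration of this three-step process, the following Python sketch walks a request through identification, authentication and authorization in order (the user store, salt and permission table are invented for the example and are not a production design):

    import hashlib, hmac

    # Hypothetical in-memory stores for illustration only.
    USERS = {"jdoe": hashlib.sha256(b"salt" + b"s3cret").hexdigest()}      # identity -> credential
    PERMISSIONS = {"jdoe": {"report.txt": {"read"}}}                       # authorization rules

    def access(username, password, obj, action):
        # 1. Identification: is this a known subject?
        if username not in USERS:
            return False
        # 2. Authentication: does the password prove the claimed identity?
        supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
        if not hmac.compare_digest(supplied, USERS[username]):
            return False
        # 3. Authorization: may this subject perform this action on this object?
        return action in PERMISSIONS.get(username, {}).get(obj, set())

    print(access("jdoe", "s3cret", "report.txt", "read"))    # True
    print(access("jdoe", "s3cret", "report.txt", "delete"))  # False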



Identification and Authentication

Identification Component Requirements

When issuing identification values to users or subjects, ensure that:

 Each value is unique, for user accountability
 A standard naming scheme is followed
 The values are non-descriptive of the user's position or task
 The values are not shared between users.

Authentication Factors

There are 3 general factors for authenticating a subject.

 Something a person knows- E.g.: passwords, PIN- least expensive, least secure
 Something a person has – E.g.: Access Card, key- expensive, secure
 Something a person is- E.g.: Biometrics- most expensive, most secure

Note: For strong authentication to be in place, it must include two out of the three authentication factors - also referred to as two-factor authentication.

Authentication Methods

Biometrics

 Verifies an individual’s identity by analyzing a unique personal attribute or behavior


 It is the most effective and accurate method for verifying identification.
 It is the most expensive authentication mechanism
 Types of Biometric Systems
o Finger Print- based on the ridge endings and bifurcations exhibited by the friction ridges and other minutiae of the finger
o Palm Scan- based on the creases, ridges, and grooves that are unique to each individual's palm
o Hand Geometry- are based on the shape (length, width) of a person’s hand and
fingers
o Retina Scan- is based on the blood vessel pattern of the retina on the backside of
the eyeball.
o Iris Scan- is based on the colored portion of the eye that surrounds the pupil. The
iris has unique patterns, rifts, colors, rings, coronas and furrows.
o Signature Dynamics- is based on electrical signals generated due to physical
motion of the hand during signing a document
o Keyboard Dynamics- is based on electrical signals generated while the user types
in the keys (passphrase) on the keyboard.
o Voice Print- based on human voice



o Facial Scan- based on the different bone structures, nose ridges, eye widths,
forehead sizes and chin shapes of the face.
o Hand Topography- based on the different peaks, valleys, overall shape and curvature of the hand.
 Types of Biometric Errors
o Type I Error: When a biometric system rejects an authorized individual (false rejection rate)
o Type II Error: When a biometric system accepts impostors who should be rejected (false acceptance rate)
o Crossover Error Rate (CER): The point at which the false rejection rate equals the false acceptance rate. It is also called the Equal Error Rate (EER). A small numeric sketch follows this list.
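A small numeric sketch of these error rates, using made-up accept/reject outcomes (1 means the system accepted the attempt, 0 means it rejected it):

    # Illustrative data only.
    genuine_attempts  = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # authorized users
    impostor_attempts = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]   # impostors

    frr = genuine_attempts.count(0) / len(genuine_attempts)    # Type I: false rejection rate
    far = impostor_attempts.count(1) / len(impostor_attempts)  # Type II: false acceptance rate
    print(f"FRR = {frr:.0%}, FAR = {far:.0%}")                 # FRR = 20%, FAR = 20%

Because the two rates are equal here, this hypothetical system is operating at its crossover (equal) error rate.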

Passwords

 It is the most common form of system identification and authentication mechanism


 A password is a protected string of characters that is used to authenticate an individual
 Password Management
o Passwords should be properly guarded, updated, and kept secret to provide effective security
o Password generators can be used to generate passwords that are uncomplicated, pronounceable, non-dictionary words.
o If the user chooses his own passwords, the system should enforce certain password requirements (such as use of special characters, a minimum number of characters, case sensitivity, etc.)
 Techniques for Passwords Attack
o Electronic monitoring- Listening to network traffic to capture information,
especially when a user is sending her password to an authentication server. The
password can be copied and reused by the attacker at another time, which is called
a replay attack.
o Access the password file- Usually done on the authentication server. The
password file contains many users’ passwords and, if compromised, can be the
source of a lot of damage. This file should be protected with access control
mechanisms and encryption.
o Brute force attacks- Performed with tools that cycle through many possible character, number, and symbol combinations to uncover a password.
o Dictionary attacks- Files of thousands of words are used to compare to the user's password until a match is found.
o Social engineering- An attacker falsely convinces an individual that she has the necessary authorization to access specific resources.
 Password checkers can be used to check the strength of the password by trying to break
into the system
 Passwords should be encrypted and hashed (a brief sketch follows this list)
 Password aging should be implemented
 The number of logon attempts should be limited
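A brief sketch of the "encrypted and hashed" point above: rather than storing the password itself, the system stores a salted hash and recomputes it at logon. This is only an illustration; the iteration count and algorithm choice are assumptions, not recommendations.

    import hashlib, hmac, os

    def hash_password(password):
        salt = os.urandom(16)                  # unique salt per password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest                    # store these; never store the plain password

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("CorrectHorse!42")
    print(verify_password("CorrectHorse!42", salt, digest))  # True
    print(verify_password("wrong-guess", salt, digest))      # False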



Cognitive Passwords

 Cognitive passwords are facts or opinion-based information used to verify an individual's identity (e.g.: mother's maiden name)
 This is best suited for helpdesk services and services that are used only occasionally.

One-Time or Dynamic Passwords

 It is a token-based system used for authentication purposes in which the password is used only once
 It is used in environments that require a higher level of security than static passwords provide
 Types of token generators
o Synchronous (e.g.: SecurID) - A synchronous token device/generator synchronizes with the authentication service by one of two means.
 Time Based: In this method the token device and the authentication service must hold the same time within their internal clocks. The time value on the token device and a secret key are used to create a one-time password. The server decrypts this password and compares it to the value that is expected.
 Counter Based: In this method the user initiates the logon sequence on the computer and pushes a button on the token device. This causes the token device and the authentication service to advance to the next authentication value. This value and a base secret are hashed and displayed to the user. The user enters this resulting value along with a user ID to be authenticated.
o Asynchronous: A token device that is using an asynchronous token-generating
method uses a challenge/response scheme to authenticate the user. In this
situation, the authentication server sends the user a challenge, a random value also
called a nonce. The user enters this random value into the token device, which
encrypts it and returns a value that the user uses as a one-time password. The user
sends this value, along with a username, to the authentication server. If the
authentication server can decrypt the value and it is the same challenge value that
was sent earlier, the user is authenticated
 Example: SecurID
o It is one of the most widely used time-based tokens, from RSA Security
o It uses time-based synchronous two-factor authentication (a simplified sketch of the time-based idea follows)
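The following Python sketch shows the general idea behind a time-based synchronous token: both sides derive the same short value from a shared secret and the current clock. It is an illustration of the concept only, not the actual SecurID or TOTP algorithm.

    import hmac, hashlib, time

    SHARED_SECRET = b"token-and-server-share-this"   # provisioned on both sides (illustrative)

    def one_time_value(secret, timestep=30):
        counter = int(time.time() // timestep)       # same counter on token and server while clocks agree
        digest = hmac.new(secret, str(counter).encode(), hashlib.sha1).hexdigest()
        return digest[:6]                            # a short value the user can type in

    token_display  = one_time_value(SHARED_SECRET)   # what the token device shows
    server_expects = one_time_value(SHARED_SECRET)   # what the authentication server computes
    print(token_display == server_expects)           # True while the clocks are synchronized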

Cryptographic Keys

 Uses private keys and Digital Signatures


 Provides a higher level of security than passwords.



Passphrase

 A passphrase is a sequence of characters that is longer than a password and, in some cases, takes the place of a password during an authentication process.
 The application transforms the passphrase into a virtual password, in the format required by the application
 It is more secure than passwords

Memory Cards

 Holds information but cannot process it


 More secure than passwords but costly
 E.g.: Swipe cards, ATM cards

Smart Cards

 Holds information and has the capability to process information, and can provide two-factor authentication (knows and has)
 Categories of Smart Cards
o Contact
o Contactless
 Hybrid- has 2 chips and supports both contact and contactless interfaces
 Combi- has a single microprocessor that can communicate with both a contact and a contactless reader.
 More expensive and tamperproof than memory cards
 Types of smartcard attacks
o Fault generation: Introducing computational errors into the smart card with the goal of uncovering the encryption keys that are being used and stored on the card
o Side Channel Attacks: These are non-intrusive attacks and are used to uncover sensitive information about how a component works without trying to compromise any type of flaw or weakness. The following are some examples:
 Differential Power Analysis: Examining the power emissions that are released during processing
 Electromagnetic Analysis: Examining the frequencies that are emitted
o Timing: How long a specific process takes to complete
o Software Attacks: Inputting instructions into the card that will allow for the
attacker to extract account information. The following are some of the examples
 Microprobing: Uses needles and ultrasonic vibrations to remove the outer protective material on the card's circuits, making it easy to tap the card's ROM chip



Access Control Categories

Access controls can be implemented at various layers of a network and of individual systems. The access controls can be classified into three layers or categories, each category having different access control mechanisms that can be carried out manually or automatically.

 Administrative Controls
 Physical Controls
 Technical or Logical Controls

Each category of access control has several components that fall within it, as described

1. Administrative

The administrative controls are defined by the top management in an organization.

Administrative Control Components

Policy and Procedures

 A security policy is a high-level plan that states management’s intent pertaining to how
security should be practiced within an organization, what actions are acceptable, and
what level of risk the company is willing to accept. This policy is derived from the laws,
regulations, and business objectives that shape and restrict the company.
 The security policy provides direction for each employee and department regarding how
security should be implemented and followed, and the repercussions for noncompliance.
Procedures, guidelines, and standards provide the details that support and enforce the
company’s security policy.

Personnel Controls

 Personnel controls indicate how employees are expected to interact with security
mechanisms, and address noncompliance issues pertaining to these expectations.
 Change of Status: These controls indicate what security actions should be taken when an
employee is hired, terminated, suspended, moved into another department, or promoted.
 Separation of duties: The separation of duties should be enforced so that no one
individual can carry out a critical task alone that could prove to be detrimental to the
company.

Example: A bank teller who has to get supervisory approval to cash checks over $2000 is an
example of separation of duties. For a security breach to occur, it would require collusion, which
means that more than one person would need to commit fraud, and their efforts would need to be
concerted. The use of separation of duties drastically reduces the probability of security breaches
and fraud.



 Rotation of duties means that people rotate jobs so that they know how to fulfill the
obligations of more than one position. Another benefit of rotation of duties is that if an
individual attempts to commit fraud within his position, detection is more likely to
happen if there is another employee who knows what tasks should be performed in that
position and how they should be performed.

Supervisory Structure

 Management must construct a supervisory structure which requires management members to be responsible for employees and to take a vested interest in their activities. If an employee is caught hacking into a server that holds customer credit card information, that employee and her supervisor will face the consequences.

Security-Awareness Training

 This control helps users/employees understand how to properly access resources, why access controls are in place and the ramifications of not using the access controls properly.

Testing

 This control states that all security controls, mechanisms, and procedures are tested on a
periodic basis to ensure that they properly support the security policy, goals, and
objectives set for them.
 The testing can be a drill to test reactions to a physical attack or disruption of the
network, a penetration test of the firewalls and perimeter network to uncover
vulnerabilities, a query to employees to gauge their knowledge, or a review of the
procedures and standards to make sure they still align with business or technology
changes that have been implemented.

Examples of Administrative Controls

 Security policy
 Monitoring and supervising
 Separation of duties
 Job rotation
 Information classification
 Personnel procedures
 Investigations
 Testing
 Security-awareness and training

2. Physical

Physical controls support and work with administrative and technical (logical) controls to supply
the right degree of access control.



Physical Control Components

Network Segregation

 Network segregation can be carried out through physical and logical means. A section of
the network may contain web servers, routers, and switches, and yet another network
portion may have employee workstations.
 Each area would have the necessary physical controls to ensure that only the permitted
individuals have access into and out of those sections.

Perimeter Security

 The implementation of perimeter security depends upon the company and the security
requirements of that environment.
 One environment may require employees to be authorized by a security guard by
showing a security badge that contains picture identification before being allowed to
enter a section. Another environment may require no authentication process and let
anyone and everyone into different sections.
 Perimeter security can also encompass closed-circuit TVs that scan the parking lots and
waiting areas, fences surrounding a building, lighting of walkways and parking areas,
motion detectors, sensors, alarms, and the location and visual appearance of a building.
These are examples of perimeter security mechanisms that provide physical access
control by providing protection for individuals, facilities, and the components within
facilities.

Computer Controls

 Each computer can have physical controls installed and configured, such as locks on the
cover so that the internal parts cannot be stolen, the removal of the floppy and CD-ROM
drives to prevent copying of confidential information, or implementation of a protection
device that reduces the electrical emissions to thwart attempts to gather information
through airwaves.

Work Area Separation

 Some environments might dictate that only particular individuals can access certain areas
of the facility.

Data Backups

 Backing up data is a physical control to ensure that information can still be accessed after
an emergency or a disruption of the network or a system.



Cabling

 There are different types of cabling that can be used to carry information throughout a
network.
 Some cable types have sheaths that protect the data from being affected by the electrical
interference of other devices that emit electrical signals.
 Some types of cable have protection material around each individual wire to ensure that
there is no crosstalk between the different wires.
 All cables need to be routed throughout the facility in a manner that keeps them out of people's way and does not expose them to any danger of being cut, burnt, crimped, or eavesdropped upon.

Control Zone

 It is a specific area that surrounds and protects network devices that emit electrical
signals. These electrical signals can travel a certain distance and can be contained by a
specially made material, which is used to construct the control zone.
 The control zone is used to resist penetration attempts and disallow sensitive information
to “escape” through the airwaves.
 A control zone is used to ensure that confidential information is contained and to hinder
intruders from accessing information through the airwaves.
 Companies that have very sensitive information would likely protect that information by
creating control zones around the systems that are processing that information

Examples of Physical Control

 Fences
 Locks
 Badge system
 Security guard
 Biometric system
 Mantrap doors
 Lighting
 Motion detectors
 Closed-circuit TVs
 Alarms
 Backups
 Safe storage area for backups

3. Technical

Technical controls, also called logical controls, are the software tools used to restrict subjects' access to objects. They can be core OS components, add-on security packages, applications, network hardware devices, protocols, encryption mechanisms, and access control matrices.



They protect the integrity and availability of resources by limiting the number of subjects that
can access them and protect the confidentiality of resources by preventing disclosure to
unauthorized subjects.

Technical Control Components

System Access

 In this type, control of access to resources is based on the sensitivity of the data, the clearance level of users, and users' rights and permissions. Technical controls for system access include username/password combinations, Kerberos implementations, biometrics, PKI, RADIUS, TACACS or authentication using smart cards.

Network Access

 This control defines the access control mechanism to access the different network
resources like the routers, switches, firewalls, bridges etc.

Encryption and protocols

 These controls are used to protect information as it passes through a network and resides on computers. They preserve the confidentiality and integrity of data and enforce specific paths for communication to take place.

Auditing

 These controls track activity within a network, on a network device or on a specific computer. They help to point out weaknesses in other technical controls so that the necessary changes can be made.

Network Architecture

 This control defines the logical and physical layout of the network, and also the access control mechanisms between different network segments.

Examples of Technical Controls

 ACLs
 Routers
 Encryption
 Audit logs
 IDS
 Antivirus software
 Firewalls
 Smart cards
 Dial-up call-back systems



 Alarms and alerts

Access Control Types


Each of the access control categories – administrative, physical and technical – works at a different level of granularity and performs different functions based on its type.

The different types of access control are

 Preventative- Avoid undesirable events from occurring


 Detective- Identify undesirable events that have occurred
 Corrective- Correct undesirable events that have occurred
 Deterrent- Discourage security violations
 Recovery- Restore resources and capabilities
 Compensative- Provide alternatives to other controls

Access Control Threats

Denial of Service(DoS/DDoS)

Overview

A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to, motives for, and targets of a DoS attack may vary, it generally consists of the concerted, malevolent efforts of a person or persons to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely.

The purpose of DoS attacks is to force the targeted computer(s) to reset, or consume its resources
so that it can no longer provide its intended service

Types of DoS Attacks

A DoS attack can be perpetrated in a number of ways. There are five basic types of attack:

 Consumption of computational resources, such as bandwidth, disk space, or CPU time;


 Disruption of configuration information, such as routing information;
 Disruption of state information, such as unsolicited resetting of TCP sessions;
 Disruption of physical network components.
 Obstructing the communication media between the intended users and the victim so that
they can no longer communicate adequately.



Countermeasures

Unfortunately, there are no effective ways to prevent being the victim of a DoS or DDoS attack,
but there are steps you can take to reduce the likelihood that an attacker will use your computer
to attack other computers:

 Install and maintain anti-virus software.


 Install a firewall, and configure it to restrict traffic coming into and leaving your
computer.
 Follow good security practices for distributing your email address. Applying email filters may help you manage unwanted traffic.

Buffer Overflows

Overview

A buffer overflow is an anomalous condition where a process attempts to store data beyond the
boundaries of a fixed-length buffer. The result is that the extra data overwrites adjacent memory
locations. The overwritten data may include other buffers, variables and program flow data and
may cause a process to crash or produce incorrect results. They can be triggered by inputs,
specifically designed to execute malicious code or to make the program operate in an unintended
way. As such, buffer overflows cause many software vulnerabilities and form the basis of many
exploits.

Buffer Overflow Techniques

 Stack Buffer Overflow


o A stack buffer overflow occurs when a program writes to a memory address on
the program's call stack outside of the intended data structure; usually a fixed
length buffer.
o Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than was actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack and, in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly.
o A technically inclined and malicious user may exploit stack-based buffer
overflows to manipulate the program in one of several ways:
 By overwriting a local variable that is near the buffer in memory on the
stack to change the behaviour of the program which may benefit the
attacker.
 By overwriting the return address in a stack frame. Once the function
returns, execution will resume at the return address as specified by the
attacker, usually a user input filled buffer.
 By overwriting a function pointer or exception handler, which is subsequently executed.



 Heap Buffer Overflow
o A heap overflow is another type of buffer overflow that occurs in the heap data
area. Memory on the heap is dynamically allocated by the application at run-time
and typically contains program data.
o Exploitation goes as follows: If an application copies data without first checking
to see if it fits into the chunk (blocks of data in the heap), the attacker could
supply the application with a piece of data that is too large, overwriting heap
management information (metadata) of the next chunk. This allows an attacker to
overwrite an arbitrary memory location with four bytes of data. In most
environments, this may allow the attacker control over the program execution.

Countermeasure

 Choice of programming language


 Use of safe libraries
 Stack-smashing protection, which refers to various techniques for detecting buffer overflows on stack-allocated variables. The most common implementations are StackGuard and SSP.
 Executable space protection which is the marking of memory regions as non-executable,
such that an attempt to execute machine code in these regions will cause an exception. It
makes use of hardware features such as the NX bit (Non Execute bit).
 Address space layout randomization: A technique which involves arranging the positions
of key data areas, usually including the base of the executable and position of libraries,
heap, and stack, randomly in a process' address space.
 Deep packet inspection: It is a form of computer network packet filtering that examines the data and/or header part of a packet as it passes an inspection point, searching for non-protocol compliance, viruses, spam, intrusions or predefined criteria to decide if the packet can pass or if it needs to be routed to a different destination, or for the purpose of collecting statistical information. It is also called Content Inspection or Content Processing.

Malicious Software, Password Crackers, Spoofing/Masquerading

Overview

A spoofing attack is a situation in which one person or program successfully masquerades as another by falsifying data and thereby gaining an illegitimate advantage.

 Popular Spoofing Techniques


o Man-in-the-middle attack (MITM): An attack in which an attacker is able to read, insert and modify at will messages between two parties without either party knowing that the link between them has been compromised. The attacker must be able to observe and intercept messages going between the two victims.
o IP address Spoofing: refers to the creation of IP packets with a forged (spoofed)
source IP address with the purpose of concealing the identity of the sender or
impersonating another computing system.
o URL spoofing: A Spoofed URL describes one website that poses as another



o Phishing: An attempt to criminally and fraudulently acquire sensitive information,
such as usernames, passwords and credit card details, by masquerading as a
trustworthy entity in an electronic communication.
o Referrer spoofing: It is the sending of incorrect referrer information along with an HTTP request, sometimes with the aim of gaining unauthorized access to a web site. It can also be used because of privacy concerns, as an alternative to sending no referrer at all.
o Spoofing of file-sharing networks: Polluting the file-sharing networks where record labels share files that are mislabeled, distorted or empty to discourage downloading from these sources.
o Caller ID spoofing: This allows callers to lie about their identity and present false names and numbers, which could of course be used as a tool to defraud or harass.
o E-mail address spoofing: A technique commonly used for spam e-mail and phishing to hide the origin of an e-mail message by changing certain properties of the e-mail, such as the From, Return-Path and Reply-To fields.
o Login spoofing: A technique used to obtain a user's password. The user is
presented with an ordinary looking login prompt for username and password,
which is actually a malicious program, usually called a Trojan horse under the
control of the attacker. When the username and password are entered, this
information is logged or in some way passed along to the attacker, breaching
security.

Countermeasures

 Be skeptical of e-mails indicating that you need to make changes to your accounts, or warnings indicating that accounts will be terminated unless you carry out some type of activity online.
 Call the legitimate company to find out if this is a fraudulent message.
 Review the address bar to see if the domain name is correct.
 When submitting any type of financial information or credential data, an SSL connection should be set up, which is indicated by https:// in the address bar and a closed-padlock icon at the bottom-right corner of the browser.
 Do not click on an HTML link within an e-mail. Type the URL out manually instead.
 Do not accept e-mail in HTML format.

Emanations

Overview

All electronic devices emit electrical signals. These signals can hold important information, and
if an attacker buys the right equipment and positions himself in the right place, he could capture
this information from the airwaves and access data transmissions as if he had a tap directly on
the network wire.



Countermeasure

 Tempest: Tempest is the name of a program, and now a standardized technology, that suppresses signal emanations with shielding material. Vendors who manufacture this type of equipment must be certified to this standard. In devices that are Tempest rated, other components are also modified, especially the power supply, to help reduce the amount of electricity that is used; normal devices, by contrast, have just an outer metal coating, referred to as a Faraday cage. This type of protection is usually needed only in military institutions, although other highly secured environments do utilize this type of safeguard.
o Tempest Technologies: Tempest technology is complex, cumbersome, and
expensive, and therefore only used in highly sensitive areas that really need this
high level of protection. Two alternatives to Tempest exist
 White Noise: White noise is a uniform spectrum of random electrical
signals. It is distributed over the full spectrum so that the bandwidth is
constant and an intruder is not able to decipher real information from
random noise or random information.
 Control Zone: Some facilities use material in their walls to contain
electrical signals. This prevents intruders from being able to access
information that is emitted via electrical signals from network devices.
This control zone creates a type of security perimeter and is constructed to
protect against unauthorized access to data or compromise of sensitive
information.

Shoulder Surfing

Overview

Shoulder surfing refers to using direct observation techniques, such as looking over someone's
shoulder, to get information. Shoulder surfing is particularly effective in crowded places because
it's relatively easy to observe someone as they:

o Fill out a form


o Enter their PIN at an automated teller machine or a POS Terminal
o Use a calling card at a public pay phone
o Enter passwords at a cybercafé, public and university libraries, or airport kiosks.
o Enter a digit code for a rented locker in a public place such as a swimming pool or
airport.
 Shoulder surfing can also be done at a distance using binoculars or other vision-enhancing devices. Inexpensive, miniature closed-circuit television cameras can be concealed in ceilings, walls or fixtures to observe data entry. To prevent shoulder surfing, it is advised to shield paperwork or the keypad from view by using one's body or cupping one's hand.
 Recent automated teller machines now have a sophisticated display which discourages
shoulder surfers. It grows darker beyond a certain viewing angle, and the only way to tell
what is displayed on the screen is to stand directly in front of it.
 Certain models of credit card readers have the keypad recessed, and employ a rubber
shield that surrounds a significant part of the opening towards the keypad. This makes shoulder-surfing significantly harder, as seeing the keypad is limited to a much more direct angle than previous models. Taken further, some keypads alter the physical
location of the keys after each key-press. Also, security cameras are not allowed to be
placed directly above an ATM.

Object Reuse

Overview

Object reuse issues pertain to reassigning to a subject media that previously contained one or
more objects.

The sensitive information that may be left by a process should be securely cleared before
allowing another process the opportunity to access the object. This ensures that information not
intended for this individual or any other subject is not disclosed.

For media that holds confidential information, more extreme methods should be taken to ensure
that the files are actually gone, not just their pointers.

Countermeasures

 Sensitive data should be classified by the data owners.


 How the data is stored and accessed should also be strictly controlled and audited by
software controls.
 Before allowing one subject to use media that was previously used, the media should be
erased or degaussed. If media holds sensitive information and cannot be purged, there
should be steps on how to properly destroy it so that there is no way for others to obtain
this information.

Data Remanence

Overview

Data remanence is the residual representation of data that has in some way been nominally erased or removed. This residue may be due to data being left intact by a nominal delete operation, or to the physical properties of the storage medium.

Data remanence may make inadvertent disclosure of sensitive information possible, should the
storage media be released into an uncontrolled environment.



Countermeasures

Classes of Countermeasures

o Clearing
 Clearing is the removal of sensitive data from storage devices in such a
way that there is assurance, proportional to the sensitivity of the data, that
the data may not be reconstructed using normal system functions. The data
may still be recoverable, but not without unusual effort.
 Clearing is typically considered an administrative protection against
accidental disclosure within an organization. For example, before a floppy
disk is re-used within an organization, its contents may be cleared to
prevent their accidental disclosure to the next user.
o Purging
 Purging or sanitizing is the removal of sensitive data from a system or
storage device with the intent that the data cannot be reconstructed by any
known technique.
 Purging is generally done before releasing media outside of control, such
as before discarding old media, or moving media to a computer with
different security requirements.
 Methods to Countermeasure
o Overwriting
 A common method used to counter data remanence is to overwrite the storage medium with new data. This is often called wiping or shredding a file or disk. Because such methods can often be implemented in software alone, and may be able to selectively target only part of a medium, it is a popular, low-cost option for some applications (a small sketch follows at the end of this list).
 The simplest overwrite technique writes the same data everywhere -- often
just a pattern of all zeroes. At a minimum, this will prevent the data from
being retrieved simply by reading from the medium again, and thus is
often used for clearing.
o Degaussing
 Degaussing is the removal or reduction of a magnetic field. Applied to
magnetic media, degaussing may purge an entire media element quickly
and effectively. A device, called a degausser, designed for the media being
erased, is used.
 Degaussing often renders hard disks inoperable, as it erases low-level
formatting which is only done at the factory, during manufacture.
Degaussed floppy disks can generally be reformatted and reused.
o Encryption
 Encrypting data before it is stored on the medium may mitigate concerns
about data remanence. If the decryption key is strong and carefully
controlled (i.e., not itself subject to data remanence), it may effectively
make any data on the medium unrecoverable. Even if the key is stored on
the medium, it may prove easier or quicker to overwrite just the key, vs
the entire disk.



 Encryption may be done on a file-by-file basis, or on the whole disk.
o Physical destruction
 Physical destruction of the data storage medium is generally considered
the most certain way to counter data remanence, although also at the
highest cost. Not only is the process generally time-consuming and
cumbersome, it obviously renders the media unusable. Further, with the
high recording densities of modern media, even a small media fragment
may contain large amounts of data.
 Specific destruction techniques include:
 Physically breaking the media apart, by grinding, shredding, etc.
 Incinerating
 Phase transition (i.e., liquefaction or vaporization of a solid disk)
 Application of corrosive chemicals, such as acids, to recording surfaces
 For magnetic media, raising its temperature above the Curie point
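As promised above, a very small Python sketch of the overwriting (clearing) idea: the file's contents are overwritten with zeroes before the file is removed. A single software pass like this is a clearing measure against casual recovery, not a substitute for purging or physical destruction.

    import os

    def clear_file(path, passes=1):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(b"\x00" * size)   # overwrite the contents with zeroes
                f.flush()
                os.fsync(f.fileno())      # push the overwrite to the storage device
        os.remove(path)                   # remove the (already cleared) file

    # clear_file("old_customer_list.csv")   # hypothetical usage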

Backdoor/Trapdoor

Overview

A backdoor is a malicious computer program or other means that provides the attacker with unauthorized remote access to a compromised system by exploiting vulnerabilities of installed software and bypassing normal authentication.

A backdoor works in the background and hides from the user. It is very similar to a virus and is therefore quite difficult to detect and completely disable.

A backdoor is one of the most dangerous parasite types, as it allows a malicious person to
perform any possible actions on a compromised computer. The attacker can use a backdoor to

o Spy on a user,
o Manage files,
o Install additional software or dangerous threats,
o Control the entire system including any present applications or hardware devices,
o Shutdown or reboot a computer or
o Attack other hosts.

Often a backdoor has additional harmful capabilities like keystroke logging, screenshot capture, file infection, even total system destruction or another payload. Such a parasite is a combination of different privacy and security threats, which works on its own and does not need to be controlled at all.

Most backdoors are autonomous malicious programs that must somehow be installed on a computer. Some parasites do not require installation, as their parts are already integrated into particular software running on a remote host. Programmers sometimes leave such backdoors in their software for diagnostics and troubleshooting purposes. Hackers often discover these undocumented features and use them to break into the system.



Countermeasure

 Powerful antivirus and anti-spyware products

Dictionary Attacks

Overview

Dictionary attacks are launched by programs that are fed lists (dictionaries) of commonly used words or combinations of characters and then compare these values to captured passwords.

Once the right combination of characters is identified, the attacker can use this password to
authenticate herself as a legitimate user.

Sometimes the attacker can even capture the password file using this kind of activity.
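A minimal Python sketch of the idea (the word list, the hash function and the captured hash are all invented for illustration):

    import hashlib

    captured_hash = hashlib.sha256(b"sunshine").hexdigest()   # e.g. taken from a stolen password file
    wordlist = ["password", "letmein", "qwerty", "sunshine", "dragon"]

    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == captured_hash:
            print("Password recovered:", guess)
            break

Salting, slow hash functions and the countermeasures listed below make this kind of attack far less effective.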

Countermeasures

To properly protect an environment against dictionary and other password attacks, the following
practices should be followed:

 Do not allow passwords to be sent in clear text.


 Encrypt the passwords with encryption algorithms or hashing functions.
 Employ one-time password tokens.
 Use hard-to-guess passwords.
 Rotate passwords frequently.
 Employ an IDS to detect suspicious behavior.
 Use dictionary cracking tools to find weak passwords chosen by users.
 Use special characters, numbers, and upper- and lowercase letters within the password.
 Protect password files.

Brute force Attacks

Overview

Brute force is defined as “trying every possible combination until the correct one is identified.” The most effective way to uncover passwords is through a hybrid attack, which combines a dictionary attack and a brute force attack. A brute force attack is also known as an exhaustive attack. Brute force techniques are also used for war dialing, in the hope of finding a modem that can be exploited to gain unauthorized access.

Countermeasures

For phone brute force attacks, auditing and monitoring of this type of activity should be in place
to uncover patterns that could indicate a war dialing attack:



 Perform brute force attacks to find weaknesses and hanging modems.
 Make sure only necessary phone numbers are made public.
 Provide stringent access control methods that would make brute force attacks less
successful.
 Monitor and audit for such activity.
 Employ IDS to watch for suspicious activity.
 Set lockout thresholds.

Social Engineering

Overview

Social engineering is a collection of techniques used for manipulation of the natural human
tendency to trust in order to obtain information that will allow a hacker to gain unauthorized
access to a valued system and the information that resides on that system.

 Forms of a Social engineering attack


o Physical: the workplace, the phone, your trash (dumpster diving), and even on-line
o Psychological: Persuasion
o Reverse Social Engineering

Common Social Engineering Attacks

 At work Place
o In the workplace, the hacker can simply walk in the door, like in the movies, and
pretend to be a maintenance worker or consultant who has access to the
organization. Then the intruder struts through the office until he or she finds a few
passwords lying around and emerges from the building with ample information to
exploit the network from home later that night.
o Another technique to gain authentication information is to just stand there and
watch an oblivious employee type in his password.
 On Phone/Help Desk
o It’s most prevalent type of social engineering attack.
o A hacker will call up and imitate someone in a position of authority or relevance
and gradually pull information out of the user.
o Help desks are particularly prone to this type of attack. Hackers are able to
pretend they are calling from inside the corporation by playing tricks on the PBX
or the company operator, so caller-ID is not always the best defense
o Help desks are particularly vulnerable because they are in place specifically to
help, a fact that may be exploited by people who are trying to gain illicit
information
 Dumpster Diving
o Dumpster diving, also known as trashing, is another popular method of social engineering. A huge amount of information can be collected through company dumpsters (trash cans).
o The following items can turn out to be potential security leaks in the trash:



 Company phone books which can give the hackers names and numbers of
people to target and impersonate
 Organizational charts contain information about people who are in
positions of authority within the organization
 Memos provide small tidbits of useful information for creating
authenticity
 Company policy manuals show hackers how secure (or insecure) the
company really is
 Calendars of meetings may tell attackers which employees are out of town
at a particular time
 System manuals, printouts of sensitive data or login names and passwords
may give hackers the exact keys they need to unlock the network.
 Disks and tapes can be restored to provide all sorts of useful information.
 Company letterhead and memo forms
 Online
o One way in which hackers can obtain online passwords is through an on-line
form: they can send out some sort of sweepstakes information and ask the user to
put in a name (including e-mail address – that way, she might even get that
person’s corporate account password as well) and password.
o E-mail can also be used for more direct means of gaining access to a system. For
instance, mail attachments sent from someone of authenticity can carry viruses,
worms and Trojan horses
 Persuasion
o In this technique the hackers approach social engineering from a psychological point of view, emphasizing how to create the perfect psychological environment for the attack.
o Basic methods of persuasion include: impersonation, ingratiation, conformity, diffusion of responsibility, and plain old friendliness. Regardless of the method used, the main objective is to convince the person disclosing the information that the social engineer is in fact a person that they can trust with that sensitive information. The other important key is to never ask for too much information at a time, but to ask for a little from each person in order to maintain the appearance of a comfortable relationship.
 Impersonation generally means creating some sort of character and playing out the role. Some common roles that may be played in impersonation attacks include: a repairman, IT support, a manager, a trusted third party or a fellow employee.
 Conformity is a group-based behavior, but can be used occasionally in the individual setting by convincing the user that everyone else has been giving the hacker the same information requested. When hackers attack in such a way as to diffuse the responsibility of the employee giving the password away, the stress on the employee is alleviated.
 Reverse Social Engineering
o This is when the hacker creates a persona that appears to be in a position of
authority so that employees will ask him for information, rather than the other
way around. If researched, planned and executed well, reverse social engineering attacks may offer the hacker an even better chance of obtaining valuable data
from the employees; however, this requires a great deal of preparation, research,
and pre-hacking to pull off.

Countermeasures

 Having proper security policies in place which address both the physical and psychological aspects of the attack
 Providing proper training to employees and helpdesk personnel

Access Control Technologies

Single Sign-On

Introduction

SSO is a technology that allows a user to enter credentials one time and be able to access all resources in primary and secondary network domains.

Advantages

 Reduces the amount of time users spend authenticating to resources.


 Enables the administrator to streamline user accounts and better control access rights
 Improves security by reducing the probability that users will write down their passwords
 Reduces the administrator's time spent managing access permissions

Limitations

 Every platform, application and resource needs to accept the same type of credentials, in the same format, and interpret their meaning in the same way.

Disadvantages

 Once an individual is in, he is in, thus giving a bigger scope to an attacker who compromises that single set of credentials

Kerberos

Introduction

Kerberos is an authentication protocol that was designed in the mid-1980s as part of MIT's Project Athena.

 It works in a C/S model and is based on symmetric key cryptography


 It is widely used in UNIX systems, is the default authentication method for Windows 2000 and Windows Server 2003, and is the de facto standard for heterogeneous networks.



Kerberos Components

 Key Distribution Center (KDC)


o Holds all users' and services' secret keys and information about the principals in its database
o Provides an authentication service through a component called the AS
o Provides key distribution functionality
o Provides a ticket granting service (TGS)
 Secret keys are the keys shared between a principal and the KDC, generally using a symmetric key cryptography algorithm, and are used to authenticate the principals and communicate securely
 Principals are users, applications or any network services
 A ticket is a token generated by the KDC and given to a principal when one principal needs to authenticate to another principal
 A realm is a set of principals. A KDC can be responsible for one or more realms. Realms allow an administrator to logically group resources and users.
 Session keys are the keys shared between principals that enable them to communicate securely

Kerberos Authentication Process

 User enters username and password into the workstation (WS)


 The Kerberos software on the workstation sends the username to the Authentication Server (AS) on the KDC.
 The AS generates a Ticket Granting Ticket (TGT), encrypts it with the user's secret key stored in the database, and sends it to the user.
 The password entered by the user is transformed into a secret key, with which the TGT is decrypted, and thus the user gains access to the WS.
 Suppose the user wants to use the printer; the user's system sends the TGT to the TGS on the KDC.
 The TGS generates a new ticket with two instances of a session key, one encrypted with the user's secret key and the other encrypted with the print server's secret key. This ticket may also contain an authenticator, which contains information about the user.
 The new ticket is sent to the user's system, which uses it to authenticate to the print server.
 The user's system decrypts and extracts the session key, adds a second authenticator set of identification information to the ticket and sends the ticket on to the print server.
 The print server receives the ticket, decrypts and extracts the session key, and decrypts
and extracts the two authenticators in the ticket. If the printer server can decrypt and
extract the session key, it knows that the KDC created the ticket, because only the KDC
has the secret key that was used to encrypt the session key. If the authenticator
information that the KDC and the user put into the ticket matches, then the print server
knows that it received the ticket from the correct principal.
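The essential mechanism — the KDC handing out one session key wrapped separately under each principal's long-term secret key — can be sketched as follows. This is a toy illustration using the third-party "cryptography" package, not the real Kerberos message formats:

    from cryptography.fernet import Fernet

    # Long-term secret keys shared between the KDC and each principal (illustrative).
    user_key    = Fernet.generate_key()
    printer_key = Fernet.generate_key()

    # The KDC/TGS creates one session key and wraps a copy for each principal.
    session_key = Fernet.generate_key()
    for_user    = Fernet(user_key).encrypt(session_key)
    for_printer = Fernet(printer_key).encrypt(session_key)

    # Each side unwraps its copy with its own secret key and obtains the same session key.
    assert Fernet(user_key).decrypt(for_user) == Fernet(printer_key).decrypt(for_printer)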



Weakness of Kerberos

 The KDC can be a single point of failure. If the KDC goes down, no one can access
needed resources. Redundancy is necessary for the KDC.
 The KDC must be able to handle the number of requests it receives in a timely manner. It
must be scalable.
 Secret keys are temporarily stored on the users’ workstation, which means it is possible
for an intruder to obtain these cryptographic keys.
 Session keys are decrypted and reside on the users’ workstations, either in a cache or in a
key table. Again, an intruder can capture these keys.
 Kerberos is vulnerable to password guessing. The KDC does not know if a dictionary
attack is taking place.
 Network traffic is not protected by Kerberos if encryption is not enabled.

SESAME

Introduction

SESAME (Secure European System for Applications in a Multi-vendor Environment) is an SSO technology that was developed to extend Kerberos functionality and improve upon its weaknesses.

SESAME uses symmetric and asymmetric cryptographic techniques to protect exchanges of data and to authenticate subjects to network resources.

SESAME uses digitally signed Privileged Attribute Certificates (PACs) to authenticate subjects to objects. A PAC contains the subject's identity, access capabilities for the object, access time period, and lifetime of the PAC.

Security Domain

Introduction

 A domain is a set of resources that are available to a subject.
 A security domain refers to the set of resources working under the same security policy and managed by the same group.
 Domains can be separated by logical boundaries, such as
o Firewalls with ACL’s
o Directory services making access decisions
o Objects that have their own ACL’s indicating which individual or group can
access them.
 Domains can be architected in a hierarchical manner that dictates the relationship
between the different domains and the ways in which subjects within the different
domains can communicate.
 Subjects can access resources in domains of equal or lower trust levels.



Thin Clients

Introduction

 Thin clients are diskless computers that are sometimes called dumb terminals.
 The technology is based on a client/server model in which a user must log on to a remote server to use the computing and network resources.
 When the user starts the client, it runs a short list of instructions and then points itself to a
server that will actually download the operating system, or interactive operating software,
to the terminal. This enforces a strict type of access control, because the computer cannot
do anything on its own until it authenticates to a centralized server, and then the server
gives the computer its operating system, profile, and functionality.
 Thin-client technology provides another type of SSO access for users, because users
authenticate only to the central server or mainframe, which then provides them access to
all authorized and necessary resources.

Access Control Models

Introduction

An access control model is a framework that dictates how subjects access objects.

 It uses access control technologies and security mechanisms to enforce the rules and
objectives of the model.
 There are three main types of access control models:
o Discretionary,
o Mandatory, and
o Nondiscretionary (also called role-based).

Discretionary Access Control

 The control of access is based on the discretion (wish) of the owner
 A system that uses DAC enables the owner of the resource to specify which subjects can access specific resources
 The most common implementation of DAC is through ACLs, which are dictated and set by the owners and enforced by the OS (a small sketch follows this list)
 Examples: access control in Unix, Linux and Windows is based on DAC
 DAC systems grant or deny access based on the identity of the subject. The identity can be a user identity or a group identity (identity-based access control)

Mandatory Access Control

• This model is very structured and strict, and is based on a security label (also known as a sensitivity label) attached to all objects.
• Subjects are given a security clearance (secret, top secret, confidential, etc.) and objects are classified similarly.



• The clearance and classification data are stored in the security labels, which are bound to the specific subject and object.
• When the system decides whether to fulfil a request to access an object, the decision is based on the clearance of the subject, the classification of the object, and the security policy of the system.
• This model is suitable for military systems, where classification and confidentiality are of utmost importance.
• SELinux (developed by the NSA) and Trusted Solaris are examples of this model.
• Security labels are made up of a classification and categories: the classification indicates the security level, and the categories enforce need-to-know rules (a minimal sketch follows this list).
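
To make the label comparison concrete, here is a minimal sketch in Python (the clearance levels, category names, and dictionary layout are assumptions for illustration, not a description of any particular MAC implementation):

```python
# Hypothetical MAC decision: read access is granted only when the subject's
# clearance dominates the object's classification AND the subject's categories
# cover the object's categories (need-to-know).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows_read(subject_label, object_label):
    clearance_ok = LEVELS[subject_label["level"]] >= LEVELS[object_label["level"]]
    need_to_know_ok = set(object_label["categories"]).issubset(subject_label["categories"])
    return clearance_ok and need_to_know_ok

subject = {"level": "secret", "categories": {"operations", "finance"}}
document = {"level": "confidential", "categories": {"finance"}}
print(mac_allows_read(subject, document))   # True: clearance and categories both satisfied
```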

Non-Discretionary or Role Based Access Control

• RBAC is based on user roles and uses a centrally administered set of controls to determine how subjects and objects interact.
• The RBAC approach simplifies access control administration.
• It works well for a company that has high employee turnover.
• Note: RBAC can generally be used in combination with MAC and DAC systems.

Access Control Techniques


Different access control technologies are available to support the different access control models.

 Rule-Based Access Control


 Constrained User Interface
 Access Control Matrix
 Content Dependent Access Control
 Context Dependent Access Control

Rule-Based Access Control

 Rule-based access control uses specific rules that indicate what can and cannot happen
between a subject and an object.
• A subject must meet a set of predefined rules before it can access an object.
• It is not necessarily identity-based, i.e. it can apply to all users or subjects irrespective of their identities.
• E.g. routers and firewalls use rules to filter incoming and outgoing packets (see the sketch below).

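As a rough illustration of rule-based filtering (the rule fields, ports, and default-deny policy below are invented for the example rather than taken from any real firewall), a router or firewall can evaluate each packet against an ordered rule list, independent of the sender's identity:

```python
# Illustrative packet filter: rules are evaluated in order and the first match wins,
# regardless of who sent the packet (i.e. the decision is not identity-based).
RULES = [
    {"action": "deny",  "protocol": "tcp", "dst_port": 23},    # block Telnet
    {"action": "allow", "protocol": "tcp", "dst_port": 443},   # allow HTTPS
    {"action": "deny",  "protocol": "any", "dst_port": None},  # default deny
]

def filter_packet(packet):
    for rule in RULES:
        proto_match = rule["protocol"] in ("any", packet["protocol"])
        port_match = rule["dst_port"] in (None, packet["dst_port"])
        if proto_match and port_match:
            return rule["action"]
    return "deny"

print(filter_packet({"protocol": "tcp", "dst_port": 443}))  # allow
print(filter_packet({"protocol": "tcp", "dst_port": 23}))   # deny
```
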
Constrained User Interface

• Constrained user interfaces restrict users' ability to request certain functions or information, or to have access to specific system resources.
• There are three major types of restricted interfaces:
o Menus and Shells



o Database Views
o Physically Constrained Interfaces.

Access Control Matrix

 An access control matrix is a table of subjects and objects indicating what actions
individual subjects can take upon individual objects.
• The access rights assigned to individual subjects are called capabilities, and those assigned to objects are called access control lists (ACLs).
• This technique uses a capability table to specify the capabilities of a subject pertaining to specific objects. A capability can be in the form of a token, ticket, or key.
o Each row of the matrix is a capability list bound to a subject, and each column is an ACL bound to an object.
o Kerberos uses a capability-based system in which every user is given a ticket, which is his capability table.
• ACLs are lists of subjects that are authorized to access a specific object, and they define what level of authorization is granted (at both individual and group level).
• ACLs map values from the access control matrix to the object.
• Note: A capability table is bound to a subject, whereas an ACL is bound to an object (see the sketch below).

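A small sketch of how the same matrix yields both views: rows act as capability tables bound to subjects and columns act as ACLs bound to objects. The subjects, objects, and rights are hypothetical:

```python
# Access control matrix: rows are subjects, columns are objects.
matrix = {
    "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
    "bob":   {"report.doc": {"read", "write"}},
}

def capability_table(subject):
    # Row view: everything this subject can do (bound to the subject).
    return matrix.get(subject, {})

def acl(obj):
    # Column view: which subjects may access this object, and how (bound to the object).
    return {subj: rights[obj] for subj, rights in matrix.items() if obj in rights}

print(capability_table("alice"))   # alice's capability table (her row of the matrix)
print(acl("report.doc"))           # the ACL for report.doc (its column of the matrix)
```
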
Content Dependent Access Control

 Access to the objects is based on the content within the object.


 Example: Database Views, E-mail filtering etc.

Context Dependent Access Control

 The access decisions are based on the context of a collection of information rather than
on the sensitivity of the data.
• Example: A firewall makes context-based access decisions when it collects state information on a packet before allowing it into the network.

Access Control Monitoring(IDS/IPS)


Access Control Monitoring is a method of keeping track of who attempts to access specific
network resources. The ACM system can fall into two categories: Intrusion Prevention System
(IPS) and Intrusion Detection System (IDS)

Intrusion Detection Systems

Basic Concepts

Intrusion detection is the process of detecting an unauthorized use of, or attack upon, a computer,
network, or a telecommunication infrastructure.

IDS are designed to aid in mitigating the damage that can be caused by hacking, or breaking into
sensitive computer and network systems.
Common Components of an IDS

• Sensors: collect traffic and user activity data and send it to an analyzer.
• Analyzer: detects activity that it is programmed to deem suspicious and sends an alert to the administrative interface.
• Administrative interface: reports the alert details.

Common Functions of an IDS

• Watch for attacks
• Parse audit logs
• Protect system files
• Alert administrators during attacks
• Expose a hacker's techniques
• Illustrate which vulnerabilities need to be addressed
• Help track down individual hackers

IDS Types

 Network-Based IDS: A network-based IDS (NIDS) uses sensors, which are either host
computers with the necessary software installed or dedicated appliances—each with its
network interface card (NIC) in promiscuous mode. The NIC driver captures all traffic
and passes it to an analyzer to look for specific types of patterns.
 Host-Based IDS: A host-based IDS (HIDS) can be installed on individual workstations
and/or servers and watch for inappropriate or anomalous activity. HIDSs are usually used
to make sure users do not delete system files, reconfigure important settings, or put the
system at risk in any other way.

IDS Technologies

Both HIDS and NIDS can employ the following technologies

 Knowledge or Signature Based


 Statistical Anomaly Based
 Rule Based

Knowledge or Signature Based

 These are knowledge based systems where some knowledge is accumulated about
specific attacks and models called signatures are developed.
• The main disadvantage of these systems is that they cannot detect new attacks; new signatures need to be written and continuously updated.
• Also known as misuse-detection systems.
• Attacks
o Land attacks (packets modified to have the same source and destination IP address)



Security Humor: Attacks or viruses that have been discovered in production
environments are referred to as being “in the wild.” Attacks and viruses that
exist but have not been released are referred to as being “in the zoo.”

Statistical Anomaly Based

• These are behavioral-based systems which do not use predefined signatures; instead they are put in a learning mode to build a profile by continually sampling the environment's normal activities.
• The longer the IDS is left in learning mode, in most instances, the more accurate a profile it will build and the better protection it will provide.
• Once a profile is built, a second profile is built from the same kind of sampling of all future traffic, and the two are compared to identify abnormalities.
 Also known as profile-based systems
 Advantages
o Can detect new attacks, including zero-day attacks
o Can also detect low and slow attacks in which an attacker tries to stay beneath the
radar by sending a few packets at a time over a longer period of time.
 Disadvantages
o Developing a correct profile to reduce false positives can be difficult.
o There is a possibility for an attacker to integrate his/her activities into the behavior pattern of the network traffic. This can be controlled by ensuring that no attack activity is underway while the IDS is in learning mode.
o The success of these systems depends on setting a proper threshold in order to avoid false positives (threshold set too low) or false negatives (threshold set too high); see the sketch after this list.
 Attacks
o Taking the IDS offline with a DoS attack, or feeding it incorrect data, in order to keep network and security staff busy chasing the wrong packets while the real attack takes place.
 Techniques
o Protocol Anomaly based:
 These types of IDS have specific knowledge of each protocol that they
will be monitoring.
• The IDS builds a profile (model) of each protocol's normal usage and matches it against the profile built during actual operation.
• Common protocol vulnerabilities:
o At the data link layer, ARP has no protection against attacks in which bogus data is inserted into its table.
o At the network layer, ICMP can be used in a LOKI attack to move data from one place to another, even though the protocol was designed only to send status information; the data moved can be code that a backdoor on a compromised system is made to execute.
o IP headers can easily be modified for spoofing attacks (one host acting as another).
o At the transport layer, TCP packets can be injected into the connection between two systems for a session hijacking attack.
o Traffic Anomaly based:
 These systems have traffic-anomaly filters, which detect changes in traffic
patterns as in DoS attacks or a new service that appears on the network.
 Once there is a profile that is built that captures the baselines of an
environment’s ordinary traffic, all future traffic patterns are compared to
that profile.
 As with all filters, the thresholds are tunable to adjust the sensitivity, to
reduce the number of false positives and false negatives.
 Since this is a type of statistical anomaly– based IDS, it can detect
unknown attacks
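
A minimal sketch of the threshold idea behind statistical anomaly detection, assuming an invented packets-per-minute baseline and a simple deviation score; real products use far richer profiles, so this only illustrates the tuning trade-off:

```python
import statistics

# Learned profile: packets per minute sampled while the IDS was in learning mode.
baseline = [120, 130, 118, 125, 122, 128, 119, 131]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

THRESHOLD = 3.0   # tunable: set too low -> false positives, set too high -> false negatives

def is_anomalous(observed_rate):
    deviation = abs(observed_rate - mean) / stdev
    return deviation > THRESHOLD

print(is_anomalous(127))   # False: within the normal profile
print(is_anomalous(600))   # True: large deviation, e.g. a possible flood
```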

Rule Based

 Rule-based intrusion detection is commonly associated with the use of an expert system.
 An expert system is made up of a knowledge base, inference engine, and rule-based
programming.
o Knowledge is represented as rules, and the data that is to be analyzed is referred
to as facts.
o The knowledge of the system is written in rule-based programming (IF situation
THEN action). These rules are applied to the facts, the data that comes in from a
sensor, or a system that is being monitored.
• Example: Consider the rule "IF a root user creates File1 AND creates File2 SUCH THAT they are in the same directory AND there is a call to AdministrativeTool1 THEN TRIGGER send alert." The rule is defined such that if a root user creates two files in the same directory and then makes a call to a specific administrative tool, an alert should be sent (see the sketch after this list).
 The more complex the rules, the more demands on software and hardware processing
requirements
 Cannot detect new attacks
 Techniques
o State Based IDS
 A state transition takes place when a variable’s value changes, which
usually happens continuously within every system.
 In a state-based IDS, the initial state is the state prior to the execution of
an attack, and the compromised state is the state after successful
penetration.
 The IDS has rules that outline what state transition sequences should
sound an alarm. The activity that takes place between the initial and
compromised state is what the state-based IDS looks for, and it sends an
alert if any of the state-transition sequences match its preconfigured rules.
 This type of IDS scans for attack signatures in the context of a stream of
activity instead of just looking at individual packets. It can only identify
known attacks and requires frequent updates of its signatures.
o Model Based IDS



 In a model-based IDS, the product has several scenario models that
represent how specific attacks and intrusions take place. The models
outline how the system would behave if it were under attack, the different
steps that would be carried out by the attacker, and the evidence that
would be available for analysis if specific intrusions took place.
 The IDS takes in the audit log data and compares it to the different models
that have been developed, to see if the data meets any of the models’
specifications. If the IDS finds data in an audit log that matches the
characteristics in a specific model, it sends an alert.
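
A hedged sketch of how the IF/THEN rule from the example above might be evaluated over a stream of audit events; the event fields, directory layout, and tool name are placeholders rather than a real expert-system syntax:

```python
# Illustrative rule-based check over a stream of audit events (the "facts").
events = [
    {"user": "root", "action": "create",  "target": "/admin/File1"},
    {"user": "root", "action": "create",  "target": "/admin/File2"},
    {"user": "root", "action": "execute", "target": "AdministrativeTool1"},
]

def rule_fires(facts):
    created = [e["target"] for e in facts
               if e["user"] == "root" and e["action"] == "create"]
    same_dir = len(created) >= 2 and len({t.rsplit("/", 1)[0] for t in created}) == 1
    tool_called = any(e["action"] == "execute" and e["target"] == "AdministrativeTool1"
                      for e in facts)
    return same_dir and tool_called   # IF ... AND ... THEN trigger the alert

if rule_fires(events):
    print("ALERT: suspicious root activity detected")
```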

IDS Sensors

 Network-based IDSs use sensors for monitoring purposes. A sensor, which works as an
analysis engine, is placed on the network segment the IDS is responsible for monitoring.
 The sensor receives raw data from an event generator and compares it to a signature
database, profile, or model, depending upon the type of IDS.
 If there is some type of a match, which indicates suspicious activity, the sensor works
with the response module to determine what action needs to take place (alerting through instant messaging, paging or e-mail, carrying out a firewall reconfiguration, and so on).
 The sensor’s role is to filter received data, discard irrelevant information, and detect
suspicious activity.
• A monitoring console can be used to monitor all sensors and to give the network staff an overview of the activities of all the sensors in the network. A difficulty arises in a switched environment, where traffic is forwarded only to its destination port rather than being rebroadcast to all ports; this can be overcome by using spanning ports to mirror the traffic from all ports to one monitored port.
 Sensor Placement
o Sensors can be placed outside of the firewall to detect attacks
o Inside the firewall (in the perimeter network) to detect actual intrusions.
o At highly sensitive areas, DMZs, and on extranets
 Multiple Sensors can be used in high traffic environments to ensure all packets are
investigated. Also if necessary to optimize network bandwidth and speed, different
sensors can be set up to analyze each packet for different signatures. That way, the
analysis load can be broken up over different points.

Intrusion Prevention System

The traditional IDS only detect that something bad may be taking place and send an alert. The
goal of an IPS is to detect this activity and not allow the traffic to gain access to the target in the
first place.

An IPS is a preventative and proactive technology, whereas an IDS is a detective and after-the-
fact technology.

There has been a long debate about IPS, but it has turned out to be an extension of IDS: everything that holds for IDS also holds for IPS, apart from IPS being preventative and IDS being detective.



Access Control Assurance

Basic Concepts

Accountability is the method of tracking and logging the subject’s actions on the objects.

Auditing is an activity in which users'/subjects' actions on objects are monitored in order to verify that the sensitivity policies are enforced; the audit records can also be used as an investigation tool.

Advantages of Auditing

 To track unauthorized activities performed by individuals.


 Detect intrusion.
 Reconstruct events and system conditions.
 Provide legal resource material and produce problem reports.

Note: A security professional should be able to assess an environment and its security goals, know what actions should be audited, and know what is to be done with that information after it is captured, without wasting too much disk space, CPU power and staff time.

What to Audit?

 System-level events
o System performance
o Logon attempts (successful and unsuccessful)
o Logon ID
o Date and time of each logon attempt
o Lockouts of users and terminals
o Use of administration utilities
o Devices used
o Functions performed
o Requests to alter configuration files
 Application-level events
o Error messages
o Files opened and closed
o Modifications of files
o Security violations within application
 User-level events
o Identification and authentication attempts
o Files, services, and resources used
o Commands initiated
o Security violations

Review of Audit Information

 Audit trails can be reviewed manually or through automated means.



 Types of audit reviews
o Event oriented: done as and when an event occurs.
o Periodic: done periodically to assess the health of the system.
o Real time: done with the help of automated tools as and when the audit
information gets created.
• Audit trail analysis tools: these tools help by reducing/filtering audit log information that is not necessary and providing only the information needed for auditing.
• Types of audit trail analysis tools (see the sketch after this list):
o Audit reduction tools: these tools reduce the amount of information within an audit log, discarding information about mundane tasks and keeping the system performance, security and user functionality information that is necessary for auditing.
o Variance-detection tools: these tools monitor computer and resource usage trends and detect variations and unusual activities, e.g. an employee logging into the machine during odd hours.
o Attack signature-detection tools: these tools parse the audit logs against predefined patterns held in a database. If a match is found against any pattern or signature in the database, it indicates that an attack has taken place or is in progress.
o Keystroke monitoring.
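
A short sketch of the audit-reduction and variance-detection ideas, assuming an invented log record format and a simple "odd hours" rule: mundane entries are discarded and logons outside working hours are flagged.

```python
# Illustrative audit-reduction and variance-detection pass over simple log records.
records = [
    {"time": "02:14", "user": "jdoe",   "event": "logon"},
    {"time": "09:30", "user": "asmith", "event": "logon"},
    {"time": "09:31", "user": "asmith", "event": "heartbeat"},   # mundane, to be discarded
]

MUNDANE_EVENTS = {"heartbeat"}

def reduce_log(recs):
    # Audit reduction: drop records about mundane tasks, keep the rest for review.
    return [r for r in recs if r["event"] not in MUNDANE_EVENTS]

def odd_hour_logons(recs, start=7, end=19):
    # Variance detection: flag logons outside normal working hours.
    return [r for r in recs
            if r["event"] == "logon" and not (start <= int(r["time"][:2]) < end)]

print(len(reduce_log(records)))   # 2 records survive the reduction
print(odd_hour_logons(records))   # jdoe's 02:14 logon is flagged as a variance
```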

Protecting Audit Data and Log Information

• Audit logs should be protected by implementing strict access control.
• The integrity of the data should be ensured with the use of digital signatures, message digest tools, and strong access control (see the sketch below).
• Confidentiality can be protected with encryption and access controls, and the logs can be stored on CD-ROMs to prevent loss or modification of the data. The modification of logs is often called scrubbing.
• Unauthorized access attempts to audit logs should be captured and reported.

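A minimal sketch of protecting log integrity with a keyed message digest, using Python's standard hashlib and hmac modules; key storage and distribution are simplified here for illustration.

```python
import hashlib, hmac

SIGNING_KEY = b"replace-with-a-protected-secret"   # assumption: stored securely elsewhere

def digest_log(log_bytes):
    # Keyed digest (HMAC-SHA256): someone who scrubs the log cannot recompute the
    # digest without the key, so tampering is detectable at review time.
    return hmac.new(SIGNING_KEY, log_bytes, hashlib.sha256).hexdigest()

log = b"2024-01-10 02:14 jdoe logon FAILED\n"
stored_digest = digest_log(log)

# Later, during review: recompute the digest and compare it with the stored value.
print(hmac.compare_digest(stored_digest, digest_log(log)))                  # True: intact
print(hmac.compare_digest(stored_digest, digest_log(log + b"edited\n")))    # False: modified
```
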
 Ethical hacking
An ethical hacker is a computer and networking expert who systematically attempts to penetrate
a computer system or network on behalf of its owners for the purpose of finding security
vulnerabilities that a malicious hacker could potentially exploit.

Ethical hackers use the same methods and techniques to test and bypass a system's defenses as
their less-principled counterparts, but rather than taking advantage of any vulnerability found,
they document them and provide actionable advice on how to fix them so the organization can
improve its overall security.

The purpose of ethical hacking is to evaluate the security of a network or system's infrastructure.
It entails finding and attempting to exploit any vulnerability to determine whether unauthorized
access or other malicious activities are possible. Vulnerabilities tend to be found in poor or
improper system configuration, known and unknown hardware or software flaws, and



operational weaknesses in process or technical countermeasures. One of the first examples of
ethical hacking occurred in the 1970s, when the United States government used groups of
experts called "red teams" to hack its own computer systems. It has become a sizable sub-
industry within the information security market and has expanded to also cover the physical and
human elements of an organization's defenses. A successful test doesn't necessarily mean a
network or system is 100% secure, but it should be able to withstand automated attacks and
unskilled hackers.

Any organization that has a network connected to the Internet or provides an online service
should consider subjecting it to a penetration test. Various standards such as the Payment Card
Industry Data Security Standard require companies to conduct penetration testing from both an
internal and external perspective on an annual basis and after any significant change in the
infrastructure or applications. Many large companies, such as IBM, maintain employee teams of
ethical hackers, while there are plenty of firms that offer ethical hacking as a service. Trustwave
Holdings, Inc., has an Ethical Hacking Lab for attempting to exploit vulnerabilities that may be
present in ATMs, point-of-sale devices and surveillance systems. There are various organizations
that provide standards and certifications for consultants who conduct penetration testing.

Ethical hacking is a proactive form of information security and is also known as penetration
testing, intrusion testing and red teaming. An ethical hacker is sometimes called a legal or white
hat hacker and its counterpart a black hat, a term that comes from old Western movies, where the
"good guy" wore a white hat and the "bad guy" wore a black hat. The term "ethical hacker" is
frowned upon by some security professionals who see it as a contradiction in terms and prefer
the name "penetration tester."

Before commissioning an organization or individual, it is considered a best practice to read their


service-level and code of conduct agreements covering how testing will be carried out, and how
the results will be handled, as they are likely to contain sensitive information about how the
system was tested. There have been instances of "ethical hackers" reporting vulnerabilities they have
found while testing systems without the owner's express permission. Even the LulzSec black hat
hacker group has claimed its motivations include drawing attention to computer security flaws
and holes. This type of hacking is a criminal offence in most countries, even if the purported
intentions were to improve system security. For hacking to be deemed ethical, the hacker must
have the express permission from the owner to probe their network and attempt to identify
potential security risks.



TOPIC 3

SYSTEMS SECURITY

 Classification
An important aspect of information security and risk management is recognizing the value of
information and defining appropriate procedures and protection requirements for the
information. Not all information is equal and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.

The first step in information classification is to identify a member of senior management as the
owner of the particular information to be classified. Next, develop a classification policy. The
policy should describe the different classification labels, define the criteria for information to be
assigned a particular label, and list the required security controls for each classification.

Some factors that influence which classification information should be assigned include how
much value that information has to the organization, how old the information is and whether or
not the information has become obsolete. Laws and other regulatory requirements are also
important considerations when classifying information.

The Business Model for Information Security enables security professionals to examine security
from a systems perspective, creating an environment where security can be managed holistically,
allowing actual risks to be addressed.

The type of information security classification labels selected and used will depend on the nature
of the organization, with examples being:

 In the business sector, labels such as: Public, Sensitive, Private, and Confidential.
 In the government sector, labels such as: Unclassified, Unofficial, Protected,
Confidential, Secret, Top Secret and their non-English equivalents.
• In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber, and Red.

All employees in the organization, as well as business partners, must be trained on the
classification schema and understand the required security controls and handling procedures for
each classification. The classification of a particular information asset that has been assigned
should be reviewed periodically to ensure the classification is still appropriate for the
information and to ensure the security controls required by the classification are in place and are
followed correctly.

 People errors
In general, errors and accidents in computer systems may be classified as people errors, procedural errors, software errors, electromechanical problems, and "dirty data" problems.



People errors: recall that one part of a computer system is the people who manage it or run it.
For instance, McConnell of Roanoke, Virginia, found that he couldn’t get past a bank’s
automated telephone system to talk to a real person. This was not the fault of the system so much
as of the people at the bank. McConnell, president of a software firm, thereupon wrote a program
that automatically phoned eight different numbers at the bank. People picking up the phone
heard the recording, "This is an automated customer complaint. To hear a live complaint, press…" Quite often, what may seem to be "the computer's fault" is human indifference or bad management.

 Procedural errors
Procedural errors: some spectacular computer failures have occurred because someone didn’t
follow procedures. Consider the 2.5-hour shutdown of NASDAQ, the nation's second largest
stock market. NASDAQ is so automated that it likes to call itself “the stock market for the next
100 years.” In July 1994, NASDAQ was shut down by an effort, ironically, to make the
computer system more user- friendly. Technicians were phasing in new software, adding
technical improvements a day at a time. A few days into this process, the technicians tried to add more features to the software, flooding the data storage capability of the computer system and shortening the trading day.

 Software errors
Software errors: we are forever hearing about “software glitches” or “software bugs.” A
software bug is an error in a program that causes it to malfunction.
An example of a somewhat small error is the one a school employee in Newark, New Jersey,
made in coding the school system’s master scheduling program. When 1000 student and 90
teachers showed up for the start of school at Central High School, half the student had
incomplete or no schedules for classes. Some classrooms had no teachers while others had four
instead of one.
Especially with complex software, there are always bugs, even after the system has been
thoroughly tested and “debugged”. However, there comes a point in the software development
process where debugging must end. That is, the probability of the bugs disrupting the system is
considered to be so low that it is not worth searching further for them.

 Electromechanical problems
Electromechanical problems: mechanical systems, such as printers, and electrical systems, such as circuit boards, don't always work. They may be faultily constructed, get dirty or overheated, wear out, or become damaged in some other way. Power failures can shut a system down, and power surges can burn out equipment.
Whatever the reason, whether electromechanical failure or another problem, computer downtime is expensive. A survey of about 450 information system executives from Fortune 1000 companies found that companies on average suffer nine four-hour computer system failures a year, with each failure costing the company an average of $3,330,000. Because of these failures, companies were unable to deliver services and lost productivity to idle time.



 Dirty data
“Dirty data” problems: when keyboarding a research paper, you undoubtedly make a few
typing errors. Typos are also a fact of life for all the data entry people around the world who feed
a continual stream of raw data into computer systems. A lot of problems are caused by this kind
of “dirty data”. Dirty data is data that is incomplete, outdated, or otherwise inaccurate.



TOPIC 4

PHYSICAL AND LOGICAL SECURITY

 Physical security

Physical security is the protection of personnel, hardware, programs, networks, and data from
physical circumstances and events that could cause serious losses or damage to an enterprise,
agency, or institution. This includes protection from fire, natural disasters, burglary, theft,
vandalism, and terrorism

Physical security is often overlooked (and its importance underestimated) in favor of more
technical and dramatic issues such as hacking, viruses, Trojans, and spyware. However, breaches
of physical security can be carried out with little or no technical knowledge on the part of an
attacker. Moreover, accidents and natural disasters are a part of everyday life, and in the long
term, are inevitable.

There are three main components to physical security. First, obstacles can be placed in the way
of potential attackers and sites can be hardened against accidents and environmental disasters.
Such measures can include multiple locks, fencing, walls, fireproof safes, and water sprinklers.
Second, surveillance and notification systems can be put in place, such as lighting, heat sensors,
smoke detectors, intrusion detectors, alarms, and cameras. Third, methods can be implemented to
apprehend attackers (preferably before any damage has been done) and to recover quickly from
accidents, fires, or natural disasters.

 Logical security (authentication, access rights. Others)

Logical Security consists of software safeguards for an organization’s systems, including user
identification and password access, authentication, access rights and authority levels. These
measures are to ensure that only authorized users are able to perform actions or access
information in a network or a workstation. It is a subset of computer security.

Elements of logical security


Elements of logical security are:

 User IDs, also known as logins, user names, logons or accounts, are unique personal
identifiers for agents of a computer program or network that is accessible by more than
one agent. These identifiers are based on short strings of alphanumeric characters, and are
either assigned or chosen by the users.
 Authentication is the process used by a computer program, computer, or network to
attempt to confirm the identity of a user. Blind credentials (anonymous users) have no
identity, but are allowed to enter the system. The confirmation of identities is essential to



the concept of access control, which gives access to the authorized and excludes the
unauthorized.
 Biometrics authentication is the measuring of a user’s physiological or behavioral
features to attempt to confirm his/her identity. Physiological aspects that are used include
fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand
measurements. Behavioral aspects that are used include signature recognition, gait
recognition, speaker recognition and typing pattern recognition. When a user registers
with the system which he/she will attempt to access later, one or more of his/her
physiological characteristics are obtained and processed by a numerical algorithm. This
number is then entered into a database, and the features of a user attempting access must match the stored features to within a certain error rate.

Token Authentication

Token Authentication comprises security tokens which are small devices that authorized users of
computer systems or networks carry to help verify that whoever is logging in to a computer or network system is actually authorized. They can also store cryptographic keys and biometric
data. The most popular type of security token (RSA Security's SecurID) displays a number which changes
every minute. Users are authenticated by entering a personal identification number and the
number on the token. The token contains a time of day clock and a unique seed value, and the
number displayed is a cryptographic hash of the seed value and the time of day. The computer
which is being accessed also contains the same algorithm and is able to match the number by
matching the user’s seed and time of day. Clock error is taken into account, and values a few
minutes off are sometimes accepted. Another similar type of token (Cryptogram) can produce a
value each time a button is pressed. Other security tokens can connect directly to the computer
through USB, Smart card or Bluetooth ports, or through special purpose interfaces. Cell phones
and PDAs can also be used as security tokens with proper programming.
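
A hedged sketch of the principle described above (a code derived from a shared seed and the time of day); commercial tokens use their own algorithms, so this standard-library example only illustrates how both sides can compute matching one-minute codes:

```python
import hashlib

def token_code(seed, timestamp, window=60):
    # Token and server derive the same 6-digit code from the shared seed and the
    # current one-minute time window, so the server can match what the user types.
    t = int(timestamp // window)
    digest = hashlib.sha256(f"{seed}:{t}".encode()).hexdigest()
    return int(digest, 16) % 1_000_000

seed = "per-token-secret-seed"   # assumption: provisioned to both the token and the server
t0 = 1_700_000_000               # fixed example timestamp (seconds)
print(token_code(seed, t0) == token_code(seed, t0 + 10))    # True: same one-minute window
print(token_code(seed, t0) == token_code(seed, t0 + 120))   # False: the code has rolled over
```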

Password Authentication

Password Authentication uses secret data to control access to a particular resource. Usually, the
user attempting to access the network, computer or computer program is queried on whether they
know the password or not, and is granted or denied access accordingly. Passwords are either
created by the user or assigned, similar to usernames. However, once assigned a password, the
user usually is given the option to change the password to something of his/her choice.
Depending on the restrictions of the system or network, the user may change his/her password to
any alphanumeric sequence. Usually, limitations to password creation include length restrictions,
a requirement of a number, uppercase letter or special character, or not being able to use the past
four or five changed passwords associated with the username. In addition, the system may force
a user to change his/her password after a given amount of time.
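
A short sketch of the kinds of password-creation limitations mentioned above (minimum length, required character classes, and a no-reuse history); the specific values are illustrative, since the exact rules vary by system:

```python
import re

def password_acceptable(candidate, previous_passwords, min_length=10, history=5):
    # Illustrative policy: length, character-class requirements, and no recent reuse.
    if len(candidate) < min_length:
        return False
    if not re.search(r"[A-Z]", candidate):          # require an uppercase letter
        return False
    if not re.search(r"\d", candidate):             # require a digit
        return False
    if not re.search(r"[^A-Za-z0-9]", candidate):   # require a special character
        return False
    if candidate in previous_passwords[-history:]:  # disallow the last few passwords
        return False
    return True

print(password_acceptable("Summer2024!", ["Winter2023!"]))  # True
print(password_acceptable("password", []))                  # False: too short, missing classes
```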

Two-Way Authentication

Two-Way Authentication involves both the user and system or network convincing each other
that they know the shared password without transmitting this password over any communication
channel. This is done by using the password as the encryption key to transmit a randomly



generated piece of information, or “the challenge.” The other side must then return a similarly
encrypted value which is some predetermined function of the originally offered information,
his/her “response,” which proves that he/she was able to decrypt the challenge. Kerberos (a
computer network authentication protocol) is a good example of this, as it sends an encrypted
integer N, and the response must be the encrypted integer N + 1.
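
A minimal sketch of the challenge/response exchange described above. For simplicity it substitutes a keyed digest (HMAC) of N + 1 for true encryption, so it illustrates the idea of proving knowledge of the shared secret without transmitting it, rather than the Kerberos protocol itself:

```python
import hashlib, hmac, secrets

SHARED_PASSWORD = b"shared-secret"   # assumption: known to both sides, never transmitted

def keyed(value: int) -> str:
    # Stand-in for "encrypting with the password": a keyed digest of the integer.
    return hmac.new(SHARED_PASSWORD, str(value).encode(), hashlib.sha256).hexdigest()

# One side issues a random challenge N and computes the expected proof for N + 1.
n = secrets.randbelow(1_000_000)
expected_response = keyed(n + 1)

# The other side proves knowledge of the shared secret by answering for N + 1.
client_response = keyed(n + 1)

print(hmac.compare_digest(client_response, expected_response))  # True: both know the secret
```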

Common setup and access rights


Access Rights and Authority Levels are the rights or power granted to users to create, change,
delete or view data and files within a system or network. These rights vary from user to user, and
can range from anonymous login (Guest) privileges to Superuser (root) privileges. Guest and
Superuser accounts are the two extremes, as individual access rights can be denied or granted to
each user. Usually, only the system administrator (a.k.a. the Superuser) has the ability to grant or
deny these rights.

Guest accounts, or anonymous logins, are set up so that multiple users can log in to the account
at the same time without a password. Users are sometimes asked to type a username. This
account has very limited access, and is often only allowed to access special public files. Usually,
anonymous accounts have read access rights only for security purposes.

The superuser is an authority level assigned to system administrators on most computer


operating systems. In UNIX and related operating systems, this level is also called root, and has
all access rights in the system, including changing ownership of files. In pre-Windows XP and
NT systems (such as DOS and Windows 9x), all users are effectively superusers, and all users
have all access rights. In Windows NT and related systems (such as Windows 2000 and XP), a
superuser is known as the Administrator account. However, this Administrator account may or
may not exist, depending on whether separation of privileges has been set up.

Logical security protects computer software by discouraging unauthorized user access through user identification, passwords, authentication, biometrics and smart cards. Physical security prevents
and discourages attackers from entering a building by installing fences, alarms, cameras, security
guards and dogs, electronic access control, intrusion detection and administration access
controls. The difference between logical security and physical security is logical security protects
access to computer systems and physical security protects the site and everything located within
the site.



TOPIC 5

DATA/SOFTWARE SECURITY

 Use of the normal security systems


 Vulnerability assessment
Vulnerability analysis, also known as vulnerability assessment, is a process that defines,
identifies, and classifies the security holes (vulnerabilities) in a computer, network, or
communications infrastructure. In addition, vulnerability analysis can forecast the effectiveness
of proposed countermeasures and evaluate their actual effectiveness after they are put into use.

Vulnerability analysis consists of several steps:

 Defining and classifying network or system resources


 Assigning relative levels of importance to the resources
 Identifying potential threats to each resource
 Developing a strategy to deal with the most serious potential problems first
 Defining and implementing ways to minimize the consequences if an attack occurs.

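To illustrate the first steps in the list above, a hedged sketch that records resources, assigns importance, attaches potential threats, and orders remediation by a simple importance-times-likelihood score; all assets, threats, and numbers are invented:

```python
# Illustrative prioritization for a vulnerability assessment.
assets = [
    {"name": "customer DB",    "importance": 5,
     "threats": [("SQL injection", 0.6), ("insider misuse", 0.3)]},
    {"name": "public website", "importance": 3,
     "threats": [("defacement", 0.5)]},
    {"name": "print server",   "importance": 1,
     "threats": [("malware", 0.2)]},
]

def risk_ranking(asset_list):
    scored = []
    for asset in asset_list:
        for threat, likelihood in asset["threats"]:
            scored.append((asset["importance"] * likelihood, asset["name"], threat))
    return sorted(scored, reverse=True)   # deal with the most serious problems first

for score, name, threat in risk_ranking(assets):
    print(f"{score:4.1f}  {name}: {threat}")
```
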
If security holes are found as a result of vulnerability analysis, a vulnerability disclosure may be
required. The person or organization that discovers the vulnerability, or a responsible industry
body such as the Computer Emergency Readiness Team (CERT), may make the disclosure. If
the vulnerability is not classified as a high level threat, the vendor may be given a certain amount
of time to fix the problem before the vulnerability is disclosed publicly.

The third stage of vulnerability analysis (identifying potential threats) is sometimes performed by
a white hat using ethical hacking techniques. Using this method to assess vulnerabilities, security
experts deliberately probe a network or system to discover its weaknesses. This process provides
guidelines for the development of countermeasures to prevent a genuine attack.

Vulnerabilities and attacks

A vulnerability is a system susceptibility or flaw. Many vulnerabilities are documented in the Common Vulnerabilities and Exposures (CVE) database, and vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities as they are discovered. An exploitable vulnerability is one for which at least one working attack or "exploit" exists.

To secure a computer system, it is important to understand the attacks that can be made against
it, and these threats can typically be classified into one of the categories below:



Backdoors

A backdoor in a computer system, a cryptosystem or an algorithm is any secret method of bypassing normal authentication or security controls. Backdoors may exist for a number of reasons, including original design or poor configuration. They may also have been added later by an authorized party to allow some legitimate access, or by an attacker for malicious reasons; but regardless of the motives for their existence, they create a vulnerability.

Denial-of-service attack

Denial of service attacks are designed to make a machine or network resource unavailable to its
intended users. Attackers can deny service to individual victims, such as by deliberately entering
a wrong password enough consecutive times to cause the victim account to be locked, or they
may overload the capabilities of a machine or network and block all users at once. While a
network attack from a single IP address can be blocked by adding a new firewall rule, many
forms of Distributed denial of service (DDoS) attacks are possible, where the attack comes from
a large number of points - and defending is much more difficult. Such attacks can originate from
the zombie computers of a botnet, but a range of other techniques are possible including
reflection and amplification attacks, where innocent systems are fooled into sending traffic to the
victim.

Direct-access attacks

Common consumer devices can be used to transfer data surreptitiously: an unauthorized user gaining physical access to a computer is often able to directly download data from it. They may also compromise security by making operating system modifications,
installing software worms, keyloggers, or covert listening devices. Even when the system is
protected by standard security measures, these may be able to be bypassed by booting another
operating system or tool from a CD-ROM or other bootable media. Disk encryption and Trusted
Platform Module are designed to prevent these attacks.

Eavesdropping

Eavesdropping is the act of surreptitiously listening to a private conversation, typically between


hosts on a network. For instance, programs such as Carnivore and NarusInsight have been used
by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines
that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped
upon via monitoring the faint electro-magnetic transmissions generated by the hardware;
TEMPEST is a specification by the NSA referring to these attacks.

Spoofing

Spoofing of user identity describes a situation in which one person or program successfully
masquerades as another by falsifying data.



Tampering

Tampering describes a malicious modification of products. So-called "Evil Maid" attacks and
security services planting of surveillance capability into routers are examples.

Privilege escalation

Privilege escalation describes a situation where an attacker with some level of restricted access is
able to, without authorization, elevate their privileges or access level. So for example a standard
computer user may be able to fool the system into giving them access to restricted data; or even
to "become root" and have full unrestricted access to a system.

Phishing

Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit
card details. Phishing is typically carried out by email spoofing or instant messaging, and it often
directs users to enter details at a fake website whose look and feel are almost identical to the
legitimate one.

Clickjacking

Clickjacking, also known as "UI redress attack or User Interface redress attack", is a malicious
technique in which an attacker tricks a user into clicking on a button or link on another webpage
while the user intended to click on the top level page. This is done using multiple transparent or
opaque layers. The attacker is basically "hijacking" the clicks meant for the top level page and
routing them to some other irrelevant page, most likely owned by someone else. A similar
technique can be used to hijack keystrokes. Carefully drafting a combination of style sheets,
iframes, buttons and text boxes, a user can be led into believing that they are typing the password
or other information on some authentic webpage while it is being channeled into an invisible
frame controlled by the attacker.

Social engineering and Trojans

Social engineering aims to convince a user to disclose secrets such as passwords, card numbers,
etc. by, for example, impersonating a bank, a contractor, or a customer.

Computer protection (countermeasures)

In computer security a countermeasure is an action, device, procedure, or technique that reduces


a threat, vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can
cause, or by discovering and reporting it so that corrective action can be taken.

Some common countermeasures are listed in the following sections:



Security measures

A state of computer "security" is the conceptual ideal, attained by the use of the three processes:
threat prevention, detection, and response. These processes are based on various policies and
system components, which include the following:

• User account access controls and cryptography can protect system files and data, respectively.
 Firewalls are by far the most common prevention systems from a network security
perspective as they can (if properly configured) shield access to internal network
services, and block certain kinds of attacks through packet filtering. Firewalls can be both
hardware- or software-based.
 Intrusion Detection System (IDS) products are designed to detect network attacks in-
progress and assist in post-attack forensics, while audit trails and logs serve a similar
function for individual systems.
 "Response" is necessarily defined by the assessed security requirements of an individual
system and may cover the range from simple upgrade of protections to notification of
legal authorities, counter-attacks, and the like. In some special cases, a complete
destruction of the compromised system is favored, as it may happen that not all the
compromised resources are detected.

Today, computer security comprises mainly "preventive" measures, like firewalls or an exit
procedure. A firewall can be defined as a way of filtering network data between a host or a
network and another network, such as the Internet, and can be implemented as software running
on the machine, hooking into the network stack (or, in the case of most UNIX-based operating
systems such as Linux, built into the operating system kernel) to provide real time filtering and
blocking. Another implementation is a so-called physical firewall which consists of a separate
machine filtering network traffic. Firewalls are common amongst machines that are permanently
connected to the Internet.

However, relatively few organisations maintain computer systems with effective detection
systems, and fewer still have organized response mechanisms in place. As a result, as Reuters
points out: "Companies for the first time report they are losing more through electronic theft of
data than physical stealing of assets". The primary obstacle to effective eradication of cybercrime
could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it
is basic evidence gathering by using packet capture appliances that puts criminals behind bars.

Reducing vulnerabilities

While formal verification of the correctness of computer systems is possible, it is not yet
common. Operating systems formally verified include seL4, and SYSGO's PikeOS - but these
make up a very small percentage of the market.

Properly implemented cryptography is now virtually impossible to break directly. Breaking it
requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the
transmission), or some other extra cryptanalytic information.



Two factor authentication is a method for mitigating unauthorized access to a system or sensitive
information. It requires "something you know"; a password or PIN, and "something you have"; a
card, dongle, cellphone, or other piece of hardware. This increases security as an unauthorized
person needs both of these to gain access.

Social engineering and direct computer access (physical) attacks can only be prevented by non-
computer means, which can be difficult to enforce, relative to the sensitivity of the information.
Even in a highly disciplined environment, such as in military organizations, social engineering
attacks can still be difficult to foresee and prevent.

It is possible to reduce an attacker's chances by keeping systems up to date with security patches
and updates, using a security scanner or/and hiring competent people responsible for security.
The effects of data loss/damage can be reduced by careful backing up and insurance.

Security by design

Security by design, or alternately secure by design, means that the software has been designed
from the ground up to be secure. In this case, security is considered as a main feature.

Some of the techniques in this approach include:

 The principle of least privilege, where each part of the system has only the privileges that
are needed for its function. That way even if an attacker gains access to that part, they
have only limited access to the whole system.
 Automated theorem proving to prove the correctness of crucial software subsystems.
 Code reviews and unit testing, approaches to make modules more secure where formal
correctness proofs are not possible.
 Defense in depth, where the design is such that more than one subsystem needs to be
violated to compromise the integrity of the system and the information it holds.
 Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-
safe for the equivalent in safety engineering). Ideally, a secure system should require a
deliberate, conscious, knowledgeable and free decision on the part of legitimate
authorities in order to make it insecure.
 Audit trails tracking system activity, so that when a security breach occurs, the
mechanism and extent of the breach can be determined. Storing audit trails remotely,
where they can only be appended to, can keep intruders from covering their tracks.
 Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept
as short as possible when bugs are discovered.

Security architecture

The Open Security Architecture organization defines IT security architecture as "the design
artifacts that describe how the security controls (security countermeasures) are positioned, and
how they relate to the overall information technology architecture. These controls serve the
purpose to maintain the system's quality attributes: confidentiality, integrity, availability,
accountability and assurance services".



Security architecture can also be defined as "a unified security design that addresses the necessities and
potential risks involved in a certain scenario or environment. It also specifies when and where to
apply security controls. The design process is generally reproducible." The key attributes of
security architecture are:

 The relationship of different components and how they depend on each other.
 The determination of controls based on risk assessment, good practice, finances, and
legal matters.
 The standardization of controls.

Hardware protection mechanisms

While hardware may be a source of insecurity, such as with microchip vulnerabilities


maliciously introduced during the manufacturing process, hardware-based or assisted computer
security also offers an alternative to software-only computer security. Using devices and
methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling
USB ports, and mobile-enabled access may be considered more secure due to the physical access
(or sophisticated backdoor access) required in order to be compromised. Each of these is covered
in more detail below.

 USB dongles are typically used in software licensing schemes to unlock software
capabilities, but they can also be seen as a way to prevent unauthorized access to a
computer or other device's software. The dongle, or key, essentially creates a secure
encrypted tunnel between the software application and the key. The principle is that an
encryption scheme on the dongle, such as Advanced Encryption Standard (AES) provides
a stronger measure of security, since it is harder to hack and replicate the dongle than to
simply copy the native software to another machine and use it. Another security
application for dongles is to use them for accessing web-based content such as cloud
software or Virtual Private Networks (VPNs). In addition, a USB dongle can be
configured to lock or unlock a computer.
 Trusted platform modules (TPMs) secure devices by integrating cryptographic
capabilities onto access devices, through the use of microprocessors, or so-called
computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to
detect and authenticate hardware devices, preventing unauthorized network and data
access.

 Computer case intrusion detection refers to a push-button switch which is triggered when
a computer case is opened. The firmware or BIOS is programmed to show an alert to the
operator when the computer is booted up the next time.
 Drive locks are essentially software tools to encrypt hard drives, making them
inaccessible to thieves. Tools exist specifically for encrypting external drives as well.
 Disabling USB ports is a security option for preventing unauthorized and malicious
access to an otherwise secure computer. Infected USB dongles connected to a network
from a computer inside the firewall are considered by Network World as the most
common hardware threat facing computer networks.



 Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of
cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy
(LE), Near field communication (NFC) on non-iOS devices and biometric validation such
as thumb print readers, as well as QR code reader software designed for mobile devices,
offer new, secure ways for mobile phones to connect to access control systems. These
control systems provide computer security and can also be used for controlling access to
secure buildings.

Secure operating systems

One use of the term "computer security" refers to technology that is used to implement secure
operating systems. Much of this technology is based on science developed in the 1980s and used
to produce what may be some of the most impenetrable operating systems ever. Though still
valid, the technology is in limited use today, primarily because it imposes some changes to
system management and also because it is not widely understood. Such ultra-strong secure
operating systems are based on operating system kernel technology that can guarantee that
certain security policies are absolutely enforced in an operating environment. An example of
such a Computer security policy is the Bell-LaPadula model. The strategy is based on a coupling
of special microprocessor hardware features, often involving the memory management unit, to a
special correctly implemented operating system kernel. This forms the foundation for a secure
operating system which, if certain critical parts are designed and implemented correctly, can
ensure the absolute impossibility of penetration by hostile elements. This capability is enabled
because the configuration not only imposes a security policy, but in theory completely protects
itself from corruption. Ordinary operating systems, on the other hand, lack the features that
assure this maximal level of security. The design methodology to produce such secure systems is
precise, deterministic and logical.

Systems designed with such methodology represent the state of the art of computer security
although products using such security are not widely known. In sharp contrast to most kinds of
software, they meet specifications with verifiable certainty comparable to specifications for size,
weight and power. Secure operating systems designed this way are used primarily to protect
national security information, military secrets, and the data of international financial institutions.
These are very powerful security tools and very few secure operating systems have been certified
at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to
"unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS
LAN). The assurance of security depends not only on the soundness of the design strategy, but
also on the assurance of correctness of the implementation, and therefore there are degrees of
security strength defined for COMPUSEC. The Common Criteria quantifies security strength of
products in terms of two components, security functionality and assurance level (such as EAL
levels), and these are specified in a Protection Profile for requirements and a Security Target for
product descriptions. None of these ultra-high assurance secure general purpose operating
systems have been produced for decades or certified under Common Criteria.

In USA parlance, the term High Assurance usually suggests the system has the right security
functions that are implemented robustly enough to protect DoD and DoE classified information.
Medium assurance suggests it can protect less valuable information, such as income tax



information. Secure operating systems designed to meet medium robustness levels of security
functionality and assurances have seen wider use within both government and commercial
markets. Medium robust systems may provide the same security functions as high assurance
secure operating systems but do so at a lower assurance level (such as Common Criteria levels
EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are
implemented flawlessly, and therefore less dependable. These systems are found in use on web
servers, guards, database servers, and management hosts and are used not only to protect the data
stored on these systems but also to provide a high level of protection for network connections
and routing services.

Secure coding

If the operating environment is not based on a secure operating system capable of maintaining a
domain for its own execution, and capable of protecting application code from malicious
subversion, and capable of protecting the system from subverted code, then high degrees of
security are understandably not possible. While such secure operating systems are possible and
have been implemented, most commercial systems fall in a 'low security' category because they
rely on features not supported by secure operating systems (such as portability). In low
security operating environments, applications must be relied on to participate in their own
protection. There are 'best effort' secure coding practices that can be followed to make an
application more resistant to malicious subversion.

In commercial environments, the majority of software subversion vulnerabilities result from a
few known kinds of coding defects. Common software defects include buffer overflows, format
string vulnerabilities, integer overflow, and code/command injection. These defects can be used
to cause the target system to execute what it takes to be ordinary data. However, that "data"
contains executable instructions, allowing the attacker to gain control of the processor.

Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord,
"Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of
these defects, but are still prone to code/command injection and other software defects which
facilitate subversion.
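
To make the code/command injection risk concrete, here is a minimal Python sketch (not taken from
any particular product) contrasting an injectable call with a safer, parameterised one. The
function names and the use of ping are illustrative assumptions only.

    import subprocess

    def ping_unsafe(host):
        # DANGEROUS: the host string is pasted into a shell command, so input such as
        # "8.8.8.8; rm -rf /" would also be executed by the shell.
        subprocess.run("ping -c 1 " + host, shell=True)

    def ping_safer(host):
        # Safer: the argument is passed as a separate list element and never interpreted
        # by a shell, which closes this particular injection hole.
        subprocess.run(["ping", "-c", "1", host], check=False)

    if __name__ == "__main__":
        ping_safer("127.0.0.1")

The same idea applies to SQL queries and format strings: keep untrusted input as data and never
splice it into code.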

Another bad coding practice occurs when an object is deleted during normal operation yet the
program neglects to update any of the associated memory pointers, potentially causing system
instability when that location is referenced again. This is called a dangling pointer, and the first
known exploit for this particular problem was presented in July 2007. Before this publication the
problem was known but considered to be academic and not practically exploitable.

Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically
achievable: code (ideally read-only) and data (generally read/write) both tend to contain some
form of defect.



Capabilities and access control lists

Within computer systems, two of many security models capable of enforcing privilege separation
are access control lists (ACLs) and capability-based security. Using ACLs to confine programs
has been proven to be insecure in many situations, such as if the host computer can be tricked
into indirectly allowing restricted file access, an issue known as the confused deputy problem. It
has also been shown that the promise of ACLs of giving access to an object to only one person
can never be guaranteed in practice. Both of these problems are resolved by capabilities. This
does not mean practical flaws exist in all ACL-based systems, but only that the designers of
certain utilities must take responsibility to ensure that they do not introduce flaws.

Capabilities have been mostly restricted to research operating systems, while commercial OSs
still use ACLs. Capabilities can, however, also be implemented at the language level, leading to
a style of programming that is essentially a refinement of standard object-oriented design. An
open source project in the area is the E language.
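
The difference between the two models can be sketched in a few lines of Python. This is only a
toy illustration with invented names; real ACL and capability systems are enforced by the
operating system or language runtime, not by application code like this.

    import secrets

    # ACL model: each object carries a list of principals allowed to use it,
    # and every access is checked against the caller's identity.
    acl = {"payroll.db": {"alice", "bob"}}

    def acl_read(user, obj):
        return user in acl.get(obj, set())

    # Capability model: possession of an unforgeable token *is* the permission;
    # no ambient check of the caller's identity is made.
    capabilities = {}

    def grant_capability(obj):
        token = secrets.token_hex(16)
        capabilities[token] = obj
        return token

    def cap_read(token):
        return capabilities.get(token)

    alice_token = grant_capability("payroll.db")
    print(acl_read("alice", "payroll.db"), cap_read(alice_token))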

The most secure computers are those not connected to the Internet and shielded from any
interference. In the real world, the most secure systems are operating systems where security is
not an add-on.

Response to breaches

Responding forcefully to attempted security breaches (in the manner that one would for
attempted physical security breaches) is often very difficult for a variety of reasons:

 Identifying attackers is difficult, as they are often in a different jurisdiction to the systems
they attempt to breach, and operate through proxies, temporary anonymous dial-up
accounts, wireless connections, and other anonymizing procedures which make
backtracking difficult and are often located in yet another jurisdiction. If they
successfully breach security, they are often able to delete logs to cover their tracks.
 The sheer number of attempted attacks is so large that organisations cannot spend time
pursuing each attacker (a typical home user with a permanent (e.g., cable modem)
connection will be attacked at least several times per day, so more attractive targets
could be presumed to see many more). Note, however, that most of the sheer bulk of these
attacks is made by automated vulnerability scanners and computer worms.
 Law enforcement officers are often unfamiliar with information technology, and so lack
the skills and interest in pursuing attackers. There are also budgetary constraints. It has
been argued that the high cost of technology, such as DNA testing, and improved
forensics mean less money for other kinds of law enforcement, so the overall rate of
criminals not getting dealt with goes up as the cost of the technology increases. In
addition, the identification of attackers across a network may require logs from various
points in the network and in many countries, the release of these records to law
enforcement (with the exception of being voluntarily surrendered by a network
administrator or a system administrator) requires a search warrant and, depending on the
circumstances, the legal proceedings required can be drawn out to the point where the
records are either regularly destroyed, or the information is no longer relevant.



 Employing virus security precautions
You must safeguard your PC. Following these basic rules will help you protect you and your
family whenever you go online.

1. Protect your computer with strong security software and keep it updated. McAfee
Total Protection provides proven PC protection from Trojans, hackers, and spyware. Its
integrated anti-virus, anti-spyware, firewall, anti-spam, anti-phishing, and backup
technologies work together to combat today’s advanced multi-faceted attacks. It scans
disks, email attachments, files downloaded from the web, and documents generated by
word processing and spreadsheet programs.
2. Use a security conscious Internet service provider (ISP) that implements strong anti-
spam and anti-phishing procedures. The SpamHaus organization lists the current top-10
worst ISPs in this category—consider this when making your choice.
3. Enable automatic Windows updates, or download Microsoft updates regularly, to keep
your operating system patched against known vulnerabilities. Install patches from other
software manufacturers as soon as they are distributed. A fully patched computer behind
a firewall is the best defense against Trojan and spyware installation.
4. Use great caution when opening attachments. Configure your anti-virus software to
automatically scan all email and instant message attachments. Make sure your email
program doesn’t automatically open attachments or automatically render graphics, and
ensure that the preview pane is turned off. Never open unsolicited emails, or attachments
that you’re not expecting—even from people you know.
5. Be careful when using P2P file sharing. Trojans hide within file-sharing programs
waiting to be downloaded. Use the same precautions when downloading shared files that
you do for email and instant messaging. Avoid downloading files with the extensions
.exe, .scr, .lnk, .bat, .vbs, .dll, .bin, and .cmd (a simple scan for such files is sketched after this list).
6. Use security precautions for your PDA, cell phone, and Wi-Fi devices. Viruses and
Trojans arrive as an email/IM attachment, are downloaded from the Internet, or are
uploaded along with other data from a desktop. Cell phone viruses and mobile phishing
attacks are in the beginning stages, but will become more common as more people access
mobile multimedia services and Internet content directly from their phones. Mobile anti-
virus software for selected devices is available for free with some McAfee PC
products. Always use a PIN code on your cell phone and never install or download
mobile software from an untrusted source.
7. Configure your instant messaging application correctly. Make sure it does not open
automatically when you fire up your computer.
8. Beware of spam-based phishing schemes. Don’t click on links in emails or IM.
9. Back up your files regularly and store the backups somewhere besides your PC. If you
fall victim to a virus attack, you can recover photos, music, movies, and personal
information like tax returns and bank statements.
10. Stay aware of current virus news by checking sites like McAfee Labs Threat Center.
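
As promised in point 5 above, the following is a minimal Python sketch (not part of any vendor's
product) that flags files in a download folder whose extensions are commonly abused by malware.
The folder name is an assumption; adjust it for your own system.

    from pathlib import Path

    RISKY_EXTENSIONS = {".exe", ".scr", ".lnk", ".bat", ".vbs", ".dll", ".bin", ".cmd"}

    def flag_risky_downloads(folder):
        # Walk the folder and print any file whose extension is on the risky list.
        for path in Path(folder).rglob("*"):
            if path.is_file() and path.suffix.lower() in RISKY_EXTENSIONS:
                print("Review before opening:", path)

    if __name__ == "__main__":
        flag_risky_downloads(str(Path.home() / "Downloads"))

Such a scan is no substitute for anti-virus software; it only highlights file types that deserve
extra caution.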



 Employing Internet security precautions

Protect Your Computer from Viruses, Hackers, and Spies

Today we use our computers to do so many things. We go online to search for information, shop,
bank, do homework, play games, and stay in touch with family and friends. As a result, our
computers contain a wealth of personal information about us. This may include banking and
other financial records, and medical information - information that we want to protect. If your
computer is not protected, identity thieves and other fraudsters may be able to get access and
steal your personal information. Spammers could use your computer as a "zombie drone" to send
spam that looks like it came from you. Malicious viruses or spyware could be deposited on your
computer, slowing it down or destroying files.

By using safety measures and good practices to protect your home computer, you can protect
your privacy and your family. The following tips are offered to help you lower your risk while
you're online.

Install a firewall

A firewall is a software program or piece of hardware that blocks hackers from entering and
using your computer. Hackers search the Internet the way some telemarketers automatically dial
random phone numbers. They send out pings (calls) to thousands of computers and wait for
responses. Firewalls prevent your computer from responding to these random calls. A firewall
blocks communications to and from sources you don't permit. This is especially important if you
have a high-speed Internet connection, like DSL or cable.

Some operating systems have built-in firewalls that may be shipped in the "off" mode. Be sure to
turn your firewall on. To be effective, your firewall must be set up properly and updated
regularly. Check your online "Help" feature for specific instructions.

Use anti-virus software

Anti-virus software protects your computer from viruses that can destroy your data, slow down
or crash your computer, or allow spammers to send email through your account. Anti-virus
protection scans your computer and your incoming email for viruses, and then deletes them. You
must keep your anti-virus software updated to cope with the latest "bugs" circulating the Internet.
Most anti-virus software includes a feature to download updates automatically when you are
online. In addition, make sure that the software is continually running and checking your system
for viruses, especially if you are downloading files from the Web or checking your email. Set
your anti-virus software to check for viruses when you first turn on your computer. You should
also give your system a thorough scan at least twice a month.

Use anti-spyware software

Spyware is software installed without your knowledge or consent that can monitor your online
activities and collect personal information while you surf the Web. Some kinds of spyware,



called keyloggers, record everything you key in - including your passwords and financial
information. Signs that your computer may be infected with spyware include a sudden flurry of
pop-up ads, being taken to Web sites you don't want to go to, and generally slowed performance.

Spyware protection is included in some anti-virus software programs. Check your anti-virus
software documentation for instructions on how to activate the spyware protection features. You
can buy separate anti-spyware software programs. Keep your anti-spyware software updated and
run it regularly.

To avoid spyware in the first place, download software only from sites you know and trust.
Piggybacking spyware can be an unseen cost of many "free" programs. Don't click on links in
pop-up windows or in spam email.

Manage your system and browser to protect your privacy

Hackers are constantly trying to find flaws or holes in operating systems and browsers. To
protect your computer and the information on it, put the security settings in your system and
browser at medium or higher. Check the "Tool" or "Options" menus for how to do this. Update
your system and browser regularly, taking advantage of automatic updating when it's available.
Windows Update is a service offered by Microsoft. It will download and install software updates
to the Microsoft Windows Operating System, Internet Explorer, Outlook Express, and will also
deliver security updates to you. Patching can also be run automatically for other systems, such as
Macintosh Operating System.

Use a strong password - and keep it to yourself

Protect your computer from intruders by choosing passwords that are hard to guess. Use strong
passwords with at least eight characters, a combination of letters, numbers and special characters.
Don't use a word that can easily be found in a dictionary. Some hackers use programs that can try
every word in the dictionary. Try using a phrase to help you remember your password, using the
first letter of each word in the phrase. For example, HmWc@w2 - How much wood could a
woodchuck chuck. Protect your password the same way you would the key to your home. After
all, it is a "key" to your personal information.
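
The phrase-to-password idea can be automated. The sketch below is only an illustration of the
technique described above, with an assumed suffix of digits and a special character; do not reuse
the printed example as a real password.

    def acronym_password(phrase, suffix="@w2"):
        # Take the first letter of each word, alternating upper and lower case,
        # then append digits and a special character for extra variety.
        initials = "".join(word[0] for word in phrase.split())
        mixed = "".join(c.upper() if i % 2 == 0 else c.lower()
                        for i, c in enumerate(initials))
        return mixed + suffix

    print(acronym_password("How much wood could a woodchuck chuck"))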

Secure your wireless network

If you use a wireless network in your home, be sure to take precautions to secure it against
hackers. Encrypting wireless communications is the first step. Choose a wireless router with an
encryption feature and turn it on. WPA encryption is considered stronger than WEP. Your
computer, router, and other equipment must use the same encryption. If your router enables
identifier broadcasting, disable it. Note the SSID name so you can connect your computers to the
network manually. Hackers know the pre-set passwords of this kind of equipment. Be sure to
change the default identifier on your router and the pre-set administrative password. Turn off
your wireless network when you're not using it.



Remember that public "hot spots" may not be secure. It's safest to avoid accessing or sending
sensitive personal information over a public wireless network. You may also consider buying a
mobile broadband card that will allow you to connect to the Internet without relying on Wi-Fi
hot spots. A mobile broadband card is a device that plugs into your computer, laptop, PDA, or
cell phone and uses a cell phone signal to provide high-speed Internet access. They are sold by
cell phone companies and require a monthly service plan.

Be careful if you share files

Many consumers enjoy sharing digital files, such as music, movies, photos, and software. File-
sharing software that connects your computer to a network of computers is often available for
free. File-sharing can pose several risks. When connected to a file-sharing network, you may
allow others to copy files you didn't intend to share. You might download a virus or bit of
spyware that makes your computer vulnerable to hackers. You might also break the law by
downloading material that is copyright protected.

Shop safely online

When shopping online, check out the Web site before entering your credit card number or other
personal information. Read the privacy policy and look for opportunities to opt out of
information sharing. (If there is no privacy policy posted, beware! Shop elsewhere.) Learn how
to tell when a Web site is secure. Look for "https" in the address bar or an unbroken padlock icon
at the bottom of the browser window. These are signs that your information will be encrypted or
scrambled, protecting it from hackers as it moves across the Internet.

Parents, take control

Don't let your children risk your family's privacy. Make sure they know how to use the Internet
safely. For younger children, install parental control software that limits the Web sites kids can
visit. But remember - no software can substitute for parental supervision.

 Vetting of ICT employees



TOPIC 6

TRANSMISSION SECURITY

Definition -Transmission Security (TRANSEC)

Transmission security (TRANSEC) is the process of securing data transmissions from being
infiltrated, exploited or intercepted by an individual, application or device. TRANSEC secures
data as it travels over a communication medium. It is generally implemented in military and
government organization networks and devices, such as radar and radio communication
equipment.

Transmission Security (TRANSEC) explained


TRANSEC is part of communication security (COMSEC) and is implemented and managed
through several techniques, such as burst encoding, spread spectrum and frequency hopping.
Each transmission stream is secured through a transmission security key (TSK) and
cryptographic algorithm. Both the TSK and algorithm enable the creation of a pseudorandom
sequence on top of the transmitted data. The key goals and objectives of TRANSEC are:

 To create a low probability of interception (LPI) for transmissions


 To create a low probability of detection (LPD) for the measures TRANSEC takes
 To ensure anti-jam, or resistance to jamming

Security: Secure Internet Data Transmission.


Sniff, spoof, encryption

 What Is Transmission Security?


 How Information Is Transmitted
 How Information Is Intercepted and Read
 Sniffing Devices
 Devices for Spoofing
 Methods of Transmissions and Their Levels of Security
 Encryption
o Why Use Encryption?
o Private Key Encryption
o Public Key Encryption
o State-of-the-Art Encryption and Its Future
 Why a Technical Solution Is Never the Whole Solution
 Client/Server Issues
 Secure Computing in Practice
o File Transmission
o Interactive Transmission



 How Much Is Too Much?
 What Level of Security Is Right for You?
 Summary

In the two preceding chapters we examined ways in which to keep your data safe, mainly from
within an organization. I discussed the best ways to keep hackers out of your intranet and how to
protect actual data from viruses and human error as well as the physical security of your software
and hardware. Now that you've secured your tools and applications physically and have taken all
precautions internally to keep data safe, it's time to consider how safe your data is during
transmission. This transmission from one computer to another could be within your LAN, within
your intranet, or over the Internet.

This chapter's topic, secure transmission, explores the security risks involved with data
transmission, such as eavesdropping and decrypting. It discusses why and how to establish
secure channels as well as ways to prevent or foil attacks on these secure channels. It's aimed
primarily at anyone who is trying to design a fully secure system of computers and data or for
anyone interested in encrypting data for transmission. Any individual involved with transmitting
sensitive data-whether in a business that exchanges confidential information, either inside its
corporate headquarters or with customers, or in an organization that exchanges any sensitive data
between just two computers-should not skip this chapter. This includes banks; corporations with
offices in different geographical locations that share proprietary information, regardless of
whether it's public or private; or individuals doing business on the Internet, including selling
products and conducting business transactions.

What Is Transmission Security?


Transmission security is the capability to send a message electronically from one computer
system to another computer system so that only the intended recipient receives and reads the
message and the message received is identical to the message sent. The message would not be
identical if it was altered in any way, whether transmitted over faulty channels or intercepted by
an eavesdropper. Transmission security translates into secure networks. Although many people
regard networks as computers connected by wires, this definition of a network, while technically
correct, misses the point. Rather, networks are transmitted data, the data flowing over wires.

All transmissions can be intercepted. And the cautious user looks at all transmissions as if they
will be intercepted. You can minimize the risks of transmission interception, but you can never,
under any circumstances, completely rule it out. After all, it is people who design and put wires
in their place, and people can get to them. Accessing wires is somewhat comparable, although
much more difficult, to accessing a transmission sent over airwaves, as on a CB radio. For
example, as a ham, you may have a message intended only for other hams. Although hams are
the main communicators on these frequencies, anyone with the right radio equipment can tune in
and listen, so it's likely your message will be received and heard by other listeners who pick up
the frequency, whether you want them to hear it or not.

Similar risks occur with cellular phones, even though most transmission takes place over wire
and not air. One risky transmission occurred between Prince Charles and his mistress Camilla



Parker Bowles when an eavesdropper intercepted a now infamous cellular phone conversation
between the two.

How Information Is Transmitted


Most networking schemes involve data transmission over certain whole sections of the network.
Most network transmissions don't go directly from computer A to computer B. Ethernet
networks, for example, involve transmission to all directly connected computers on the local
network. Two computers are "directly connected" if there is no device between them that filters
the transmission based on its destination. So if computer A sends a message to computer E,
computers B, C, and D will receive the message but will ignore it, because it is not intended for
them, as shown in Figure 16.1. Many other types of networks, including Token Ring, FDDI, and
some switched Ethernets operate on the same idea: Transmitted packets go to many devices on
the network and expect the recipients to ignore messages destined for other computers. This is
much like radio or television transmission, in which signals are sent out in every direction, but
radios and TVs not on the correct station don't use the signal.

How Information Is Intercepted and Read

Any computer with access to the physical network wire or in the vicinity of over-air
transmissions, however, could be instructed not to ignore the signals intended for other
computers. This is the essence of electronic eavesdropping.

Information is considered intercepted when someone other than the intended recipient receives
the information. Data can be intercepted in many ways, such as electronic eavesdropping or by
using the recipient's password. It can occur anywhere, including in a chat room or through an e-
mail exchange.

The tools required to read the transmission depend on how the information is intercepted. If an
intruder is stealing transmissions at the most basic level (stealing the data packets straight off the
wire or out of the air), the interloper will need something that translates electronic signals from
voltage changes to the numbers and letters that those changes represent. Computers for which the
transmission is intended do this automatically, because they are expecting the signal and already
know its characteristics, how to decode it, and what to do with it. A much simpler method would
be intercepting a message by just looking over someone's shoulder to read what they have
written. Again, the legitimate user already has a context in which to interpret the on-screen
information. The snooper, however, still has to interpret the message, and this isn't always so
simple.

Sniffing Devices

There are troubleshooting programs and devices designed to analyze LAN traffic. These are
commonly referred to as packet sniffers, because they are created to "sniff" packets of data for
the network engineer. As mentioned in the preceding section, all transmissions are broadcast
over all the wires. When one computer wants to communicate with another, it sends out an
electrical signal through the network, which could be copper wire, fiber optic cable, or air. The



signal travels over this whole section of the network until it reaches the end of its signal strength
in the air, the end of the wire or cable, or a network device that turns the packet back because the
packet's destination is not on the other side of the device. At each point along this journey that
the signal encounters a network interface, that interface examines the signal. If the interface sees
the signal is for someone else, it ignores it. If the interface recognizes a signal for it, it reads it
and gives it to the other parts of the computer for interpretation and use.

The nice thing about LANs is that the systems administrator can use a sniffer to tap into the wire
to examine it. A systems administrator should occasionally examine these lines to check on the
raw material going over the LAN. This is where packet sniffers are helpful. Packet sniffers will
instruct your computer to look at every signal over the wire or only signals that meet certain
criteria. This allows the systems administrator to analyze and actually read electrical signals.
However, anyone with malicious intent also can use packet sniffers for analyzing and reading
network traffic.
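
For legitimate troubleshooting, a packet sniffer can be scripted in a few lines. The sketch below
assumes the third-party scapy package is installed and that the script is run with administrator
privileges on a network you are authorised to monitor; it simply prints a one-line summary of the
first ten TCP packets it sees.

    from scapy.all import sniff

    def show(pkt):
        # Print a compact summary (source, destination, protocol) of each packet.
        print(pkt.summary())

    # Capture ten TCP packets on the default interface, then stop.
    sniff(filter="tcp", prn=show, count=10)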

Now, you might think there are users out there maliciously using packet sniffers to read data
worldwide, continuously. It's true that there may be many users with malicious intent snooping
around networks, but it is not as simple as just purchasing a packet sniffer. There are devices-
generally referred to as internetworking devices and more specifically referred to as routers and
bridges-that actually filter the electrical signals sent out as data packets. These devices filter
signals logically, which means that any data passing through a bridge or router must be intended
to go through that bridge or router; the destination of the data must be on the other side of the
internetworking device to get through the filter. If the destination of the data is not on the other
side of the filter, the internetworking device won't pass the signal; and if it doesn't pass the
signal, someone on the other side is unable to sniff the information, as shown in Figure 16.2.
Anytime you have a network that requires any sort of logical divisions, you need an
internetworking device. If you are connected to the Internet, you have an internetworking device.
If your local network spans a large physical distance, you have some sort of internetworking
device.

Figure 16.2: This sniffer cannot smell packets on the other side of the router.

Devices for Spoofing


Spoofing is somewhat of an overrated threat. Spoofing means getting your computer to pretend it
is a different computer. The user forces the computer to present credentials to the network that
are false. To do so, the user doesn't need tools but rather information to make those credentials
realistic. The Internet identifies computers by numbers: Every computer has a unique number on
the Internet. Some computers will grant access to systems they are charged with protecting or
resources that they guard on the basis of the identification number presented to them by another
computer. In this way, if a computer presents a fake identification number, the computer that
requested the number could be fooled.

These are generally difficult attacks to carry out because of how information is transmitted from
computer to computer. When information is transmitted, it must follow a route based on your
address. If you are using a fake address, the information returning to you will look for your fake



address and thus take a route that does not lead to you, as shown in Figure 16.3. For example, if
you send mail to someone but you want them to think you are someone else, you put someone
else's return address on the envelope. When they write back to the person at the return address,
the mail carrier delivers the message to that address and not back to you. The Internet equivalent
of the dutiful mail carrier is termed "forbidding source routing" and is easy to enable. You can't
get return messages, so the attack is difficult to carry out. In addition, firewalls know the
difference between inside and outside, and a firewall will ignore messages from outside by
computers claiming to have an inside address. Similarly, the mailroom at IBM will view
suspiciously any internal company mail brought in by a mail carrier. These simple safeguards
make it difficult to carry out a spoof attack from the outside.

Figure 16.3: Spoofed packets reach their destination but not their origin.

A drawback of a spoof attack from inside the company is that if a computer on the Internet at any
time detects any other computer on the Internet with the same Internet address, both computers
will complain. In this case, if someone is spoofing you by pretending to be you and your
computer is on or being monitored, the trick would be detected easily because your computer
will tell you that there is another computer on the network with the same address.

Still another drawback of a spoofing attack is that every network interface on any computer has a
unique identifying number. Anyone trying to spoof your IP address on a local network could
disable the computer he or she is spoofing, avoiding the earlier mentioned conflict. This would
fail, however, if any other computer on the network were using the Address Resolution Protocol
(ARP). The Address Resolution Protocol maps Internet addresses to the hardware number assigned to a
network card. Therefore, turning off your computer would eliminate the IP conflict, but the
interface card number mismatch would require either stealing the network card, making a special
one, or adjusting the ARP on the third computer.

Attacks in which individuals pretend to be another user can occur on several levels. The attacker
can pretend that his or her network interface is one that it isn't by manufacturing a network card
with a fake address. The user then might pretend to have the Internet address of another
computer and thus steal that computer's transmission or create transmissions under the guise of
the impersonated computer. A user could also pretend to be a different person by stealing that
person's username and password in one of about a billion ways. In addition, a user could steal
information simply by gaining access to a computer whose data was not protected against direct
physical intrusion.

Methods of Transmissions and Their Levels of Security

At the most basic level transmission occurs over wires or in the air; every electrical signal travels
one way or the other. Transmission is more secure over wire because an eavesdropper or hacker
must be physically near the wire, whereas an interception of an air transmission can occur
anywhere in reach of the signal.

An attempt to intercept a transmission traveling via fiber by tapping into the cable would be
more easily detected than a tap into copper wire, because the tapper could easily damage or



impair a particular segment of the network, which should be easy to spot. Detecting an
interception that took place over the air would be nearly impossible.

Encryption
There are two aspects to consider when planning for transmission security. The first aspect,
discussed in the preceding paragraph, is how transmissions are physically sent (that is, over wire
or air). The impossibility of preventing physical interception should now be clear. The second
aspect of secure transmission relates to the content that is being transmitted. Securing the content
of the message is done through encryption.

Encryption involves transforming messages to make them legible only for the intended
recipients. Encryption is the process of translating plain text into ciphertext. Human-readable
information intended for transmission is plain text, whereas ciphertext is the text that is actually
transmitted. At the other end, decryption is the process of translating ciphertext back into plain
text. (Figure 16.4 demonstrates the process.) Encryption algorithm refers to the steps that a
personal computer takes to turn plain text into ciphertext. A key is a piece of information, usually
a number that allows the sender to encode a message only for the receiver. Another key also
allows the receiver to decode messages sent to him or her.

Figure 16.4: Plain text is encrypted to produce ciphertext. Ciphertext is decrypted to produce
plain text. Keys are used for both encryption and decryption.

Now that you have the basic encryption jargon down, let's look at why and how encryption is
essential for secure transmissions.

Why Use Encryption?

As you've learned by now, your transmissions can have only so much physical security. It is
reasonable to assume that at some point someone may intercept your transmissions. Whether you
expect an interception or whether you just generally suspect that interceptions may occur, you
should transmit your information in a format that is useless to any interceptors. At the simplest
level, this means when transmitting a message to someone, you use a coded message or slang
(nicknames) that no one else understands. When Ulysses S. Grant captured Vicksburg during the
Civil War, he sent a coded but predetermined message to Abraham Lincoln that read "The father
of waters flows unvexed to the sea," meaning that the Union now owned the whole Mississippi
river. Perhaps a good plan at the time, but still, Grant and Lincoln (or their advisers/confidantes)
had to communicate a predetermined message and the message's meaning. A more recent
example of a coded message might involve the use of nicknames. For instance, you and your
sister give nicknames to family members whom you discuss unfavorably. Should a malicious
family member decide to intercept a transmission, you would hope he wouldn't understand which
family members you and your sister refer to in your messages. The obvious drawback of this
coded message, like the Grant-Lincoln message, is that you and the recipient must establish a
system of code before you begin transmitting messages.



A better system is one that allows you to send any message, even one you had not anticipated, to
anyone without fear of interception. This is why an encryption system is so valuable; it allows
any message to be transmitted that will be useless to anyone who intercepts it.

Private Key Encryption

Another rather simple form of encryption is commonly known as private key or symmetric
encryption. It's called private key encryption because each party must know before the message
is sent how to interpret the message. For example, spies in the movies always have a sequence of
statements that they exchange to be sure of each other's identity, like "the sun is shining" must be
followed by "the ice is still slippery." This is an example of encrypting so that only the person
for whom a message is intended will understand it.

Other systems have been developed so that information can be encrypted in a general way.
Again, using history as an example, one encryption method is commonly referred to as Caesar's
code. According to history, Caesar would send messages that were encoded by replacing each
letter in the message with the letter three places higher in the alphabet (A was replaced by D, B
by E, and so on). The recipient just had to change the letters back to find out what the message
said. An enemy who intercepted the message and did not know the method of encoding it would
be unable to decipher it. Clearly though, this encoding method is not terribly difficult to break.
This is called private key encryption because the method of encryption must be kept quiet.
Anyone who knows the method could decode the message. It also is called symmetric because
the same key is used to both encrypt and decrypt the message. Other private key methods have
been devised to be more difficult to break.
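
Caesar's code is simple enough to express in a short Python sketch, which also makes clear why it
is easy to break: there are only 25 possible shifts to try.

    def caesar(text, shift=3):
        out = []
        for ch in text.upper():
            if ch.isalpha():
                # Shift within the alphabet, wrapping around from Z back to A.
                out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            else:
                out.append(ch)
        return "".join(out)

    ciphertext = caesar("ATTACK AT DAWN")      # -> "DWWDFN DW GDZQ"
    plaintext = caesar(ciphertext, shift=-3)   # shifting back recovers the message
    print(ciphertext, plaintext)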

Data Encryption Standard (DES) is a private key system adopted by the U.S. government as a
standard very secure method of encryption. An even more secure private key method is called a
one-time pad. A one-time pad involves sheets of paper with random numbers on them: These
numbers are used to transform the message; each number or sequence of numbers is used only
once. The recipient of the message has an identical pad to use to decrypt the message. One-time
pads have been proven to be unbreakable by anyone who does not have a copy of the pad:
mathematicians can prove that a properly used one-time pad is impossible to break.
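
A one-time pad can be imitated in Python by XOR-ing the message with an equally long run of
random bytes. This toy sketch assumes the pad has already been shared securely; the pad must be
truly random, kept secret, and never reused.

    import secrets

    message = b"The father of waters flows unvexed to the sea"
    pad = secrets.token_bytes(len(message))     # shared with the recipient in advance

    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
    assert recovered == message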

The drawbacks to private key systems, however, are twofold. First, anyone who learns the
method of encryption and obtains the key (the number, or sequence of numbers, used as the secret
input to the encryption system) can read the messages. Second, keys must be exchanged with any
recipient or potential recipient of your message before transmission. So, to exchange keys you need
a secure method of transmission, but essentially what you've done is create a need for another
secure method of transmission.

Public Key Encryption

To overcome the drawbacks of private key systems, a number of mathematicians have invented
public key systems. Unknown until about 30 years ago, public key systems were developed from
some very subtle insights about the mathematics of large numbers and how they relate to the
power of computers. Public key means that anyone can publish his or her method of encryption,



publish a key for his or her messages, and only the recipient can read the messages. This works
because of what is known in math as a trapdoor problem. A trapdoor is a mathematical formula
that is easy to work forward but very hard to work backward. In general it is easy to multiply two
very large numbers together, but it is very difficult to take a very large number and find its two
prime factors. Public key algorithms depend on a person publishing a large public key and others
being unable to factor this public key into its component parts. Because the creator of the key
knows the factors of his or her large number, he or she can use those factors to decode messages
created by others using his or her public key. Those who only know the public key will be unable
to discover the private key, because of the difficulty of factoring the large number. (Figure 16.5
shows the difference between private and public key encryptions.)

Figure 16.5: Private key encryption uses one key to go both ways. Public key encryption uses
one key to encrypt (the public key) and one key to decrypt (the secret key).
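
The trapdoor idea can be made visible with deliberately tiny numbers. The sketch below follows the
textbook RSA recipe; real systems use primes hundreds of digits long, so these values are only
meant to show the arithmetic.

    p, q = 61, 53                    # the private prime factors (kept secret)
    n = p * q                        # 3233, the public modulus
    e = 17                           # public exponent
    phi = (p - 1) * (q - 1)          # 3120, computable only if you know p and q
    d = pow(e, -1, phi)              # private exponent (modular inverse of e)

    message = 65                     # a message encoded as a number smaller than n
    ciphertext = pow(message, e, n)  # anyone with (n, e) can encrypt: gives 2790
    recovered = pow(ciphertext, d, n)
    assert recovered == message

Publishing (n, e) lets anyone encrypt, but decrypting requires d, which in turn requires factoring
n back into p and q.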

Public key methods vary, but one of the most common, and also free, is PGP (pretty good
privacy). This is a public key encryption method that allows you to exchange messages with
anyone that will send you his or her key. When you receive a key from someone, your PGP
software can use that key to encode a message that only that person can interpret. The PGP
method also allows you to encode a signature that only can be decoded using your public key,
ensuring that it was you who sent the message. There are many free software packages that allow
users to encode e-mail and other files they send. These software packages also will generate a
public key for you. The software, along with the source code, is available for almost all
common operating systems.

Public key encryption works because users can send any message to any person without first
meeting them or exchanging secret keys or secret encryption schemes. This obviously makes it an
extremely powerful tool in commerce for transmission of confidential customer information
between buyers and sellers. In addition, public key encryption is extremely secure in practice:
breaking it is only a matter of time, and if someone had enough time, that
person could decipher your message. With commonly used methods, however, even an entire
nation of hackers with the most powerful computers would take many years to decipher
encrypted messages.

Now that I've told you about what many in the world of computer security consider the most
secure method of transmission, I must tell you that there are times when public key encryption
doesn't work. When the method used for encryption isn't secure, the message isn't secure.
Because the methods of encryption are usually public, anyone who is interested in finding a hole
has all the information necessary to find any holes. Holes often are discovered in methods
previously thought to be secure. The fact that the algorithm is public makes the method more
secure over the long term but less secure over the short term. In the long term all the flaws will
be discovered and fixed, but over the short term flaws will be discovered and perhaps exploited.
A second insecurity of public key methods in general is that public key encryption won't work
when a recipient has no method of authenticating the sender. If someone sends you his or her
public key, you can use that to encode a message for that person only-but it doesn't mean they
are who they say they are.



Services of certifying authorities, such as VeriSign, Inc., are needed to ensure the authenticity of
correspondence. These certifying authorities use common identification methods to authenticate
the identity of their subscribers. When verified, the authority issues a digital certificate to the
subscriber. The subscriber then can use this certificate in his or her Web server to carry on secure
communications with those browsing the Web site. Individuals who want to use public keys for
their correspondence or companies that wish to prove their identity in electronic correspondence
also can get an identity service from a certifying authority. Certifying authorities aim to
overcome the aforementioned weakness of public keys being only as authentic as the user who
sends them. The service only moves the dilemma up one level, however, because the authority's
services are only as good as its methods of authenticating subscribers.

Public key also doesn't work if your private keys are compromised. Keeping your private key
secure is essential to the security of the system. Remember that the security of a public key
system depends on no one being able to get your private key by knowing your public key. Your
private key is what you use to decode messages sent to you and to prove your identity to others
to whom you send messages. If someone is able to gain possession of your private key, that
person could read your messages and forge messages from you.

State-of-the-Art Encryption and Its Future

Encryption has often involved making a choice between public and private key security methods.
Public key encryption involves a heavy computing load, meaning that transmission with a public
key takes more time and resources. Private key systems are less cumbersome but also less secure
and less versatile. To overcome the drawbacks of both security methods, users have combined
public and private key systems, such as an exchange of DES keys using a public system and then
using those keys for the private DES system. Remember that private key systems can be stronger
because it is possible to make an unbreakable private key system. A public key system is not
theoretically unbreakable; it's just too difficult to do it in real life. The weak point in a private
key system is the exchange of keys, so the very secure public key method can be used to
exchange keys, and then the completely secure private key system can be used to do the actual
transmission. A second advantage concerns performance: public key systems require a big commitment of
computing power for every message, whereas private key encryption is far less computing intensive
and therefore cheaper and more efficient overall for the bulk of the transmission.
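
The combination can be sketched with the third-party Python "cryptography" package (an assumption;
the text names no particular library): a public RSA key wraps a freshly generated symmetric key,
and the symmetric key encrypts the bulk of the message.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Recipient generates a key pair and publishes the public half.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the message with a fresh symmetric key, then wrap that key
    # with the recipient's public key.
    session_key = Fernet.generate_key()
    bulk_ciphertext = Fernet(session_key).encrypt(b"quarterly results attached")
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Recipient: unwrap the symmetric key with the private key, then decrypt the bulk.
    session_key_again = private_key.decrypt(wrapped_key, oaep)
    plaintext = Fernet(session_key_again).decrypt(bulk_ciphertext)
    assert plaintext == b"quarterly results attached"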

This combination likely will continue and become more common in the future, but it's unlikely
that most systems will become public key. As computing resources advance to make public key
encryption easier, the resources for cracking those keys also advance. This means that keys will
become longer while the calculations will become bigger.

Why a Technical Solution Is Never the Whole Solution


This topic cannot be discussed enough. No matter how good your solutions are, no matter how
many guards are around your computers or how many passwords or encrypted materials you
have, if the people in your organization don't follow good security policies or if you don't have a
clear security policy, your network is not secure. Remember, the goal to good security is to keep
information away from other people, not from other computers. Throughout history people have



gotten information in basically the same ways. For example, disgruntled employees often can be
sources of information leaks to competitors. This happens about 100 times for every one time a
hacker intrudes. Of course you must have the right technical solutions for your network, but they
just aren't important compared with the human concerns. All the important information is really
in someone's head, and it doesn't take packet sniffers to pull it out. (For a complete discussion on
good security policies, see Chapter 14, "Security: Keeping Hackers Out.")

Human history is full of spy stories about stolen information; these stories are never about how
someone used a computer to get the information. Of the many recent incidents of breaches of
national security-Aldrich Ames, who gave details of espionage operations; the Walkers, who
sold Navy code books; the Rosenbergs, who gave away atomic secrets-almost none involved
strictly computer-based breaches. The reason this rarely occurs is that all the data is handled by
humans-they're the ones who put data in computers-and humans have far less strict security than
computers do.

Client/Server Issues
A group known as the computer emergency response team (CERT) at Carnegie-Mellon
University makes it their business to find security holes in the Internet and then to make the
public aware of these holes. CERT especially concerns itself with computer-Internet connections
using TCP/IP protocol and maintains a list of Internet-related security holes. To find the
information about CERT, look for their home page at http://www.cert.org/.

Reading information about holes and keeping abreast of security issues will give you information
about old holes, including what holes have been discovered, allowing you to plug them in your own system.
Usually hackers are aware of old holes and search systems for those holes, creating havoc on
private or public networks. Exploiting unplugged known holes is overwhelmingly more common
than finding a new, undiscovered hole. After an intruder has used a hole to eavesdrop on your
transmissions, that person can use any information you transmit. A hacker could sell your
marketing plans, reschedule your meetings, steal product orders, or provide your customers with
inappropriate or wrong information. Most users don't keep themselves up-to-date on security
holes, exposing themselves to holes anyone else, including hackers, might know about.

In a way, anyone setting up a server or client is creating his or her own security hole. By its
nature, a Web server or a file server is a machine that invites other computers to visit and use its
resources; this basis itself is insecure. The challenge now is to prevent people from using
anything but the resources you have set up for them to access. On the client side, you are always
asking for people to be interactive. A good example is Java. With Java the user asks the server
for an executable file to run locally. This means your computer is specifically taking direction from
another computer. Suppose that the server directs your computer to reconfigure its own hard
drive; this is an example of a security hole. This could happen inadvertently if you have an
incompetent programmer who has written a Java application that damages the computer, or it
could be malicious intent. Although both Java and JavaScript have extensive safeguards, there
are still lingering doubts about how secure they truly are. Never dismiss the inadvertent and
never overemphasize the malicious; they are both equally dangerous.



Secure Computing in Practice

Almost all network computing involves one of two types of transmission: file transfer or
interactive transmission. File transfer involves one computer transferring a block of data and
expecting nothing in return other than acknowledgment of reception. Interactive transmission
involves two computers that have meaningful transmissions flowing in both directions. With file
transmission, only the file to be transferred must be encrypted. Anyone who intercepted the
transfer would only know that something had been transferred. Because only that file must be
encrypted and the file must be ready before transfer, encryption can take place at any time before
transfer. Interactive transmission, however, often involves spontaneous messages and must occur
on both ends.

File Transmission

In practice, there are several types of file transmissions most users perform, including the
transmission of files through FTP (file transfer protocol), submitting forms by a Web server, and
sending e-mail.

Information transferred in this way should be encrypted before transmission. Transferring
unencrypted files with these methods means the files travel as plain text, ready to be intercepted
and interpreted by anyone. Clearly, encrypting files for transmission adds a level of
inconvenience, but to secure the transmission, this inconvenience is unavoidable. Unfortunately,
security decisions always involve a trade-off between security and convenience.

Using encryption in these cases is simple. Many shareware PGP programs exist to allow a user to
encrypt a file. Other stronger methods exist for purchase, including products made by RSA
security. The advantage of using these programs is that the encryption can be tested before the
file is sent, ensuring its usefulness.
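
A minimal sketch of the encrypt-before-transfer workflow, using the third-party "cryptography"
package instead of PGP and a hypothetical FTP server and credentials. Note that plain FTP still
sends the login itself in clear text; only the file contents are protected here.

    import io
    from ftplib import FTP
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # must be shared with the recipient securely
    with open("contract.txt", "rb") as f:    # hypothetical file to protect
        ciphertext = Fernet(key).encrypt(f.read())

    ftp = FTP("ftp.example.com")             # hypothetical server
    ftp.login("user", "password")            # hypothetical credentials
    ftp.storbinary("STOR contract.txt.enc", io.BytesIO(ciphertext))
    ftp.quit()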

Interactive Transmission

To use any computer system over a network interactively, users must overcome two security
exposures. First, users must authenticate themselves, and this exposes the authentication process
to interception. Anyone sending out his or her password over the network is often sending that
password out in clear text, which means anyone eavesdropping can pick up the password and
username and use them. Stolen password and username combinations are the most common
problem of interactive transmission. The other problem occurs while the user is using the system.
The information being typed in is most likely going out in plain text, which can be intercepted.
There are a few systems designed to limit the security risk in using a remote system interactively.

One method is called Kerberos, shown in Figure 16.6. When a user logs into a workstation, that
workstation authenticates the user so that the user's password is never sent over the network in
any form. That workstation then contacts the Kerberos server, which issues the user a ticket; that
ticket contains encrypted information used to authenticate the user of other network computers.
It's secure because the username and password are never transmitted over the network. The local
machine does the entire authentication, and then it uses a secure method of transmission to



authenticate itself to the Kerberos server. The server then passes an encrypted ticket back to the
user, who sends that ticket over the network, as opposed to using his or her password and
username. Contrast this with ordinary Telnet: the user contacts the remote computer,
which then asks the user for his or her username and password. It then transmits both across the
network in the clear.

Figure 16.6: Two computers using Kerberos for authentication require a third computer as a
Kerberos server.

With a Kerberos server this never happens. The user is authenticated locally, and all the
exchanges with the network are encrypted and completed. However, a drawback is that every
machine you want to send information to or any applications or services you wish to use must be
"Kerberized" so that the machine will accept your credentials. A second drawback is that if the
Kerberos server is ever compromised-that is, if an unauthorized person ever gains access to the
Kerberos server-then the integrity of the entire system is compromised.

If you are interacting a lot across the network, that information is insecure. With Kerberos, the
transmission between the machines is not encrypted, just the authentication process is. So
someone couldn't use passwords to gain access; but if all they wanted was to look at the
information you are sending, they could do so. For example, if you log into a financial system
and type in account numbers and financial data, an eavesdropper could get this information
without actually getting on the system.

Secure RPC (Remote Procedure Call) is another method of reducing network security exposure.
The difference between RPC and Kerberos is that after you authenticate yourself to the local
machine, which has your private key stored on it, all your transmission across the network is
encrypted. You can then authenticate yourself to other machines and transmit all your
transactions over a secure channel. Like Kerberos, the main drawback is that any machines you
want to interact with must be equipped with the proper decrypting software, which is a hassle.
Also, because RPC is a public key encryption method, you take a performance hit because all the
encryption and decryption must be done before sending out anything across the network, which
takes a lot of time and computational power.

The final encrypted transmission method is SSL (secure sockets layer). SSL is a method of
encrypting all the communications between computers. It is used to encrypt and decrypt
communications between a Web browser and a Web server. Whenever you use URLs beginning
with https://, you're using SSL. SSL is included with security capable Netscape browsers. SSL
uses technology based on the commercially available public key encryption products of RSA,
Inc. SSL itself is an open standard, and the algorithms are free to all. SSL libraries can be used to
encrypt all traffic among computers, because the encryption occurs at a level that makes it
transparent to both the user and any programs he or she is running.
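
Python's standard ssl module can open the same kind of encrypted channel that a browser uses for
https:// URLs. This short sketch (the host name is just an example) verifies the server's
certificate and prints the negotiated protocol version.

    import socket
    import ssl

    host = "www.example.com"                    # any HTTPS-capable host
    context = ssl.create_default_context()      # verifies the server certificate

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Negotiated protocol:", tls_sock.version())
            request = b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request)           # everything on this socket is encrypted
            print(tls_sock.recv(200))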

How Much Is Too Much?


Security always involves a trade-off between the security of your data and the ease with which
that data is accessible. Like any computer system or any amount of data, you must look carefully



at the dollar value of secure transmissions. Encrypting a transmission so that it is too slow to be
of any value must be weighed against the danger of having the transmission intercepted. The
point of having a network is to transmit important data in a timely fashion. If these functions are
impaired, your security measures are costing, not saving, you money. When implementing
transmission security, your concern must be the amount of time and resources that someone
would have to apply to decipher your transmissions. The simplest measure of this security is the
length of the keys used in your encryption algorithm. Usually the particular software package
that does your encryption will recommend a particular key length. These recommendations are
usually sufficient to ensure your security, and longer keys are often merely an additional burden.

What Level of Security Is Right for You?


I cannot stress often enough that security costs money. If you are implementing complicated
security measures for data that is not valuable, you are wasting money. When deciding on
security measures, make the dollar-smart decision. That is, if you must upgrade all your
computer hardware to handle the public key software, you should make sure that the cost of the
upgrade matches the value of the data that will be encrypted. Clearly those selling products over
the Internet would benefit greatly from extremely secure communications and need to spend
accordingly. On the other hand, a company that uses the Internet only to disseminate catalogs
and price information will not need to have such secure transmissions. Also, a company that
wishes to send out confidential contracts will probably need some sort of secure e-mail
capability, but it may not be necessary to pay a certifying authority for the service of verifying
the company's identity to all its customers. That is, the likelihood of someone intercepting the
transmissions and supplying a false contract is not only slim, but such an attempt would also be
easily detectable. It should be relatively simple to look at the times and manners in which your
company needs secure transmission. Once this has been determined, choose the encryption tools
that cover these paths.

Summary
When it comes to security, secure data transmission fills out the final third of the security
equation, right behind (or before, depending on how you look at it) security of data storage and
security of the physical technology and its location. Assuming you have satisfied the first two-
thirds of the security equation, before setting out to secure your data during transmission, first
determine the value of that data and then spend accordingly to secure it. Valuable data with little
or no security can prove as costly as low-value data weighed down with unnecessary security.

After determining the value of your security, consider the most appropriate options for
transmitting data and then explore the various encryption methods necessary for protecting your
specific data transmissions. And, finally, I can't reiterate enough that a technical solution is never
the whole solution. Data originates from individuals, not from computers, so implementing
strong security policies and procedures is as important as choosing all the physical and technical
barriers to your data.

 Symmetric encryption
A type of encryption where the same key is used to encrypt and decrypt the message. This differs
from asymmetric (or public-key) encryption, which uses one key to encrypt a message and
another to decrypt the message.

Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic
keys for both encryption of plaintext and decryption of cipher text. The keys may be identical or
there may be a simple transformation to go between the two keys. The keys, in practice,
represent a shared secret between two or more parties that can be used to maintain a private
information link. This requirement that both parties have access to the secret key is one of the
main drawbacks of symmetric key encryption, in comparison to public-key encryption.
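
As a rough sketch of this shared-secret idea, the Python example below encrypts and decrypts a
short message with a single key. It assumes the third-party cryptography package is installed
(pip install cryptography); the message text is purely illustrative.

# Minimal sketch of symmetric encryption: one shared key both encrypts and decrypts.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.fernet import Fernet

# The shared secret: whoever holds this key can both encrypt and decrypt.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"Transfer KES 10,000 to account 1234")
plaintext = Fernet(shared_key).decrypt(ciphertext)  # the receiver uses the same key

print(plaintext)  # b'Transfer KES 10,000 to account 1234'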

Types of symmetric-key algorithms

Symmetric-key encryption can use either stream ciphers or block ciphers.

 Stream ciphers encrypt the digits (typically bytes) of a message one at a time.
 Block ciphers take a number of bits and encrypt them as a single unit, padding the
plaintext so that it is a multiple of the block size. Blocks of 64 bits have been commonly
used. The Advanced Encryption Standard (AES) algorithm approved by NIST in
December 2001 uses 128-bit blocks.

Implementations

Examples of popular symmetric algorithms include Twofish, Serpent, AES (Rijndael), Blowfish,
CAST5, RC4, 3DES, Skipjack, Safer+/++ (Bluetooth), and IDEA.

Cryptographic primitives based on symmetric ciphers

Symmetric ciphers are commonly used to achieve other cryptographic primitives than just
encryption.

Encrypting a message does not guarantee that the message is not changed while encrypted.
Hence a message authentication code (MAC) is often added to a ciphertext to ensure that changes
to the ciphertext will be noticed by the receiver. Message authentication codes can be constructed
from symmetric ciphers (e.g. CBC-MAC).
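
The short Python sketch below illustrates the MAC idea using HMAC from the standard library
rather than the cipher-based CBC-MAC mentioned above; the key and message are illustrative.
Only a holder of the shared key can compute or verify the tag.

# Illustration of a message authentication code (MAC). This uses HMAC (hash-based)
# rather than CBC-MAC, but the idea is the same: only someone holding the shared
# key can produce or verify the tag attached to the ciphertext.
import hmac
import hashlib
import os

key = os.urandom(32)                     # shared secret between sender and receiver
message = b"ciphertext goes here"        # illustrative payload

tag = hmac.new(key, message, hashlib.sha256).digest()   # sender appends this tag

# Receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print(ok)  # True; any change to the message or the tag would make this False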

However, symmetric ciphers cannot be used for non-repudiation purposes except by involving
additional parties.

Another application is to build hash functions from block ciphers. See one-way compression
function for descriptions of several such methods.

Construction of symmetric ciphers

Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's
construction makes it possible to build invertible functions from other functions that are
themselves not invertible.

Security of symmetric ciphers


Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen
plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the
functions for each round can greatly reduce the chances of a successful attack.

Key generation
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly
always used to generate the symmetric cipher session keys. However, lack of randomness in
those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks
in the past. Therefore, it is essential that an implementation uses a source of high entropy for its
initialization.
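
As a minimal sketch, the snippet below draws key material from Python's secrets module, which
uses the operating system's entropy source; the key and IV sizes shown are merely illustrative.

# Session-key generation from a cryptographically secure source of randomness.
# Ordinary pseudorandom generators (e.g. the 'random' module) must never be used
# for keys or initialization vectors.
import secrets

session_key = secrets.token_bytes(32)   # 256 bits of key material
iv = secrets.token_bytes(16)            # a fresh initialization vector per message
print(session_key.hex(), iv.hex())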

 Asymmetric encryption

Asymmetric cryptography or public-key cryptography is cryptography in which a pair of keys is
used to encrypt and decrypt a message so that it arrives securely. Initially, a network user
receives a public and private key pair from a certificate authority. Any other user who wants to
send an encrypted message can get the intended recipient's public key from a public directory.
They use this key to encrypt the message, and they send it to the recipient. When the recipient
gets the message, they decrypt it with their private key, which no one else should have access to.

A public-key cryptosystem uses two keys: a public key known to everyone and a private or
secret key known only to the recipient of the message. When John wants to send a secure
message to Jane, he uses Jane's public key to encrypt the message. Jane then uses her private key
to decrypt it. An important element of the public key system is that the public and private keys
are related in such a way that only the public key can be used to encrypt messages and only the
corresponding private key can be used to decrypt them. Moreover, it is virtually impossible to
deduce the private key if you know the public key.
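
A minimal sketch of this John-and-Jane exchange is shown below, assuming the third-party
cryptography package; the names and message are illustrative, and RSA with OAEP padding
stands in for a public-key system generally.

# Minimal sketch of public-key encryption using RSA-OAEP.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Jane generates a key pair and publishes only the public half.
jane_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
jane_public = jane_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# John encrypts with Jane's public key...
ciphertext = jane_public.encrypt(b"Meet at noon", oaep)

# ...and only Jane's private key can recover the message.
print(jane_private.decrypt(ciphertext, oaep))  # b'Meet at noon'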

Public-key systems, such as Pretty Good Privacy (PGP), are becoming popular for transmitting
information via the Internet. They are extremely secure and relatively simple to use. The only
difficulty with public-key systems is that you need to know the recipient's public key to encrypt a
message for him or her. What's needed, therefore, is a global registry of public keys, which is
one of the promises of LDAP technology.

Public key cryptography was invented in 1976 by Whitfield Diffie and Martin Hellman. For this
reason, it is sometimes called Diffie-Hellman encryption. It is also called asymmetric encryption
because it uses two keys instead of one key (as in symmetric encryption).

Because of the computational complexity of asymmetric encryption, it is typically only used for
short messages, most commonly the transfer of a symmetric encryption key. This symmetric key
is then used to encrypt the rest of the potentially lengthy conversation, since symmetric
encryption/decryption is based on simpler algorithms and is much faster.

Message authentication involves hashing the message to produce a "digest" and encrypting the
digest with the private key to produce a digital signature. Thereafter anyone can verify this
signature by:
(1) computing the hash of the message,
(2) decrypting the signature with the signer's public key, and
(3) comparing the computed digest with the decrypted digest.
Equality between the digests confirms that the message is unmodified since it was signed, and
that the signer, and no one else, intentionally performed the signature operation, presuming the
signer's private key has remained secret. The security of such a procedure depends on a hash
algorithm of such quality that it is computationally infeasible to alter or find a substitute message
that produces the same digest; studies have shown that even with the MD5 and SHA-1
algorithms, producing an altered or substitute message is not impossible. The current hashing
standard is SHA-2. The message itself can also be used in place of the digest.
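
The sketch below walks through the hash-and-sign procedure just described, again assuming the
third-party cryptography package; RSA-PSS with SHA-256 is used as one possible signature
scheme, and the message text is illustrative.

# Hash-and-sign digital signature sketch using RSA-PSS with SHA-256.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signer_public = signer_private.public_key()

message = b"I agree to the attached contract."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Signing: the library hashes the message with SHA-256 and signs the digest
# with the private key to form the signature.
signature = signer_private.sign(message, pss, hashes.SHA256())

# Verification: anyone holding the public key can check the signature.
try:
    signer_public.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: message unmodified since signing")
except InvalidSignature:
    print("Signature invalid: message altered or wrong signer")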

Public-key algorithms are fundamental security ingredients in cryptosystems, applications and
protocols. They underpin various Internet standards, such as Transport Layer Security (TLS),
S/MIME, PGP, and GPG. Some public key algorithms provide key distribution and secrecy (e.g.,
Diffie–Hellman key exchange), some provide digital signatures (e.g., Digital Signature
Algorithm), and some provide both (e.g., RSA).

Public-key cryptography finds application in, amongst others, the IT security discipline
information security. Information security (IS) is concerned with all aspects of protecting
electronic information assets against security threats. Public-key cryptography is used as a
method of assuring the confidentiality, authenticity and non-repudiability of electronic
communications and data storage.

Understanding

Public-key cryptography is often used to secure electronic communication over an open
networked environment such as the internet, without relying on a covert channel even for key
exchange. Open networked environments are susceptible to a variety of communication security
problems such as man-in-the-middle attacks and other security threats. Security properties
required for communication typically include that the communication being sent must not be
readable during transit (preserving confidentiality), the communication must not be modified
during transit (preserving the integrity of the communication), the communication must originate
from an identified party (sender authenticity) and to ensure non-repudiation or non-denial of the
sending of the communication. Combining public-key cryptography with an Enveloped Public
Key Encryption (EPKE) method allows for the secure sending of a communication over an open
networked environment.

The distinguishing technique used in public-key cryptography is the use of asymmetric key
algorithms, where a key used by one party to perform either encryption or decryption is not the
same as the key used by another in the counterpart operation. Each user has a pair of
cryptographic keys: a public encryption key and a private decryption key. For example, a
key pair used for digital signatures consists of a private signing key and a public verification
key. The public key may be widely distributed, while the private key is known only to its
proprietor. The keys are related mathematically, but the parameters are chosen so that calculating
the private key from the public key is unfeasible.

In contrast, symmetric-key algorithms – variations of which have been used for thousands of
years – use a single secret key, which must be shared and kept private by both the sender and the
receiver, for example in both encryption and decryption. To use a symmetric encryption scheme,
the sender and receiver must securely share a key in advance.

Because symmetric key algorithms are nearly always much less computationally intensive than
asymmetric ones, it is common to exchange a key using a key-exchange algorithm, then transmit
data using that key and a symmetric key algorithm. PGP and the SSL/TLS family of schemes use
this procedure, and are thus called hybrid cryptosystems.
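
A minimal sketch of such a hybrid scheme is given below, assuming the third-party cryptography
package; Fernet stands in for the symmetric cipher and RSA-OAEP for the key-exchange step,
and all names and data are illustrative.

# Hybrid scheme sketch: bulk data is encrypted with a fast symmetric key, and only
# that small key is protected with slow public-key (RSA) encryption.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the payload symmetrically, then wrap the symmetric key with RSA.
session_key = Fernet.generate_key()
encrypted_payload = Fernet(session_key).encrypt(b"a potentially very long document " * 100)
wrapped_key = receiver_public.encrypt(session_key, oaep)

# Receiver: unwrap the session key with the private key, then decrypt the payload.
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
document = Fernet(recovered_key).decrypt(encrypted_payload)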

Description
Two of the best-known uses of public-key cryptography are:

 Public-key encryption, in which a message is encrypted with a recipient's public key. The
message cannot be decrypted by anyone who does not possess the matching private key,
who is thus presumed to be the owner of that key and the person associated with the
public key. This is used in an attempt to ensure confidentiality.

 Digital signatures, in which a message is signed with the sender's private key and can be
verified by anyone who has access to the sender's public key. This verification proves
that the sender had access to the private key, and therefore is likely to be the person
associated with the public key. This also ensures that the message has not been tampered
with, as any manipulation of the message will result in changes to the encoded message
digest, which otherwise remains unchanged between the sender and receiver.

An analogy to public-key encryption is that of a locked mail box with a mail slot. The mail slot is
exposed and accessible to the public – its location (the street address) is, in essence, the public
key. Anyone knowing the street address can go to the door and drop a written message through
the slot. However, only the person who possesses the key can open the mailbox and read the
message.

An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The
message can be opened by anyone, but the presence of the unique seal authenticates the sender.

A central problem with the use of public-key cryptography is confidence/proof that a particular
public key is authentic, in that it is correct and belongs to the person or entity claimed, and has
not been tampered with or replaced by a malicious third party. The usual approach to this
problem is to use a public-key infrastructure (PKI), in which one or more third parties – known
as certificate authorities – certify ownership of key pairs. PGP, in addition to being a certificate
authority structure, has used a scheme generally called the "web of trust", which decentralizes
such authentication of public keys by a central mechanism, and substitutes individual
endorsements of the link between user and public key. To date, no fully satisfactory solution to
the "public key authentication problem" has been found.

Practical considerations

Enveloped Public Key Encryption

Enveloped Public Key Encryption (EPKE) is the method of applying public-key cryptography
and ensuring that an electronic communication is transmitted confidentially, has the contents of
the communication protected against being modified (communication integrity) and cannot be
denied from having been sent (non-repudiation). This is often the method used when securing
communication on an open networked environment, such as by making use of the Transport Layer
Security (TLS) or Secure Sockets Layer (SSL) protocols.

EPKE consists of a two-stage process that includes both Public Key Encryption (PKE) and a
digital signature. Both Public Key Encryption and digital signatures make up the foundation of
Enveloped Public Key Encryption (these two processes are described in full in their own
sections).

For EPKE to work effectively, it is required that:

 Every participant in the communication has their own unique pair of keys. The first key
that is required is a public key and the second key that is required is a private key.
 Each person's own private and public keys must be mathematically related where the
private key is used to decrypt a communication sent using a public key and vice versa.
Some well-known asymmetric encryption algorithms are based on the RSA
cryptosystem.
 The private key must be kept absolutely private by the owner, though the public key can
be published in a public directory such as with a certification authority.

To send a message using EPKE, the sender of the message first signs the message using their
own private key; this ensures non-repudiation of the message. The sender then encrypts the
digitally signed message using the receiver's public key, thus applying a digital envelope to the
message. This step ensures confidentiality during the transmission of the message. The receiver
of the message then uses their private key to decrypt the message, thus removing the digital
envelope, and then uses the sender's public key to decrypt the sender's digital signature. At this
point, if the message has been unaltered during transmission, the message will be clear to the
receiver.
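
The compact sketch below mirrors this sign-then-encrypt flow, assuming the third-party
cryptography package; because the example message is short it is RSA-encrypted directly,
whereas real systems would normally combine it with symmetric encryption and hashing as
described below. All names are illustrative.

# Self-contained sketch of the EPKE flow: sign with the sender's private key, then
# encrypt under the receiver's public key. Assumes the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

message = b"Purchase 100 shares"

# Sender: sign (non-repudiation), then seal the message for the receiver (confidentiality).
signature = sender_priv.sign(message, pss, hashes.SHA256())
envelope = receiver_priv.public_key().encrypt(message, oaep)

# Receiver: open the envelope with their private key, then verify the sender's signature.
opened = receiver_priv.decrypt(envelope, oaep)
sender_priv.public_key().verify(signature, opened, pss, hashes.SHA256())  # raises if tampered
print(opened)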

Due to the computationally complex nature of RSA-based asymmetric encryption algorithms,
encrypting large documents or files for transmission can take a long time. To speed up the
process, instead of applying the sender's digital signature to the large documents or files
themselves, the sender can instead hash the documents or files using a cryptographic hash
function and then digitally sign the generated hash value, thereby
enforcing non-repudiation. Hashing is a much faster computation to complete as opposed to
using an RSA-based digital signature algorithm alone. The sender would then sign the newly
generated hash value and encrypt the original documents or files with the receiver's public key.
The transmission would then take place securely and with confidentiality and non-repudiation
still intact. The receiver would then verify the signature and decrypt the encrypted documents or
files with their private key.

Note: The sender and receiver do not usually carry out the process mentioned above manually
though rather rely on sophisticated software to automatically complete the EPKE process.

Public Key Encryption

The goal of Public Key Encryption (PKE) is to ensure that the communication being sent is kept
confidential during transit.

To send a message using PKE, the sender of the message uses the public key of the receiver to
encrypt the contents of the message. The encrypted message is then transmitted electronically to
the receiver and the receiver can then use their own matching private key to decrypt the message.

The encryption process of using the receiver's public key is useful for preserving the
confidentiality of the message, as only the receiver has the matching private key to decrypt the
message. Therefore, the sender of the message cannot decrypt the message once it has been
encrypted using the receiver's public key. However, PKE does not address the problem of non-
repudiation, as the message could have been sent by anyone who has access to the receiver's
public key.

Digital signatures

The goal of a digital signature scheme is to ensure that the sender of the communication that is
being sent is known to the receiver and that the sender of the message cannot repudiate a
message that they sent. Therefore, the purpose of digital signatures is to ensure the non-
repudiation of the message being sent. This is useful in a practical setting where a sender wishes
to make an electronic purchase of shares and the receiver wants to be able to prove who
requested the purchase. Digital signatures do not provide confidentiality for the message being
sent.

The message is signed using the sender's private signing key. The digitally signed message is
then sent to the receiver, who can then use the sender's public key to verify the signature.

Certification authority

In order for Enveloped Public Key Encryption to be as secure as possible, there needs to be a
"gatekeeper" of public and private keys, or else anyone could create key pairs and masquerade as
the intended sender of a communication, proposing them as the keys of the intended sender. This
digital key "gatekeeper" is known as a certification authority. A certification authority is a trusted
third party that can issue public and private keys, thus certifying public keys. It also works as a
repository that stores key chains and enforces the trust factor.

A postal analogy

An analogy that can be used to understand the advantages of an asymmetric system is to imagine
two people, Alice and Bob, who are sending a secret message through the public mail. In this
example, Alice wants to send a secret message to Bob, and expects a secret reply from Bob.

With a symmetric key system, Alice first puts the secret message in a box, and locks the box
using a padlock to which she has a key. She then sends the box to Bob through regular mail.
When Bob receives the box, he uses an identical copy of Alice's key (which he has somehow
obtained previously, maybe by a face-to-face meeting) to open the box, and reads the message.
Bob can then use the same padlock to send his secret reply.

In an asymmetric key system, Bob and Alice have separate padlocks. First, Alice asks Bob to
send his open padlock to her through regular mail, keeping his key to himself. When Alice
receives it she uses it to lock a box containing her message, and sends the locked box to Bob.
Bob can then unlock the box with his key and read the message from Alice. To reply, Bob must
similarly get Alice's open padlock to lock the box before sending it back to her.

The critical advantage in an asymmetric key system is that Bob and Alice never need to send a
copy of their keys to each other. This prevents a third party – perhaps, in this example, a corrupt
postal worker that will open unlocked boxes – from copying a key while it is in transit, allowing
the third party to spy on all future messages sent between Alice and Bob. So, in the public key
scenario, Alice and Bob need not trust the postal service as much. In addition, if Bob were
careless and allowed someone else to copy his key, Alice's messages to Bob would be
compromised, but Alice's messages to other people would remain secret, since the other people
would be providing different padlocks for Alice to use.

Another kind of asymmetric key system, called a three-pass protocol, requires neither party to
even touch the other party's padlock (or key); Bob and Alice have separate padlocks. First, Alice
puts the secret message in a box, and locks the box using a padlock to which only she has a key.
She then sends the box to Bob through regular mail. When Bob receives the box, he adds his
own padlock to the box, and sends it back to Alice. When Alice receives the box with the two
padlocks, she removes her padlock and sends it back to Bob. When Bob receives the box with
only his padlock on it, Bob can then unlock the box with his key and read the message from
Alice. Note that, in this scheme, the order of decryption is NOT the same as the order of
encryption – this is only possible if commutative ciphers are used. A commutative cipher is one
in which the order of encryption and decryption is interchangeable, just as the order of
multiplication is interchangeable (i.e., A*B*C = A*C*B = C*B*A). This method is secure for
certain choices of commutative ciphers, but insecure for others (e.g., a simple XOR). For example,
let E1() and E2() be two encryption functions, and let "M" be the message. Alice
encrypts the message using E1() and sends E1(M) to Bob. Bob then encrypts it again as
E2(E1(M)) and sends it back to Alice. Now, Alice removes her layer of encryption from
E2(E1(M)) using E1(), obtaining E2(M); when she sends this to Bob, he will be able to decrypt the message using
E2() and get "M". Although none of the keys were ever exchanged, the message "M" may well be
a key (e.g., Alice's Public key). This three-pass protocol is typically used during key exchange.
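
The toy Python sketch below demonstrates the commutativity that the three-pass protocol relies
on, using simple XOR keystreams; as noted above, XOR is not secure for this purpose and is used
only because its commutativity is easy to see.

# Demonstration of the commutativity the three-pass protocol relies on, using XOR.
# NOT secure: an eavesdropper who sees all three passes can XOR them together and
# recover M. Used here purely to show that decryption order can differ from
# encryption order.
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR; applying the same key twice removes it."""
    return bytes(d ^ k for d, k in zip(data, key))

M = b"shared session key"
alice_key = os.urandom(len(M))
bob_key = os.urandom(len(M))

pass1 = xor_cipher(M, alice_key)          # Alice -> Bob: E1(M)
pass2 = xor_cipher(pass1, bob_key)        # Bob -> Alice: E2(E1(M))
pass3 = xor_cipher(pass2, alice_key)      # Alice removes her layer first: E2(M)
recovered = xor_cipher(pass3, bob_key)    # Bob removes his layer and reads M

print(recovered == M)  # True: decryption order differed from encryption order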

Actual algorithms: two linked keys

Not all asymmetric key algorithms operate in this way. In the most common, Alice and Bob each
own two keys, one for encryption and one for decryption. In a secure asymmetric key encryption
scheme, the private key should not be deducible from the public key. This makes possible
public-key encryption, since an encryption key can be published without compromising the
security of messages encrypted with that key.

In other schemes, either key can be used to encrypt the message. When Bob encrypts a message
with his private key, only his public key will successfully decrypt it, authenticating Bob's
authorship of the message. In the alternative, when a message is encrypted with the public key,
only the private key can decrypt it. In this arrangement, Alice and Bob can exchange secret
messages with no prior secret agreement, each using the other's public key to encrypt, and each
using his own to decrypt.

Weaknesses

Among symmetric key encryption algorithms, only the one-time pad can be proven to be secure
against any adversary – no matter how much computing power is available. However, there is no
public-key scheme with this property, since all public-key schemes are susceptible to a "brute-
force key search attack". Such attacks are impractical if the amount of computation needed to
succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential
attackers. In many cases, the work factor can be increased by simply choosing a longer key. But
other algorithms may have much lower work factors, making resistance to a brute-force attack
irrelevant. Some special and specific algorithms have been developed to aid in attacking some
public key encryption algorithms – both RSA and ElGamal encryption have known attacks that
are much faster than the brute-force approach. These factors have changed dramatically in recent
decades, both with the decreasing cost of computing power and with new mathematical
discoveries.

Aside from the resistance to attack of a particular key pair, the security of the certification
hierarchy must be considered when deploying public key systems. Some certificate authority –
usually a purpose-built program running on a server computer – vouches for the identities
assigned to specific private keys by producing a digital certificate. Public key digital certificates
are typically valid for several years at a time, so the associated private keys must be held
securely over that time. When a private key used for certificate creation higher in the PKI server
hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is
possible, making any subordinate certificate wholly insecure.

Major weaknesses have been found for several formerly promising asymmetric key algorithms.
The 'knapsack packing' algorithm was found to be insecure after the development of a new
attack. Recently, some attacks based on careful measurements of the exact amount of time it
takes known hardware to encrypt plain text have been used to simplify the search for likely
decryption keys (see "side channel attack"). Thus, mere use of asymmetric key algorithms does
not ensure security. A great deal of active research is currently underway to both discover, and to
protect against, new attack algorithms.

Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-
the-middle" attack, in which the communication of public keys is intercepted by a third party
(the "man in the middle") and then modified to provide different public keys instead. Encrypted
messages and responses must also be intercepted, decrypted, and re-encrypted by the attacker
using the correct public keys for different communication segments, in all instances, so as to
avoid suspicion. This attack may seem to be difficult to implement in practice, but it is not
impossible when using insecure media (e.g., public networks, such as the Internet or wireless
forms of communications) – for example, a malicious staff member at Alice or Bob's Internet
Service Provider (ISP) might find it quite easy to carry out. In the earlier postal analogy, Alice
would have to have a way to make sure that the lock on the returned packet really belongs to Bob
before she removes her lock and sends the packet back. Otherwise, the lock could have been put
on the packet by a corrupt postal worker pretending to be Bob, so as to fool Alice.

One approach to prevent such attacks involves the use of a certificate authority, a trusted third
party responsible for verifying the identity of a user of the system. This authority issues a
tamper-resistant, non-spoofable digital certificate for the participants. Such certificates are signed
data blocks stating that this public key belongs to that person, company, or other entity. This
approach also has its weaknesses – for example, the certificate authority issuing the certificate
must be trusted to have properly checked the identity of the key-holder, must ensure the
correctness of the public key when it issues a certificate, must be secure from computer piracy,
and must have made arrangements with all participants to check all their certificates before
protected communications can begin. Web browsers, for instance, are supplied with a long list of
"self-signed identity certificates" from PKI providers – these are used to check the bona fides of
the certificate authority and then, in a second step, the certificates of potential communicators.
An attacker who could subvert any single one of those certificate authorities into issuing a
certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if
the certificate scheme were not used at all. In an alternate scenario rarely discussed, an attacker
who penetrated an authority's servers and obtained its store of certificates and keys (public and
private) would be able to spoof, masquerade, decrypt, and forge transactions without limit.

Despite its theoretical and potential problems, this approach is widely used. Examples include
SSL and its successor, TLS, which are commonly used to provide security for web browser
transactions (for example, to securely send credit card details to an online store).

 Duplicate and alternate routing

Duplicate Packets
Duplicate packets are a commonly observed network behaviour.
A packet is duplicated somewhere on the network and received twice at the receiving host. It is
usually not desirable to get these duplicates, as the receiving application might think they are
"fresh" data (which they aren't).

If a sending host thinks a packet was not transmitted correctly because of packet loss, it might
retransmit that packet. The receiving host might already have received the first packet, and will
then receive a second one, which is a duplicate packet.

Connection-oriented protocols such as TCP will detect duplicate packets and will ignore them
completely.

Connectionless protocols such as UDP won't detect duplicate packets, because there is no
information in, for example, the UDP header to identify a packet so that packets can be
recognized as duplicates. The data from that packet will be indicated twice (or even more) to the
application; it is the responsibility of the application to detect duplicates (perhaps by supplying
enough information in its headers to do so) and process them appropriately, if necessary.

Reasons: For most networks, duplicate packets are typical behaviour; for example, this will happen if the
sending side transmitted a packet correctly but thinks it wasn't received at all.

Sometimes, defective hardware/software simply duplicates packets.

Troubleshooting
If the network is configured correctly, there's not much that can be done against duplicate
packets as this is a somewhat "intended" behaviour.

Discussion

 Q: Is it possible to turn off the display of duplicate packets? Over 25% of the packets for
many of my TCP scans are duplicates. I must decode the traffic of the systems now,
before the network engineers have had time to flush out the congestion causes.
A: Try using

not tcp.analysis.duplicate_ack and not tcp.analysis.retransmission

What is alternative and diverse routing?

Alternative routing provides two different cables from the local exchange to your site, so you can
protect against cable failure as your service will be maintained on the alternative route.

With diverse routing, you can protect not only against cable failure but also against local
exchange failure as there are two separate routes from two exchanges to your site.

Alternate routing: The ability to use another transmission line if the regular line is busy.

 Firewall types and configuration

Introduction to firewalls

A firewall is a hardware or software system that prevents unauthorized access to or from a
network. It can be implemented in both hardware and software, or a combination of both.
Firewalls are frequently used to prevent unauthorized Internet users from accessing private
networks connected to the Internet. All data entering or leaving the intranet passes through the
firewall, which examines each packet and blocks those that do not meet the specified security
criteria.

Generally, firewalls are configured to protect against unauthenticated interactive logins from the
outside world. This helps prevent hackers from logging into machines on your network. More
sophisticated firewalls block traffic from the outside to the inside, but permit users on the inside
to communicate a little more freely with the outside.

Firewalls are essential since they provide a single block point, where security and auditing can be
imposed. Firewalls provide an important logging and auditing function; often, they provide
summaries to the administrator about what type/volume of traffic has been processed through it.
This is an important benefit: Providing this block point can serve the same purpose on your
network as an armed guard does for your physical premises.

What are the different types of firewalls?

The National Institute of Standards and Technology (NIST) 800-10 divides firewalls into three
basic types:

 Packet filters
 Stateful inspection
 Proxies

These three categories, however, are not mutually exclusive, as most modern firewalls have a
mix of abilities that may place them in more than one of the three. For more information and
detail on each category, see the NIST Guidelines on firewalls and firewall policy.

One way to compare firewalls is to look at the Transmission Control Protocol/Internet Protocol
(TCP/IP) layers that each is able to examine. TCP/IP communications are composed of four
layers; they work together to transfer data between hosts. When data transfers across networks, it
travels from the highest layer through intermediate layers to the lowest layer; each layer adds
more information. Then the lowest layer sends the accumulated data through the physical
network; the data next moves upward, through the layers, to its destination. Simply put, the data
a layer produces is encapsulated in a larger container by the layer below it. The four TCP/IP
layers, from highest to lowest, are the application, transport, internet and link (network access) layers.

Firewall implementation

The firewall remains a vital component in any network security architecture, and today's
organizations have several types to choose from. It's essential that IT professionals identify the
type of firewall that best suits the organization's network security needs.

Once selected, one of the key questions that shapes a protection strategy is "Where should the
firewall be placed?" There are three common firewall topologies: the bastion host, screened
subnet and dual-firewall architectures. Enterprise security depends on choosing the right firewall
topology.

The next decision to be made, after the topology is chosen, is where to place individual firewall
systems in it. At this point, there are several types to consider, such as bastion host, screened
subnet and multi-homed firewalls.

Remember that firewall configurations do change quickly and often, so it is difficult to keep on
top of routine firewall maintenance tasks. Firewall activity, therefore, must be continuously
audited to help keep the network secure from ever-evolving threats.

Network layer firewalls (stateful inspection)

Network layer firewalls generally make their decisions based on the source address, destination
address and ports in individual IP packets. A simple router is the traditional network layer
firewall, since it is not able to make particularly complicated decisions about what a packet is
actually talking to or where it actually came from.

One important distinction many network layer firewalls possess is that they route traffic directly
through them, which means in order to use one, you either need to have a validly assigned IP
address block or a private Internet address block. Network layer firewalls tend to be very fast and
almost transparent to their users.
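
To make the idea of filtering on addresses and ports concrete, here is a purely illustrative Python
sketch of how such rules might be evaluated; the rule format and addresses are invented for the
example and do not correspond to any particular firewall product.

# Illustrative toy packet filter: allows or denies packets based on source address,
# destination address and destination port, the fields a network layer firewall
# typically inspects. The rule structure is invented for this example.
import ipaddress

RULES = [
    # (source network, destination network, destination port, action)
    ("0.0.0.0/0",      "192.168.1.10/32", 443, "allow"),   # anyone may reach the web server
    ("192.168.1.0/24", "0.0.0.0/0",       None, "allow"),  # internal hosts may go anywhere
    ("0.0.0.0/0",      "0.0.0.0/0",       None, "deny"),   # default deny
]

def decide(src: str, dst: str, dport: int) -> str:
    for src_net, dst_net, port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

print(decide("203.0.113.7", "192.168.1.10", 443))  # allow
print(decide("203.0.113.7", "192.168.1.20", 22))   # deny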

Application layer firewalls

Application layer firewalls are hosts that run proxy servers, which permit no traffic directly
between networks, and they perform elaborate logging and examination of traffic passing
through them. Since proxy applications are simply software running on the firewall, it is a good
place to do logging and access control. Application layer firewalls can be used as network
address translators, since traffic goes in one side and out the other after having passed through an
application that effectively masks the origin of the initiating connection.

However, run-of-the-mill network firewalls can't properly defend applications. As Michael Cobb
explains, application layer firewalls offer Layer 7 security on a more granular level, and may
even help organizations get more out of existing network devices.

In some cases, having an application in the way may impact performance and make the firewall
less transparent. Older application layer firewalls that are still in use are not particularly
transparent to end users and may require some user training. However, more modern application
layer firewalls are often totally transparent. Application layer firewalls tend to provide more
detailed audit reports and tend to enforce more conservative security models than network layer
firewalls.

Future firewalls will likely combine some characteristics of network layer firewalls and
application layer firewalls. It is likely that network layer firewalls will become increasingly
aware of the information going through them, and application layer firewalls have already
become more transparent. The end result will be kind of a fast packet-screening system that logs
and checks data as it passes through.

Proxy firewalls

Proxy firewalls offer more security than other types of firewalls, but at the expense of speed and
functionality, as they can limit which applications the network supports. Why are they more
secure? Unlike stateful firewalls or application layer firewalls, which allow or block network
packets from passing to and from a protected network, traffic does not flow through a proxy.
Instead, computers establish a connection to the proxy, which serves as an intermediary and
initiates a new network connection on behalf of the request. This prevents direct connections
between systems on either side of the firewall and makes it harder for an attacker to discover
where the network is, because they don't receive packets created directly by their target system.

Proxy firewalls also provide comprehensive, protocol-aware security analysis for the protocols
they support. This allows them to make better security decisions than products that focus purely
on packet header information.

Unified threat management

A new category of network security products -- called unified threat management (UTM) --
promises integration, convenience and protection from pretty much every threat out there; these
are especially valuable for enterprise use. As Mike Rothman explains, the evolution of UTM
technology and vendor offerings makes these products even more valuable to enterprises.

Security expert Karen Scarfone defines UTM products as firewall appliances that not only guard
against intrusion but also perform content filtering, spam filtering, application control, Web
content filtering, intrusion detection and antivirus duties; in other words, a UTM device
combines functions traditionally handled by multiple systems. These devices are designed to
combat all levels of malicious activity on the computer network.

An effective UTM solution delivers a network security platform composed of robust and fully
integrated security and networking functions, along with other features such as security
management and policy management by group or user. It is designed to protect against next-
generation application layer threats and offers centralized management through a single
console, all without impairing the performance of the network.

Advantages of using UTM

Convenience and ease of installation are the two key advantages of unified threat management
security appliances. There is also much less human intervention required to install and configure
these appliances. Other advantages of UTM are listed below:

 Reduced complexity: The integrated all-in-one approach simplifies not only product
selection but also product integration, and ongoing support as well.
 Ease of deployment: Since there is much less human intervention required, either
vendors or the customers themselves can easily install and maintain these products.
 Integration capabilities: UTM appliances can easily be deployed at remote locations
without the on-site help of any security professional. In this scenario a plug-and-play
appliance can be installed and managed remotely. This kind of management is synergistic
with large, centralized software-based firewalls.
 Black box character: Users have a tendency to play with things, and the black box
nature of a UTM limits the damage users can do and, thus, reduces help desk calls and
improves security.
 Troubleshooting ease: When a box fails, it is easier to swap out than troubleshoot. This
process gets the node back online quicker, and a non-technical person can do it, too. This
feature is especially important for remote offices without dedicated technical staff on site.

Some of the leading UTM solution providers are Check Point, Cisco, Dell, Fortinet, HP, IBM
and Juniper Networks.

Challenges of using UTM

UTM products are not the right solution for every environment. Many organizations already have
a set of point solutions installed that, combined, provide network security capabilities similar to
what UTMs offer, and there can be substantial costs involved in ripping out and replacing the
existing technology to install a UTM replacement. There are also advantages to using the individual
products together, rather than a UTM. For instance, when individual point products are
combined, the IT staff is able to select the best product available for each network security
capability; a UTM can mean having to compromise and acquire a single product that has
stronger capabilities in some areas and weaker ones in others.

Another important consideration when evaluating UTM solutions is the size of the organization
in which they would be installed. The smallest organizations might not need all the network security
features of a UTM. There is no need for a smaller firm to tax its budget with a UTM if many of
its functions aren't needed. On the other hand, a UTM may not be right for larger, more cyber-
dependent organizations either, since these often need a level of scalability and reliability in their
network security that UTM products might not support (or at least not support as well as a set of
point solutions). Also a UTM system creates a single point of failure for most or all network
security capabilities; UTM failure could conceivably shut down an enterprise, with a catastrophic
effect on company security. How much an enterprise is willing to rely on a UTM is a question
that must be asked, and answered.

 Secure socket layer and transport layer security


SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted
link between a server and a client, typically a web server (website) and a browser, or a mail
server and a mail client (e.g., Outlook).

How SSL Works
When a Web browser tries to connect to a website using SSL, the browser will first request that the
web server identify itself. This prompts the web server to send the browser a copy of its SSL
certificate. The browser checks whether the SSL certificate is trusted; if it is, the browser sends a
message to the web server. The server then responds to the browser with a digitally signed
acknowledgement to start an SSL-encrypted session. This allows encrypted data to be shared
between the browser and the server. You may notice that your browsing session now starts with
https (and not http).

Secure HTTP (S-HTTP)


Another protocol for transmitting data securely over the World Wide Web is Secure HTTP (S-
HTTP). Whereas SSL creates a secure connection between a client and a server, over which any
amount of data can be sent securely, S-HTTP is designed to transmit individual messages
securely. SSL and S-HTTP, therefore, can be seen as complementary rather than competing
technologies. Both protocols were approved by the Internet Engineering Task Force (IETF) as a
standard.

SSL 3.0 Vulnerable and Obsolete


SSL version 3.0 is based on the 1996 draft. In 2014, the 3.0 version of SSL was considered
vulnerable due to POODLE (Padding Oracle On Downgraded Legacy Encryption) attacks. These
attacks allowed secure HTTP cookies or HTTP Authorization header contents to be stolen from
downgraded communications. Today, SSL 3.0 is considered obsolete and has been succeeded by
Transport Layer Security (TLS), but it is still widely deployed.

Going From SSL to TLS


Secure Sockets Layer (SSL) is the predecessor to Transport Layer Security (TLS). TLS is an
Internet Engineering Task Force (IETF) standards track protocol that is based on the earlier SSL
specifications.

Transport Layer Security (TLS) is a protocol that ensures privacy between communicating
applications and their users on the Internet. When a server and client communicate, TLS ensures
that no third party may eavesdrop or tamper with any message. TLS is the successor to the
Secure Sockets Layer (SSL).

 IPv4 and IPv6 security


The Internet Protocol (IP) is the principal communications protocol in the Internet protocol
suite for relaying datagrams across network boundaries. Its routing function enables
internetworking, and essentially establishes the Internet.

IP has the task of delivering packets from the source host to the destination host solely based on
the IP addresses in the packet headers. For this purpose, IP defines packet structures that
encapsulate the data to be delivered. It also defines addressing methods that are used to label the
datagram with source and destination information.

Historically, IP was the connectionless datagram service in the original Transmission Control
Program introduced by Vint Cerf and Bob Kahn in 1974; the other being the connection-oriented
Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as
TCP/IP.

The first major version of IP, Internet Protocol Version 4 (IPv4), is the dominant protocol of the
Internet. Its successor is Internet Protocol Version 6 (IPv6).

Function

The Internet Protocol is responsible for addressing hosts and for routing datagrams (packets)
from a source host to a destination host across one or more IP networks. For this purpose, the
Internet Protocol defines the format of packets and provides an addressing system that has two
functions: identifying hosts and providing a logical location service.

Datagram construction

(Figure: Sample encapsulation of application data from UDP down to a link protocol frame.)

Each datagram has two components: a header and a payload. The IP header is tagged with the
source IP address, the destination IP address, and other metadata needed to route and deliver the
datagram. The payload is the data that is transported. This method of nesting the data payload in
a packet with a header is called encapsulation.

IP addressing and routing

IP addressing entails the assignment of IP addresses and associated parameters to host interfaces.
The address space is divided into networks and subnetworks, involving the designation of
network or routing prefixes. IP routing is performed by all hosts, as well as routers, whose main
function is to transport packets across network boundaries. Routers communicate with one
another via specially designed routing protocols, either interior gateway protocols or exterior
gateway protocols, as needed for the topology of the network.

IP routing is also common in local networks. For example, many Ethernet switches support IP
multicast operations. These switches use IP addresses and the Internet Group Management Protocol
to control multicast routing but use MAC addresses for the actual routing.

Reliability

The design of the Internet protocols is based on the end-to-end principle. The network
infrastructure is considered inherently unreliable at any single network element or transmission
medium and assumes that it is dynamic in terms of availability of links and nodes. No central
monitoring or performance measurement facility exists that tracks or maintains the state of the
network. For the benefit of reducing network complexity, the intelligence in the network is
purposely mostly located in the end nodes of data transmission. Routers in the transmission path
forward packets to the next known, directly reachable gateway matching the routing prefix for
the destination address.

As a consequence of this design, the Internet Protocol only provides best effort delivery and its
service is characterized as unreliable. In network architectural language, it is a connectionless
protocol, in contrast to connection-oriented modes of transmission. Various error conditions may
occur, such as data corruption, packet loss, duplication and out-of-order delivery. Because
routing is dynamic, meaning every packet is treated independently, and because the network
maintains no state based on the path of prior packets, different packets may be routed to the same
destination via different paths, resulting in out-of-order sequencing at the receiver.

Internet Protocol Version 4 (IPv4) provides safeguards to ensure that the IP packet header is
error-free. A routing node calculates a checksum for a packet. If the checksum is bad, the routing
node discards the packet. The routing node does not have to notify either end node, although the
Internet Control Message Protocol (ICMP) allows such notification. By contrast, in order to
increase performance, and since current link layer technology is assumed to provide sufficient
error detection, the IPv6 header has no checksum to protect it.
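
A sketch of the IPv4 header checksum calculation is shown below; the 20-byte sample header is
an illustrative value with its checksum field zeroed out before the computation.

# Sketch of the IPv4 header checksum: a 16-bit one's-complement sum of the header,
# with the checksum field itself treated as zero while computing.
def ipv4_checksum(header: bytes) -> int:
    if len(header) % 2:                 # pad to a whole number of 16-bit words
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# A 20-byte header with its checksum field (bytes 10-11) zeroed for the calculation.
sample_header = bytes.fromhex("45000073000040004011" + "0000" + "c0a80001c0a800c7")
print(hex(ipv4_checksum(sample_header)))  # 0xb861; a router recomputes and compares this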

All error conditions in the network must be detected and compensated by the end nodes of a
transmission. The upper layer protocols of the Internet protocol suite are responsible for
resolving reliability issues. For example, a host may cache network data to ensure correct
ordering before the data is delivered to an application.

Link capacity and capability


The dynamic nature of the Internet and the diversity of its components provide no guarantee that
any particular path is actually capable of, or suitable for, performing the data transmission
requested, even if the path is available and reliable. One of the technical constraints is the size of
data packets allowed on a given link. An application must assure that it uses proper transmission
characteristics. Some of this responsibility lies also in the upper layer protocols. Facilities exist
to examine the maximum transmission unit (MTU) size of the local link and Path MTU
Discovery can be used for the entire projected path to the destination. The IPv4 internetworking
layer has the capability to automatically fragment the original datagram into smaller units for
transmission. In this case, IP provides re-ordering of fragments delivered out of order.

The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment
size to be smaller than the MTU. The User Datagram Protocol (UDP) and the Internet Control
Message Protocol (ICMP) disregard MTU size, thereby forcing IP to fragment oversized datagrams.

Definition - What does Internet Protocol Version 4 (IPv4) mean?

Internet Protocol Version 4 (IPv4) is the fourth revision of the IP and a widely used protocol in
data communication over different kinds of networks. IPv4 is a connectionless protocol used in
packet-switched networks, such as Ethernet. It provides the logical connection between
network devices by providing identification for each device. There are many ways to configure
IPv4 with all kinds of devices - including manual and automatic configurations - depending on
the network type.

IPv4 is based on the best-effort model. This model guarantees neither delivery nor avoidance of
duplicate delivery; these aspects are handled by the upper layer transport.

Techopedia explains Internet Protocol Version 4 (IPv4)

IPv4 is defined and specified in IETF publication RFC 791. It is used in the packet-switched link
layer in the OSI model.

IPv4 uses 32-bit addresses for Ethernet communication in five classes, named A, B, C, D and E.
Classes A, B and C have a different bit length for addressing the network host. Class D addresses
are reserved for multicasting, while class E addresses are reserved for future use.

Class A has subnet mask 255.0.0.0 or /8, B has subnet mask 255.255.0.0 or /16 and class C has
subnet mask 255.255.255.0 or /24. For example, with a /16 subnet mask, the network
192.168.0.0 may use the address range of 192.168.0.0 to 192.168.255.255. Network hosts can
take any address from this range; however, address 192.168.255.255 is reserved for broadcast
within the network.

The maximum number of host addresses IPv4 can assign to end users is 2^32. IPv6 presents a
standardized solution to overcome IPv4's limitations. Because of its 128-bit address length, it can
define up to 2^128 addresses.
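
The standard ipaddress module can be used to check this arithmetic, as in the short sketch below;
the network and address values are illustrative.

# Checking the address arithmetic above with Python's standard 'ipaddress' module:
# a /16 network contains 2**16 addresses, IPv4 has 2**32 addresses in total and
# IPv6 has 2**128.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/16")
print(net.num_addresses)              # 65536 addresses, 192.168.0.0 - 192.168.255.255
print(net.broadcast_address)          # 192.168.255.255, reserved for broadcast

print(2 ** 32)                        # total IPv4 address space
print(2 ** 128)                       # total IPv6 address space
print(ipaddress.ip_address("2001:db8::1").exploded)  # a full 128-bit IPv6 address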

Internet Protocol Version 6 (IPv6)

Definition - What does Internet Protocol Version 6 (IPv6) mean?

Internet Protocol Version 6 (IPv6) is an Internet Protocol (IP) used for carrying data in packets
from a source to a destination over various networks. IPv6 is the enhanced version of IPv4 and
can support very large numbers of nodes as compared to IPv4. It allows for 2^128 possible node,
or address, combinations.

IPv6 is also known as Internet Protocol Next Generation (IPng).

Techopedia explains Internet Protocol Version 6 (IPv6)

Formally launched on June 6, 2012 (World IPv6 Launch), IPv6 uses addresses written in
hexadecimal as eight groups of 16 bits each, providing very large scalability. Like IPv4, IPv6
deals with broadcast-style delivery without containing broadcast addresses in any class.

IPv6 (Internet Protocol version 6) is a set of specifications from the Internet Engineering Task
Force (IETF) that's essentially an upgrade of IP version 4 (IPv4). The basics of IPv6 are similar
to those of IPv4 -- devices can use IPv6 as source and destination addresses to pass packets over
a network, and tools like ping work for network testing as they do in IPv4, with some slight
variations.

The most obvious improvement in IPv6 over IPv4 is that IP addresses are lengthened from 32
bits to 128 bits. This extension anticipates considerable future growth of the Internet and
provides relief for what was perceived as an impending shortage of network addresses. IPv6 also
supports auto-configuration to help correct most of the shortcomings in version 4, and it has
integrated security and mobility features.

IPv6 features include:

 Supports source and destination addresses that are 128 bits (16 bytes) long.
 Requires IPSec support.
 Uses Flow Label field to identify packet flow for QoS handling by router.
 Allows hosts, but not routers, to fragment packets.
 Doesn't include a checksum in the header.
 Uses a link-local scope all-nodes multicast address.
 Does not require manual configuration or DHCP.
 Uses host address (AAAA) resource records in DNS to map host names to IPv6
addresses.
 Uses pointer (PTR) resource records in the IP6.ARPA DNS domain to map IPv6
addresses to host names.
 Requires support for a 1280-byte minimum packet size (without fragmentation).
 Moves optional data to IPv6 extension headers.
 Uses Multicast Neighbor Solicitation messages to resolve IP addresses to link-layer
addresses.
 Uses Multicast Listener Discovery (MLD) messages to manage membership in local
subnet groups.
 Uses ICMPv6 Router Solicitation and Router Advertisement messages to determine the
IP address of the best default gateway.

 Wireless network security


One of the biggest concerns for wireless users is making sure their router and wireless network
are secure. I think we all know by now that, when it comes to technology, there is no such thing
as being 100 percent secure. Once you send data over a wireless signal, you've already
potentially exposed your data to hackers, and once you've set up a router, Wi-Fi signal leeches
are always a possibility.

That said, there are plenty of ways to harden the security of your router and wireless network.
Most of them are fairly easy to put in place, while some take just a bit of configuration in the
router's interface. The steps below will get you going towards a more secure Wi-Fi network.

The routers I use as examples are the Cisco Linksys Smart Wi-Fi AC 1750HD Video Pro
EA6500 and the Netgear N600 Wireless Dual Band Gigabit Router (WNDR3700)—with
Netgear's new Genie management software. Management software varies from router to router,
but most of the settings presented here can be found in just about all consumer wireless routers,
especially those made in the last three years.

Step 1: WPA2
It is common networking knowledge that there is really no excuse to use any encryption method
other than WPA2. Just about all modern wireless clients support it; only the oldest wireless
devices do not.

Step 2: Change Default Passwords


You never want to set up a new router and leave the default passwords in place, either for the
SSIDs (if the router came preconfigured) or for the admin account, which gives access to the
router's management software. It is also worth changing the Guest Account default settings if
guest access is enabled and the router has guest credentials set up.

The admin password is usually changed in the "System" or "Administration" area of the
interface, and the SSID's passphrase under "Wireless Settings." Whatever you choose, use a
long, unique passphrase rather than anything short or guessable; a good guide on creating strong
passwords is worth a read.

Step 3: Change the Default SSID name


I can't tell you how many times I'll look at wireless networks in range and see SSIDs such as
"NETGEAR095", that is, SSIDs that are preconfigured and easily give away the make of the
router. When I see this, I also suspect the person who set up the router left the default admin
credentials in place. Someone with strong intent could access an unsecured network and, with a
quick web search, discover the default password to the admin account just by knowing the type
of router. Give your network a name that does not reveal the make or model of your router.

Step 4: Device Lists


Most routers have a device list that shows the wired and wireless clients currently connected. It
pays to periodically take a look and familiarize yourself with your router's device listing. Years
ago, you would only see a list showing a connected client's IP address, MAC address, and maybe
the hostname.

Newer router interfaces are getting fancier. The most recent interface on the Cisco Linksys
routers shows all of this information plus an icon of the type of client that's connected (a picture
of a bridge, a NAS, a computer, and so on). I've met with vendors who are also releasing cloud
and mobile apps that let you remotely see what or who is connected to your network and alert
you when a device connects. If this is an important feature for you, you can expect to see a lot of
innovation in intrusion detection for home networks soon.

Step 5: Turn off Guest Networking


I've never tested a router out of the box that had guest networking on by default. If I had, that
router would not get a very high review rating. Guest networking allows others to access your
router, and by default it is usually unsecured access (although you can typically add security).
That said, if you inherited your router from someone else, it pays to make sure guest networking
is turned off (or at least secured) when you set the router up for your use. Doing so usually
requires nothing more than ticking a checkbox in the router's interface.

 Mobile device security

Learning guide: Mobile device protection


Learn how to protect your mobile workforce from increasingly varied mobile device security
threats and meet end-user demand with mobile device protection policies and technologies.

Recent trends in enterprise mobility have made mobile device security an imperative. IDC
reported in 2010 that, for the first time, smartphone sales outpaced PC sales. Faced with this
onslaught of devices and recognizing the productivity and cost benefits, organizations are
increasingly implementing bring-your-own-device (BYOD) policies. Research firm J. Gold
Associates reports that about 25%-35% of enterprises currently have a BYOD policy, and it
expects that to grow to over 50% over the next two years. This makes sense as mobility evolves
from a nice-to-have capability to a business advantage.

But the competitive edge and other benefits of mobility can be lost if smartphones and tablet PCs
are not adequately protected against mobile device security threats. While the market shows no
sign of slowing, IT organizations identify security as one of their greatest concerns about
extending mobility. The purpose of this Learning Guide is to help assuage some of those
concerns by arming you with knowledge of mobile device security threats and how to implement
protection measures.

Mobile device security threats

Mobile devices face a number of threats that pose a significant risk to corporate data. Like
desktops, smartphones and tablet PCs are susceptible to digital attacks, but they are also highly
vulnerable to physical attacks given their portability. Here is an overview of the various mobile
device security threats and the risks they pose to corporate assets.

Mobile malware – Smartphones and tablets are susceptible to worms, viruses, Trojans and
spyware similarly to desktops. Mobile malware can steal sensitive data, rack up long distance
phone charges and collect user data. High-profile mobile malware infections are few, but that is
likely to change. In addition, attackers can use mobile malware to carry out targeted attacks
against mobile device users.

Eavesdropping – Carrier-based wireless networks have good link-level security but lack end-to-
end upper-layer security. Data sent from the client to an enterprise server is often unencrypted,
allowing intruders to eavesdrop on users’ sensitive communications.

Unauthorized access – Users often store login credentials for applications on their mobile
devices, making access to corporate resources only a click or tap away. In this manner
unauthorized users can easily access corporate email accounts and applications, social media
networks and more.

Theft and loss – Couple mobile devices’ small form factor with PC-grade processing power and
storage, and you have a high risk for data loss. Users store a significant amount of sensitive
corporate data–such as business email, customer databases, corporate presentations and business
plans–on their mobile devices. It only takes one hurried user to leave their iPhone in a taxicab for
a significant data loss incident to occur.

Unlicensed and unmanaged applications – Unlicensed applications can expose your company to
legal costs. But whether or not applications are licensed, they must be updated regularly to fix
vulnerabilities that could be exploited to gain unauthorized access or steal data. Without
visibility into end users’ mobile devices, there is no guarantee that they are being updated.

Mobile device policies


A mobile device policy is a written document that outlines the organization’s strategy for
allowing tablet PCs and smartphones to connect to the corporate network. A mobile device
policy covers who gets a mobile device, who pays for it, what constitutes acceptable use, user
responsibilities, penalties for non-compliance, and the range of devices and operating systems
the IT organization supports. In order to make these decisions, it is important that management
understands what data is sensitive, whether data is regulated and the impact mobile devices will
have on that data.

Encryption for mobile devices

Encrypting data at rest and in motion helps prevent data loss and successful eavesdropping
attempts on mobile devices. Carrier networks have good encryption of the airlink, but the rest of
the value chain between the client and enterprise server remains open unless explicitly managed.
Contemporary tablet PCs and smartphones can secure Web and email with SSL/TLS, Wi-Fi with
WPA2 and corporate data with mobile VPN clients. The primary challenge facing IT
organizations is ensuring proper configuration and enforcement, as well as protecting credentials
and configurations to prevent reuse on unauthorized devices.

Data at rest can be protected with self-protecting applications that store email messages, contacts
and calendars inside encrypted containers. These containers separate business data from personal
data, making it easier to wipe business data should the device become lost or stolen.
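As a rough illustration of the container idea (an assumption for teaching purposes, not a description of any particular mobile product), the Python sketch below uses the third-party cryptography package to encrypt business data at rest with a symmetric key; a real container would keep the key in a platform key store rather than in the program:

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a secure key store
container = Fernet(key)

ciphertext = container.encrypt(b"customer database extract")
plaintext = container.decrypt(ciphertext)
assert plaintext == b"customer database extract"

# Wiping business data then amounts to destroying the key, which renders
# every ciphertext in the container unreadable.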

Authentication and authorization for mobile devices

Authentication and authorization controls help protect against unauthorized access to mobile
devices and the data on them. Ideally, says Craig Mathias, principal with advisory firm Farpoint
Group, IT organizations should implement two-factor authentication on mobile devices, which
requires users to prove their identity with two different factors, such as something they know
(a password) combined with something they have (a token) or something they are (a fingerprint).
In addition to providing robust authentication and authorization, Mathias says two-factor
authentication can also be used to drive a good encryption implementation. Unfortunately,
two-factor authentication technology is not yet widely available in mobile devices. Until then,
IT organizations should require users to use native device-level authentication (PIN, password).

Remote wipe for mobile device security


Authentication and encryption help prevent data loss in the case of mobile device theft or loss,
but physical security can be further fortified with remote wipe and “phone home” capabilities.
Native remote lock, find and wipe capabilities can be used to either recover a lost mobile device
or permanently delete the data on it. Be careful, however, if you choose to use these
functionalities. Experts recommend defining policies for these technologies and asking users to
sign a consent form. Remote wipe could put the user’s personal data at risk, and “phone home” or
“find me” services can raise privacy concerns.

Mobile device management


When experts and IT professionals talk about securing mobile devices, the conversation often
turns to mobile device management systems, and for good reason. Most mobile device
management products include basic security functionality. They also enable centralized
visibility, policy configuration, application provisioning and compliance reporting for any
mobile device that accesses network resources – regardless of who owns it. These functions are
key security controls and their centralized management makes them practical. For example, most
mobile device management systems feature Exchange ActiveSync policies, which allow you to
deny corporate mail access by unencrypted devices. Others offer more extensive and transparent
control to enable IT organizations to enroll and secure iPads, for example, without relying on
iTunes or Exchange.

 Wireless protected access


Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are two security
protocols and security certification programs developed by the Wi-Fi Alliance to secure wireless
computer networks. The Alliance defined these in response to serious weaknesses researchers
had found in the previous system, Wired Equivalent Privacy (WEP).

WPA (sometimes referred to as the draft IEEE 802.11i standard) became available in 2003. The
Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the
more secure and complex WPA2. WPA2 became available in 2004 and is common shorthand for
the full IEEE 802.11i (or IEEE 802.11i-2004) standard.

A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows WPA and WPA2
security to be bypassed and effectively broken in many situations. WPA and WPA2 security
implemented without using the Wi-Fi Protected Setup feature is unaffected by this
vulnerability.

Stands for "Wi-Fi Protected Access." WPA is a security protocol designed to create secure
wireless (Wi-Fi) networks. It is similar to the WEP protocol, but offers improvements in the way
it handles security keys and the way users are authorized.

For an encrypted data transfer to work, the systems at both ends of the transfer must use the same
encryption/decryption key. While WEP provides each authorized system with the same key,
WPA uses the Temporal Key Integrity Protocol (TKIP), which dynamically changes the key that
the systems use. This prevents intruders from creating their own encryption key to match the one
used by the secure network.

WPA also implements something called the Extensible Authentication Protocol (EAP) for
authorizing users. Instead of authorizing computers based solely on their MAC address, WPA
can use several other methods to verify each computer's identity. This makes it more difficult for
unauthorized systems to gain access to the wireless network.

More notes

Wi-Fi Protected Access (WPA) is a security standard for users of computers equipped with a
Wi-Fi wireless connection. It is an improvement on, and has largely replaced, the original Wi-Fi
security standard, Wired Equivalent Privacy (WEP). WPA provides more sophisticated data
encryption than WEP and also provides user authentication (WEP's user authentication is
considered insufficient). WEP was once considered adequate for the casual home user but
insufficient for the corporate environment, where the large flow of messages can enable
eavesdroppers to discover encryption keys more quickly.

WPA's encryption method is the Temporal Key Integrity Protocol (TKIP). TKIP addresses the
weaknesses of WEP by including a per-packet mixing function, a message integrity check, an
extended initialization vector, and a re-keying mechanism. WPA provides "strong" user
authentication based on 802.1x and the Extensible Authentication Protocol (EAP). WPA depends
on a central authentication server such as RADIUS to authenticate each user.

Wi-Fi Protected Access is a subset of, and is compatible with, IEEE 802.11i (often referred to as
WPA2), which was ratified in 2004. Software updates allowing both server and client computers
to implement WPA became widely available during 2003. Access points can operate in mixed
WEP/WPA mode to support both WEP and WPA clients. However, mixed mode effectively
provides only WEP-level security for all users. Home users of access points that use only WPA
can operate in a special home mode in which the user need only enter a password to be connected
to the access point. The password triggers authentication and TKIP encryption.

TOPIC 7

ICT RISK MANAGEMENT


IT risk management is the application of risk management methods to information technology in
order to manage IT risk, i.e. the business risk associated with the use, ownership, operation,
involvement, influence and adoption of IT within an enterprise or organization.
IT risk management can be considered a component of a wider enterprise risk management
system.

The establishment, maintenance and continuous update of an ISMS provide a strong indication
that a company is using a systematic approach for the identification, assessment and management
of information security risks.
Different methodologies have been proposed to manage IT risks, each of them divided into
processes and steps.

According to Risk IT, IT risk encompasses not only the negative impact of operations and service
delivery, which can bring destruction or reduction of the value of the organization, but also the
benefit/value-enabling risk associated with missing opportunities to use technology to enable or
enhance business, and IT project management risk for aspects like overspending or late delivery
with adverse business impact.

Because risk is strictly tied to uncertainty, Decision theory should be applied to manage risk as a
science, i.e. rationally making choices under uncertainty.
Generally speaking, risk is the product of likelihood times impact (Risk = Likelihood * Impact).
The measure of an IT risk can be determined as a product of threat, vulnerability and asset
values:

Risk = Threat * Vulnerability * Asset

A more recent risk management framework for IT risk is the TIK framework: Risk =
((Vulnerability * Threat) / Countermeasure) * Asset Value at Risk.
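A minimal worked example of these formulas is sketched below in Python; the 1-5 scale values are illustrative assumptions, not figures from any standard:

threat = 4          # likelihood of the threat acting (scale 1-5)
vulnerability = 3   # how exposed the asset is (scale 1-5)
asset_value = 5     # value of the asset at risk (scale 1-5)
countermeasure = 2  # strength of the controls in place (scale 1-5)

basic_risk = threat * vulnerability * asset_value                    # 60
tik_risk = (vulnerability * threat) / countermeasure * asset_value   # 30.0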

Definitions
"Risk management is the process of identifying vulnerabilities and threats to the information
resources used by an organization in achieving business objectives, and deciding what
countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the
information resource to the organization."

There are two things in this definition that may need some clarification. First, the process of risk
management is an ongoing iterative process. It must be repeated indefinitely. The business
environment is constantly changing and new threats and vulnerabilities emerge every day. Second,
the choice of countermeasures (controls) used to manage risks must strike a balance between
productivity, cost, effectiveness of the countermeasure, and the value of the informational asset
being protected.

Risk management is the process that allows IT managers to balance the operational and
economic costs of protective measures and achieve gains in mission capability by protecting the
IT systems and data that support their organizations’ missions. This process is not unique to the
IT environment; indeed it pervades decision-making in all areas of our daily lives.

The head of an organizational unit must ensure that the organization has the capabilities needed
to accomplish its mission. These mission owners must determine the security capabilities that
their IT systems must have to provide the desired level of mission support in the face of real
world threats. Most organizations have tight budgets for IT security; therefore, IT security
spending must be reviewed as thoroughly as other management decisions. A well-structured risk
management methodology, when used effectively, can help management identify appropriate
controls for providing the mission-essential security capabilities.

Risk management in the IT world is a complex, multi-faceted activity, with many relations to
other complex activities.

The American National Information Assurance Training and Education Center defines risk in the
IT field as:

1. The total process to identify, control, and minimize the impact of uncertain events. The
objective of the risk management program is to reduce risk and obtain and maintain DAA
approval. The process facilitates the management of security risks by each level of
management throughout the system life cycle. The approval process consists of three
elements: risk analysis, certification, and approval.
2. An element of managerial science concerned with the identification, measurement,
control, and minimization of uncertain events. An effective risk management program
encompasses the following four phases:
1. Risk assessment, as derived from an evaluation of threats and vulnerabilities.
2. Management decision.
3. Control implementation.
4. Effectiveness review.
3. The total process of identifying, measuring, and minimizing uncertain events affecting
AIS resources. It includes risk analysis, cost benefit analysis, safeguard selection,
security test and evaluation, safeguard implementation, and systems review.
4. The total process of identifying, controlling, and eliminating or minimizing uncertain
events that may affect system resources. It includes risk analysis, cost benefit analysis,
selection, implementation and test, security evaluation of safeguards, and overall security
review.

Risk management as part of enterprise risk management

Some organizations have, and many others should have, a comprehensive enterprise risk
management (ERM) process in place. The four objective categories addressed, according to the
Committee of Sponsoring Organizations of the Treadway Commission (COSO), are:

 Strategy - high-level goals, aligned with and supporting the organization's mission
 Operations - effective and efficient use of resources
 Financial Reporting - reliability of operational and financial reporting
 Compliance - compliance with applicable laws and regulations

According to the Risk IT framework by ISACA, IT risk is transversal to all four categories. IT
risk should be managed in the framework of enterprise risk management: the risk appetite and
risk sensitivity of the whole enterprise should guide the IT risk management process, and ERM
should provide the context and business objectives to IT risk management.

Risk management methodology

ENISA: The Risk Management Process, according to ISO Standard 13335

The term methodology means an organized set of principles and rules that drive action in a
particular field of knowledge. A methodology does not describe specific methods; nevertheless it
does specify several processes that need to be followed. These processes constitute a generic
framework. They may be broken down into sub-processes, they may be combined, or their
sequence may change. However, any risk management exercise must carry out these processes in
one form or another.

Due to their probabilistic nature and the need for cost-benefit analysis, IT risks are managed
following a process that, according to NIST SP 800-30, can be divided into the following steps:

1. risk assessment,
2. risk mitigation, and
3. evaluation and assessment.

Effective risk management must be totally integrated into the Systems Development Life Cycle.

Information risk analysis conducted on applications, computer installations, networks and
systems under development should be undertaken using structured methodologies.

Context establishment

This is the first step in the ISO/IEC 27005 framework. Most of the elementary activities are
foreseen as the first sub-process of risk assessment according to NIST SP 800-30. This step
implies the acquisition of all relevant information about the organization and the determination
of the basic criteria, purpose, scope and boundaries of the risk management activities, and of the
organization in charge of those activities. The purpose is usually compliance with legal
requirements and providing evidence of due diligence in support of an ISMS that can be
certified. The scope can be, for example, an incident reporting plan or a business continuity plan.

Another area of application can be the certification of a product.

Criteria include the risk evaluation, risk acceptance and impact evaluation criteria. These are
conditioned by:

 legal and regulatory requirements
 the strategic value for the business of information processes
 stakeholder expectations
 negative consequences for the reputation of the organization

To establish the scope and boundaries, the organization should be studied: its mission, its values,
its structure, its strategy, its locations and its cultural environment. The constraints (budgetary,
cultural, political and technical) of the organization are to be collected and documented as a
guide for the next steps.

Organization for security management

The setup of the organization in charge of risk management is foreseen as partially fulfilling the
requirement to provide the resources needed to establish, implement, operate, monitor, review,
maintain and improve an ISMS. The main roles inside this organization are:

 Senior Management
 Chief information officer (CIO)
 System and Information owners
 the business and functional managers
 the Information System Security Officer (ISSO) or Chief information security officer
(CISO)
 IT Security Practitioners
 Security Awareness Trainers

 Risk management concepts


Basic concepts
Risks exist because entities, companies and organisations have “assets” of a material or
immaterial nature that could be subject to damage with consequences for the entity in
question.
Four concepts are important here:
• Assets, a term often used in the field of IT security
• Asset damage,
• Consequences for the entity,
• Possible but uncertain causes.

1. Assets
In very general terms, an asset can be defined as anything that could be of value or importance to
the entity.
In information security, the ISO/IEC 27005 standard distinguishes between
•Primary assets including
 Processes and activities,
 Information
•Supporting assets including:
 Equipment,
 Software,
 Networks,
 Personnel,
 Premises,
 Organisational support

This is of course a very general definition that, while common to all methods, translates into a
range of practical applications.

2. Asset damage
Clearly, risks (and their consequences) differ depending on what type of damage occurs.
Different categories of assets will be damaged in different ways, and while it is easy to list the
ways in which information can be damaged (by being lost, tampered with or exposed, among
other things), few standard classifications exist for processes or certain support-related assets.

The type of damage an asset sustains is not clearly specified in the ISO/IEC 27005 standard,
which also does not distinguish damage from consequences. In our view, however, it is
important to distinguish between the direct consequences of damage to assets, and the secondary
or indirect consequences affecting processes and the entity’s activities.

3. Consequences for the entity


The nature of consequences can vary widely, depending on whether the entity in question is a
commercial business, public organisation or association for example.
The only important thing to keep in mind at this stage is that an evaluation of these consequences
will have to focus on the entity rather than its information systems or the technical scope of
analysis, and that an evaluation of risk must include an assessment of the impact that damage to
a particular asset would have on the entity.

4. Possible but uncertain causes of damage to an asset


Definitions of risk usually make reference to the cause or type of cause – necessarily uncertain –
of damage to an asset. The ISO guide 73 uses the term “event” to describe this notion of cause.
Generally speaking:
•A risk (as opposed to an observation or certainty) exists only if an uncertain action or event
happens that leads to the occurrence of that risk – in other words the damage of the asset in
question,
•Risk evaluations must include an assessment of how likely this action or event is to occur.
While our use of the word “cause” can be ambiguous in the sense that there are direct causes
(what Guide 73 calls “events”) and indirect causes (what the same guide calls “sources”), it
represents well the general idea of something that will lead to damage.

Defining threat
The ISO/IEC 27000 series of standards on risk related to information systems refers to the idea
of a “threat”, which is not really defined, except to say that “a threat has the potential to harm
assets such as information, processes, and systems and therefore organizations”.
One might assume that a threat is similar to the “cause” mentioned above, but it is in fact quite
different: threats can apply to a wide range of aspects, particularly:
•Events or actions that can lead to the occurrence of a risk (for example an accident, fire, media
theft, etc.),
•Actions or methods of action that make the occurrence of risk possible without causing it (for
example abuse of privilege, illegal access rights or identity theft),
•Effects related to and which indicate undetermined causes (for example the saturation of an
information system),
•Behaviour (for example unauthorized use of equipment) that is not in itself an event that leads
to the occurrence of risk
These examples show that a threat is not strictly linked to the cause of a risk, but it does make
defining typologies of risk possible using a list of typical threats.

Defining vulnerability
The term vulnerability is sometimes used in risk analysis, but more widely in the domain of
information systems security.

Vulnerability can be defined in two ways.
Linguistically speaking, the most correct definition describes vulnerability as a feature of a
system, object or asset that makes it susceptible to threats.

If we take the example of a typed or handwritten document, where the threat would be rain or
storms in general, possible vulnerabilities would be:
•that the ink is not waterproof,
•the paper is water-sensitive,
•the material it is written on is degradable.
Often, it is more useful to think of vulnerabilities in terms of security controls and their potential
shortcomings.
Then, vulnerability is defined as a shortcoming or flaw in a security system that could be used by
a threat to strike a targeted system, object or asset.
In the example above, the exploited vulnerability was a lack of protection against storms.
From here, vulnerability branches out in many directions, as every security system has
weaknesses and any solution intended to reduce vulnerability is vulnerable itself.
If we go back to the example of the document made of degradable material, an initial solution is
storage away from storms.
•Resulting vulnerabilities:
 Faulty plumbing systems within the building,
 Inadequate or poorly executed storage procedures,
 Activation of fire protection sprinklers, etc.
When examining the notion of vulnerability, it may be useful to keep in mind that these two
approaches are not the same.

By using these general concepts, several definitions of risk are possible, and are in fact proposed
by different risk management methods. At the same time, they are compatible with
standard-setting documents.

 Risk analysis
Security in any system should be commensurate with its risks. However, the process to
determine which security controls are appropriate and cost effective is quite often a complex and
sometimes a subjective matter. One of the prime functions of security risk analysis is to put this
process onto a more objective basis. There are a number of distinct approaches to risk analysis.
However, these essentially break down into two types: quantitative and qualitative.

Quantitative Risk Analysis

This approach employs two fundamental elements: the probability of an event occurring and the
likely loss should it occur.

Quantitative risk analysis makes use of a single figure produced from these elements. This is
called the 'Annual Loss Expectancy (ALE)' or the 'Estimated Annual Cost (EAC)'. This is
calculated for an event by simply multiplying the potential loss by the probability.

It is thus theoretically possible to rank events in order of risk (ALE) and to make decisions based
upon this.
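The calculation and ranking can be sketched in a few lines of Python; the events and figures below are illustrative assumptions only:

def annual_loss_expectancy(potential_loss, annual_probability):
    # ALE (or EAC) = potential loss for the event x probability of occurrence per year
    return potential_loss * annual_probability

events = {
    "server room fire": annual_loss_expectancy(500_000, 0.02),  # 10,000
    "payment fraud": annual_loss_expectancy(50_000, 0.30),      # 15,000
}

# Rank events by ALE to decide where spending on controls is most worthwhile.
for name, ale in sorted(events.items(), key=lambda item: item[1], reverse=True):
    print(name, ale)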

The problems with this type of risk analysis are usually associated with the unreliability and
inaccuracy of the data. Probability can rarely be precise and can, in some cases, promote
complacency. In addition, controls and countermeasures often tackle a number of potential
events and the events themselves are frequently interrelated. Notwithstanding the drawbacks, a
number of organisations have successfully adopted quantitative risk analysis.

Qualitative Risk Analysis

This is by far the most widely used approach to risk analysis. Probability data is not required and
only estimated potential loss is used. Most qualitative risk analysis methodologies make use of a
number of interrelated elements:

THREATS

These are things that can go wrong or that can 'attack' the system. Examples might include fire or
fraud. Threats are ever present for every system.

VULNERABILITIES

These make a system more prone to attack by a threat, or make an attack more likely to have
some success or impact. For example, in the case of fire, a vulnerability would be the presence of
flammable materials (e.g. paper).

CONTROLS

These are the countermeasures for vulnerabilities. There are four types:

 Deterrent controls reduce the likelihood of a deliberate attack
 Preventative controls protect vulnerabilities and make an attack unsuccessful or reduce its
impact
 Corrective controls reduce the effect of an attack
 Detective controls discover attacks and trigger preventative or corrective controls.

These elements can be illustrated by a simple relational model linking threats, vulnerabilities and controls.

The knowledge base supplied with COBRA Risk Consultant employs this methodology and
variations of it.

 Risk assessment framework


Risk assessment

ENISA: Risk assessment inside risk management

Risk management is a recurrent activity that deals with the analysis, planning, implementation,
control and monitoring of implemented measures and of the enforced security policy. Risk
assessment, on the contrary, is executed at discrete points in time (e.g. once a year, on demand,
etc.) and, until the performance of the next assessment, provides a temporary view of the
assessed risks while parameterizing the entire risk management process. This view of the
relationship of risk management to risk assessment is adopted from OCTAVE.

Risk assessment is often conducted in more than one iteration, the first being a high-level
assessment to identify high risks, while the subsequent iterations detail the analysis of the major
risks and of the other risks.
According to National Information Assurance Training and Education Center risk assessment in
the IT field is:
1. A study of the vulnerabilities, threats, likelihood, loss or impact, and theoretical
effectiveness of security measures. Managers use the results of a risk assessment to
develop security requirements and specifications.
2. The process of evaluating threats and vulnerabilities, known and postulated, to determine
expected loss and establish the degree of acceptability to system operations.
3. An identification of a specific ADP facility's assets, the threats to these assets, and the
ADP facility's vulnerability to those threats.
4. An analysis of system assets and vulnerabilities to establish an expected loss from certain
events based on estimated probabilities of the occurrence of those events. The purpose of
a risk assessment is to determine if countermeasures are adequate to reduce the
probability of loss or the impact of loss to an acceptable level.
5. A management tool which provides a systematic approach for determining the relative
value and sensitivity of computer installation assets, assessing vulnerabilities, assessing
loss expectancy or perceived risk exposure levels, assessing existing protection features
and additional protection alternatives or acceptance of risks and documenting
management decisions. Decisions for implementing additional protection features are
normally based on the existence of a reasonable ratio between cost/benefit of the
safeguard and sensitivity/value of the assets to be protected. Risk assessments may vary
from an informal review of a small scale microcomputer installation to a more formal
and fully documented analysis (i.e., risk analysis) of a large scale computer installation.
Risk assessment methodologies may vary from qualitative or quantitative approaches to
any combination of these two approaches.

ISO 27005 framework

Risk assessment receives as input the output of the previous step, context establishment; its
output is the list of assessed risks prioritized according to risk evaluation criteria. The process
can be divided into the following steps:
 Risk analysis, further divided into:
o Risk identification
o Risk estimation
o Risk evaluation
The following table compares these ISO 27005 processes with Risk IT framework processes:

Risk assessment constituent processes

ISO 27005: Risk analysis
Risk IT: RE2 has as its objective developing useful information to support risk decisions that
take into account the business relevance of risk factors. RE1 (Collect data) serves as input to the
analysis of risk (e.g., identifying risk factors, collecting data on the external environment).

ISO 27005: Risk identification
Risk IT: The identification of risk comprises the following elements: risk scenarios and risk factors.

ISO 27005: Risk estimation
Risk IT: RE2.2 Estimate IT risk.

ISO 27005: Risk evaluation
Risk IT: RE2.2 Estimate IT risk.

The code of practice for information security management (ISO/IEC 27002) recommends the
following be examined during a risk assessment:
 security policy,
 organization of information security,
 asset management,
 human resources security,
 physical and environmental security,
 communications and operations management,
 access control,
 information systems acquisition, development and maintenance, (see Systems
Development Life Cycle)
 information security incident management,
 business continuity management, and
 Regulatory compliance.

Risk identification

Risk identification states what could cause a potential loss; the following are to be identified:
 assets, primary (i.e. Business processes and related information) and supporting (i.e.
hardware, software, personnel, site, organization structure)
 threats
 existing and planned security measures
 vulnerabilities
 consequences
 related business processes
The output of this sub-process is made up of:
 list of asset and related business processes to be risk managed with associated list of
threats, existing and planned security measures
 list of vulnerabilities unrelated to any identified threats
 List of incident scenarios with their consequences.

Risk estimation
There are two methods of risk assessment in information security field, qualitative and
quantitative.

Purely quantitative risk assessment is a mathematical calculation based on security metrics on
the asset (system or application). For each risk scenario, taking into consideration the different
risk factors, a Single Loss Expectancy (SLE) is determined. Then, considering the probability of
occurrence on a given period basis, for example the Annual Rate of Occurrence (ARO), the
Annualized Loss Expectancy (ALE) is determined as the product ARO x SLE. It is important to
point out that the values of the assets to be considered are those of all involved assets, not only
the value of the directly affected resource.
For example, if you consider the risk scenario of a Laptop theft threat, you should consider the
value of the data (a related asset) contained in the computer and the reputation and liability of the
company (other assets) deriving from the loss of availability and confidentiality of the data that
could be involved. It is easy to understand that intangible assets (data, reputation, liability) can
be worth much more than physical resources at risk (the laptop hardware in the example).
Intangible asset value can be huge, but is not easy to evaluate: this can be a consideration against
a pure quantitative approach.

Qualitative risk assessment (three to five steps evaluation, from Very High to Low) is performed
when the organization requires a risk assessment be performed in a relatively short time or to
meet a small budget, a significant quantity of relevant data is not available, or the persons
performing the assessment don't have the sophisticated mathematical, financial, and risk
assessment expertise required. Qualitative risk assessment can be performed in a shorter period
of time and with less data. Qualitative risk assessments are typically performed through
interviews of a sample of personnel from all relevant groups within an organization charged with
the security of the asset being assessed. Qualitative risk assessments are descriptive versus
measurable. Usually a qualitative classification is done followed by a quantitative evaluation of
the highest risks to be compared to the costs of security measures.
Risk estimation has as input the output of risk analysis and can be split in the following steps:
 assessment of the consequences through the valuation of assets
 assessment of the likelihood of the incident (through threat and vulnerability valuation)
 assign values to the likelihood and consequence of the risks
The output is the list of risks with value levels assigned; it can be documented in a risk register.
During risk estimation there are generally three values for a given asset, one for the loss of each
of the CIA properties: Confidentiality, Integrity and Availability.
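A small Python sketch of this estimation step is shown below; the SLE figures and the ARO are illustrative assumptions that would normally come from asset valuation and threat/vulnerability analysis:

# Laptop theft scenario: separate single loss expectancies (SLE) for the loss
# of Confidentiality, Integrity and Availability of the involved assets.
laptop_theft = {
    "sle": {"confidentiality": 80_000, "integrity": 5_000, "availability": 2_000},
    "aro": 0.5,  # expected thefts per year (annual rate of occurrence)
}

total_sle = sum(laptop_theft["sle"].values())
ale = laptop_theft["aro"] * total_sle
print(ale)  # 43500.0, one entry for the risk register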

Risk evaluation
The risk evaluation process receives as input the output of risk analysis process. It compares each
risk level against the risk acceptance criteria and prioritizes the risk list with risk treatment
indications.

NIST SP 800-30 framework

To determine the likelihood of a future adverse event, threats to an IT system must be analysed
in conjunction with the potential vulnerabilities and the controls in place for the IT system.
Impact refers to the magnitude of harm that could be caused by a threat’s exercise of a
vulnerability. The level of impact is governed by the potential mission impacts and produces a
relative value for the IT assets and resources affected (e.g., the criticality and sensitivity of the IT
system components and data). The risk assessment methodology encompasses nine primary
steps:
 Step 1 System Characterization
 Step 2 Threat Identification
 Step 3 Vulnerability Identification
 Step 4 Control Analysis
 Step 5 Likelihood Determination
 Step 6 Impact Analysis
 Step 7 Risk Determination
 Step 8 Control Recommendations
 Step 9 Results Documentation

Risk mitigation
Risk mitigation, the second process according to SP 800-30, the third according to ISO 27005 of
risk management, involves prioritizing, evaluating, and implementing the appropriate risk-
reducing controls recommended from the risk assessment process. Because the elimination of all
risk is usually impractical or close to impossible, it is the responsibility of senior management
and functional and business managers to use the least-cost approach and implement the most
appropriate controls to decrease mission risk to an acceptable level, with minimal adverse impact
on the organization’s resources and mission.

ISO 27005 framework


The risk treatment process aims at selecting security measures to:
 reduce
 retain
 avoid
 transfer
risk, and to produce a risk treatment plan that is the output of the process, with the residual risks
subject to the acceptance of management.
There are lists of security measures to select from, but it is up to the individual organization to
choose the most appropriate ones according to its business strategy, the constraints of the
environment and the circumstances. The choice should be rational and documented. The importance
of accepting a risk that is too costly to reduce is very high, and has led to risk acceptance
being considered a separate process.

Risk transfer applies where the risk has a very high impact but it is not easy to reduce the
likelihood significantly by means of security controls: the insurance premium should be compared
against the mitigation costs, possibly evaluating a mixed strategy to partially treat the risk. Another
option is to outsource the risk to somebody more efficient at managing it.
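The comparison can be expressed as a small decision sketch in Python; the figures and the simple rule below are illustrative assumptions, not a prescribed method:

def choose_treatment(annual_loss_expectancy, risk_appetite, insurance_premium, mitigation_cost):
    if annual_loss_expectancy <= risk_appetite:
        return "retain"          # risk already within the acceptance criteria
    if insurance_premium < mitigation_cost:
        return "transfer"        # insuring is cheaper than reducing with controls
    return "reduce"              # apply security controls instead

print(choose_treatment(15_000, 10_000, 12_000, 20_000))  # transfer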

Risk avoidance describes any action where ways of conducting business are changed to avoid
any risk occurrence. For example, the choice of not storing sensitive information about
customers can be avoidance for the risk that customer data can be stolen.
The residual risks, i.e. the risks remaining after risk treatment decisions have been taken, should
be estimated to ensure that sufficient protection is achieved. If the residual risk is unacceptable,
the risk treatment process should be iterated.
NIST SP 800-30 framework

Risk mitigation methodology flow chart from NIST SP 800-30 Figure 4-2

Risk mitigation is a systematic methodology used by senior management to reduce mission risk.
Risk mitigation can be achieved through any of the following risk mitigation options:
 Risk Assumption. To accept the potential risk and continue operating the IT system or to
implement controls to lower the risk to an acceptable level
 Risk Avoidance. To avoid the risk by eliminating the risk cause and/or consequence
(e.g., forgo certain functions of the system or shut down the system when risks are
identified)

 Risk Limitation. To limit the risk by implementing controls that minimize the adverse
impact of a threat’s exercising a vulnerability (e.g., use of supporting, preventive,
detective controls)
 Risk Planning. To manage risk by developing a risk mitigation plan that prioritizes,
implements, and maintains controls
 Research and Acknowledgement. To lower the risk of loss by acknowledging the
vulnerability or flaw and researching controls to correct the vulnerability
 Risk Transference. To transfer the risk by using other options to compensate for the
loss, such as purchasing insurance.

Address the greatest risks and strive for sufficient risk mitigation at the lowest cost, with minimal
impact on other mission capabilities: this is the suggestion contained in NIST SP 800-30.

Risk communication
Risk communication is a horizontal process that interacts bi-directionally with all other processes
of risk management. Its purpose is to establish a common understanding of all aspects of risk
among all the organization's stakeholders. Establishing a common understanding is important,
since it influences decisions to be taken. The Risk Reduction Overview method is specifically
designed for this process. It presents a comprehensible overview of the coherence of risks,
measures and residual risks to achieve this common understanding.

Risk monitoring and review


Risk management is an ongoing, never-ending process. Within this process, implemented security
measures are regularly monitored and reviewed to ensure that they work as planned and that
changes in the environment have not rendered them ineffective. Business requirements,
vulnerabilities and threats can change over time.
Regular audits should be scheduled and should be conducted by an independent party, i.e.
somebody not under the control of those responsible for the implementation or daily management
of the ISMS.

IT evaluation and assessment


Security controls should be validated. Technical controls are possibly complex systems that must
be tested and verified. The hardest part to validate is people's knowledge of procedural controls
and the effectiveness with which security procedures are actually applied in daily business.
Vulnerability assessment, both internal and external, and penetration testing are instruments for
verifying the status of security controls.
Information technology security audit is an organizational and procedural control with the aim of
evaluating security. The IT systems of most organizations are evolving quite rapidly. Risk
management should cope with these changes through change authorization after risk
re-evaluation of the affected systems and processes, and should periodically review the risks and
mitigation actions.
Monitoring system events according to a security monitoring strategy, an incident response plan
and security validation and metrics are fundamental activities to assure that an optimal level of
security is obtained.
It is important to monitor new vulnerabilities, apply procedural and technical security controls
such as regularly updating software, and evaluate other kinds of controls to deal with zero-day
attacks.
The willingness of the people involved to benchmark against best practice and to follow the
seminars of professional associations in the sector helps to assure the state of the art of an
organization's IT risk management practice.

Integrating risk management into system development life cycle


Effective risk management must be totally integrated into the SDLC. An IT system’s SDLC has
five phases: initiation, development or acquisition, implementation, operation or maintenance,
and disposal. The risk management methodology is the same regardless of the SDLC phase for
which the assessment is being conducted. Risk management is an iterative process that can be
performed during each major phase of the SDLC.
Table 2-1 Integration of Risk Management into the SDLC

Phase 1: Initiation
Phase characteristics: The need for an IT system is expressed and the purpose and scope of the IT system is documented.
Support from risk management activities: Identified risks are used to support the development of the system requirements, including security requirements, and a security concept of operations (strategy).

Phase 2: Development or Acquisition
Phase characteristics: The IT system is designed, purchased, programmed, developed, or otherwise constructed.
Support from risk management activities: The risks identified during this phase can be used to support the security analyses of the IT system that may lead to architecture and design tradeoffs during system development.

Phase 3: Implementation
Phase characteristics: The system security features should be configured, enabled, tested, and verified.
Support from risk management activities: The risk management process supports the assessment of the system implementation against its requirements and within its modeled operational environment. Decisions regarding risks identified must be made prior to system operation.

Phase 4: Operation or Maintenance
Phase characteristics: The system performs its functions. Typically the system is being modified on an ongoing basis through the addition of hardware and software and by changes to organizational processes, policies, and procedures.
Support from risk management activities: Risk management activities are performed for periodic system reauthorization (or reaccreditation) or whenever major changes are made to an IT system in its operational, production environment (e.g., new system interfaces).

Phase 5: Disposal
Phase characteristics: This phase may involve the disposition of information, hardware, and software. Activities may include moving, archiving, discarding, or destroying information and sanitizing the hardware and software.
Support from risk management activities: Risk management activities are performed for system components that will be disposed of or replaced to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner.

Early integration of security in the SDLC enables agencies to maximize return on investment in
their security programs, through:
 Early identification and mitigation of security vulnerabilities and misconfigurations,
resulting in lower cost of security control implementation and vulnerability mitigation;

 Awareness of potential engineering challenges caused by mandatory security controls;
 Identification of shared security services and reuse of security strategies and tools to
reduce development cost and schedule while improving security posture through proven
methods and techniques; and
 Facilitation of informed executive decision making through comprehensive risk
management in a timely manner.
This guide focuses on the information security components of the SDLC. First, descriptions of
the key security roles and responsibilities that are needed in most information system
developments are provided. Second, sufficient information about the SDLC is provided to allow
a person who is unfamiliar with the SDLC process to understand the relationship between
information security and the SDLC. The document integrates the security steps into the linear,
sequential (a.k.a. waterfall) SDLC. The five-step SDLC cited in the document is an example of
one method of development and is not intended to mandate this methodology. Lastly, SP 800-64
provides insight into IT projects and initiatives that are not as clearly defined as SDLC-based
developments, such as service-oriented architectures, cross-organization projects, and IT facility
developments.
Security can be incorporated into information systems acquisition, development and maintenance
by implementing effective security practices in the following areas.
 Security requirements for information systems
 Correct processing in applications
 Cryptographic controls
 Security of system files
 Security in development and support processes
 Technical vulnerability management
Information systems security begins with incorporating security into the requirements process for
any new application or system enhancement. Security should be designed into the system from
the beginning. Security requirements are presented to the vendor during the requirements phase
of a product purchase. Formal testing should be done to determine whether the product meets the
required security specifications prior to purchasing the product.
Correct processing in applications is essential in order to prevent errors and to mitigate loss,
unauthorized modification or misuse of information. Effective coding techniques include
validating input and output data, protecting message integrity using encryption, checking for
processing errors, and creating activity logs.
Applied properly, cryptographic controls provide effective mechanisms for protecting the
confidentiality, authenticity and integrity of information. An institution should develop policies
on the use of encryption, including proper key management. Disk Encryption is one way to
protect data at rest. Data in transit can be protected from alteration and unauthorized viewing
using SSL certificates issued through a Certificate Authority that has implemented a Public Key
Infrastructure.
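As a small illustration of protecting data in transit (a sketch, not part of the source text; the host name is a placeholder), Python's standard library can open a TLS connection that validates the server certificate against trusted Certificate Authorities:

import socket
import ssl

context = ssl.create_default_context()  # verifies the certificate chain and host name

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                  # negotiated protocol, e.g. TLSv1.3
        print(tls.getpeercert()["subject"])   # subject of the CA-issued certificate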
System files used by applications must be protected in order to ensure the integrity and stability
of the application. Using source code repositories with version control, extensive testing,
production back-off plans, and appropriate access to program code are some effective measures
that can be used to protect an application's files.
Security in development and support processes is an essential part of a comprehensive quality
assurance and production control process, and would usually involve training and continuous
oversight by the most experienced staff.

Applications need to be monitored and patched for technical vulnerabilities. Procedures for
applying patches should include evaluating the patches to determine their appropriateness, and
whether or not they can be successfully removed in case of a negative impact.

Critique of risk management as a methodology


Risk management as a scientific methodology has been criticized as being shallow. Major
programs that imply risk management applied to the IT systems of large organizations, such as
FISMA, have been criticized.
The risk management methodology is based on scientific foundations of statistical decision
making: indeed, by avoiding the complexity that accompanies the formal probabilistic model of
risks and uncertainty, risk management looks more like a process that attempts to guess rather
than formally predict the future on the basis of statistical evidence. It is highly subjective in
assessing the value of assets, the likelihood of threat occurrence and the significance of the
impact.
Despite these criticisms, risk management is a very important instrument in designing,
implementing and operating secure information systems, because it systematically classifies
risks and drives the process of deciding how to treat them. Its use is provided for by legislation
in many countries, and a better way to deal with the subject has not emerged.

Risk management methods


It is quite hard to list all of the methods that at least partially support the IT risk management
process. Efforts in this direction have been made by:
 NIST, in the Description of Automated Risk Management Packages That NIST/NCSC Risk
Management Research Laboratory Has Examined (updated 1991)
 ENISA, in 2006; a list of methods and tools is available online, together with a comparison engine.
Among them the most widely used are:
o CRAMM, developed by the British government, which is compliant with ISO/IEC 17799,
the Gramm–Leach–Bliley Act (GLBA) and the Health Insurance Portability and
Accountability Act (HIPAA)
o EBIOS, developed by the French government, which is compliant with major security
standards: ISO/IEC 27001, ISO/IEC 13335, ISO/IEC 15408, ISO/IEC 17799 and
ISO/IEC 21287
o Standard of Good Practice developed by Information Security Forum (ISF)
o MEHARI, developed by CLUSIF (Club de la Sécurité de l'Information Français)
o TIK IT Risk Framework developed by IT Risk Institute
o OCTAVE, developed by Carnegie Mellon University's Software Engineering Institute
(SEI); the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)
approach defines a risk-based strategic assessment and planning technique for security.
o IT-Grundschutz (IT Baseline Protection Manual) developed by Federal Office for
Information Security (BSI) (Germany); IT-Grundschutz provides a method for an
organization to establish an Information Security Management System (ISMS). It
comprises both generic IT security recommendations for establishing an
applicable IT security process and detailed technical recommendations to achieve
the necessary IT security level for a specific domain



Further notes
Security controls should be selected based on real risks to an organization's assets and
operations. The alternative -- selecting controls without a methodical analysis of threats and
controls -- is likely to result in implementation of security controls in the wrong places, wasting
resources while at the same time, leaving an organization vulnerable to unanticipated threats.

A risk assessment framework establishes the rules for what is assessed, who needs to be involved,
the terminology used in discussing risk, the criteria for quantifying, qualifying, and comparing
degrees of risk, and the documentation that must be collected and produced as a result of
assessments and follow-on activities. The goal of a framework is to establish an objective
measurement of risk that will allow an organization to understand business risk to critical
information and assets both qualitatively and quantitatively. In the end, the risk assessment
framework provides the tools necessary to make business decisions regarding investments in
people, processes, and technology to bring risk to an acceptable level.

Among the most popular risk frameworks in use today are:

1 OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation), developed at
Carnegie Mellon University.
2 The NIST risk assessment framework, documented in NIST Special Publication 800-30.
3 ISACA's RISK IT (part of COBIT).
4 ISO 27005:2008 (part of the ISO 27000 series that includes ISO 27001 and 27002).

All the frameworks have similar approaches but differ in their high-level goals. OCTAVE,
NIST, and ISO 27005 focus on security risk assessments, whereas RISK IT applies to the
broader IT risk management space.

How does a company know which framework is the best fit for its needs? We'll provide an
overview of the general structure and approach to risk assessment, draw a comparison of the
frameworks, and offer some guidance for experimentation and selection of an appropriate
framework.

Asset-based assessments

All risk assessment methods require organizations to select an asset as the object of the
assessment. Generally speaking, assets can be people, information, processes, systems, or
applications. However, frameworks differ in how strictly they require organizations to follow a
particular discipline in identifying what constitutes an asset. For example, CMU's original
OCTAVE framework allowed an organization to select any of the items just described as the
asset to be assessed, whereas the most recent methodology in the OCTAVE series, Allegro,
requires assets to be information.

There are advantages and disadvantages associated with any definition of asset. For example, if
an asset is a system or application, the assessment team will need to include all information
owners affected by the system. On the other hand, if the asset is information, the scope of the
assessment would need to include all systems and applications that affect the information.



Practically speaking, it is important to define the asset precisely so the scope of the assessment is
clear. It is also useful to be consistent in how assets are defined from assessment to assessment to
facilitate comparisons of results.

A critical component of a risk assessment framework is that it establishes a common set of
terminology so organizations can discuss risk effectively. See below for a list of terms used in
most frameworks.

Framework terminology

Risk assessment frameworks establish the meaning of terms to get everyone on the same page.
Here are some terms used in most frameworks.

Actors, motives, access: These terms describe who is responsible for the threat, what might
motivate the actor or attacker to carry out an attack, and the access that is necessary to perpetrate
an attack or carry out the threat. Actors may be a disgruntled employee, a hacker from the
Internet, or simply a well-meaning administrator who accidentally damages an asset. The access
required to carry out an attack is important in determining how large a group may be able to
realize a threat. The larger the attacking community (e.g., all users on the Internet versus a few
trusted administrators), the more likely an attack can be attempted.

Asset owners: Owners have the authority to accept risk. Owners must participate in risk
assessment and management as they are ultimately responsible for allocating funding for controls
or accepting the risk resulting from a decision not to implement controls.

Asset custodians: A person or group responsible for implementing and maintaining the systems
and security controls that protect an asset. This is typically an IT entity.

Impact: The business ramifications of an asset being compromised. The risk assessment team
needs to understand and document the degree of damage that would result if the confidentiality,
integrity, or availability of an asset is lost. The terms impact, business impact, and inherent risk
are usually used to describe, in either relative or monetary terms, how the business would be
affected by the loss. It's important to note that impact assumes the threat has been realized;
impact is irrespective of the likelihood of compromise.

Information asset: An abstract logical grouping of information that is, as a unit, valuable to an
organization. Assets have owners that are responsible for protecting the value of the asset.
Risk magnitude or risk measurement criteria: The product of likelihood and the impact described
above. If we consider likelihood a probability value (less than 1) and impact a value of high,
medium, or low, the risk magnitude can be "calculated" and compared to risks of various threats
on particular assets.
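As a rough illustration of the "likelihood times impact" idea, the sketch below combines a likelihood estimate with a qualitative impact level to produce a risk rating. The numeric scores and thresholds are arbitrary assumptions for the example and are not taken from any particular framework.

using System;

class RiskMagnitudeSketch
{
    // Combine a likelihood estimate (0..1) with a low/medium/high impact level.
    public static string RateRisk(double likelihood, string impact)
    {
        int impactScore = impact == "high" ? 3 : impact == "medium" ? 2 : 1;
        double magnitude = likelihood * impactScore;

        if (magnitude >= 2.0) return "high";
        if (magnitude >= 1.0) return "medium";
        return "low";
    }

    static void Main()
    {
        Console.WriteLine(RateRisk(0.8, "high"));   // likely threat, high impact  -> high
        Console.WriteLine(RateRisk(0.2, "low"));    // unlikely threat, low impact -> low
    }
}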

Security requirements: The qualities of an asset that must be protected to retain its value.
Depending on the asset, different degrees of confidentiality, integrity, and availability must be
protected. For example, confidentiality and integrity of personal identifying information may be
critical for a given environment while availability may be less of a concern.



Threats, threat scenarios or vectors: According to OCTAVE, threats are conditions or
situations that may adversely affect an asset. Threats and threat scenarios involve particular
classes of actors (attackers or users) and methods or vectors by which an attack or threat may be
carried out.

Risk assessment methodology

The heart of a risk assessment framework is an objective, repeatable methodology that
gathers input regarding business risks, threats, vulnerabilities, and controls and produces a
risk magnitude that can be discussed, reasoned about, and treated. The various risk
frameworks follow similar structures, but differ in the description and details of the steps.
However, they all follow the general pattern of identifying assets and stakeholders,
understanding security requirements, enumerating threats, identifying and assessing the
effectiveness of controls, and calculating the risk based on the inherent risk of compromise and
the likelihood that the threat will be realized. The following is a basic methodology, largely
derived from the OCTAVE and NIST frameworks.

1. Identify assets and stakeholders

All risk assessment methods require a risk assessment team to clearly define the scope of the
asset, the business owner of the asset, and those people responsible for the technology and
particularly the security controls for the asset. The asset defines the scope of the assessment and
the owners and custodians define the members of the risk assessment team.

NIST's approach allows the asset to be a system, application, or information, while OCTAVE is
more biased toward information and OCTAVE Allegro requires the asset to be information.
Regardless of what method you choose, this step must define the boundaries and contents of the
asset to be assessed.

2. Analyze impact

The next step is to understand both the dimensions and the magnitude of the business impact to
the organization, assuming the asset was compromised. The dimensions of compromise are
confidentiality, integrity, and availability while the magnitude is typically described as low,
medium, or high corresponding to the financial impact of the compromise.

It's important to consider the business impact of a compromise in absence of controls to avoid
the common mistake of assuming that a compromise could not take place because the controls
are assumed to be effective. The exercise of analyzing the value or impact of asset loss can help
determine which assets should undergo risk assessment. This step is mostly the responsibility of
the business team, but technical representatives can profit by hearing the value judgments of the
business.

The output of this step is a document (typically a form) that describes the business impact in
monetary terms or, more often, a graded scale for compromise of the confidentiality, integrity,
and availability of the asset.



3. Identify threats

Identify the various ways an asset could be compromised that would have an impact on the
business. Threats involve people exploiting weaknesses or vulnerabilities intentionally or
unintentionally that result in a compromise. This process typically starts at a high level, looking
at general areas of concern (e.g., a competitor gaining access to proprietary plans stored in a
database) and progressing to more detailed analysis (e.g., gaining unauthorized access through a
remote access method). The idea is to list the most common combinations of actors or
perpetrators and paths that might lead to the compromise of an asset (e.g., application interfaces,
storage systems, remote access, etc.). These combinations are called threat scenarios.

The assessment team uses this list later in the process to determine whether these threats are
effectively defended against by technical and process controls. The output of this step is the list
of threats described in terms of actors, access path or vector, and the associated impact of the
compromise.
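Purely as an illustration, each entry in that list could be recorded in a simple structure such as the one below, ready to be refined by the vulnerability, control and likelihood steps that follow; the field names are hypothetical and not prescribed by any framework.

// Hypothetical shape of one threat scenario produced by this step.
class ThreatScenario
{
    public string Actor;         // e.g. "attacker on the Internet", "disgruntled employee"
    public string Motive;        // e.g. "financial gain"
    public string AccessVector;  // e.g. "remote access method", "application interface"
    public string AffectedAsset; // the asset defined in step 1
    public string Impact;        // which of confidentiality, integrity or availability is harmed, and how badly
}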

4. Investigate vulnerabilities

Use the list of threats and analyze the technical components and business processes for flaws that
might facilitate the success of a threat. The vulnerabilities may have been discovered in separate
design and architecture reviews, penetration testing, or control process reviews. Use these
vulnerabilities to assemble or inform the threat scenarios described above. For example, a
general threat scenario may be defined as a skilled attacker from the Internet motivated by
financial reward gains access to an account withdrawal function; a known vulnerability in a Web
application may make that threat more likely. This information is used in the later stage of
likelihood determination.

This step is designed to allow the assessment team to determine the likelihood that a vulnerability
can be exploited by the actor identified in the threat scenario. The team considers factors such as
the technical skills and access necessary to exploit the vulnerability in rating the vulnerability
exploit likelihood from low to high. This will be used in the likelihood calculation later to
determine the magnitude of risk.

5. Analyze controls

Look at the technical and process controls surrounding an asset and consider their effectiveness
in defending against the threats defined earlier. Technical controls like authentication and
authorization, intrusion detection, network filtering and routing, and encryption are considered in
this phase of the assessment. It's important, however, not to stop there. Business controls like
reconciliation of multiple paths of transactions, manual review and approval of activities, and
audits can often be more effective in preventing or detecting attacks or errors than technical
controls. The multi-disciplinary risk assessment team is designed to bring both types of controls
into consideration when determining the effectiveness of controls.

At the conclusion of this step, the assessment team documents the controls associated with the
asset and their effectiveness in defending against the particular threats.



6. Calculate threat likelihood

After identifying a particular threat, developing scenarios describing how the threat may be
realized, and judging the effectiveness of controls in preventing exploitation of a vulnerability,
use a "formula" to determine the likelihood of an actor successfully exploiting a vulnerability
and circumventing known business and technical controls to compromise an asset.

The team needs to consider the motivation of the actor, the likelihood of being caught (captured
in control effectiveness), and the ease with which the asset may be compromised, then come up
with a measure of overall likelihood, from low to high.

7. Calculate risk magnitude

The calculation of risk magnitude or residual risk combines the business impact of compromise
of the asset (considered at the start of the assessment) with the likelihood of the threat
succeeding, taking into account any diminishing effect of the particular threat scenario under
consideration (e.g., the particular attack may only affect confidentiality and not integrity). The result is
a measure of the risk to the business of a particular threat. This is typically expressed as one of
three or four values (low, medium, high, and sometimes severe).

This measure of risk is the whole point of the risk assessment. It serves as a guide to the business
as to the importance of addressing the vulnerabilities or control weaknesses that allow the threat
to be realized. Ultimately, the risk assessment forces a business decision to treat or accept risk.

Anyone reading a risk assessment method for the first time will probably get the impression that
they describe a clean and orderly stepwise process that can be sequentially executed. However,
you'll find that you need to repeatedly return to earlier steps when information in later steps helps
to clarify the real definition of the asset, which actors may be realistically considered in a threat
scenario, or what the sensitivity of a particular asset is. It often takes an organization several
attempts to get used to the idea that circling back to earlier steps is a necessary and important
part of the process.

Which framework is best?

Over the years, many risk frameworks have been developed and each has its own advantages and
disadvantages. In general, they all require organizational discipline to convene a multi-
disciplinary team, define assets, list threats, evaluate controls, and conclude with an estimate of
the risk magnitude.

OCTAVE, probably the most well-known of the risk frameworks, comes in three sizes. The
original, full-featured version is a heavyweight process with substantial documentation meant for
large organizations. OCTAVE-S is designed for smaller organizations where the multi-
disciplinary group may be represented by fewer people, sometimes exclusively technical folks
with knowledge of the business. The documentation burden is lower and the process is lighter
weight.



The latest product in the OCTAVE series is Allegro, which has more of a lightweight feel and
takes a more focused approach than its predecessors. Allegro requires the assets to be
information, requiring additional discipline at the start of the process, and views systems,
applications, and environments as containers. The scope of the assessment needs to be based on
the information abstraction (e.g., Protected Health Information) and identify and assess risk
across the containers in which the information is stored, processed, or transmitted.

One of the benefits of the OCTAVE series is that each of the frameworks provides templates for
worksheets to document each step in the process. These can either be used directly or customized
for a particular organization.

The NIST framework, described in NIST Special Publication 800-30, is a general one that can be
applied to any asset. It uses slightly different terminology than OCTAVE, but follows a similar
structure. It doesn't provide the wealth of forms that OCTAVE does, but is relatively
straightforward to follow. Its brevity and focus on more concrete components (e.g., systems)
makes it a good candidate for organizations new to risk assessment. Furthermore, because it's
defined by NIST, it's approved for use by government agencies and organizations that work with
them.

ISACA's COBIT and the ISO 27001 and 27002 are IT management and security frameworks that
require organizations to have a risk management program. Both offer but don't require their own
versions of risk management frameworks: COBIT has RISK IT and ISO has ISO 27005:2008.
They recommend repeatable methodologies and specify when risk assessments should take
place. The ISO 27000 series is designed to deal with security, while COBIT encompasses all of
IT; consequently, the risk assessments required by each correspond to those scopes. In other
words, risk assessment in COBIT -- described in RISK IT -- goes beyond security risks and
includes development, business continuity and other types of operational risk in IT, whereas ISO
27005 concentrates on security exclusively.

ISO 27005 follows a similar structure to NIST but defines terms differently. The framework
includes steps called context establishment, risk identification and estimation, in which threats,
vulnerabilities and controls are considered, and a risk analysis step that discusses and documents
threat likelihood and business impact. ISO 27005 includes annexes with forms and examples,
but like other risk frameworks, it's up to the organization implementing it to evaluate or quantify
risk in ways that are relevant to its particular business.

Organizations that do not have a formal risk assessment methodology would do well to review
the risk assessment requirements in ISO 27001 and 27002 and consider the 27005 or NIST
approach. The ISO standards provide a good justification for formal risk assessments and outline
requirements, while the NIST document provides a good introduction to a risk assessment
framework.



 Countermeasures
In Computer Security a countermeasure is an action, device, procedure, or technique that
reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the
harm it can cause, or by discovering and reporting it so that corrective action can be taken.

According to the Glossary by InfosecToday, the meaning of countermeasure is:

The deployment of a set of security services to protect against a security threat.

A resource (both physical and logical) can have one or more vulnerabilities that can be exploited
by a threat agent in a threat action. The result can potentially compromise the Confidentiality,
Integrity or Availability properties of resources (potentially different from the vulnerable one)
belonging to the organization and to other involved parties (customers, suppliers).
The so-called CIA triad is the basis of Information Security.

The attack can be active when it attempts to alter system resources or affect their operation: so it
compromises Integrity or Availability. A "passive attack" attempts to learn or make use of
information from the system but does not affect system resources: so it compromises
Confidentiality.

A Threat is a potential for violation of security, which exists when there is a circumstance,
capability, action, or event that could breach security and cause harm. That is, a threat is a
possible danger that might exploit a vulnerability. A threat can be either "intentional" (i.e.,
intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the
possibility of a computer malfunctioning, or the possibility of an "act of God" such as an
earthquake, a fire, or a tornado).

A set of policies concerned with information security management, the information security
management system (ISMS), has been developed to manage, according to risk management
principles, the countermeasures needed to accomplish the security strategy set up following the
rules and regulations applicable in a country.

The following are materials related to countermeasures:

 An explanation of attacker methodology
 Descriptions of common attacks
 How to categorize threats
 How to identify and counter threats at the network, host, and application levels

Overview

When you incorporate security features into your application's design, implementation, and
deployment, it helps to have a good understanding of how attackers think. By thinking like
attackers and being aware of their likely tactics, you can be more effective when applying
countermeasures. Here we describe the classic attacker methodology and profile the anatomy
of a typical attack.

We analyze Web application security from the perspectives of threats, countermeasures,
vulnerabilities, and attacks. The following set of core terms is defined to avoid confusion and to
ensure they are used in the correct context.

 Asset. A resource of value such as the data in a database or on the file system, or a
system resource
 Threat. A potential occurrence — malicious or otherwise — that may harm an asset
 Vulnerability. A weakness that makes a threat possible
 Attack (or exploit). An action taken to harm an asset
 Countermeasure. A safeguard that addresses a threat and mitigates risk

This chapter also identifies a set of common network, host, and application level threats, and the
recommended countermeasures to address each one. The chapter does not contain an exhaustive
list of threats, but it does highlight many top threats. With this information and knowledge of
how an attacker works, you will be able to identify additional threats. You need to know the
threats that are most likely to impact your system to be able to build effective threat models.
These threat models are the subject of "Threat Modeling."

How to Use This Chapter

The following are recommendations on how to use this chapter:

 Become familiar with specific threats that affect the network, host, and application.
The threats are unique for the various parts of your system, although the attacker's goals
may be the same.
 Use the threats to identify risk. Then create a plan to counter those threats.
 Apply countermeasures to address vulnerabilities. Countermeasures are summarized
in this chapter. Use Part III, "Building Secure Web Applications," and Part IV, "Securing
Your Network, Host, and Application," of this guide for countermeasure implementation
details.
 When you design, build, and secure new systems, keep the threats in this chapter in
mind. The threats exist regardless of the platform or technologies that you use.

Anatomy of an Attack

By understanding the basic approach used by attackers to target your Web application, you will
be better equipped to take defensive measures because you will know what you are up against.
The basic steps in attacker methodology are summarized below and illustrated in Figure 2.1:
 Survey and assess
 Exploit and penetrate
 Escalate privileges
 Maintain access
 Deny service



Figure 2.1: Basic steps of the attacker methodology

Survey and Assess

Surveying and assessing the potential target are done in tandem. The first step an attacker usually
takes is to survey the potential target to identify and assess its characteristics. These
characteristics may include its supported services and protocols together with potential
vulnerabilities and entry points. The attacker uses the information gathered in the survey and
assess phase to plan an initial attack.

For example, an attacker can detect a cross-site scripting (XSS) vulnerability by testing to see if
any controls in a Web page echo back output.

Exploit and Penetrate

Having surveyed a potential target, the next step is to exploit and penetrate. If the network and
host are fully secured, your application (the front gate) becomes the next channel for attack.

For an attacker, the easiest way into an application is through the same entrance that legitimate
users use — for example, through the application's logon page or a page that does not require
authentication.

Escalate Privileges

After attackers manage to compromise an application or network, perhaps by injecting code into
an application or creating an authenticated session with the operating system, they immediately
attempt to escalate privileges. Specifically, they look for administration privileges provided by
accounts that are members of the Administrators group. They also seek out the high level of
privileges offered by the local system account.

Using least privileged service accounts throughout your application is a primary defense against
privilege escalation attacks. Also, many network level privilege escalation attacks require an
interactive logon session.



Maintain Access

Having gained access to a system, an attacker takes steps to make future access easier and to
cover his or her tracks. Common approaches for making future access easier include planting
back-door programs or using an existing account that lacks strong protection. Covering tracks
typically involves clearing logs and hiding tools. As such, audit logs are a primary target for the
attacker.

Log files should be secured, and they should be analyzed on a regular basis. Log file analysis can
often uncover the early signs of an attempted break-in before damage is done.

Deny Service

Attackers who cannot gain access often mount a denial of service attack to prevent others from
using the application. For other attackers, the denial of service option is their goal from the
outset. An example is the SYN flood attack, where the attacker uses a program to send a flood of
TCP SYN requests to fill the pending connection queue on the server. This prevents other users
from establishing network connections.

Understanding Threat Categories

While there are many variations of specific attacks and attack techniques, it is useful to think
about threats in terms of what the attacker is trying to achieve. This changes your focus from the
identification of every specific attack — which is really just a means to an end — to focusing on
the end results of possible attacks.

STRIDE

Threats faced by the application can be categorized based on the goals and purposes of the
attacks. A working knowledge of these categories of threats can help you organize a security
strategy so that you have planned responses to threats. STRIDE is the acronym used at Microsoft
to categorize different threat types. STRIDE stands for:

 Spoofing. Spoofing is attempting to gain access to a system by using a false identity. This
can be accomplished using stolen user credentials or a false IP address. After the attacker
successfully gains access as a legitimate user or host, elevation of privileges or abuse
using authorization can begin.
 Tampering. Tampering is the unauthorized modification of data, for example as it flows
over a network between two computers.
 Repudiation. Repudiation is the ability of users (legitimate or otherwise) to deny that
they performed specific actions or transactions. Without adequate auditing, repudiation
attacks are difficult to prove.
 Information disclosure. Information disclosure is the unwanted exposure of private
data. For example, a user views the contents of a table or file he or she is not authorized
to open, or monitors data passed in plaintext over a network. Some examples of
information disclosure vulnerabilities include the use of hidden form fields, comments
embedded in Web pages that contain database connection strings and connection details,
and weak exception handling that can lead to internal system level details being revealed
to the client. Any of this information can be very useful to the attacker.
 Denial of service. Denial of service is the process of making a system or application
unavailable. For example, a denial of service attack might be accomplished by
bombarding a server with requests to consume all available system resources or by
passing it malformed input data that can crash an application process.
 Elevation of privilege. Elevation of privilege occurs when a user with limited privileges
assumes the identity of a privileged user to gain privileged access to an application. For
example, an attacker with limited privileges might elevate his or her privilege level to
compromise and take control of a highly privileged and trusted process or account.

1 STRIDE Threats and Countermeasures

Each threat category described by STRIDE has a corresponding set of countermeasure
techniques that should be used to reduce risk. These are summarized in Table 2.1. The
appropriate countermeasure depends upon the specific attack. More threats, attacks, and
countermeasures that apply at the network, host, and application levels are presented later in this
chapter.

Table 2.1 STRIDE Threats and Countermeasures

Spoofing user identity: Use strong authentication. Do not store secrets (for example, passwords)
in plaintext. Do not pass credentials in plaintext over the wire. Protect authentication cookies
with Secure Sockets Layer (SSL).

Tampering with data: Use data hashing and signing. Use digital signatures. Use strong
authorization. Use tamper-resistant protocols across communication links. Secure communication
links with protocols that provide message integrity.

Repudiation: Create secure audit trails. Use digital signatures.

Information disclosure: Use strong authorization. Use strong encryption. Secure communication
links with protocols that provide message confidentiality. Do not store secrets (for example,
passwords) in plaintext.

Denial of service: Use resource and bandwidth throttling techniques. Validate and filter input.

Elevation of privilege: Follow the principle of least privilege and use least privileged service
accounts to run processes and access resources.

2 Network Threats and Countermeasures

The primary components that make up your network infrastructure are routers, firewalls, and
switches. They act as the gatekeepers guarding your servers and applications from attacks and
intrusions. An attacker may exploit poorly configured network devices. Common vulnerabilities
include weak default installation settings, wide open access controls, and devices lacking the
latest security patches. Top network level threats include:

 Information gathering
 Sniffing
 Spoofing
 Session hijacking
 Denial of service

a) Information Gathering

Network devices can be discovered and profiled in much the same way as other types of systems.
Attackers usually start with port scanning. After they identify open ports, they use banner
grabbing and enumeration to detect device types and to determine operating system and
application versions. Armed with this information, an attacker can attack known vulnerabilities
that may not be updated with security patches.

Countermeasures to prevent information gathering include:

 Configure routers to restrict their responses to footprinting requests.
 Configure operating systems that host network software (for example, software firewalls)
to prevent footprinting by disabling unused protocols and unnecessary ports.

b) Sniffing

Sniffing or eavesdropping is the act of monitoring traffic on the network for data such as
plaintext passwords or configuration information. With a simple packet sniffer, an attacker can
easily read all plaintext traffic. Also, attackers can crack packets encrypted by lightweight
hashing algorithms and can decipher the payload that you considered to be safe. The sniffing of
packets requires a packet sniffer in the path of the server/client communication.

Countermeasures to help prevent sniffing include:

 Use strong physical security and proper segmenting of the network. This is the first step
in preventing traffic from being collected locally.
 Encrypt communication fully, including authentication credentials. This prevents sniffed
packets from being usable to an attacker. SSL and IPSec (Internet Protocol Security) are
examples of encryption solutions.

c) Spoofing

Spoofing is a means to hide one's true identity on the network. To create a spoofed identity, an
attacker uses a fake source address that does not represent the actual address of the packet.
Spoofing may be used to hide the original source of an attack or to work around network access
control lists (ACLs) that are in place to limit host access based on source address rules.



Although carefully crafted spoofed packets may never be tracked to the original sender, a
combination of filtering rules prevents spoofed packets from originating from your network,
allowing you to block obviously spoofed packets.

Countermeasures to prevent spoofing include:

 Filter incoming packets that appear to come from an internal IP address at your
perimeter.
 Filter outgoing packets that appear to originate from an invalid local IP address.

d) Session Hijacking

Also known as man in the middle attacks, session hijacking deceives a server or a client into
accepting the upstream host as the actual legitimate host. Instead the upstream host is an
attacker's host that is manipulating the network so the attacker's host appears to be the desired
destination.

Countermeasures to help prevent session hijacking include:

 Use encrypted session negotiation.
 Use encrypted communication channels.
 Stay informed of platform patches to fix TCP/IP vulnerabilities, such as predictable
packet sequences.

e) Denial of Service

Denial of service denies legitimate users access to a server or services. The SYN flood attack is a
common example of a network level denial of service attack. It is easy to launch and difficult to
track. The aim of the attack is to send more requests to a server than it can handle. The attack
exploits a potential vulnerability in the TCP/IP connection establishment mechanism and floods
the server's pending connection queue.

Countermeasures to prevent denial of service include:

 Apply the latest service packs.
 Harden the TCP/IP stack by applying the appropriate registry settings to increase the size
of the TCP connection queue, decrease the connection establishment period, and employ
dynamic backlog mechanisms to ensure that the connection queue is never exhausted.
 Use a network Intrusion Detection System (IDS) because these can automatically detect
and respond to SYN attacks.

3 Host Threats and Countermeasures

Host threats are directed at the system software upon which your applications are built. This
includes Windows 2000, Microsoft Windows Server 2003, Internet Information Services (IIS),
the .NET Framework, and SQL Server depending upon the specific server role. Top host level
threats include:

 Viruses, Trojan horses, and worms
 Footprinting
 Profiling
 Password cracking
 Denial of service
 Arbitrary code execution
 Unauthorized access

Viruses, Trojan Horses, and Worms

A virus is a program that is designed to perform malicious acts and cause disruption to your
operating system or applications. A Trojan horse resembles a virus except that the malicious
code is contained inside what appears to be a harmless data file or executable program. A worm
is similar to a Trojan horse except that it self-replicates from one server to another. Worms are
difficult to detect because they do not regularly create files that can be seen. They are often
noticed only when they begin to consume system resources because the system slows down or
the execution of other programs halts. The Code Red worm is one of the most notorious to afflict
IIS; it relied upon a buffer overflow vulnerability in a particular ISAPI filter.

Although these three threats are actually attacks, together they pose a significant threat to Web
applications, the hosts these applications live on, and the network used to deliver these
applications. The success of these attacks on any system is possible through many vulnerabilities
such as weak defaults, software bugs, user error, and inherent vulnerabilities in Internet
protocols.

Countermeasures that you can use against viruses, Trojan horses, and worms include:

 Stay current with the latest operating system service packs and software patches.
 Block all unnecessary ports at the firewall and host.
 Disable unused functionality including protocols and services.
 Harden weak, default configuration settings.

Footprinting

Examples of footprinting are port scans, ping sweeps, and NetBIOS enumeration, all of which can be
used by attackers to glean valuable system-level information to help prepare for more significant
attacks. The type of information potentially revealed by footprinting includes account details,
operating system and other software versions, server names, and database schema details.

Countermeasures to help prevent footprinting include:

 Disable unnecessary protocols.
 Lock down ports with the appropriate firewall configuration.
 Use TCP/IP and IPSec filters for defense in depth.
 Configure IIS to prevent information disclosure through banner grabbing.
 Use an IDS that can be configured to pick up footprinting patterns and reject suspicious
traffic.

Password Cracking

If the attacker cannot establish an anonymous connection with the server, he or she will try to
establish an authenticated connection. For this, the attacker must know a valid username and
password combination. If you use default account names, you are giving the attacker a head start.
Then the attacker only has to crack the account's password. The use of blank or weak passwords
makes the attacker's job even easier.

Countermeasures to help prevent password cracking include:

 Use strong passwords for all account types.
 Apply lockout policies to end-user accounts to limit the number of retry attempts that can
be used to guess the password (a minimal sketch appears after this list).
 Do not use default account names, and rename standard accounts such as the
administrator's account and the anonymous Internet user account used by many Web
applications.
 Audit failed logins for patterns of password hacking attempts.
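A minimal sketch of the lockout idea, assuming an in-memory counter keyed by account name and an arbitrary threshold of five attempts; a real application would normally rely on the platform's account lockout policy and a durable audit store.

using System;
using System.Collections.Generic;

class LockoutPolicySketch
{
    const int MaxFailedAttempts = 5;   // assumed threshold
    static readonly Dictionary<string, int> FailedAttempts = new Dictionary<string, int>();

    // True once the account has reached the lockout threshold.
    public static bool IsLockedOut(string account)
    {
        int count;
        return FailedAttempts.TryGetValue(account, out count) && count >= MaxFailedAttempts;
    }

    // Called after each failed login; also writes an audit record for later review.
    public static void RecordFailedLogin(string account)
    {
        int count;
        FailedAttempts.TryGetValue(account, out count);
        FailedAttempts[account] = count + 1;
        Console.WriteLine(DateTime.UtcNow.ToString("o") + " failed login for '" + account + "' (attempt " + (count + 1) + ")");
    }
}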

Denial of Service

Denial of service can be attained by many methods aimed at several targets within your
infrastructure. At the host, an attacker can disrupt service by brute force against your application,
or an attacker may know of a vulnerability that exists in the service your application is hosted in
or in the operating system that runs your server.

Countermeasures to help prevent denial of service include:

 Configure your applications, services, and operating system with denial of service in
mind.
 Stay current with patches and security updates.
 Harden the TCP/IP stack against denial of service.
 Make sure your account lockout policies cannot be exploited to lock out well known
service accounts.
 Make sure your application is capable of handling high volumes of traffic and that
thresholds are in place to handle abnormally high loads.
 Review your application's failover functionality.
 Use an IDS that can detect potential denial of service attacks.



Arbitrary Code Execution

If an attacker can execute malicious code on your server, the attacker can either compromise
server resources or mount further attacks against downstream systems. The risks posed by
arbitrary code execution increase if the server process under which the attacker's code runs is
over-privileged. Common vulnerabilities include weak IIS configuration and unpatched servers
that allow path traversal and buffer overflow attacks, both of which can lead to arbitrary code
execution.

Countermeasures to help prevent arbitrary code execution include:

 Configure IIS to reject URLs with "../" to prevent path traversal.
 Lock down system commands and utilities with restricted ACLs.
 Stay current with patches and updates to ensure that newly discovered buffer overflows
are speedily patched.

Unauthorized Access

Inadequate access controls could allow an unauthorized user to access restricted information or
perform restricted operations. Common vulnerabilities include weak IIS Web access controls,
including Web permissions and weak NTFS permissions.

Countermeasures to help prevent unauthorized access include:

 Configure secure Web permissions.
 Lock down files and folders with restricted NTFS permissions.
 Use .NET Framework access control mechanisms within your ASP.NET applications,
including URL authorization and principal permission demands.

4 Application Threats and Countermeasures

A good way to analyze application-level threats is to organize them by application vulnerability
category. The various categories used in the subsequent sections of this chapter and throughout
the guide, together with the main threats to your application, are summarized in Table 2.2.



Table 2.2 Threats by Application Vulnerability Category

Input validation: Buffer overflow; cross-site scripting; SQL injection; canonicalization
Authentication: Network eavesdropping; brute force attacks; dictionary attacks; cookie replay;
credential theft
Authorization: Elevation of privilege; disclosure of confidential data; data tampering; luring attacks
Configuration management: Unauthorized access to administration interfaces; unauthorized access
to configuration stores; retrieval of clear text configuration data; lack of individual accountability;
over-privileged process and service accounts
Sensitive data: Access sensitive data in storage; network eavesdropping; data tampering
Session management: Session hijacking; session replay; man in the middle
Cryptography: Poor key generation or key management; weak or custom encryption
Parameter manipulation: Query string manipulation; form field manipulation; cookie manipulation;
HTTP header manipulation
Exception management: Information disclosure; denial of service
Auditing and logging: User denies performing an operation; attacker exploits an application without
trace; attacker covers his or her tracks

Input Validation

Input validation is a security issue if an attacker discovers that your application makes unfounded
assumptions about the type, length, format, or range of input data. The attacker can then supply
carefully crafted input that compromises your application.

When network and host level entry points are fully secured, the public interfaces exposed by
your application become the only source of attack. The input to your application is a means both
to test your system and to execute code on an attacker's behalf. Does your application
blindly trust input? If it does, your application may be susceptible to the following:

 Buffer overflows
 Cross-site scripting
 SQL injection
 Canonicalization

The following section examines these vulnerabilities in detail, including what makes these
vulnerabilities possible.

Buffer Overflows

Buffer overflow vulnerabilities can lead to denial of service attacks or code injection. A denial of
service attack causes a process crash; code injection alters the program execution address to run
an attacker's injected code. The following code fragment illustrates a common example of buffer
overflow vulnerability.

void SomeFunction( char *pszInput )
{
    char szBuffer[10];

    // Input is copied straight into the buffer; no length check is performed, so any
    // input longer than the buffer overflows szBuffer.
    strcpy(szBuffer, pszInput);

    ...
}

Managed .NET code is not susceptible to this problem because array bounds are automatically
checked whenever an array is accessed. This makes the threat of buffer overflow attacks on
managed code much less of an issue. It is still a concern, however, especially where managed
code calls unmanaged APIs or COM objects.

Countermeasures to help prevent buffer overflows include:

 Perform thorough input validation. This is the first line of defense against buffer
overflows. Although a bug may exist in your application that permits expected input to
reach beyond the bounds of a container, unexpected input will be the primary cause of
this vulnerability. Constrain input by validating it for type, length, format and range.
 When possible, limit your application's use of unmanaged code, and thoroughly inspect
the unmanaged APIs to ensure that input is properly validated.
 Inspect the managed code that calls the unmanaged API to ensure that only appropriate
values can be passed as parameters to the unmanaged API.
 Use the /GS flag to compile code developed with the Microsoft Visual C++®
development system. The /GS flag causes the compiler to inject security checks into the
compiled code. This is not a fail-proof solution or a replacement for your specific
validation code; it does, however, protect your code from commonly known buffer
overflow attacks. For more information, see the .NET Framework Product documentation
http://msdn.microsoft.com/en-us/library/8dbf701c(VS.71).aspx and Microsoft
Knowledge Base article 325483 "WebCast: Compiler Security Checks: The –GS
compiler switch."

Example of Code Injection Through Buffer Overflows

An attacker can exploit a buffer overflow vulnerability to inject code. With this attack, a
malicious user exploits an unchecked buffer in a process by supplying a carefully constructed
input value that overwrites the program's stack and alters a function's return address. This causes
execution to jump to the attacker's injected code.

The attacker's code usually ends up running under the process security context. This emphasizes
the importance of using least privileged process accounts. If the current thread is impersonating,
the attacker's code ends up running under the security context defined by the thread
impersonation token. The first thing an attacker usually does is call the RevertToSelf API to
revert to the process level security context that the attacker hopes has higher privileges.

Make sure you validate input for type and length, especially before you call unmanaged code
because unmanaged code is particularly susceptible to buffer overflows.

Cross-Site Scripting (XSS)

An XSS attack can cause arbitrary code to run in a user's browser while the browser is connected
to a trusted Web site. The attack targets your application's users and not the application itself, but
it uses your application as the vehicle for the attack.

Because the script code is downloaded by the browser from a trusted site, the browser has no
way of knowing that the code is not legitimate. Internet Explorer security zones provide no
defense. Because the attacker's code has access to the cookies associated with the trusted site,
which are stored on the user's local computer, a user's authentication cookies are typically the
target of attack.

Example of Cross-Site Scripting

To initiate the attack, the attacker must convince the user to click on a carefully crafted
hyperlink, for example, by embedding a link in an email sent to the user or by adding a malicious
link to a newsgroup posting. The link points to a vulnerable page in your application that echoes
the unvalidated input back to the browser in the HTML output stream. For example, consider the
following two links.

Here is a legitimate link:

www.yourwebapplication.com/logon.aspx?username=bob

Here is a malicious link:

www.yourwebapplication.com/logon.aspx?username=<script>alert('hacker code')</script>

If the Web application takes the query string, fails to properly validate it, and then returns it to
the browser, the script code executes in the browser. The preceding example displays a harmless
pop-up message. With the appropriate script, the attacker can easily extract the user's
authentication cookie, post it to his site, and subsequently make a request to the target Web site
as the authenticated user.



Countermeasures to prevent XSS include:

 Perform thorough input validation. Your applications must ensure that input from query
strings, form fields, and cookies are valid for the application. Consider all user input as
possibly malicious, and filter or sanitize for the context of the downstream code. Validate
all input for known valid values and then reject all other input. Use regular expressions to
validate input data received via HTML form fields, cookies, and query strings.
 Use HTMLEncode and URLEncode functions to encode any output that includes user
input. This converts executable script into harmless HTML (see the sketch below).
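The sketch below shows the encoding countermeasure in isolation, assuming a page that echoes a user name back to the browser. HttpUtility.HtmlEncode converts characters such as < and > into HTML entities, so injected script is displayed as text rather than executed.

using System;
using System.Web;   // HttpUtility

class OutputEncodingSketch
{
    // Encode untrusted input before writing it into the HTML response.
    public static string SafeGreeting(string userName)
    {
        return "Hello, " + HttpUtility.HtmlEncode(userName);
    }

    static void Main()
    {
        // The <script> markup is rendered as &lt;script&gt;... and is not executed.
        Console.WriteLine(SafeGreeting("<script>alert('hacker code')</script>"));
    }
}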

SQL Injection

A SQL injection attack exploits vulnerabilities in input validation to run arbitrary commands in
the database. It can occur when your application uses input to construct dynamic SQL statements
to access the database. It can also occur if your code uses stored procedures that are passed
strings that contain unfiltered user input. Using the SQL injection attack, the attacker can execute
arbitrary commands in the database. The issue is magnified if the application uses an over-
privileged account to connect to the database. In this instance it is possible to use the database
server to run operating system commands and potentially compromise other servers, in addition
to being able to retrieve, manipulate, and destroy data.

Example of SQL Injection

Your application may be susceptible to SQL injection attacks when you incorporate unvalidated
user input into database queries. Particularly susceptible is code that constructs dynamic SQL
statements with unfiltered user input. Consider the following code:

SqlDataAdapter myCommand = new SqlDataAdapter(
    "SELECT * FROM Users WHERE UserName ='" + txtuid.Text + "'", conn);

Attackers can inject SQL by terminating the intended SQL statement with the single quote
character followed by a semicolon character to begin a new command, and then executing the
command of their choice. Consider the following character string entered into the txtuid field.

'; DROP TABLE Customers --

This results in the following statement being submitted to the database for execution.

SELECT * FROM Users WHERE UserName=''; DROP TABLE Customers --'

This deletes the Customers table, assuming that the application's login has sufficient permissions
in the database (another reason to use a least privileged login in the database). The double dash
(--) denotes a SQL comment and is used to comment out any other characters added by the
programmer, such as the trailing quote.

Note The semicolon is not actually required. SQL Server will execute two commands separated
by spaces.

Other more subtle tricks can be performed. Supplying this input to the txtuid field:

' OR 1=1 --

builds this command:

SELECT * FROM Users WHERE UserName='' OR 1=1 --

Because 1=1 is always true, the attacker retrieves every row of data from the Users table.

Countermeasures to prevent SQL injection include:

 Perform thorough input validation. Your application should validate its input prior to
sending a request to the database.
 Use parameterized stored procedures for database access to ensure that input strings are
not treated as executable statements. If you cannot use stored procedures, use SQL
parameters when you build SQL commands, as sketched after this list.
 Use least privileged accounts to connect to the database.
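As a sketch of the SQL parameter countermeasure, the vulnerable query shown earlier can be rewritten so that the user-supplied value is bound as a parameter and is always treated as data, never as part of the SQL statement. The table and column names follow the example above; the method name is illustrative.

using System.Data.SqlClient;

class ParameterizedQuerySketch
{
    // Input such as "'; DROP TABLE Customers --" cannot change the statement,
    // because the value is bound to @userName rather than concatenated into the SQL text.
    public static SqlDataAdapter BuildUserQuery(SqlConnection conn, string userName)
    {
        SqlCommand command = new SqlCommand(
            "SELECT * FROM Users WHERE UserName = @userName", conn);
        command.Parameters.AddWithValue("@userName", userName);
        return new SqlDataAdapter(command);
    }
}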

Canonicalization

The process by which different forms of input resolve to the same standard name (the canonical
name) is referred to as canonicalization. Code is particularly susceptible to canonicalization issues
if it makes security decisions based on the name of a resource that is passed to the program as
input. Files, paths, and URLs are resource types that are vulnerable to canonicalization issues
because in each case there are many different ways to represent the same name. File names are
particularly problematic. For example, a single file could be represented as:

c:\temp\somefile.dat

somefile.dat

c:\temp\subdir\..\somefile.dat

c:\ temp\ somefile.dat

..\somefile.dat



Ideally, your code should not accept input file names. If it does, the name should be converted to
its canonical form prior to making security decisions, such as whether access should be granted
or denied to the specified file.
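A minimal sketch of that canonical-form check, assuming the application keeps its files under a single base directory (the path used here is purely illustrative):

using System;
using System.IO;

class CanonicalPathCheckSketch
{
    // Resolve the requested name to its canonical absolute form, then confirm that it
    // still falls inside the application's data directory before granting access.
    public static bool IsAllowedFile(string requestedName)
    {
        string baseDirectory = Path.GetFullPath(@"C:\MyApp\Data\");   // assumed location
        string canonical = Path.GetFullPath(Path.Combine(baseDirectory, requestedName));

        // Forms such as "..\somefile.dat" resolve outside baseDirectory and are rejected.
        return canonical.StartsWith(baseDirectory, StringComparison.OrdinalIgnoreCase);
    }
}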

Countermeasures to address canonicalization issues include:

 Avoid using file names as input where possible and instead use absolute file paths that
cannot be changed by the end user.
 Make sure that file names are well formed (if you must accept file names as input) and
validate them within the context of your application. For example, check that they are
within your application's directory hierarchy.
 Ensure that the character encoding is set correctly to limit how input can be represented.
Check that your application's Web.config has set the requestEncoding and
responseEncoding attributes on the <globalization> element.

Authentication

Depending on your requirements, there are several available authentication mechanisms to
choose from. If they are not correctly chosen and implemented, the authentication mechanism
can expose vulnerabilities that attackers can exploit to gain access to your system. The top
threats that exploit authentication vulnerabilities include:

 Network eavesdropping
 Brute force attacks
 Dictionary attacks
 Cookie replay attacks
 Credential theft

Network Eavesdropping

If authentication credentials are passed in plaintext from client to server, an attacker armed with
rudimentary network monitoring software on a host on the same network can capture traffic and
obtain user names and passwords.

Countermeasures to prevent network eavesdropping include:

 Use authentication mechanisms that do not transmit the password over the network such
as Kerberos protocol or Windows authentication.
 Make sure passwords are encrypted (if you must transmit passwords over the network) or
use an encrypted communication channel, for example with SSL.

Brute Force Attacks

Brute force attacks rely on computational power to crack hashed passwords or other secrets
secured with hashing and encryption. To mitigate the risk, use strong passwords. Additionally, use hashed passwords with salt; this slows down the attacker considerably and allows sufficient
time for countermeasures to be activated.

Dictionary Attacks

This attack is used to obtain passwords. Most password systems do not store plaintext passwords
or encrypted passwords. They avoid encrypted passwords because a compromised key leads to
the compromise of all passwords in the data store. Lost keys mean that all passwords are
invalidated.

Most user store implementations hold password hashes (or digests). Users are authenticated by
re-computing the hash based on the user-supplied password value and comparing it against the
hash value stored in the database. If an attacker manages to obtain the list of hashed passwords, a
brute force attack can be used to crack the password hashes.

With the dictionary attack, an attacker uses a program to iterate through all of the words in a
dictionary (or multiple dictionaries in different languages) and computes the hash for each word.
The resultant hash is compared with the value in the data store. Weak passwords such as
"Yankees" (a favorite team) or "Mustang" (a favorite car) will be cracked quickly. Stronger
passwords such as "?You'LlNevaFiNdMeyePasSWerd!", are less likely to be cracked.

Note Once the attacker has obtained the list of password hashes, the dictionary attack can be
performed offline and does not require interaction with the application.

Countermeasures to prevent dictionary attacks include:

 Use strong passwords that are complex, are not regular words, and contain a mixture of
upper case, lower case, numeric, and special characters.
 Store non-reversible password hashes in the user store. Also combine a salt value (a
cryptographically strong random number) with the password hash.

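A minimal sketch of these two countermeasures, using Python's standard library (hashlib.pbkdf2_hmac), is shown below; the iteration count and salt length are illustrative assumptions.

# Sketch: store and verify non-reversible, salted password hashes.
import hashlib, hmac, os

ITERATIONS = 200_000                 # illustrative work factor

def hash_password(password):
    salt = os.urandom(16)            # cryptographically strong random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest              # store both values in the user store

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("?You'LlNevaFiNdMeyePasSWerd!")
print(verify_password("?You'LlNevaFiNdMeyePasSWerd!", salt, digest))   # True
print(verify_password("Yankees", salt, digest))                        # False
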
Cookie Replay Attacks

With this type of attack, the attacker captures the user's authentication cookie using monitoring
software and replays it to the application to gain access under a false identity.

Countermeasures to prevent cookie replay include:

 Use an encrypted communication channel provided by SSL whenever an authentication cookie is transmitted.
 Set the cookie timeout to a value that forces authentication after a relatively short time
interval. Although this doesn't prevent replay attacks, it reduces the time interval in which
the attacker can replay a request without being forced to re-authenticate because the
session has timed out.

Credential Theft

If your application implements its own user store containing user account names and passwords,
compare its security to the credential stores provided by the platform, for example, a Microsoft
Active Directory® directory service or Security Accounts Manager (SAM) user store. Browser
history and cache also store user login information for future use. If the terminal is accessed by
someone other than the user who logged on, and the same page is hit, the saved login will be
available.

Countermeasures to help prevent credential theft include:

 Use and enforce strong passwords.


 Store password verifiers in the form of one way hashes with added salt.
 Enforce account lockout for end-user accounts after a set number of retry attempts.
 To counter the possibility of the browser cache allowing login access, create functionality
that either allows the user to choose to not save credentials, or force this functionality as a
default policy.

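The account-lockout countermeasure listed above can be sketched as follows; the threshold, the lockout window and the in-memory dictionary are illustrative assumptions, and a real system would persist this state and log the events.

# Sketch: lock an account after a set number of failed logon attempts.
import time

MAX_ATTEMPTS = 5                     # assumed policy values
LOCKOUT_SECONDS = 15 * 60
failed = {}                          # user name -> (failure count, first failure time)

def record_failure(user):
    count, first = failed.get(user, (0, time.time()))
    failed[user] = (count + 1, first)

def is_locked_out(user):
    count, first = failed.get(user, (0, 0.0))
    if count < MAX_ATTEMPTS:
        return False
    if time.time() - first > LOCKOUT_SECONDS:
        failed.pop(user, None)       # lockout window has expired; reset the counter
        return False
    return True
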
Authorization

Based on user identity and role membership, authorization to a particular resource or service is
either allowed or denied. Top threats that exploit authorization vulnerabilities include:

 Elevation of privilege
 Disclosure of confidential data
 Data tampering
 Luring attacks

Elevation of Privilege

When you design an authorization model, you must consider the threat of an attacker trying to
elevate privileges to a powerful account such as a member of the local administrators group or
the local system account. By doing this, the attacker is able to take complete control over the
application and local machine. For example, with classic ASP programming, calling the
RevertToSelf API from a component might cause the executing thread to run as the local system
account with the most power and privileges on the local machine.

The main countermeasure that you can use to prevent elevation of privilege is to use least
privileged process, service, and user accounts.

Disclosure of Confidential Data

The disclosure of confidential data can occur if sensitive data can be viewed by unauthorized
users. Confidential data includes application specific data such as credit card numbers, employee
details, financial records and so on together with application configuration data such as service
account credentials and database connection strings. To prevent the disclosure of confidential data, you should secure it in persistent stores such as databases and configuration files, and during
transit over the network. Only authenticated and authorized users should be able to access the
data that is specific to them. Access to system level configuration data should be restricted to
administrators.

Countermeasures to prevent disclosure of confidential data include:

 Perform role checks before allowing access to the operations that could potentially reveal
sensitive data.
 Use strong ACLs to secure Windows resources.
 Use standard encryption to store sensitive data in configuration files and databases.

Data Tampering

Data tampering refers to the unauthorized modification of data.

Countermeasures to prevent data tampering include:

 Use strong access controls to protect data in persistent stores to ensure that only
authorized users can access and modify the data.
 Use role-based security to differentiate between users who can view data and users who
can modify data.

Luring Attacks

A luring attack occurs when an entity with few privileges is able to have an entity with more
privileges perform an action on its behalf.

To counter the threat, you must restrict access to trusted code with the appropriate authorization.
Using .NET Framework code access security helps in this respect by authorizing calling code
whenever a secure resource is accessed or a privileged operation is performed.

Configuration Management

Many applications support configuration management interfaces and functionality to allow operators and administrators to change configuration parameters, update Web site content, and to
perform routine maintenance. Top configuration management threats include:

 Unauthorized access to administration interfaces


 Unauthorized access to configuration stores
 Retrieval of plaintext configuration secrets
 Lack of individual accountability
 Over-privileged process and service accounts

Unauthorized Access to Administration Interfaces

Administration interfaces are often provided through additional Web pages or separate Web
applications that allow administrators, operators, and content developers to manage site content
and configuration. Administration interfaces such as these should be available only to restricted
and authorized users. Malicious users able to access a configuration management function can
potentially deface the Web site, access downstream systems and databases, or take the
application out of action altogether by corrupting configuration data.

Countermeasures to prevent unauthorized access to administration interfaces include:

 Minimize the number of administration interfaces.


 Use strong authentication, for example, by using certificates.
 Use strong authorization with multiple gatekeepers.
 Consider supporting only local administration. If remote administration is absolutely
essential, use encrypted channels, for example, with VPN technology or SSL, because of
the sensitive nature of the data passed over administrative interfaces. To further reduce
risk, also consider using IPSec policies to limit remote administration to computers on the
internal network.

Unauthorized Access to Configuration Stores

Because of the sensitive nature of the data maintained in configuration stores, you should ensure
that the stores are adequately secured.

Countermeasures to protect configuration stores include:

 Configure restricted ACLs on text-based configuration files such as Machine.config and Web.config.
 Keep custom configuration stores outside of the Web space. This removes the potential to
download Web server configurations to exploit their vulnerabilities.

Retrieval of Plaintext Configuration Secrets

Restricting access to the configuration store is a must. As an important defense in depth mechanism, you should encrypt sensitive data such as passwords and connection strings. This
helps prevent external attackers from obtaining sensitive configuration data. It also prevents
rogue administrators and internal employees from obtaining sensitive details such as database
connection strings and account credentials that might allow them to gain access to other systems.

Lack of Individual Accountability

Lack of auditing and logging of changes made to configuration information threatens the ability
to identify when changes were made and who made those changes. When a breaking change is
made either by an honest operator error or by a malicious change to grant privileged access,
action must first be taken to correct the change. Then apply preventive measures to ensure that breaking changes cannot be introduced in the same manner again. Keep in mind that auditing and logging
can be circumvented by a shared account; this applies to both administrative and
user/application/service accounts. Administrative accounts must not be shared.
User/application/service accounts must be assigned at a level that allows the identification of a
single source of access using the account, and that contains any damage to the privileges granted
that account.

Over-privileged Application and Service Accounts

If application and service accounts are granted access to change configuration information on the
system, they may be manipulated to do so by an attacker. The risk of this threat can be mitigated
by adopting a policy of using least privileged service and application accounts. Be wary of
granting accounts the ability to modify their own configuration information unless explicitly
required by design.

Sensitive Data

Sensitive data is subject to a variety of threats. Attacks that attempt to view or modify sensitive
data can target persistent data stores and networks. Top threats to sensitive data include:

 Access to sensitive data in storage


 Network eavesdropping
 Data tampering

Access to Sensitive Data in Storage

You must secure sensitive data in storage to prevent a user — malicious or otherwise — from
gaining access to and reading the data.

Countermeasures to protect sensitive data in storage include:

 Use restricted ACLs on the persistent data stores that contain sensitive data.
 Store encrypted data.
 Use identity and role-based authorization to ensure that only the user or users with the
appropriate level of authority are allowed access to sensitive data. Use role-based security
to differentiate between users who can view data and users who can modify data.

Network Eavesdropping

The HTTP data for a Web application travels across networks in plaintext and is subject to
network eavesdropping attacks, where an attacker uses network monitoring software to capture
and potentially modify sensitive data.

Countermeasures to prevent network eavesdropping and to provide privacy include:


 Encrypt the data.
 Use an encrypted communication channel, for example, SSL.

Data Tampering

Data tampering refers to the unauthorized modification of data, often as it is passed over the
network.

One countermeasure to prevent data tampering is to protect sensitive data passed across the
network with tamper-resistant protocols such as hashed message authentication codes (HMACs).

An HMAC provides message integrity in the following way:

1. The sender uses a shared secret key to create a hash based on the message payload.
2. The sender transmits the hash along with the message payload.
3. The receiver uses the shared key to recalculate the hash based on the received message
payload. The receiver then compares the new hash value with the transmitted hash value.
If they are the same, the message cannot have been tampered with.

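The three HMAC steps above can be sketched with Python's standard hmac module; the shared key and messages are illustrative values only.

# Sketch: message integrity with an HMAC (sender and receiver share a key).
import hmac, hashlib

SHARED_KEY = b"shared-secret-key"    # assumed to be distributed out of band

def sign(message):
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message, received_mac):
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_mac)   # constant-time check

payload = b"Place 10 orders."
mac = sign(payload)                          # sender transmits payload plus mac
print(verify(payload, mac))                  # True: message arrived intact
print(verify(b"Place 100 orders.", mac))     # False: tampering is detected
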
Session Management

Session management for Web applications is an application layer responsibility. Session security
is critical to the overall security of the application.

Top session management threats include:

 Session hijacking
 Session replay
 Man in the middle

Session Hijacking

A session hijacking attack occurs when an attacker uses network monitoring software to capture
the authentication token (often a cookie) used to represent a user's session with an application.
With the captured cookie, the attacker can spoof the user's session and gain access to the
application. The attacker has the same level of privileges as the legitimate user.

Countermeasures to prevent session hijacking include:

 Use SSL to create a secure communication channel and only pass the authentication
cookie over an HTTPS connection.
 Implement logout functionality to allow a user to end a session that forces authentication
if another session is started.
 Make sure you limit the expiration period on the session cookie if you do not use SSL.
Although this does not prevent session hijacking, it reduces the time window available to
the attacker.

Session Replay

Session replay occurs when a user's session token is intercepted and submitted by an attacker to
bypass the authentication mechanism. For example, if the session token is in plaintext in a cookie
or URL, an attacker can sniff it. The attacker then posts a request using the hijacked session
token.

Countermeasures to help address the threat of session replay include:

 Re-authenticate when performing critical functions. For example, prior to performing a monetary transfer in a banking application, make the user supply the account password again.
 Expire sessions appropriately, including all cookies and session tokens.
 Create a "do not remember me" option to allow no session data to be stored on the client.

Man in the Middle Attacks

A man in the middle attack occurs when the attacker intercepts messages sent between you and
your intended recipient. The attacker then changes your message and sends it to the original
recipient. The recipient receives the message, sees that it came from you, and acts on it. When
the recipient sends a message back to you, the attacker intercepts it, alters it, and returns it to
you. You and your recipient never know that you have been attacked.

Any network request involving client-server communication, including Web requests, Distributed Component Object Model (DCOM) requests, and calls to remote components and Web services, is subject to man in the middle attacks.

Countermeasures to prevent man in the middle attacks include:

 Use cryptography. If you encrypt the data before transmitting it, the attacker can still
intercept it but cannot read it or alter it. If the attacker cannot read it, he or she cannot
know which parts to alter. If the attacker blindly modifies your encrypted message, then
the original recipient is unable to successfully decrypt it and, as a result, knows that it has
been tampered with.
 Use Hashed Message Authentication Codes (HMACs). If an attacker alters the message,
the recalculation of the HMAC at the recipient fails and the data can be rejected as
invalid.

Cryptography

Most applications use cryptography to protect data and to ensure it remains private and
unaltered. Top threats surrounding your application's use of cryptography include:

 Poor key generation or key management


 Weak or custom encryption
 Checksum spoofing

Poor Key Generation or Key Management

Attackers can decrypt encrypted data if they have access to the encryption key or can derive the
encryption key. Attackers can discover a key if keys are managed poorly or if they were
generated in a non-random fashion.

Countermeasures to address the threat of poor key generation and key management include:

 Use built-in encryption routines that include secure key management. Data Protection
application programming interface (DPAPI) is an example of an encryption service
provided on Windows 2000 and later operating systems where the operating system
manages the key.
 Use strong random key generation functions and store the key in a restricted location —
for example, in a registry key secured with a restricted ACL — if you use an encryption
mechanism that requires you to generate or manage the key.
 Encrypt the encryption key using DPAPI for added security.
 Expire keys regularly.

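As a minimal sketch of strong key generation and restricted storage (the POSIX file permission stands in for a restricted ACL; the path is a hypothetical example, and on Windows DPAPI would normally manage the key instead):

# Sketch: generate a strong random key and store it so only the service
# account can read it. KEY_PATH is hypothetical; rotate (expire) keys regularly.
import os, secrets

KEY_PATH = "/etc/myapp/encryption.key"

def create_key():
    key = secrets.token_bytes(32)    # 256-bit cryptographically strong key
    # 0o600: owner read/write only; O_EXCL refuses to overwrite an existing key.
    fd = os.open(KEY_PATH, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as key_file:
        key_file.write(key)
    return key
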
Weak or Custom Encryption

An encryption algorithm provides no security if the encryption is cracked or is vulnerable to brute force cracking. Custom algorithms are particularly vulnerable if they have not been tested.
Instead, use published, well-known encryption algorithms that have withstood years of rigorous
attacks and scrutiny.

Countermeasures that address the vulnerabilities of weak or custom encryption include:

 Do not develop your own custom algorithms.


 Use the proven cryptographic services provided by the platform.
 Stay informed about cracked algorithms and the techniques used to crack them.

Checksum Spoofing

Do not rely on hashes to provide data integrity for messages sent over networks. Hashes such as
Secure Hash Algorithm (SHA-1) and the Message Digest algorithm (MD5) can be intercepted and changed. Consider the following base64-encoded UTF-8 message with an
appended Message Authentication Code (MAC).

Plaintext: Place 10 orders.

Hash: T0mUNdEQh13IO9oTcaP4FYDX6pU=

If an attacker intercepts the message by monitoring the network, the attacker could update the
message and recompute the hash (guessing the algorithm that you used). For example, the
message could be changed to:

Plaintext: Place 100 orders.

Hash: oEDuJpv/ZtIU7BXDDNv17EAHeAU=

When the recipient processes the message and runs the plaintext ("Place 100 orders") through the hashing algorithm to recompute the hash, the hash it calculates matches the one the attacker computed, so the tampering goes undetected.

To counter this attack, use a MAC or HMAC. The Message Authentication Code Triple Data
Encryption Standard (MACTripleDES) algorithm computes a MAC, and HMACSHA1 computes
an HMAC. Both use a key to produce a checksum. With these algorithms, an attacker needs to
know the key to generate a checksum that would compute correctly at the receiver.

Parameter Manipulation

Parameter manipulation attacks are a class of attack that relies on the modification of the
parameter data sent between the client and Web application. This includes query strings, form
fields, cookies, and HTTP headers. Top parameter manipulation threats include:

 Query string manipulation


 Form field manipulation
 Cookie manipulation
 HTTP header manipulation

Query String Manipulation

Users can easily manipulate the query string values passed by HTTP GET from client to server
because they are displayed in the browser's URL address bar. If your application relies on query
string values to make security decisions, or if the values represent sensitive data such as
monetary amounts, the application is vulnerable to attack.

Countermeasures to address the threat of query string manipulation include:

 Avoid using query string parameters that contain sensitive data or data that can influence
the security logic on the server. Instead, use a session identifier to identify the client and
store sensitive items in the session store on the server.
 Choose HTTP POST instead of GET to submit forms.
 Encrypt query string parameters.

Form Field Manipulation

The values of HTML form fields are sent in plaintext to the server using the HTTP POST method. This may include visible and hidden form fields. Form fields of any type can be easily
modified and client-side validation routines bypassed. As a result, applications that rely on form
field input values to make security decisions on the server are vulnerable to attack.

To counter the threat of form field manipulation, instead of using hidden form fields, use session
identifiers to reference state maintained in the state store on the server.

Cookie Manipulation

Cookies are susceptible to modification by the client. This is true of both persistent and memory-
resident cookies. A number of tools are available to help an attacker modify the contents of a
memory-resident cookie. Cookie manipulation is the attack that refers to the modification of a
cookie, usually to gain unauthorized access to a Web site.

While SSL protects cookies over the network, it does not prevent them from being modified on
the client computer. To counter the threat of cookie manipulation, encrypt and use an HMAC
with the cookie.

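One way to apply the encrypt-plus-HMAC advice is sketched below using the third-party Python cryptography package; Fernet combines symmetric encryption with an HMAC, so a cookie modified on the client fails verification on the server. The cookie contents and names are illustrative.

# Sketch: protect a cookie value with authenticated encryption (Fernet).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # keep server-side; never send to the client
f = Fernet(key)

cookie_value = f.encrypt(b"user=alice;role=member")   # value placed in the cookie

def read_cookie(value):
    try:
        return f.decrypt(value)      # raises InvalidToken if altered or forged
    except InvalidToken:
        return None                  # treat as an invalid session

print(read_cookie(cookie_value))             # b'user=alice;role=member'
print(read_cookie(b"forged-cookie-value"))   # None: tampering detected
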
HTTP Header Manipulation

HTTP headers pass information between the client and the server. The client constructs request
headers while the server constructs response headers. If your application relies on request
headers to make a decision, your application is vulnerable to attack.

Do not base your security decisions on HTTP headers. For example, do not trust the HTTP Referer header to determine where a client came from, because it is easily falsified.

Exception Management

Exceptions that are allowed to propagate to the client can reveal internal implementation details
that make no sense to the end user but are useful to attackers. Applications that do not use
exception handling or implement it poorly are also subject to denial of service attacks. Top
exception handling threats include:

 Attacker reveals implementation details


 Denial of service

Attacker Reveals Implementation Details

One of the important features of the .NET Framework is that it provides rich exception details
that are invaluable to developers. If the same information is allowed to fall into the hands of an
attacker, it can greatly help the attacker exploit potential vulnerabilities and plan future attacks.
The type of information that could be returned includes platform versions, server names, SQL
command strings, and database connection strings.

Countermeasures to help prevent internal implementation details from being revealed to the
client include:
 Use exception handling throughout your application's code base.
 Handle and log exceptions that are allowed to propagate to the application boundary.
 Return generic, harmless error messages to the client.

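A minimal sketch of boundary exception handling that follows these countermeasures is shown below; the handler, logger name and the simulated failure are illustrative assumptions.

# Sketch: log full exception details for operators, return a generic message.
import logging

logger = logging.getLogger("myapp")
logging.basicConfig(level=logging.ERROR)

def process(request):
    raise RuntimeError("simulated failure")      # stands in for real application logic

def handle_request(request):
    try:
        return process(request)
    except Exception:
        # Full stack trace and context go to the log for administrators...
        logger.exception("Unhandled error while processing request")
        # ...but the client only ever sees a generic, harmless message.
        return {"status": 500, "body": "An internal error occurred."}

print(handle_request({"path": "/orders"}))
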
Denial of Service

Attackers will probe a Web application, usually by passing deliberately malformed input. They
often have two goals in mind. The first is to cause exceptions that reveal useful information and
the second is to crash the Web application process. This can occur if exceptions are not properly
caught and handled.

Countermeasures to help prevent application-level denial of service include:

 Thoroughly validate all input data at the server.


 Use exception handling throughout your application's code base.

Auditing and Logging

Auditing and logging should be used to help detect suspicious activity such as footprinting or
possible password cracking attempts before an exploit actually occurs. It can also help deal with
the threat of repudiation. It is much harder for a user to deny performing an operation if a series
of synchronized log entries on multiple servers indicate that the user performed that transaction.

Top auditing and logging related threats include:

 User denies performing an operation


 Attackers exploit an application without leaving a trace
 Attackers cover their tracks

User Denies Performing an Operation

The issue of repudiation is concerned with a user denying that he or she performed an action or
initiated a transaction. You need defense mechanisms in place to ensure that all user activity can
be tracked and recorded.

Countermeasures to help prevent repudiation threats include:

 Audit and log activity on the Web server and database server, and on the application
server as well, if you use one.
 Log key events such as transactions and login and logout events.
 Do not use shared accounts since the original source cannot be determined.

Attackers Exploit an Application Without Leaving a Trace

System and application-level auditing is required to ensure that suspicious activity does not go
undetected.

Countermeasures to detect suspicious activity include:

 Log critical application level operations.

 Use platform-level auditing to audit login and logout events, access to the file system,
and failed object access attempts.
 Back up log files and regularly analyze them for signs of suspicious activity.

Attackers Cover Their Tracks

Your log files must be well-protected to ensure that attackers are not able to cover their tracks.

Countermeasures to help prevent attackers from covering their tracks include:

 Secure log files by using restricted ACLs.


 Relocate system log files away from their default locations.

 Corporate risk document



TOPIC 8

BUSINESS CONTINUITY PLANNING (BCP)


Business continuity planning (or business continuity and resiliency planning) is the process
of creating systems of prevention and recovery to deal with potential threats to a company.
A business continuity plan is a plan to continue operations if a place of business is affected by different levels of disaster, ranging from localized short-term disasters, to days-long building-wide problems, to the permanent loss of a building. Such a plan typically explains how the business would recover its operations or move operations to another location after damage by events like natural disasters, theft, or flooding. For example, if a fire destroys an office building or data center, the people and business or data center operations would relocate to a recovery site. Any event that could negatively impact operations is included in the plan, such as supply chain interruption or loss of or damage to critical infrastructure (major machinery or computing/network resources). As such, risk management must be incorporated as part of BCP. In the US, government entities refer to the process as continuity of operations planning (COOP).

Analysis
The analysis phase consists of impact analysis, threat analysis and impact scenarios.

Business impact analysis (BIA)


A Business impact analysis (BIA) differentiates critical (urgent) and non-critical (non-urgent)
organization functions/activities. Critical functions are those whose disruption is regarded as
unacceptable. Perceptions of acceptability are affected by the cost of recovery solutions. A
function may also be considered critical if dictated by law. For each critical (in scope) function,
two values are then assigned:
 Recovery Point Objective (RPO) – the acceptable latency of data that will not be recovered. For example, is it acceptable for the company to lose two days of data?
 Recovery Time Objective (RTO) – the acceptable amount of time to restore the function.
The recovery point objective must ensure that the maximum tolerable data loss for each activity
is not exceeded. The recovery time objective must ensure that the Maximum Tolerable Period of
Disruption (MTPoD) for each activity is not exceeded.

Next, the impact analysis results in the recovery requirements for each critical function.
Recovery requirements consist of the following information:
 The business requirements for recovery of the critical function, and/or
 The technical requirements for recovery of the critical function

Threat and risk analysis (TRA)


After defining recovery requirements, each potential threat may require unique recovery steps.
Common threats include:
 Epidemic
 Earthquake
 Fire
 Flood
 Cyber attack

 Sabotage (insider or external threat)
 Hurricane or other major storm
 Utility outage
 Terrorism/Piracy
 War/civil disorder
 Theft (insider or external threat, vital information or material)
 Random failure of mission-critical systems
 Power cut

The impact of an epidemic can be regarded as purely human, and may be alleviated with
technical and business solutions. However, if people behind these plans are affected by the
disease, then the process can stumble.
During the 2002–2003 SARS outbreak, some organizations grouped staff into separate teams,
and rotated the teams between primary and secondary work sites, with a rotation frequency equal
to the incubation period of the disease. The organizations also banned face-to-face intergroup
contact during business and non-business hours. The split increased resiliency against the threat
of quarantine measures if one person in a team was exposed to the disease.

Impact scenarios
After identifying the applicable threats, impact scenarios are considered to support the development of a business recovery plan. Business continuity testing plans may document scenarios for each identified threat and impact scenario. More localized impact scenarios – for example, loss of a specific floor in a building – may also be documented. The BC plans should reflect the requirements to recover the business from the widest possible damage. The risk assessment should cater to developing impact scenarios that are applicable to the business or the premises it operates in. For example, it might not be logical to consider a tsunami in the Mideast region, since the likelihood of such a threat is negligible.

Recovery requirement
After the analysis phase, business and technical recovery requirements precede the solutions
phase. Asset inventories allow for quick identification of deployable resources. For an office-
based, IT-intensive business, the plan requirements may cover desks, human resources,
applications, data, manual workarounds, computers and peripherals. Other business
environments, such as production, distribution, warehousing etc. will need to cover these
elements, but likely have additional issues.
The robustness of an emergency management plan is dependent on how much money an
organization or business can place into the plan. The organization must balance realistic
feasibility with the need to properly prepare. In general, every $1 put into an emergency
management plan will prevent $7 of loss.

Solution design
The solution design phase identifies the most cost-effective disaster recovery solution that meets
two main requirements from the impact analysis stage. For IT purposes, this is commonly
expressed as the minimum application and data requirements and the time in which the minimum
application and application data must be available.

Outside the IT domain, preservation of hard copy information, such as contracts, skilled staff or
restoration of embedded technology in a process plant must be considered. This phase overlaps
with disaster recovery planning methodology. The solution phase determines:
 crisis management command structure
 secondary work sites
 telecommunication architecture between primary and secondary work sites
 data replication methodology between primary and secondary work sites
 applications and data required at the secondary work site
 physical data requirements at the secondary work site.

Implementation
The implementation phase involves policy changes, material acquisitions, staffing and testing.

Testing and organizational acceptance


The purpose of testing is to achieve organizational acceptance that the solution satisfies the
recovery requirements. Plans may fail to meet expectations due to insufficient or inaccurate
recovery requirements, solution design flaws or solution implementation errors. Testing may
include:
 Crisis command team call-out testing
 Technical swing test from primary to secondary work locations
 Application test
 Business process test
At minimum, testing is conducted on a biannual schedule. The 2008 book Exercising for Excellence, published by the British Standards Institution, identified three types of exercises that can be employed when testing business continuity plans.

Tabletop exercises
Tabletop exercises typically involve a small number of people and concentrate on a specific aspect of a BCP. They can easily accommodate complete teams from a specific area of a business.
Another form involves a single representative from each of several teams. Typically, participants work through a simple scenario and then discuss specific aspects of the plan. For example, a fire is discovered out of working hours.
The exercise consumes only a few hours and is often split into two or three sessions, each
concentrating on a different theme.

Medium exercises
A medium exercise is conducted within a "Virtual World" and brings together several
departments, teams or disciplines. It typically concentrates on multiple BCP aspects, prompting
interaction between teams. The scope of a medium exercise can range from a few teams from
one organisation co-located in one building to multiple teams operating across dispersed
locations. The environment needs to be as realistic as practicable and team sizes should reflect a
realistic situation. Realism may extend to simulated news broadcasts and websites.
A medium exercise typically lasts a few hours, though they can extend over several days. They
typically involve a "Scenario Cell" that adds pre-scripted "surprises" throughout the exercise.

Complex exercises
A complex exercise aims to have as few boundaries as possible. It incorporates all the aspects of
a medium exercise. The exercise remains within a virtual world, but maximum realism is
essential. This might include no-notice activation, actual evacuation and actual invocation of a
disaster recovery site.
While start and stop times are pre-agreed, the actual duration might be unknown if events are
allowed to run their course.

Maintenance
Maintenance of a BCP manual, on a biannual or annual cycle, is broken down into three periodic activities.
 Confirmation of information in the manual, roll out to staff for awareness and specific
training for critical individuals.
 Testing and verification of technical solutions established for recovery operations.
 Testing and verification of organization recovery procedures.
Issues found during the testing phase often must be reintroduced to the analysis phase.

Information/targets
The BCP manual must evolve with the organization. Activating the call tree verifies the
notification plan's efficiency as well as contact data accuracy. Like most business procedures,
business continuity planning has its own jargon. Organisation-wide understanding of business
continuity jargon is vital and glossaries are available. Types of organisational changes that should
be identified and updated in the manual include:
 Staffing
 Important clients
 Vendors/suppliers
 Organization structure changes
 Company investment portfolio and mission statement
 Communication and transportation infrastructure such as roads and bridges

Technical
Specialized technical resources must be maintained. Checks include:
 Virus definition distribution
 Application security and service patch distribution
 Hardware operability
 Application operability
 Data verification
 Data application

Testing and verification of recovery procedures


As work processes change, previous recovery procedures may no longer be suitable. Checks
include:
 Are all work processes for critical functions documented?
 Have the systems used for critical functions changed?
 Are the documented work checklists meaningful and accurate?

 Do the documented work process recovery tasks and supporting disaster recovery
infrastructure allow staff to recover within the predetermined recovery time objective?

 BCP scope, teams and roles


Scope of the Business Continuity Plan
a) Category I - Critical Functions
b) Category II - Essential Functions
c) Category III - Necessary Functions
d) Category IV - Desirable Functions

Team Descriptions
1. Business Continuity Management Team
a) Organization Support Teams
b) Damage Assessment/ Salvage Team
c) Transportation Team
d) Physical Security Team
e) Public Information Team
f) Insurance Team
g) Telecommunication Team

Roles of the teams

 Backup types and strategies


There are quite a number of backup types and terms used when it comes to backups of your
digital content. This is a compilation of the most common types of backup with a brief
explanation of their meaning, common examples, advantages and disadvantages of each backup
type.

Full Backup

A full backup is a method of backup where all the files and folders selected for the backup will be backed up. When subsequent backups are run, the entire list of files and folders will be backed up again. The advantage of this backup type is that restores are fast and easy, as the complete set of files is stored each time. The disadvantage is that each backup run is time consuming, as the entire set of files is copied again. Also, full backups take up a lot more storage space when compared to incremental or differential backups.

Incremental backup

An incremental backup is a backup of all changes made since the last backup. With incremental backups, one full backup is done first and subsequent backup runs are just the changes made since the last backup. The result is a much faster backup than a full backup for each backup run. Storage space used is much less than with a full backup and less than with differential backups. Restores are slower than with either a full backup or a differential backup.

Differential backup

A differential backup is a backup of all changes made since the last full backup. With differential backups, one full backup is done first and subsequent backup runs are the changes made since the last full backup. The result is a much faster backup than a full backup for each backup run. Storage space used is much less than with a full backup but more than with incremental backups. Restores are slower than with a full backup but usually faster than with incremental backups.

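The difference between incremental and differential backups comes down to the reference point used when selecting changed files. The sketch below illustrates this with file modification times; the directory name and timestamps are assumptions, and real backup tools also use catalogs, archive bits or snapshots.

# Sketch: select files for incremental vs. differential backups by modification time.
import os, time

def changed_since(root, reference_time):
    # Return the files under 'root' modified after 'reference_time'.
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > reference_time:
                selected.append(path)
    return selected

last_full_backup = time.time() - 7 * 24 * 3600    # assumed: full backup a week ago
last_any_backup = time.time() - 1 * 24 * 3600     # assumed: last backup yesterday

differential = changed_since("data", last_full_backup)   # changes since the last FULL backup
incremental = changed_since("data", last_any_backup)     # changes since the last backup of ANY type
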
Mirror Backup

Mirror backups are as the name suggests a mirror of the source being backed up. With mirror
backups, when a file in the source is deleted, that file is eventually also deleted in the mirror
backup. Because of this, mirror backups should be used with caution as a file that is deleted by
accident or through a virus may also cause the mirror backups to be deleted as well.

Full PC Backup or Full Computer Backup

In this backup, it is not individual files that are backed up but entire images of the computer's hard drives. With a full PC backup, you can restore the hard drives to their exact state at the time the backup was done. Not only can work documents, pictures, videos and audio files be restored, but the operating system, hardware drivers, system files, registry, programs, emails etc. can also be restored.

Local Backup

Local backups are any kind of backup where the storage medium is kept close at hand or in the
same building as the source. It could be a backup done on a second internal hard drive, an
attached external hard drive, CD/DVD-ROM or Network Attached Storage (NAS). Local
backups protect digital content from hard drive failures and virus attacks. They also provide
protection from accidental mistakes or deletes. Since the backups are always close at hand they
are fast and convenient to restore.

Offsite Backup

When the backup storage media is kept at a different geographic location from the source, this is
known as an offsite backup. The backup may be done locally at first but once the storage
medium is brought to another location, it becomes an offsite backup. Examples of offsite backup
include taking the backup media or hard drive home, to another office building or to a bank safe
deposit box.

Besides the same protection offered by local backups, offsite backups provide additional protection from theft, fire, floods and other natural disasters. Putting the backup media in the room next to the source would not be considered an offsite backup, as the backup does not offer protection from theft, fire, floods and other natural disasters.

Online Backup

These are backups that are ongoing or done continuously or frequently to a storage medium that
is always connected to the source being backed up. Typically the storage medium is located
offsite and connected to the backup source by a network or Internet connection. It does not
involve human intervention to plug in drives and storage media for backups to run. Many
commercial data centers now offer this as a subscription service to consumers. The storage data
centers are located away from the source being backed up and the data is sent from the source to
the storage data center securely over the Internet.

Remote Backup

Remote backups are a form of offsite backup with a difference being that you can access, restore
or administer the backups while located at your source location or other location. You do not
need to be physically present at the backup storage facility to access the backups. For example,
putting your backup hard drive at your bank safe deposit box would not be considered a remote
backup. You cannot administer it without making a trip to the bank. Online backups are usually
considered remote backups as well.

Cloud Backup

This term is often used interchangeably with Online Backup and Remote Backup. It is where
data is backed up to a service or storage facility connected over the Internet. With the proper
login credentials, that backup can then be accessed or restored from any other computer with
Internet Access.

FTP Backup

This is a kind of backup where the backup is done via FTP (File Transfer Protocol) over the
Internet to an FTP Server. Typically the FTP Server is located in a commercial data center away
from the source data being backed up. When the FTP server is located at a different location, this
is another form of offsite backup.

Backup Strategy

A backup strategy or backup policy is essentially a set of procedures that you prepare and
implement to protect your important digital content from hard drive failures, virus attacks and
other events or disasters.

Features of a Good Backup Strategy

The following are features to aim for when designing your backup strategy:

 Able to recover from data loss in all circumstances like hard drive failure, virus attacks,
theft, accidental deletes or data entry errors, sabotage, fire, flood, earthquakes and other
natural disasters.
 Able to recover to an earlier state if necessary, for example due to data entry errors or accidental deletes.
 Able to recover as quickly as possible with minimum effort, cost and data loss.
 Require minimum ongoing human interaction and maintenance after the initial setup.
Hence able to run automated or semi-automated.

Planning Your Backup Strategy

1. What To Backup

The first step in planning your backup strategy is identifying what needs to be backed up.
Identify the files and folders that you cannot afford to lose. This involves going through your
documents, databases, pictures, videos, music and program setup or installation files. Some of
these media like pictures and videos may be irreplaceable. Others like documents and databases
may be tedious or costly to recover from hard copies. These are the files and folders that need to
be in your backup plan.

2. Where to Backup to

This is another fundamental consideration in your backup plan. In light of some content being
irreplaceable, the backup strategy should protect against all events. Hence a good backup
strategy should employ a combination of local and offsite backups.

Local backups are needed due to their lower cost, allowing you to back up a huge amount of data. Local backups are also useful for their very fast restore speed, allowing you to get back online in minimal time. Offsite backups are needed for their wider scope of protection from major disasters
or catastrophes not covered by local backups.

3. When to Backup

Frequency: How often you back up your data is the next major consideration when planning your
backup policy. Some folders are fairly static and do not need to be backed up very often. Other
folders are frequently updated and should correspondingly have a higher backup frequency like
once a day or more.

Your decision regarding backup frequency should be based on a worst case scenario. For
example, if tragedy struck just before the next backup was scheduled to run, how much data
would you lose since the last backup? How long would it take and how much would it cost to re-key that lost data?

Backup Start Time: You would typically want to run your backups when there’s minimal usage
on the computers. Backups may consume some computer resources that may affect performance.
Also, files that are open or in use may not get backed up.

Scheduling backups to run after business hours is a good practice, provided the computer is left
on overnight. Backups will not normally run when the computer is in “sleep” or “hibernate
mode”. Some backup software will run immediately upon boot up if it missed a scheduled
backup the previous night.

So if the first hour on a business day morning is your busiest time, you would not want your
computer doing its backups then. If you always shut down or put your computer in sleep or
hibernate mode at the end of a work day, maybe your lunch time would be a better time to
schedule a backup. Just leave the computer on but logged-off when you go out for lunch.

Since servers are usually left running 24 hours, overnight backups for servers are a good choice.

4. Backup Types

Many backup software packages offer several backup types, such as full backup, incremental backup and differential backup. Each backup type has its own advantages and disadvantages. Full backups are useful for projects, databases or small websites where many different files (text, pictures, videos etc.) are needed to make up the entire project and you may want to keep different versions of the project.

5. Compression & Encryption

As part of your backup plan, you also need to decide if you want to apply any compression to
your backups. For example, when backing up to an online service, you may want to apply
compression to save on storage cost and upload bandwidth. You may also want to apply
compression when backing up to storage devices with limited space like USB thumb drives.

If you are backing up very private or sensitive data to an offsite service, some backup tools and
services also offer support for encryption. Encryption is a good way to protect your content
should it fall into malicious hands. When applying encryption, always ensure that you remember your encryption key. You will not be able to restore the data without your encryption key or passphrase.

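A minimal sketch of combining compression with encryption before sending a backup offsite is shown below; it uses Python's standard tarfile module plus the third-party cryptography package, and the directory and file names are illustrative assumptions.

# Sketch: compress a folder into an archive, then encrypt the archive.
import tarfile
from cryptography.fernet import Fernet

def make_archive(source_dir, archive_path):
    with tarfile.open(archive_path, "w:gz") as tar:   # gzip compression
        tar.add(source_dir)

def encrypt_file(path, key):
    with open(path, "rb") as src:
        token = Fernet(key).encrypt(src.read())
    with open(path + ".enc", "wb") as dst:
        dst.write(token)

key = Fernet.generate_key()          # keep this key safe: the backup cannot be
                                     # restored without it
make_archive("important_docs", "backup.tar.gz")       # assumed source folder
encrypt_file("backup.tar.gz", key)                    # ship backup.tar.gz.enc offsite
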
6. Testing Your Backup

A backup is only worth doing if it can be restored when you need it most. It is advisable to
periodically test your backup by attempting to restore it. Some backup utilities offer a validation
option for your backups. While this is a welcome feature, it is still a good idea to test your
backup with an actual restore once in a while.

7. Backup Utilities & Services

Simply copying and pasting files and folders to another drive would be considered a backup.
However the aim of a good backup plan is to set it up once and leave it to run on its own. You
would check up on it occasionally but the backup strategy should not depend on your ongoing
interaction for it to continue backing up. A good backup plan would incorporate the use of good
quality, proven backup software utilities and backup services.

 Hot and cold sites

Hot Site:

A hot site can be defined as a backup site which is up and running continuously. A hot site allows a company to continue normal business operations within a very short period of time after a disaster. A hot site can be configured in a branch office, a data center or even in the cloud, and it must be online and available immediately.

A hot site must be equipped with all the necessary hardware, software, network, and Internet connectivity. Data is regularly backed up or replicated to the hot site so that it can be made fully operational in a minimal amount of time in the event of a disaster at the original site. The hot site must be located far away from the original site, in order to prevent the disaster from affecting the hot site as well.

Hot sites are essentially mirrors of your datacenter infrastructure. The backup site is populated
with servers, cooling, power, and office space (if applicable). The most important feature offered
from a hot site is that the production environment(s) are running concurrently with your main
datacenter. This syncing allows for minimal impact and downtime to business operations. In the
event of a significant outage event to your main datacenter, the hot site can take the place of the
impacted site immediately. However, this level of redundancy does not come cheap, and
businesses will have to weigh the cost-benefit-analysis (CBA) of hot site utilization.

Warm Site:

A warm site is another type of backup site, but it is not as fully equipped as a hot site. A warm site is configured with power, phone, network and so on, and may have servers and other resources, but it is not ready for an immediate switch-over. The time to switch over from the disaster-affected site to a warm site is longer than for a hot site, but the lower cost is the attraction.

A warm site is the middle ground of the two disaster recovery options. Warm sites offer office
space/datacenter space and will have some pre-installed server hardware. The difference between
a hot site and a warm site is that while the hot site provides a mirror of the production data-center
and its environment(s), a warm site will contain only servers ready for the installation of
production environments. Warm sites make sense for aspect of the business which is not critical,
but requires a level of redundancy (ex. Administrative roles). A CBA conducted on whether to
use a warm site versus a hot site should include the downtime associated with the software-
loading/configuration requirements for engineering.

Unplanned outages can severely threaten a business's ability to generate revenue and to service clients.
A disaster recovery site can help mitigate the impact of those outages on production systems.
Business owners need only add this detail to their disaster recovery plans to ensure collective
peace-of-mind in the event of an emergency.

Cold Site:

A cold site contains even fewer facilities than a warm site. A cold site will take more time than a warm site or hot site to switch operations to, but it is the cheapest option. A cold site may contain tables, chairs, bathrooms, and basic technical facilities, but it will require days or even weeks to set up properly and begin operations.

A cold site is essentially office or datacenter space without any server-related equipment
installed. The cold site provides power, cooling, and/or office space which waits in the event of a
significant outage to the main work site or datacenter. The cold site will require extensive
support from engineering and IT personnel to get all necessary servers and equipment migrated
and functional. Cold sites are the cheapest recovery option for businesses to utilize.

 Disaster recovery plans


Businesses use information technology to quickly and effectively process information.
Employees use electronic mail and Voice Over Internet Protocol (VOIP) telephone systems to
communicate. Electronic data interchange (EDI) is used to transmit data including orders and
payments from one company to another. Servers process information and store large amounts of
data. Desktop computers, laptops and wireless devices are used by employees to create, process,
manage and communicate information. What do you do when your information technology stops working?

An information technology disaster recovery plan (IT DRP) should be developed in conjunction
with the business continuity plan. Priorities and recovery time objectives for information
technology should be developed during the business impact analysis. Technology recovery
strategies should be developed to restore hardware, applications and data in time to meet the
needs of the business recovery.

Businesses large and small create and manage large volumes of electronic information or data.
Much of that data is important. Some data is vital to the survival and continued operation of the
business. The impact of data loss or corruption from hardware failure, human error, hacking or
malware could be significant. A plan for data backup and restoration of electronic information is
essential.

Recovery strategies

Recovery strategies should be developed for Information technology (IT) systems, applications
and data. This includes networks, servers, desktops, laptops, wireless devices, data and
connectivity. Priorities for IT recovery should be consistent with the priorities for recovery of
business functions and processes that were developed during the business impact analysis. IT
resources required to support time-sensitive business functions and processes should also be
identified. The recovery time for an IT resource should match the recovery time objective for the
business function or process that depends on the IT resource.

Information technology systems require hardware, software, data and connectivity. Without one
component of the “system,” the system may not run. Therefore, recovery strategies should be
developed to anticipate the loss of one or more of the following system components:

 Computer room environment (secure computer room with climate control, conditioned
and backup power supply, etc.)
 Hardware (networks, servers, desktop and laptop computers, wireless devices and
peripherals)
 Connectivity to a service provider (fiber, cable, wireless, etc.)
 Software applications (electronic data interchange, electronic mail, enterprise resource
management, office productivity, etc.)
 Data and restoration

Some business applications cannot tolerate any downtime. They utilize dual data centers capable
of handling all data processing needs, which run in parallel with data mirrored or synchronized
between the two centers. This is a very expensive solution that only larger companies can afford.
However, there are other solutions available for small to medium sized businesses with critical
business applications and data to protect.

Internal Recovery Strategies

Many businesses have access to more than one facility. Hardware at an alternate facility can be
configured to run similar hardware and software applications when needed. Assuming data is
backed up off-site or data is mirrored between the two sites, data can be restored at the alternate
site and processing can continue.

Vendor Supported Recovery Strategies

There are vendors that can provide “hot sites” for IT disaster recovery. These sites are fully
configured data centers with commonly used hardware and software products. Subscribers may
provide unique equipment or software either at the time of disaster or store it at the hot site ready
for use.

Data streams, data security services and applications can be hosted and managed by vendors.
This information can be accessed at the primary business site or any alternate site using a web
browser. If an outage is detected at the client site by the vendor, the vendor automatically holds
data until the client’s system is restored. These vendors can also provide data filtering and
detection of malware threats, which enhance cyber security.



Developing an IT Disaster Recovery Plan

Businesses should develop an IT disaster recovery plan. It begins by compiling an inventory of
hardware (e.g. servers, desktops, laptops and wireless devices), software applications and data.
The plan should include a strategy to ensure that all critical information is backed up.

Identify critical software applications and data and the hardware required to run them. Using
standardized hardware will help to replicate and reimage new hardware. Ensure that copies of
program software are available to enable re-installation on replacement equipment. Prioritize
hardware and software restoration.

Document the IT disaster recovery plan as part of the business continuity plan. Test the plan
periodically to make sure that it works.



TOPIC 9

SYSTEM SECURITY POLICY IMPLEMENTATION

 Components of systems security policy

Security Basics - Components of Security Policies


Policies are the heart of a security program. They are management's statement of support and
expected outcomes from security controls. This section examines the various components of
a policy.
Components of a Security Policy
Policies form the basic framework of a security program. At the program level, policies represent
senior management's security objectives. At the system level, they provide rules for the
construction and operation of specific systems. Whether program or system specific, policies
help prevent inconsistencies by forming the basis for detailed standards, guidelines, and
procedures. They also serve as tools to inform employees about appropriate activities and
restrictions required for regulatory compliance. Finally, policies make clear management's
expectations of employee involvement in protecting information assets.
When building a policy, make sure it's clear and flexible. It shouldn't provide so much detail that
it forces unreasonable constraints on operational areas of your business. Leave room to make
management decisions that fit particular challenges as they arise.

Program policies establish the security program. They provide its form and character. The
sections that make up a program policy include purpose, scope, responsibilities, and compliance.
Following are the basic components of a security policy:
 Purpose includes the objectives of the program, such as:
 Improved recovery times
 Reduced costs or downtime due to loss of data
 Reduction in errors for both system changes and operational activities
 Regulatory compliance
 Management of overall confidentiality, integrity, and availability
 Scope provides guidance on whom and what are covered by the policy. Coverage may
include:
 Facilities
 Lines of business
 Employees or departments
 Technology
 Processes
 Responsibilities for the implementation and management of the policy are assigned in this
section. Organizational units or individuals are potential assignment candidates.



 Compliance provides for the policy's enforcement. Describe oversight activities and
disciplinary considerations clearly. But the contents of this section are meaningless unless an
effective awareness program is in place.
System-specific policies provide the framework for system- and issue-specific security programs.
Like program policies, system policies should be flexible enough to allow managers to make
effective operational decisions while safeguarding the confidentiality, integrity, and availability
of information assets. System policies typically address two areas: security objectives and
operational security standards.
Policies that describe security objectives clearly define measurable, achievable goals. These
goals focus on data owner directives intended to protect specific systems. The policies are
written to take into account the system's functional requirements as seen by business users.
Because policies apply constraints on how a system or a technology may be deployed and used,
there's always a danger that meeting security objectives may adversely impact operational
efficiency. It's important to balance reduction in risk with the cost associated with potential
losses in productivity.
Operational security standards provide a clear set of rules for operating and managing a system
or a technology. As with system policy objectives, these rules shouldn't be so restrictive that they
paralyze your organization. In addition, the administrative burden associated with managing and
enforcing overly restrictive policies may cost your organization more than the business impact
you're trying to protect against. The elements of a system/issue specific policy include purpose,
objectives, scope, roles and responsibilities, compliance, and policy owner and contact
information.
 Purpose defines the challenge management is addressing. Challenges might include
regulatory constraints, protection of highly sensitive data, or the safe use of certain
technologies. In some cases, it may be necessary to define terms. It's important that everyone
affected by the policy clearly understands its content. Finally, clearly state the conditions
under which the policy is applicable.
 Objectives may include actions and configurations prohibited or controlled. Although they're
normally defined outside a policy, circumstances and organizational practices may require
placing certain standards and guidelines in this section. In any case, it's in this section that
you define the results you expect from policy enforcement.
 Scope specifies where, when, how, and to whom the policy applies.
 Roles and Responsibilities identify the business units or individuals responsible for the
various areas of implementation and enforcement of the policy.
 Compliance is just as important in a system or issue level policy as it is in a program policy.
You should clearly state the possible consequences of not conforming to the standards and
guidelines listed in Objectives.
 Policy Owner and Contact Information lists the person who is ultimately responsible for
managing the policy. Since the data owner is responsible for defining the protection required
for a specific system, she may be a good choice for policy owner. Ensure that contact
information for the policy owner is kept up to date. This allows individuals responsible for
implementing systems under the policy to contact the policy owner for clarification on
standards and guidelines.



The final step in the construction of a policy is approval by senior management. Without their
approval and support, a policy isn't worth very much. One way to ensure management support is
to involve relevant areas of the business in the construction of each policy. This helps prevent the
perception that information security policies, and information security in general, are an IS
problem. It also nurtures a feeling of ownership across the organization. Managers are more
willing to support operational restrictions that result in clear business value they helped define.
Although you can start with a blank sheet, I recommend you look at some example policies. A
good place to start is the SANS Security Policy Project page.

 Policy Implementation
After gaining management support and sign off, implementation planning begins. The roll out of
a new policy includes the following activities:
1. Ensure everyone is aware of the new policy. Post it on your Intranet, send notification email,
or perform whatever other mass distribution actions work well within your organization.
2. Discuss the content of the policy at management and staff meetings. It's important during
these discussions to include a review of the intended results of following the policy. This
helps your organization's employees see the standards and guidelines from the proper
perspective.
3. Conduct training sessions. Training should occur at three levels - management, general staff,
and technical staff.
 Management training is intended to educate managers about their role in enforcement
and compliance activities. It should include a "big picture" view of where the policy fits
in the overall security program.
 General staff training is provided to all staff levels in the organization. In addition to
making employees aware of the contents of the policy, it should also address any
questions about how the objectives, standards, and guidelines will impact day to day
operation of the business. Staff training should always precede any attempts to sanction
an employee for failure to follow a security policy.
 Technical staff training is typically provided for the IS staff. The focus of this training is
how the new policy affects existing system or network configurations and baselines.
4. Development of supporting standards, guidelines, procedures and baselines
5. Implement a user awareness program

What are the Components of a Security Policy?


A key point to consider is to develop a security policy that is flexible and adaptable as
technology changes. Additionally, a security policy should be a living document routinely
updated as new technology and procedures are established to support the mission of the
organization.

The components of a security policy will change by organization based on size, services offered,
technology, and available revenue. Here are some of the typical elements included in a security
policy.

Security Definition – All security policies should include a well-defined security vision for the
organization. The security vision should be clear and concise and convey to readers the intent
of the policy. For example:



“This security policy is intended to ensure the confidentiality, integrity, and availability of data
and resources through the use of effective and established IT security processes and procedures.”
Further, the definition section should address why the security policy is being implemented and
what the corresponding mission will entail. This is where you tie the policy to the mission and
the business rules of the organization.

Enforcement – This section should clearly identify how the policy will be enforced and how
security breaches and/or misconduct will be handled.
The Chief Information Officer (CIO) and the Information Systems Security Officer (ISSO)
typically have the primary responsibility for implementing the policy and ensuring compliance.
However, you should have a member of senior management, preferably the top official,
implement and embrace the policy. This gives you the enforcement clout and much needed ‘buy-
in’.
This section may also include procedures for requesting short-term exceptions to the policy. All
exceptions to the policy should be reviewed and approved, or denied, by the Security Officer.
Senior management should not be given the flexibility to overrule decisions. Otherwise, your
security program will be full of exceptions that will lend themselves toward failure.

User Access to Computer Resources - This section should identify the roles and
responsibilities of users accessing resources on the organization’s network. This should include
information such as:
· Procedures for obtaining network access and resource level permission;
· Policies prohibiting personal use of organizational computer systems;
· Passwords;
· Procedures for using removable media devices;
· Procedures for identifying applicable e-mail standards of conduct;
· Specifications for both acceptable and prohibited Internet usage;
· Guidelines for applications;
· Restrictions on installing applications and hardware;
· Procedures for Remote Access;
· Guidelines for use of personal machines to access resources (remote access);
· Procedures for account termination;
· Procedures for routine auditing;
· Procedures for threat notification; and
· Security awareness training;

Depending on the size of an organization’s network, a more detailed listing may be required for
the connected Wide Area Networks (WAN), other Local Area Networks (LAN), Extranets, and
Virtual Private Networks (VPN). Some organizations may require that other connected (via
LAN, WAN, VPN) or trusted agencies meet the terms and conditions identified in the
organization’s security policy before they are granted access. This is done for the simple reason
that your security policy is only as good as the weakest link. For example, if Company ‘A’ has a
rigid security policy and Company ‘B’ has a substandard policy and wants to partner with
Company ‘A’, Company ‘B’ may request to have a network connection to Company ‘A’ (behind
the firewall). If Company ‘A’ allows this without validating Company ‘B’s’ security policy, then
Company ‘A’ can now be compromised by exploits launched from Company ‘B’. When
developing a security policy, one should take situations such as this very seriously and develop
standards that must be met in order for other organizations to be granted access. One method is
to require the requesting organization to meet, at a minimum, your policy and guidelines.

Security Profiles - A good security policy should also include information that identifies how
security profiles will be applied uniformly across common devices (e.g., servers, workstations,
routers, switches, firewalls, proxy servers, etc.). The policy should reference applicable standards
and procedures for locking down devices. Those standards may include security checklists to
follow when adding and/or reconfiguring devices.
New devices come shipped with default configurations for ease of deployment, which also
ensures compatibility with most architectures. This is very convenient for the vendor, but a
nightmare for security professionals. An assessment needs to be completed to determine what
services are necessary on which devices to meet the organizational needs and requirements. All
other services should be turned off and/or removed and documented in the corresponding
standard operating procedure.
For example, if your agency does not have a need to host Internet or Intranet based applications
then do not install Microsoft IIS. If you have a need to host HTML services, but do not have a
requirement for allowing FTP, then disable it.

Passwords - Passwords are a critical element in protecting the infrastructure.


Remember, your security policy is only as good as the weakest link. If you have weak passwords
then you are at a higher risk for compromise not only by external threats, but also from insiders.
If a password is compromised through social engineering or password cracking techniques, an
intruder now has access to your resources. The result is that you have just lost confidentiality,
and possibly the integrity of the data, and the availability of the data may already be
compromised or under attack.
The policy should clearly state the requirements imposed on users for passwords.
Passwords should not be any of the following:
 The same as the username;
 The word ‘password’;
 Any personal information that a hacker may be able to obtain (e.g., street address, social
security number, names of children, parents, cars, boats, etc.);
 A dictionary word; or
 Telephone number
These are some examples of passwords not to use. Through automated password policy
techniques, you should require a minimum of eight characters, a combination of symbols,
alphabetic characters, and numerals, and a mixture of uppercase and lowercase. Users should
be required to change their password at least quarterly. Previous passwords should not be
authorized. Lastly, an account lockout policy should be implemented after a predetermined
number of unsuccessful logon attempts.
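As one possible illustration, the Python sketch below checks a candidate password against the minimum rules just described (eight characters, mixed case, numerals and symbols, not the username or the word ‘password’). The exact rules and thresholds are assumptions that any organization would tailor to its own policy.

import re

MIN_LENGTH = 8  # minimum length suggested above

def meets_password_policy(password: str, username: str) -> bool:
    """Return True only if the password satisfies the complexity rules sketched above."""
    if password.lower() in (username.lower(), "password"):
        return False
    if len(password) < MIN_LENGTH:
        return False
    required_patterns = [r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"]  # lower, upper, digit, symbol
    return all(re.search(pattern, password) for pattern in required_patterns)

# Example: meets_password_policy("S3cure!pass", "jdoe") returns True,
# while meets_password_policy("password", "jdoe") returns False.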

Another tip to consider is that you should be logging all successful and failed logon attempts. A
hacker may be trying several accounts to log on to your network. If you see several ‘failed’ logon
attempts in a row and then no activity, does this mean the hacker gave up, or did he
“successfully” log on?
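A small log-analysis sketch along these lines is shown below: it flags accounts where a run of failed logons is followed by a success. The event format and the threshold of five failures are assumptions for illustration only.

from collections import defaultdict

def suspicious_accounts(events, threshold=5):
    """Flag accounts whose successful logon was preceded by 'threshold' or more failures.
    'events' is an ordered list of (account, outcome) pairs, outcome being "success" or "failure"."""
    failure_streak = defaultdict(int)
    flagged = set()
    for account, outcome in events:
        if outcome == "failure":
            failure_streak[account] += 1
        else:
            if failure_streak[account] >= threshold:
                flagged.add(account)
            failure_streak[account] = 0
    return flagged

# Example: suspicious_accounts([("admin", "failure")] * 6 + [("admin", "success")])
# returns {"admin"}.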

E-mail – An email usage policy is a must. Several viruses, Trojans, and malware use email as
the vehicle to propagate themselves throughout the Internet. A few of the more recent worms
were Code Red, Nimda, and Goner. These types of exploits prey on the unsuspecting user to
double click on the attachment, thereby infecting the machine and launching propagation
throughout the entire network.
This could cause several hours and/or days of downtime while remedial efforts are taken.
A couple of things you may want to address in your policy are content filtering of email
messages. Filtering out attachments with extensions such as *.exe, *.scr, *.bat, *.com, and *.inf
will enhance your prevention efforts. Also, personal use of the email system should be
prohibited. Email messages can and have been used in litigation (the Microsoft anti-trust case).
This includes all email messages, both personal and business. Additionally, some institutions
archive email messages indefinitely (the Federal Government, for example). Those messages are
subject to Freedom of Information Act (FOIA) requirements. Just think how embarrassing it
would be if several email messages with vulgar content were released to a law firm or the media.
This could result in significant negative publicity for your organization.
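A content filter of the kind described above can be as simple as an extension block list. The Python sketch below illustrates the idea; the blocked extensions mirror the examples given in this section, and in practice such filtering is usually enforced by the mail gateway rather than custom code.

import os

BLOCKED_EXTENSIONS = {".exe", ".scr", ".bat", ".com", ".inf"}  # examples from the policy above

def is_blocked_attachment(filename: str) -> bool:
    """Return True if the attachment's extension appears on the block list."""
    _, extension = os.path.splitext(filename.lower())
    return extension in BLOCKED_EXTENSIONS

# Example: is_blocked_attachment("invoice.exe") returns True;
#          is_blocked_attachment("report.pdf") returns False.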

Internet – The World Wide Web was the greatest invention, but the worst nightmare from a
security standpoint. The Internet is the pathway in which vulnerabilities are manifested. The
black-hat community typically launches their ‘zero day’ and old exploits on the Internet via IRC
chat rooms, through Instant Messengers, and free Internet email providers (Hotmail, yahoo, etc.).
Therefore, the Internet usage policy should restrict access to these types of sites and should
clearly identify what, if any, personal use is authorized. Moreover, software should be employed
to filter out many of the forbidden sites, including pornography, chat rooms, free web-based
email services (Hotmail, Yahoo, etc.), personals, etc. There are several Internet content filtering
applications available that maintain a comprehensive database of forbidden URLs.
Back-up and Recovery – A comprehensive back-up and recovery plan is critical to mitigating
incidents. You never know when a natural or other disaster may occur. For example, take the
9/11 incident. What would have happened if there were no off-site storage locations for the
companies in the World Trade Center?

Answer: All data would have been permanently lost! Back-ups are your key to the past.
Organizations must have effective back-up and recovery plans that are established through a
comprehensive risk assessment of all systems on the network. Your back-up procedures may be
different for a number of systems on your network. For example, your budget and payroll system
will have different back-up requirements than a miscellaneous file server.
You may be required to restore from a tape back-up if the system crashes, you get hacked, you
upgrade hardware, or files get inadvertently deleted. You should be prepared. Your back-up
and recovery policy (a separate document) should stand on its own, but be reflected in the
security policy. At a minimum, your back-up recovery plan should include:
· Back-up schedules;
· Identification of the type of tape back-up (full, differential, etc.)
· The type of equipment used;
· Tape storage location (on and off-site);
· Tape labeling convention;
· Tape rotation procedures;
· Testing restorations; and
· Checking log files.
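To illustrate how a back-up schedule might encode the back-up type, here is a minimal Python sketch assuming a simple weekly rotation (a full back-up on Sunday, differentials on other days). Real schedules should come out of the risk assessment described above.

import datetime

def backup_type_for(day: datetime.date) -> str:
    """Return the back-up type for a given day under a simple weekly rotation."""
    return "full" if day.weekday() == 6 else "differential"  # weekday() == 6 is Sunday

# Example: backup_type_for(datetime.date(2024, 1, 7)) returns "full" (a Sunday),
# while backup_type_for(datetime.date(2024, 1, 8)) returns "differential".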



Intrusion Detection – A Network Intrusion Detection System (NIDS) is a system that is
responsible for detecting anomalous, inappropriate, or other data that may be considered
unauthorized occurring on a network. Unlike a firewall, an NIDS captures and inspects all traffic,
regardless of whether it is permitted or not. Based on the contents, at either the IP or application
level, an alert is generated.

Intrusion detection tools help in the detection and mitigation of access attempts into your
network. You need to make the decision through the risk assessment process of whether to
implement network or host based NIDS or a combination of both. Additional standard operating
procedures should be derived from the policy to specifically address intrusion detection
processes and procedures. Following are some examples of NIDS systems:
· ISS - (http://www.iss.com)
· Cisco - (http://www.cisco.com/warp/public/cc/pd/sqsw/sqidsz/)
· Snort - (http://www.linuxsecurity.com/feature_stories/usingsnort.html)
· Zone Alarm – (http://www.zonealram.com)
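The core idea behind signature-based detection can be illustrated in a few lines of Python. Real engines such as Snort use far richer rule languages, so treat this only as a sketch with made-up signatures.

SIGNATURES = {                       # illustrative patterns, not production rules
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../",
}

def match_signatures(payload: str):
    """Return the names of any known signatures found in a captured payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# Example: match_signatures("GET /index.php?id=1' OR 1=1 --") returns ["sql_injection"].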

Remote Access - Dial-up access to your network will represent one of your greatest risks. Your
policy should identify the procedures that one must follow in order to be granted dial-up access.
You also need to address whether or not personal machines will be allowed to access your
organization’s resources.

The whole issue of remote access causes heartburn for security officials. You can lock down
your perimeter, but all it takes is one remote access client, dialing into the network (behind the
firewall), that has been compromised while surfing the Internet, with a Trojan ready and willing
to start looking for other unsuspecting prey. The next thing you know, your network has been
compromised.
Following are some examples to include in your policy:
 Install and configure a personal firewall on remote client machines (examples: Norton or
BlackIce Defender);
 Ensure antivirus software, service packs and security patches are maintained and up-to-date;
 Ensure modems are configured to not auto answer;
 Ensure file sharing is disabled;
 If not using token or PKI certificates, then username and password should be encrypted;
 If possible push policies from server to client machines; and
 Prohibit the use of organizational machines from being configured to access personal Internet
Service Provider accounts.

Auditing - All security programs should be audited on a routine and random basis to assess their
effectiveness. The security officer must be given the authority, in writing, by the head of the
organization to conduct audits of the program. If not, he or she could be subject to legal action
for malicious conduct. Random and scheduled audits should be conducted and may include:
 Password auditing using password cracking utilities such as LC3 (Windows) and
PWDump (Unix and Windows);
 Auditing user accounts database for active old accounts (persons who left the agency)



 Penetration testing to check for vulnerabilities using technical assessment tools such as
ISS and Nessus;
 Social Engineering techniques to determine if you can get a username or password from a
staff member;
 Simulate (off hours) network failure and evaluate your incident response team’s
performance and readiness;
 Test your back-up recovery procedures;
 Use Tripwire or similar product to monitor your critical binary files;
 Configure your server OS to audit all events and monitor the logs several times a day for
suspicious activity;
 Use a port scanner (Nmap, Nessus, etc.) within your network to determine if your system
administrators catch the traffic and take appropriate action.
These are just a few examples of the things to audit. The extent of your auditing will depend on
the level of your security program.
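As a simple example of the port-scanning audit mentioned above, the Python sketch below attempts TCP connections to a few ports using only the standard library. It should only ever be run against hosts you are explicitly authorized to audit.

import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of 'ports' on 'host' that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Example: scan_ports("127.0.0.1", [22, 80, 443, 3389])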

Awareness Training - Security Awareness training for organizational staff must be performed
to ensure a successful program. Training should be provided at different levels for staff,
executives, system administrators, and security officers.
Additionally, staff should be retrained on a periodic basis (e.g., every two years).
A process should be in place for training newly hired staff within a certain time period. Staff
completing training should be required to sign a written certification statement. This signed
statement helps the security officer and management enforce the organization’s security policies.
Trained staff can help alleviate some of the security burden from security officers.
Trained staff can and often do provide advance notification of suspicious events encountered on
their machines, which could prevent a worm or other Trojan from propagating throughout the
entire network.

KEY ELEMENTS OF AN INFORMATION SECURITY POLICY

1. Definition & Intro

An Information Security Policy (ISP) is a set of rules enacted by an organization to ensure that
all users and networks of the IT structure within the organization’s domain abide by the
prescriptions regarding the security of data stored digitally within the boundaries over which
the organization stretches its authority.

An ISP governs the protection of information, which is one of the many assets a corporation
needs to protect. This section discusses some of the most important aspects a person should take
into account when contemplating the development of an ISP. Following the logic of
rationalization, one could say that a policy can be as broad as its creators want it to be: basically,
everything from A to Z in terms of IT security, and even more. For that reason, the emphasis
here is placed on a few key elements, but you should make a mental note of the liberty of
thought organizations have when they forge their own guidelines.

2 Elements of Information Security Policy



2.1 Purpose

Institutions create ISPs for a variety of reasons:

 To establish a general approach to information security


 To detect and forestall the compromise of information security such as misuse of data,
networks, computer systems and applications.
 To protect the reputation of the company with respect to its ethical and legal
responsibilities.
 To observe the rights of the customers; providing effective mechanisms for responding to
complaints and queries concerning real or perceived non-compliances with the policy is
one way to achieve this objective.

2.2 Scope

ISP should address all data, programs, systems, facilities, other tech infrastructure, users of
technology and third parties in a given organization, without exception.

2.3 Information security objectives

An organization that strives to compose a working ISP needs to have well-defined objectives
concerning security and a strategy on which management has reached an agreement. Any
existing dissonance in this context may render the information security policy project
dysfunctional. The most important thing a security professional should remember is that
knowledge of security management practices allows him or her to incorporate them into the
documents he or she is entrusted to draft, and that is a guarantee for completeness, quality and
workability.

Simplification of policy language is one thing that may smooth away the differences and
guarantee consensus among management staff. Consequently, ambiguous expressions are to be
avoided. Beware also of the correct meaning of terms and common words. For instance, “musts”
express non-negotiability, whereas “shoulds” denote a certain level of discretion. Ideally, the
policy should be brief and to the point. Redundancy in the policy’s wording (e.g., pointless
repetition in writing) should be avoided as well, since it makes documents long-winded and out
of sync, with an illegibility that encumbers evolution. In the end, too much detail may impede
complete compliance at the policy level.

Establishing how management views IT security is thus one of the first steps when a person
intends to enforce new rules in this area. Furthermore, a security professional should make sure
that the ISP carries the same institutional weight as other policies enacted within the corporation.
In cases where an organization has a sizeable structure, policies may differ and therefore be
segregated in order to define the dealings in the intended subset of the organization.

Information security is deemed to safeguard three main objectives:

 Confidentiality – data and information assets must be confined to people authorized to
access them and not be disclosed to others;
 Integrity – keeping the data intact, complete and accurate, and IT systems operational;
 Availability – an objective indicating that information or a system is at the disposal of
authorized users when needed.

Donn Parker, one of the pioneers in the field of IT security, expanded this threefold paradigm by
suggesting also “authenticity” and “utility”.

2.4 Authority & Access Control Policy

Typically, a security policy has a hierarchical pattern. This means that junior staff are usually
bound not to share the small amount of information they have unless explicitly authorized.
Conversely, a senior manager may have enough authority to decide what data can be shared and
with whom, which means that they are not tied down by the same information security policy
terms. So logic demands that the ISP should address every basic position in the organization
with specifications that clarify their authoritative status.



Policy refinement takes place simultaneously with defining the administrative control, or
authority, that people in the organization have. In essence, it is a hierarchy-based delegation of
control in which one may have authority over his own work, a project manager has authority
over the project files belonging to the group he is appointed to, and the system administrator has
authority solely over system files – a structure reminiscent of the separation of powers doctrine.
Obviously, a user may have the “need-to-know” for a particular type of information. Therefore,
data must have a sufficient granularity attribute to allow the appropriate authorized access. This
is the thin line of finding the delicate balance between permitting access to those who need to
use the data as part of their job and denying such access to unauthorized entities.

Access to the company’s network and servers, whether or not in the physical sense of the word,
should be via unique logins that require authentication in the form of passwords, biometrics, ID
cards, tokens, etc. Monitoring on all systems must be implemented to record logon attempts
(both successes and failures) and the exact date and time of logon and logoff.
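A minimal sketch of this hierarchy-based, need-to-know control is shown below. The role names and the permission map are illustrative assumptions, not a prescribed model.

# Need-to-know access check: a role may only reach resources in its permission set.
PERMISSIONS = {
    "staff": {"own_files"},
    "project_manager": {"own_files", "project_files"},
    "system_administrator": {"own_files", "system_files"},
}

def is_authorized(role: str, resource: str) -> bool:
    """Grant access only if the role's need-to-know set includes the resource."""
    return resource in PERMISSIONS.get(role, set())

# Example: is_authorized("project_manager", "project_files") returns True,
# while is_authorized("project_manager", "system_files") returns False.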



Speaking of evolution in the previous point – as the IT security program matures, the policy may
need updating. While doing so will not necessarily be tantamount to improvement in security, it
is nevertheless a sensible recommendation.

2.5 Classification of Data

Data can have different values. Gradations in the value index may impose separation and
specific handling regimes/procedures for each kind. An information classification system may
therefore succeed in focusing protection on data that has significant importance for the
organization, and leave out insignificant information that would otherwise overburden the
organization’s resources. A data classification policy may arrange the entire set of information
as follows:

1. High Risk Class– data protected by state and federal legislation (the Data Protection Act,
HIPAA, FERPA) as well as financial, payroll, and personnel (privacy requirements) are
included here.
2. Confidential Class – the data in this class does not enjoy the privilege of being under the
wing of law, but the data owner judges that it should be protected against unauthorized
disclosure.
3. Class Public – This information can be freely distributed.

Data owners should determine both the data classification and the exact measures a data
custodian needs to take to preserve the integrity in accordance with that level.
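The three-level scheme above could be expressed as a simple lookup that a data custodian consults for handling rules. The rules shown here are assumptions that a real data owner would define.

CLASSIFICATION_RULES = {
    "high_risk":    {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "public_release": False},
    "confidential": {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "public_release": False},
    "public":       {"encrypt_at_rest": False, "encrypt_in_transit": False, "public_release": True},
}

def handling_rules(classification: str) -> dict:
    """Return the handling regime a data custodian must apply for a given class."""
    return CLASSIFICATION_RULES[classification]

# Example: handling_rules("high_risk")["encrypt_at_rest"] returns True.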



2.6 Data Support & Operations

This part contains clauses that stipulate:

 The regulation of general system mechanisms responsible for data protection
 The data backup
 Movement of data


2.7 Security Awareness Sessions

Sharing IT security policies with staff is a critical step. Making them read and sign to
acknowledge a document does not necessarily mean that they are familiar with and understand
the new policies. A training session would engage employees in positive attitude to information
security, which will ensure that they get a notion of the procedures and mechanisms in place to
protect the data, for instance, levels of confidentiality and data sensitivity issues. Such an
awareness training should touch on a broad scope of vital topics: how to collect/use/delete data,
maintain data quality, records management, confidentiality, privacy, appropriate utilization of IT
systems, correct usage of social networking, etc. A small test at the end is perhaps a good idea.

2.8 Responsibilities, Rights and Duties of Personnel

General considerations in this direction lean towards responsibility of persons appointed to carry
out the implementation, education, incident response, user access reviews, and periodic updates
of an ISP.

Prevention of theft of information, know-how and industrial secrets that could benefit
competitors is among the most cited reasons why a business may want to employ an ISP to
defend its digital assets and intellectual property rights.



2.10 Other Items that an ISP May Include:

Virus Protection Procedure, Intrusion Detection Procedure, Remote Work Procedure, Technical
Guidelines, Audit, Employee Requirements, Consequences for Non-compliance, Disciplinary
Actions, Terminated Employees, Physical Security of IT, References to Supporting Documents
and so on.

Conclusion. Importance of ISP

Mostly out of carelessness, many organizations, without giving it much thought, choose to
download IT policy samples from a website and copy/paste this ready-made material in an
attempt to readjust their objectives and policy goals to a mould that is usually crude and offers
overly broad-spectrum protection. Understandably, if the fit is not quite right, the dress will
eventually slip off.

A high-grade ISP can make the difference between a growing business and a successful one.
Improved efficiency, increased productivity, clarity of the objectives each entity has,
understanding what IT and data should be secured and why, identifying the type and levels of
security required and defining the applicable information security best practices are reasons
enough to back up this statement. To conclude in simple terms: if you want to lead a prosperous
company in today’s digital era, you certainly need to have a good information security policy.

 Systems security policy development

An IT security policy should:

1. Protect people and information


2. Set the rules for expected behavior by users, system administrators, management, and
security personnel
3. Authorize security personnel to monitor, probe, and investigate
4. Define and authorize the consequences of violations
5. Define the company consensus baseline stance on security
6. Help minimize risk
7. Help track compliance with regulations and legislation
8. Ensure the confidentiality, integrity and availability of their data
9. Provide a framework within which employees can work, serve as a reference for best
practices, and ensure that users comply with legal requirements

Development of organizational measures

1.1. Development of strategy

The successful development of any company depends on correctly formulated strategic goals
and the methods of reaching them. It is customary to assume that financial indicators, for
example the achievement of a specific market share or an increase of profit by some value, can
serve as such goals. However, as practice shows, very few business leaders and owners give
enough attention to long-term planning in the field of Information Security. Contemporary
business conditions show that in the absence of a clearly formulated Information Security
strategy, such financial targets can be unattainable.

1.2. Development of Information Security policy

Problems
The development and growth of enterprises are tightly connected with the growth of the
company's IT infrastructure, whose complexity and scale are constantly increasing, generating
new forms of threats, vulnerabilities and risks that influence the activity of the organization.

The appearance of Information Security problems leads to both financial and reputational
losses. An important task of management is to avoid these threats, to minimize risks and to
ensure the proper level of IT infrastructure safety.

On Information Security policy


The Information Security Policy is a high-level document that includes principles and rules,
determines and limits specific forms of activity by the organization and the participants in its
IT infrastructure, and is directed toward the protection of the company's information resources.

As is known, strategic planning makes it possible to determine the basic directions of
organizational activity, connecting marketing, production and finance together. A long-term
strategic plan makes it possible to build all of the company's business processes, taking into
account micro and macro levels, in order to achieve the best financial indicators and rates. An
important component of strategic planning is the Information Security Policy, which must be a
cornerstone in determining the intermediate-term and long-term objectives and tasks of the
organization. The policy must also be re-examined as the company grows and plans are revised.
The low-level Information Security documents must be re-examined in accordance with the
realization of short-term plans.

The Information Security Policy is inseparably connected with the development of the company
and its strategic planning; it determines the general principles and order of providing
Information Security in the enterprise. The Information Security Policy is tightly integrated
with the work of the enterprise at every stage of its existence. All decisions undertaken in the
enterprise must consider its requirements.

An effective guarantee of the required level of Information Security is possible only with a
formalized approach to the fulfillment of measures for the protection of information. The main
purpose of the Information Security Policy is to create a united system of views and
understanding of the purposes, tasks and principles by which Information Security is provided.

The basic stages of developing an Information Security Policy are the following:

 Study of the current state of the organization's Information Security;
 Analysis of the information obtained from the results of the study;
 Forming a job schedule for the development of the Information Security Policy;
 Development of the Information Security Policy.

The package of documents on providing Information Security includes the following types of
documents:

 The Information Security Policy of the organization – a high-level document which describes
the basic principles and rules directed toward the protection of information resources;
 Regulations of Information Security, which reveal in more detail the procedures and methods
of providing Information Security in accordance with the basic principles and rules described
in the policy;
 Instructions on providing Information Security for the officials of the organization, taking
into account the requirements of the policy and regulations;
 Other documents, such as reports, registration journals and other low-level documents.

The concrete drafts of the necessary documents are determined during the inspection of the
customer's existing level of Information Security, its organizational structure and business
processes.

1.3. Control of staff awareness

The staff awareness program is a complex of educational measures which make it possible to
reach the required level of understanding of the importance of, and the need for, fulfilling the
requirements of Information Security.

The package of measures includes:

 Development of an educational program depending on the standard of personnel knowledge;
 Conducting training seminars in accordance with the nature of daily personnel work and the
degree of responsibility.

2. Control of Information Security risks

Control of risks is a continuous process which ensures the identification, estimation and
minimization of Information Security threats directed toward the organization's assets.

Control of risks allows the organization:

 to obtain an up-to-date picture of the organization's Information Security level at the current
moment;
 to determine the most vulnerable places in the Information Security system;
 to determine the cost justification of expenditure on guaranteeing Information Security;
 to minimize expenses on Information Security.



Control of risks involves solving the following problems:

 the construction of a business process interaction model for the purpose of isolating the
organization's most critical assets;
 the construction of an intruder model and a model of threats;
 the estimation of the likelihood that threats directed toward critical organization assets will
be realized;
 the development of measures for reducing the risks of threats;
 the development of a plan for risk reduction;
 the estimation of the residual risk after the introduction of risk-reduction measures.
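A common way to rank these threats is a qualitative likelihood-times-impact score. The Python sketch below assumes 1-5 scales and made-up threat values purely for illustration.

def risk_score(likelihood: int, impact: int) -> int:
    """Qualitative risk score: likelihood x impact, each on a 1-5 scale."""
    return likelihood * impact

def prioritise(threats):
    """Sort (name, likelihood, impact) tuples by descending risk score."""
    return sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)

# Example:
# prioritise([("ransomware", 4, 5), ("disk failure", 3, 3), ("flood", 1, 5)])
# returns [("ransomware", 4, 5), ("disk failure", 3, 3), ("flood", 1, 5)]
# because the scores are 20, 9 and 5 respectively.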

3. Control of vulnerabilities

Existing software is imperfect, and new vulnerabilities appear constantly. Frequently, after the
detection of such vulnerabilities, malware appears that enables criminals to use the
vulnerabilities for theft, distortion of information or denial of service of critical systems. Control
of vulnerabilities makes it possible to minimize risks and to decrease losses arising from
destructive software or the actions of criminals. Specialists solve the following tasks:

 Searching for vulnerabilities and ranking them depending on the value of the assets affected;
 Estimating the risks that threats connected with the discovered vulnerabilities will be realized;
 Implementing automatic vulnerability search tools, and also providing instruction in working
with them;
 Integrating vulnerability monitoring systems with different external sources (IPS/IDS,
antivirus systems etc.) and the Information Security control system.

 System security policy implementation

Implementation of information security systems and products

Active tasks:

1. Protection against internal information security threats

Internal information security threats include threats from company employees, both intentional
(fraud, theft, confidential data corruption or destruction, industrial espionage and so on) and
unintentional (changes to or destruction of information caused by an employee’s poor
qualification or carelessness), as well as failures in the software or hardware used to process and
store information.

Companies are offered the following services to help reduce their internal information security
threats:

1.1. Data leakage prevention

This service envisages designing a comprehensive control and counteraction system against
internal information security threats (deliberate insider acts violating the integrity, availability
and confidentiality of information). Implementation of this system allows the company to
protect its business reputation and prevent unsanctioned access, copying and corruption:

 of data transmission channels – content filter systems (Internet, e-mail, ICQ, P2P);
 at employees’ workstations – control of information-carrying media (USB devices –
flash drives, external hard drives), print queues, and access to network resources.

This system allows the company to establish centralized control management and conduct
effective countermeasures. It also helps the company collect the necessary evidence of security
incidents. At the same time, the system remains completely transparent to its users.

1.2. Guarantee of confidentiality during storage and transmission

This represents a package of organizational and technical measures aimed at preventing the
compromise, theft, modification or destruction of confidential information by internal security
intruders and third parties. These services offer:

 data link encryption (organization of VPN, SSL, PKI);
 storage media encryption (creation of secure containers, design of corporate encryption
and data storage systems).
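As one possible illustration of storage media encryption, the Python sketch below uses the third-party cryptography package (Fernet, a symmetric AES-based scheme). It is a minimal example rather than a corporate encryption design, and key management is deliberately left out.

from cryptography.fernet import Fernet  # requires: pip install cryptography

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt a file in place with symmetric (Fernet) encryption."""
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path, "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def decrypt_file(path: str, key: bytes) -> bytes:
    """Return the decrypted contents of a previously encrypted file."""
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())

# Example:
# key = Fernet.generate_key()        # the key must be stored separately from the data
# encrypt_file("payroll.csv", key)
# original = decrypt_file("payroll.csv", key)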

1.3. Vulnerability management system

This involves the design of a centralized application-oriented, server and firmware vulnerability
management system. This system helps to provide real-time and effective responses to any
emerging information system vulnerabilities, which in turn helps to reduce the risk of these
vulnerabilities being attacked by malicious software or intruders of local computer networks and
workstations.

2. Protection against external information security threats

External security threats are those that emerge from the external environment. These include:

 Internet-based attacks aimed at corrupting, destroying or stealing information, or
otherwise denying the service of a company’s information system;
 The spread of malicious code – viruses, software Trojans, spy programs, Internet
worms;
 Unsolicited mail (spam).

Point lane offers the following solutions for protection against external information security threats:

2.1. Multilevel malicious code and spam protection

This package of measures includes the implementation of the following corporate systems:



 Antivirus protection for workstations and servers;
 Malicious software filter;
 Spam protection.

2.2. The company’s perimeter protection

 The deployment of intrusion detection and intrusion prevention services (IDS/IPS). These
systems are hardware/software packages that analyze traffic for attack signatures and then
automatically react to block them;
 The organization of cross-network firewalls, and the design of Internet or inter-branch
access systems.

2.3. Data transmission channels security

The organization and development of encrypted channels of communication between various
company branches, and the organization of secure remote company resource access systems.

2.4. Protection of the organization’s Internet resources

 An analysis of the protection level of the organization’s external Internet resources (sites,
Internet portals, corporate resources for the company’s partners and staff), and the
development and implementation of recommendations for protection against external
security threats.
 The implementation of hardware/software packages for protecting against distributed
denial-of-service (DDoS) attacks.

 Systems security strategies

Introduction

The security methodology described in this document is designed to help security professionals
develop a strategy to protect the availability, integrity, and confidentiality of data in an
organization's information technology (IT) system. It will be of interest to information resource
managers, computer security officials, and administrators, and of particular value to those trying
to establish computer security policies. The methodology offers a systematic approach to this
important task and, as a final precaution, also involves establishing contingency plans in case of
a disaster.

Data in an IT system is at risk from various sources—user errors and malicious and non-
malicious attacks. Accidents can occur and attackers can gain access to the system and disrupt
services, render systems useless, or alter, delete, or steal information.

An IT system may need protection for one or more of the following aspects of data:



 Confidentiality. The system contains information that requires protection from
unauthorized disclosure. Examples: Timed dissemination information (for example, crop
report information), personal information, and proprietary business information.
 Integrity. The system contains information that must be protected from unauthorized,
unanticipated, or unintentional modification. Examples: Census information, economic
indicators, or financial transactions systems.
 Availability. The system contains information or provides services that must be available
on a timely basis to meet mission requirements or to avoid substantial losses. Examples:
Systems critical to safety, life support, and hurricane forecasting.

Security administrators need to decide how much time, money, and effort needs to be spent in
order to develop the appropriate security policies and controls. Each organization should analyze
its specific needs and determine its resource and scheduling requirements and constraints.
Computer systems, environments, and organizational policies are different, making each
computer security services and strategy unique. However, the principles of good security remain
the same, and this document focuses on those principles.

Although a security strategy can save the organization valuable time and provide important
reminders of what needs to be done, security is not a one-time activity. It is an integral part of the
system lifecycle. The activities described in this document generally require either periodic
updating or appropriate revision. These changes are made when configurations and other
conditions and circumstances change significantly or when organizational regulations and
policies require changes. This is an iterative process. It is never finished and should be revised
and tested periodically.

Overview of How to Compile a Security Strategy

Reviewing Current Policies

Establishing an effective set of security policies and controls requires using a strategy to
determine the vulnerabilities that exist in our computer systems and in the current security
policies and controls that guard them. The current status of computer security policies can be
determined by reviewing the list of documentation that follows. The review should take notice of
areas where policies are lacking as well as examine documents that exist:

 Physical computer security policies such as physical access controls.


 Network security policies (for example, e-mail and Internet policies).
 Data security policies (access control and integrity controls).
 Contingency and disaster recovery plans and tests.
 Computer security awareness and training.
 Computer security management and coordination policies.

Other documents that contain sensitive information, such as:

o Computer BIOS passwords.
o Router configuration passwords.
o Access control documents.
o Other device management passwords.

Identifying Assets and Vulnerabilities to Known Threats

Assessing an organization's security needs also includes determining its vulnerabilities to known
threats. This assessment entails recognizing the types of assets that an organization has, which
will suggest the types of threats it needs to protect itself against. Following are examples of some
typical asset/threat situations:

 The security administrator of a bank knows that the integrity of the bank's information is
a critical asset and that fraud, accomplished by compromising this integrity, is a major
threat. Fraud can be attempted by inside or outside attackers.
 The security administrator of a Web site knows that supplying information reliably (data
availability) is the site's principal asset. The threat to this information service is a denial
of service attack, which is likely to come from an outside attacker.
 A law firm security administrator knows that the confidentiality of its information is an
important asset. The threat to confidentiality is intrusion attacks, which might be
launched by inside or outside attackers.
 A security administrator in any organization knows that the integrity of information on
the system could be threatened by a virus attack. A virus could be introduced by an
employee copying games to his work computer or by an outsider in a deliberate attempt
to disrupt business functions.

Identifying Likely Attack Methods, Tools, and Techniques

Listing the threats (and most organizations will have several) helps the security administrator to
identify the various methods, tools, and techniques that can be used in an attack. Methods can
range from viruses and worms to password and e-mail cracking. It is important that
administrators update their knowledge of this area on a continual basis, because new methods,
tools, and techniques for circumventing security measures are constantly being devised.

Establishing Proactive and Reactive Strategies

For each method, the security plan should include a proactive strategy as well as a reactive
strategy.

The proactive or pre-attack strategy is a set of steps that helps to minimize existing security
policy vulnerabilities and develop contingency plans. Determining the damage that an attack will
cause on a system and the weaknesses and vulnerabilities exploited during this attack helps in
developing the proactive strategy.

The reactive strategy or post-attack strategy helps security personnel to assess the damage
caused by the attack, repair the damage or implement the contingency plan developed in the
proactive strategy, document and learn from the experience, and get business functions running
as soon as possible.



Testing

The last element of a security strategy, testing and reviewing the test outcomes, is carried out
after the reactive and proactive strategies have been put into place. Performing simulation attacks
on a test or lab system makes it possible to assess where the various vulnerabilities exist and
adjust security policies and controls accordingly.

These tests should not be performed on a live production system because the outcome could be
disastrous. Yet, the absence of labs and test computers due to budget restrictions might preclude
simulating attacks. In order to secure the necessary funds for testing, it is important to make
management aware of the risks and consequences of an attack as well as the security measures
that can be taken to protect the system, including testing procedures. If possible, all attack
scenarios should be physically tested and documented to determine the best possible security
policies and controls to be implemented.

Certain attacks, such as natural disasters like floods and lightning, cannot be tested, although a
simulation will help. For example, simulate a fire in the server room that has resulted in all the
servers being damaged and lost. This scenario can be useful for testing the responsiveness of
administrators and security personnel, and for ascertaining how long it will take to get the
organization functional again.

Testing and adjusting security policies and controls based on the test results is an iterative
process. It is never finished and should be evaluated and revised periodically so that
improvements can be implemented.

The Incident Response Team

Good practice calls for forming an incident response team. The incident response team should be
involved in the proactive efforts of the security professional. These include:

 Developing incident handling guidelines.


 Identifying software tools for responding to incidents/events.
 Researching and developing other computer security tools.
 Conducting training and awareness activities.
 Performing research on viruses.
 Conducting system attack studies.

These efforts will provide knowledge that the organization can use, and information to
distribute, before and during incidents.

After the security administrator and incident response team have completed these proactive
functions, the administrator should hand over the responsibility for handling incidents to the
incident response team. This does not mean that the security administrator should not continue to
be involved or be part of the team, but the administrator may not always be available and the
team should be able to handle incidents on its own. The team will be responsible for responding
to incidents such as viruses, worms, or other malicious code; intrusions; hoaxes; natural
disasters; and insider attacks. The team should also be involved in analyzing any unusual event
that may involve computer or network security.

Methodology for Defining Security Strategies

The following section discusses a methodology for defining a computer security strategy that can
be used to implement security policies and controls to minimize possible attacks and threats. The
methods can be used for all types of attacks on computer systems, whether they are malicious,
non-malicious or natural disasters, and can thus be re-used repeatedly for different attack
scenarios. The methodology is based on the various types of threats, methods of attack, and
vulnerabilities discussed in "Security Threats." The following flow chart outlines the
methodology.

Flowchart 1: Methodology for defining security strategies

Predict Possible Attacks / Analyze Risks

The first phase of the methodology outlined in Flowchart 1 is to determine the attacks that can be
expected and ways of defending against these attacks. It is impossible to prepare against all
attacks; therefore, prepare for the most likely attacks that the organization can expect. It is
always better to prevent or minimize attacks than to repair the damage after an attack has already
occurred.



In order to minimize attacks it is necessary to understand the various threats that cause risks to
systems, the corresponding techniques that can be used to compromise security controls, and the
vulnerabilities that exist in the security policies. Understanding these three elements of attacks
helps us to predict their occurrence, if not their timing or location. Predicting an attack is a
matter of predicting its likelihood, which depends upon understanding its various aspects. The
various aspects of an attack can be shown in an equation:

Threats + Motives + Tools and Techniques + Vulnerabilities = Attack

For Each Type of Threat

Consider all of the possible threats that cause attacks on systems. These will include malicious
attackers, non-malicious threats, and natural disasters. The figure below classifies the various
threats to systems.

Diagram 1: Threats to systems

Threats such as ignorant or careless employees and natural disasters do not involve motives or
goals; therefore no predetermined methods, tools, or techniques are used to launch an attack.
Almost all of these attacks or security infiltrations are internally generated; rarely will they be
initiated by someone outside of the organization.

For these types of threats, security personnel need to implement separate proactive and reactive
strategies, following the guidelines in Flowchart 1.

For Each Type of Method of Attack

In order to launch an attack, a malicious attacker needs a method, tool or technique to exploit
various vulnerabilities in systems, security policies, and controls. A malicious attacker can use
different methods to launch the same attack. Therefore, the defense strategy must be customized
for each type of method used in each type of threat. Again, it is important that security
professionals keep current on the various methods, tools, and techniques used by attackers. A
detailed discussion of these can be found in "Security Threats." Following is a short list of these
techniques:

 Denial of service attacks


 Intrusion attacks
 Social engineering
 Viruses
 Worms
 Trojan horses
 Packet modification
 Packet replay
 Password cracking
 E-mail cracking

Proactive Strategy

The proactive strategy is a set of predefined steps that should be taken to prevent attacks before
they occur. The steps include looking at how an attack could possibly affect or damage the
computer system and the vulnerabilities it exploits (steps 1 and 2). The knowledge gained in
these assessments can help in implementing security policies that will control or minimize the
attacks. These are the three steps of the proactive strategy:

1. Determine the damage that the attack will cause.


2. Determine the vulnerabilities and weaknesses that the attack will exploit.
3. Minimize the vulnerabilities and weaknesses that are determined to be weak points in the
system for that specific type of attack.

Following these steps to analyze each type of attack has a side benefit: a pattern will begin to
emerge, because many factors will overlap for different attacks. This pattern can be helpful in
determining the areas of vulnerability that pose the greatest risk to the enterprise. It is also
necessary to take note of the cost of losing data versus the cost of implementing security
controls. Weighing the risks against the costs is part of a system risk analysis, and is discussed in
the white paper "Security Planning."

Security policies and controls will not, in every case, be completely effective in eliminating
attacks. For this reason it is necessary to develop contingency and recovery plans in the event
that security controls are penetrated.

Determine Possible Damage Resulting from an Attack

Possible damages run the gamut from minor computer glitches to catastrophic data loss.
The damage caused to the system will depend on the type of attack. Use a test or lab
environment to clarify the damages resulting from different types of attacks, if possible. This will
enable security personnel to see the physical damage caused by an experimental attack. Not all
attacks cause the same damage. Here are some examples of tests to run:

 Simulate an e-mail virus attack on the lab system, and see what damage was caused and
how to recover from the situation.
 Use social engineering to acquire a username and password from an unsuspecting
employee and observe whether he or she complies.
 Simulate what would happen if the server room burned down. Measure the production
time lost and the time taken to recover.
 Simulate a malicious virus attack. Note the time required to recover one computer and
multiply that by the number of computers infected in the system to ascertain the amount
of downtime or loss of productivity, as in the short sketch below.
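As a quick illustration of the arithmetic in the last item, the short Python sketch below estimates
downtime from a simulated outbreak; all of the figures in it are made-up assumptions for the
example, not values taken from this text.

    # Rough downtime estimate for a simulated virus outbreak.
    # All figures below are illustrative assumptions.
    hours_to_recover_one_pc = 1.5   # measured during the lab simulation
    infected_pcs = 40               # machines hit in the scenario
    technicians = 4                 # staff rebuilding machines in parallel

    total_machine_hours = hours_to_recover_one_pc * infected_pcs
    elapsed_hours = total_machine_hours / technicians

    print(f"Lost productivity: {total_machine_hours:.1f} machine-hours")
    print(f"Estimated elapsed recovery time: {elapsed_hours:.1f} hours")

With these assumed figures the simulation points to 60 machine-hours of lost productivity and
roughly 15 hours of elapsed recovery time, which is the kind of number management needs when
weighing security spending against risk.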

It is also a good idea to involve the incident response team mentioned earlier, because a team is
more likely than an individual to spot all of the different types of damage that have occurred.

Determine the Vulnerabilities or Weaknesses that an Attack can exploit

If the vulnerabilities that a specific attack exploits can be discovered, current security policies
and controls can be altered or new ones implemented to minimize these vulnerabilities.
Determining the type of attack, threat, and method makes it easier to discover existing
vulnerabilities. This can be proved by an actual test.

Following is a list of possible vulnerabilities. These represent just a few of the many that exist
and include examples in the areas of physical, data, and network security.

Physical Security:

 Are there locks and entry procedures to gain access to servers?


 Is there sufficient air conditioning and are air filters being cleaned out regularly? Are air
conditioning ducts safeguarded against break-ins?
 Are there uninterruptible power supplies and generators and are they being checked
through maintenance procedures?
 Is there fire suppression and pumping equipment, and proper maintenance procedures for
the equipment?
 Is there protection against hardware and software theft? Are software packages and
licenses and backups kept in safes?
 Are there procedures for storing data, backups, and licensed software off-site and onsite?

Data Security:

 What access controls, integrity controls, and backup procedures are in place to limit
attacks?
 Are there privacy policies and procedures that users must comply with?
 What data access controls (authorization, authentication, and implementation) are there?
 What user responsibilities exist for management of data and applications?
 Have direct access storage device management techniques been defined? What is their
impact on user file integrity?
 Are there procedures for handling sensitive data?

Network Security:

 What kinds of access controls (Internet, wide area network connections, etc.) are in
place? (A quick port-reachability sketch follows this list.)
 Are there authentication procedures? What authentication protocols are used for local
area networks, wide area networks and dialup servers? Who has the responsibility for
security administration?
 What type of network media, for example, cables, switches, and routers, are used? What
type of security do they have?
 Is security implemented on file and print servers?
 Does your organization make use of encryption and cryptography for use over the
Internet, Virtual Private Networks (VPNs), e-mail systems, and remote access?
 Does the organization conform to networking standards?
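To complement the network security questions above, a quick reachability check of common
service ports shows which services a host actually exposes. The sketch below is a minimal
illustration using only Python's standard library; the host name and port list are placeholder
assumptions, and such checks should only ever be run against systems you are authorized to test.

    import socket

    # Placeholder target and ports - scan only hosts you are authorized to test.
    host = "server.example.internal"
    ports = [21, 22, 23, 25, 80, 135, 139, 443, 445, 3389]

    for port in ports:
        try:
            # Attempt a TCP connection with a short timeout.
            with socket.create_connection((host, port), timeout=1):
                print(f"{host}:{port} is open")
        except OSError:
            # Refused, filtered, or timed out.
            print(f"{host}:{port} appears closed or filtered")

Unexpected open ports (for example, Telnet on 23 exposed to a wide area network) are exactly the
kind of finding that should feed back into the access control and firewall questions above.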

Minimize Vulnerabilities and Weaknesses Exploited by a Possible Attack

Minimizing the security system's vulnerabilities and weaknesses that were determined in the
previous assessment is the first step in developing effective security policies and controls. This is
the payoff of the proactive strategy. By minimizing vulnerabilities, security personnel can
minimize both the likelihood of an attack, and its effectiveness, if one does occur. Be careful not
to implement too stringent controls because the availability of information could then become a
problem. There must be a careful balance between security controls and access to information.
Information should be as freely available as possible to authorized users.

Make Contingency Plans

A contingency plan is an alternative plan that should be developed in case an attack penetrates
the system and damages data or other assets, halting normal business operations and hurting
productivity. The plan is followed if the system cannot be restored in a
timely manner. Its ultimate goal is to maintain the availability, integrity and confidentiality of
data—it is the proverbial "Plan B."

There should be a plan per type of attack and/or per type of threat. Each plan consists of a set of
steps to be taken in the event that an attack breaks through the security policies. The contingency
plan should:

 Address who must do what, when, and where to keep the organization functional.
 Be rehearsed periodically to keep staff up-to-date with current contingency steps.
 Cover restoring from backups.
 Discuss updating virus software.
 Cover moving production to another location or site.

The following points outline the evaluation tasks that should be carried out to develop a
contingency plan:

 Evaluate the organization's security policies and controls to accommodate any opportunities
found for minimizing vulnerabilities. The evaluation should address the organization's
current emergency plan and procedures, and their integration into the contingency plan.
 Evaluate current emergency response procedures and their effect on the continuous
operation of business.
 Develop planned responses to attacks and integrate them into the contingency plan,
noting the extent to which they are adequate to limit damage and minimize the attack's
impact on data processing operations.
 Evaluate backup procedures, including the most recent documentation and disaster
recovery tests, to assess their adequacy, and include them in the contingency plan.
 Evaluate disaster recovery plans to determine their adequacy in providing a temporary or
longer term operating environment. Disaster recovery plans should include testing the
required levels of security so that security personnel can see if they continue to enforce
security throughout the process of recovery, temporary operations, and the organization's
move back to its original processing site or to a new processing site.

Draw up a detailed document outlining the various findings in the above tasks. The document
should list:

 Any scenarios to test the contingency plan.


 The impact that any dependencies, planned-for assistance from outside the organization,
and difficulties in obtaining essential resources will have on the plan.
 A list of priorities observed in the recovery operations and the rationale in establishing
those priorities.

Reactive Strategy

A reactive strategy is implemented when the proactive strategy for the attack has failed. The
reactive strategy defines the steps that must be taken after or during an attack. It helps to identify
the damage that was caused and the vulnerabilities that were exploited in the attack, determine
why it took place, repair the damage that was caused by it, and implement a contingency plan if
one exists. Both the reactive and proactive strategies work together to develop security policies
and controls to minimize attacks and the damage caused during them.

The incident response team should be included in the steps taken during or after the attack to
help assess it and to document and learn from the event.

Assess the Damage

Determine the damage that was caused during the attack. This should be done as swiftly as
possible so that restore operations can begin. If it is not possible to assess the damage in a timely
manner, a contingency plan should be implemented so that normal business operations and
productivity can continue.

Determine the Cause of the Damage

To determine the cause of the damage it is necessary to understand what resources the attack was
aimed at and what vulnerabilities were exploited to gain access or disrupt services. Review
system logs, audit logs, and audit trails. These reviews often help in discovering where the attack
originated in the system and what other resources were affected.
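As a small, hedged illustration of such a log review, the sketch below counts failed logon
attempts per source address in a syslog-style authentication log. The log path and the
"Failed password" pattern are assumptions based on a typical Linux/OpenSSH setup; adjust them
to the log formats actually in use.

    from collections import Counter

    # Assumed path and pattern for a Linux/OpenSSH-style auth log; adjust as needed.
    LOG_PATH = "/var/log/auth.log"
    PATTERN = "Failed password"

    failures_by_source = Counter()

    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            if PATTERN in line:
                # Lines typically end with "... from <address> port <n> ssh2".
                parts = line.split()
                if "from" in parts and parts.index("from") + 1 < len(parts):
                    failures_by_source[parts[parts.index("from") + 1]] += 1

    for source, count in failures_by_source.most_common(10):
        print(f"{count:5d} failed logons from {source}")

A sudden spike of failures from one address, or failures spread across many accounts, helps
pinpoint where the attack originated and which resources it targeted.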

Repair the Damage

It is very important that the damage be repaired as quickly as possible in order to restore normal
business operations and any data lost during the attack. The organization's disaster recovery
plans and procedures (discussed in "Security Planning") should cover the restore strategy. The
incident response team should also be available to handle the restore and recovery process and to
provide guidance on the recovery process.

Document and Learn

It is important that once the attack has taken place, it is documented. Documentation should
cover all aspects of the attack that are known, including: the damage that is caused (hardware,
software, data loss, loss in productivity), the vulnerabilities and weaknesses that were exploited
during the attack, the amount of production time lost, and the procedures taken to repair the
damage. Documentation will help to modify proactive strategies for preventing future attacks or
minimizing damages.

Implement Contingency Plan

If a contingency plan already exists, it can be implemented to save time and to keep business
operations functioning correctly. If no contingency plan exists, develop an appropriate plan
based on the documentation from the previous step.

Review Outcome / Do Simulations

The second major step in the security strategy is to review the findings established in the first
step (Predicting the Attack). After the attack or after defending against it, review the attack's
outcome with respect to the system. The review should include: loss in productivity, data or
hardware lost, and time taken to recover. Also document the attack and, if possible, track where
the attack originated from, what methods were used to launch the attack and what vulnerabilities
were exploited. Do simulations in a test environment to gain the best results.

Review Policy Effectiveness

If policies exist for defending against an attack that has taken place, they should be reviewed and
checked for their effectiveness. If no policies exist, new ones must be drawn up to minimize or
prevent future attacks.

Adjust Policy Accordingly

If the policy's effectiveness is not up to standard, the policy should be adjusted accordingly.
Updates to policies must be coordinated by the relevant managerial personnel, security officer,
administrators, and the incident response team. All policies should comply with the
organization's general rules and guidelines. For example, working times might be from 8am to
6pm; a security policy could exist, or be created, that allows users to log on to the system only
during these times.
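As a minimal sketch of how such a time-of-day rule might be enforced inside a custom
application (real deployments would normally rely on the operating system's or directory
service's own logon-hours controls), the 8am to 6pm window from the example above could be
checked like this:

    from datetime import datetime

    # Permitted logon window from the example policy: 08:00 to 18:00.
    LOGON_START_HOUR = 8
    LOGON_END_HOUR = 18

    def logon_allowed(now=None):
        """Return True if the current time falls inside the permitted window."""
        now = now or datetime.now()
        return LOGON_START_HOUR <= now.hour < LOGON_END_HOUR

    if logon_allowed():
        print("Logon permitted by the time-of-day policy.")
    else:
        print("Logon denied: outside permitted working hours.")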

Examples

Example 1: Non-malicious Threat

An employee, John Doe, does not want to lose any information that he has saved to his hard disk.
He wants to make a backup of this information, so he copies it to his home folder on the server
that happens to also be the company's main application server. The home folders on the server
have no disk quotas defined for the users. John's hard drive has 6.4 Gigabytes of information and
the server has 6.5 Gigabytes of free space. The application server stops responding to updates
and requests because it is out of disk space. The result is that users are denied the application
server's services and productivity stops. The methodology described above should have been
applied before John decided to back up his hard drive to his home folder.
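A simple safeguard against this scenario is to define disk quotas on the home folders and to
check free space before any large copy. The sketch below, with illustrative figures matching the
example (the volume path and the 10 per cent headroom threshold are assumptions), shows the
kind of pre-copy check that would have flagged John's backup:

    import shutil

    # Illustrative values: 6.4 GB to copy, with a 10% free-space headroom rule.
    bytes_to_copy = 6.4 * 1024**3
    target_volume = "."              # replace with the home-folder volume's mount point

    usage = shutil.disk_usage(target_volume)
    headroom = 0.10 * usage.total    # keep at least 10% of the volume free

    if usage.free - bytes_to_copy < headroom:
        print("Refusing copy: it would leave the application server short of disk space.")
    else:
        print("Copy can proceed without starving the server.")

Enforcing per-user quotas at the operating system level achieves the same goal without relying
on users to run such checks.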

Example 2: Malicious Threat (Outside Attacker)

Jane Doe writes viruses and hacks into systems as a hobby. Jane releases a new virus that will
disrupt e-mail systems throughout the world.

Example 3: Malicious Threat (Inside Attacker)

An employee, Bob Roberts, works for a company that designs space ships. Bob is contacted by
the competition and is offered a large amount of money to steal information on his company's
latest design, the "Flingbot 2000." Bob does not have the necessary access rights to the
information. He disguises himself as an administrator over the telephone, having a conversation
with an employee who does have access rights. Bob tells the employee that he is doing routine
administrative work on the server and requires the employee's username and password to verify
it against the records on the server. The employee complies and gives Bob the username and
password.

Example 4: Non-malicious Threat (Natural Disaster)

Company XYZ does not have fire protection and detection systems in their server room. An
administrator of the company's computer systems leaves a couple of manuals lying on the air-
conditioner. During the night the air conditioner overheats and starts a fire that burns down the
server room and a couple of offices.



 Audit
Information is any organization’s key resource that directly determines the company’s
profitability and success.

The company’s management must have confidence that its business is protected and able to
prevent any attempt to steal information – whether these attempts come from outside the
company or from its own staff.
The uncontrolled use of the Internet and portable storage media, as well as the inability to monitor
the information coming off the company's printers, vastly increases the chances that your
strategically important information will be stolen and transferred to your competitors. It also
increases the chances that your business might grind to a halt because its key asset – information
– has been destroyed.
Information security (IS) auditing involves the study and assessment of the current state of the
organization’s information resources and corporate systems, checking them for their
correspondence to the standards and requirements demanded by the client.
The following are the types of information security auditing services:

 Expert audit
 Penetration test
 Web security audit
 Comprehensive audit
 Preparation for ISO certification



TOPIC 10

INTRODUCTION TO COMPUTER FORENSICS

 Computer forensics concepts

Concepts and Standards


Regardless of the specific case or the technology used, the concept of computer forensics is
constant. Forensic case work consists of the following basic steps:

1. Preparation – The first and one of the most important steps is proper forensic case preparation.
This can include: understanding local law and legal issues (which can determine the tools and
procedures we can or cannot use), understanding the assignment (what we are asked to do),
reconnaissance of the number and type of computers and operating systems we will have to
deal with, preparing the team, checking equipment and much more.
2. Collection – From a technical point of view, three digital evidence collection models can be
distinguished. The first is on-site acquisition, in which a binary copy of the hard drives is made
and the originals are left in place. The second is collecting the evidence and taking it to the
lab, where the acquisition is made. The third is live forensics, in which evidence is collected
from powered-on computers.

Figure: Blocker set for traditional forensic HDD acquisition

3. Examination and Analysis – The key area of a forensic investigation. This covers examination
of data, internet artifacts, temporary files, spool files and shortcuts, keyword searches, dealing
with encryption, timeline analysis and much more.



4. Reporting – In some countries the court expert's report must follow prescribed templates, and
sometimes it must be submitted entirely in print. Above all, it must be written in a way that the
judge and prosecutor can understand.

Computer Forensics Standards

It is hard to point to worldwide standards in computer forensics; the reason for this is the
differences between legal systems. There are efforts to change this, and many organizations and
institutions publish their own best practices. Below is a sample of best practices from
computer-forensics-recruiter.com:

 Whenever possible, do not examine the original media. Write protect the original, copy it,
and examine only the copy (a hash-verification sketch follows this list).
 Use write blocking technology to preserve the original while it is being copied.
 Computer forensic examiners must meet minimum proficiency standards.
 Examination results should be reviewed by a supervisor and peer reviewed on a regular
schedule.
 All hardware and software should be tested to ensure they produce accurate and reliable
results.
 Forensic examiners must observe the highest ethical standards.
 Forensic examiners must remain objective at all times.
 Forensic examiners must strictly observe all legal restrictions on their examinations.
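The first practice above, copying the original and working only on the copy, is normally verified
by hashing: the copy is trusted only if its cryptographic digest matches that of the original. The
sketch below is a simplified, file-level illustration using Python's standard library; real
acquisitions image entire drives through a hardware write blocker and dedicated tools, and the
file paths here are placeholder assumptions.

    import hashlib

    def sha256_of(path, chunk_size=1024 * 1024):
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder paths for an acquired image and its working copy.
    original_hash = sha256_of("evidence/original.img")
    copy_hash = sha256_of("evidence/working_copy.img")

    if original_hash == copy_hash:
        print("Hashes match:", original_hash)
    else:
        print("WARNING: the working copy does not match the original image.")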

 Incidence handling
SEE PPT ATTACHED

 Investigating desktop incidents


Host-based Information

Host-based evidence includes logs, records, documents, and any other information that is found
on a system and not obtained from network-based nodes.
For example, host-based information might be a system backup that harbors evidence at a
specific period in time. Host-based data collection efforts should include gathering information
in two different manners:
 live data collection
 forensic duplication
In some cases, the evidence that is required to understand an incident is ephemeral (temporary or
fleeting) or lost when the victim/relevant system is powered down. This volatile data can provide
critical information when attempting to understand the nature of an incident. Therefore, the first
step of data collection is the collection of any volatile information from a host before this
information is lost. The volatile data provides a “snap-shot” of a system at the time you respond.
You record the following volatile information (a small collection sketch follows this list):

 The system date and time
 The applications currently running on the system
 The currently established network connections
 The currently open sockets (ports)
 The applications listening on the open sockets
 The state of the network interface (promiscuous or not)
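A minimal live-response sketch that records several of the items above is shown below. It
assumes a Unix-like victim system where the standard date, ps and netstat utilities are available,
and it simply appends their output to a response log; in a real investigation the responder would
run trusted copies of these tools from their own media and write the output to external storage.

    import subprocess
    from datetime import datetime

    # Commands assumed to exist on a Unix-like host; a real kit ships trusted copies.
    COMMANDS = [
        ["date"],                 # system date and time
        ["ps", "aux"],            # applications currently running
        ["netstat", "-an"],       # established connections and listening sockets
    ]

    with open("live_response.log", "a") as log:
        log.write(f"--- Live response started {datetime.now().isoformat()} ---\n")
        for cmd in COMMANDS:
            log.write(f"\n$ {' '.join(cmd)}\n")
            try:
                result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
                log.write(result.stdout)
                log.write(result.stderr)
            except (OSError, subprocess.TimeoutExpired) as exc:
                log.write(f"[command failed: {exc}]\n")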


In order to collect this information, a live response must be performed. A live response is
conducted when a computer system is still powered on and running. This means that the
information contained in these areas must be collected without impacting the data on the
compromised device. There are three variations of live response:

 Initial live response


This involves obtaining only the volatile data from a target or victim system. An initial live
response is usually performed when you have decided to conduct a forensic duplication of the
media.

 In-depth response
This goes beyond obtaining merely the volatile data. The CSIRT obtains enough additional
information from the target/victim system to determine a valid response strategy. Nonvolatile
information such as log files is collected to help understand the nature of the incident.

 Full live response


This is a full investigation on a live system. All data for the investigation is collected from the
live system, usually in lieu of performing a forensic duplication, which requires the system to be
powered off.
At some point (usually during your initial response), you need to decide whether or not to
perform a forensic duplication of the evidence media. Generally, if the incident is severe or
deleted material may need to be recovered, a forensic duplication is warranted.
The forensic duplication of the target media provides the “mirror image” of the target system,
which shows due diligence when handling critical incidents. It also provides a means to have
working copies of the target media for analysis without worrying about altering or destroying
potential evidence. If the intent is to take judicial action, law enforcement generally prefers
forensic “bit-for-bit, byte-for-byte” duplicates of target systems. If the incident could evolve into
a corporate-wide issue with grave consequences, it is prudent to perform a forensic duplication.

 Investigating network incidents


Network-based Evidence

Network-based evidence includes information obtained from the following sources:


 IDS logs
 Consensual monitoring logs
 Nonconsensual wiretaps
 Pen-register/trap and traces
 Router logs
 Firewall logs
 Authentication servers

An organization often performs network surveillance (consensual monitoring) to confirm
suspicions, accumulate evidence, and identify co-conspirators involved in an incident. Where
host-based auditing may fail, network surveillance may fill in the gaps.

Network surveillance is not intended to prevent attacks. Instead, it allows an organization to
accomplish a number of tasks:

 Confirm or dispel suspicions surrounding an alleged computer security incident.
 Accumulate additional evidence and information.
 Verify the scope of a compromise.
 Identify additional parties involved.
 Determine a timeline of events occurring on the network.
 Ensure compliance with a desired activity.

 Securing and preserving evidence


Discovering a suspected data breach in your systems can be a harrowing experience. Available
details might be confusing or unclear at first, and concerns about what the incident may mean for
your business, your customers, and others can be overwhelming. As chaotic as the situation may
be, a calm and organized response can save considerable difficulty. This section discusses
actions you should take following a potential data breach that can help preserve evidence for an
investigation and protect against additional danger.

Seek Legal Counsel: Before continuing, we wish to note the importance of legal guidance in
responding to a possible data breach. Your legal obligations in the event of exposing patient
medical records differ dramatically from your obligations in the event of revealing a partner
company's business plans or your customers' credit card numbers. A prompt call to an attorney
who specializes in privacy and data security issues is critical. Preferably, your business would
have prepared a data breach response plan under legal guidance in advance, helping avoid the
possibility of early missteps. Nothing in this section should be interpreted as a substitute for legal
advice.

Seek Technical Help: Specialists such as Elysium Digital with experience in assisting firms
facing a possible breach can be retained to investigate. The goals of the technical investigation
are:

 To reconstruct the attack narrative to uncover the enabling vulnerabilities


 To determine the scope of the attack
 To determine the data exposed
 To decide on immediate and long-term remediation steps
 In some cases, to identify the responsible parties

Preserve the Evidence: The success of the investigation depends on the quality of the available
evidence. To foil a potential investigation, attackers may delete files or perform other
modifications to cover their tracks. By using or modifying a system after a breach, you may
inadvertently destroy evidence of actions that a forensic investigation could otherwise uncover.



Thus, your first concern after discovering a possible breach is to preserve the evidence. While
the exact steps to take depend on the situation, advisable steps may include:

 Turn off your server(s) (just pull out the power plug)
 Swap all hard drives out of the affected servers
 Use a properly-trained forensic consultant to create court-defensible forensic images of
server hard drives.
 Rebuild a secured system on new drives
 Create forensically-sound images of backup media, network monitoring details (such as
network logging, router/firewall logs, or intrusion detection systems), and all relevant log
files, as these may contain evidence of the attack over time
 Document and preserve copies of your network layout and configuration at the time of
the attack, including network topology and the configuration of any routers and firewalls.

Be careful how images are collected, and always use forensic specialists for this task. For all of
their other invaluable skills, IT departments often are not aware of the specific steps that enable
a preservation effort to stand up in court. Forensic images are perfect copies of the entire
contents of a storage drive, including deleted and fragmentary data that cannot be captured by
doing an ordinary file copy. GHOST and similar backup tools do not capture forensically-sound
images. If in doubt about how to collect forensic images, do not hesitate to call Elysium for free
advice.

If you cannot remove the hard drives and cannot immediately call in a forensic investigator, you
should attempt to back up as much of the system as possible before modifying the system to
secure it. A complete copy of all data would be ideal, but at a minimum, you should preserve
originals of any modified files and take care to ensure that your preservation process retains
metadata such as creation and last-modified dates. You should also store copies of any system,
application, server, FTP, database, and other logs as soon as possible. Even if an attacker has
modified log files, they still may contain useful information. Preservation of backups and logs is
particularly urgent if they may be deleted or overwritten as time passes.
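As a hedged illustration of this preservation step, the sketch below copies a set of log files into an
evidence folder, keeping their timestamps with copy2 and recording size, modification time and a
SHA-256 digest in a simple manifest so the copies can be verified later. The source paths are
placeholder assumptions; substitute the logs relevant to the actual incident.

    import hashlib
    import os
    import shutil
    from datetime import datetime

    # Placeholder source logs and preservation folder; adjust to the actual incident.
    SOURCES = ["/var/log/auth.log", "/var/log/apache2/access.log"]
    DEST_DIR = "preserved_evidence"

    os.makedirs(DEST_DIR, exist_ok=True)

    with open(os.path.join(DEST_DIR, "manifest.txt"), "a") as manifest:
        for src in SOURCES:
            dst = os.path.join(DEST_DIR, os.path.basename(src))
            shutil.copy2(src, dst)                     # copy2 preserves timestamps
            with open(dst, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            stat = os.stat(dst)
            mtime = datetime.fromtimestamp(stat.st_mtime).isoformat()
            manifest.write(f"{dst}\t{stat.st_size} bytes\tmtime={mtime}\tsha256={digest}\n")

Such a manifest is not a substitute for forensically sound imaging, but it documents what was
preserved and when, and lets anyone later confirm that the preserved copies have not changed.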

In addition, you should document any changes that you make (system settings, accounts, firewall
settings, etc.) and any remediation steps that you undertake. If these changes can be
independently documented or verified via log files, copies of files, etc., you should also preserve
evidence supporting the changes so that others can verify them later. For example, you could
take a screenshot of the configuration screen or back up the configuration files both before and
after a change is made. Among other benefits, this evidence demonstrates your remediation
process to any interested parties who may assess your efforts to mitigate the breach.

When working with a specialist such as Elysium on the response to a data breach, the specialist
has a duty to provide independent analysis. They may:

 Ask you questions about your operations


 Ask you to make your IT staff and policies available to help inform the investigation
 Ask you to entrust them with confidential data belonging to your company or your clients
(subject, of course, to contractual requirements to keep this data confidential)
 Uncover and suggest additional ways of strengthening your system, including
suggestions that may seem unrelated to the attack under investigation but reduce the
potential for future attacks
 Uncover evidence that confirms or narrows the possible scope of the attack

Having an outsider dig through your systems following a suspected data breach may be
intimidating. If you believe that you have already patched any vulnerabilities, you may be
tempted to simply move on. However, an investigation can be a critical step, even when not
legally required. An investigation may settle questions regarding the data accessed and provide
confidence in any remediation steps, including confirming that the attacker has not left any “back
doors” in your system to maintain access for future attacks. The investigation also may uncover
ways to reduce future risk, preempting the need to repeat an unpleasant process. Experienced
investigators understand that this review may be unpleasant, and they attempt to perform their
work objectively and professionally. Preservation of evidence can make this process as smooth
and painless as possible, helping you to achieve the goal of protecting both your data and the
trust you have built with your customers and clients.



TOPIC 11

PROFESSIONAL VALUES AND ETHICS IN COMPUTING

 Intellectual property and fraud

Controlling Fraud and Protecting Intellectual Property: Today's Challenge

Even though data privacy is high on the security agenda these days, security has other important
goals including protecting corporate data and intellectual property (IP) as well as controlling
fraud. The relative importance of this protection will always be driven by the likelihood of
attack, coupled with the value of the information or product that is lost. Stakes are clearly highest
for organizations performing transactions in untrusted locations and over the Internet, those
whose competitive position is driven by the data they own, or manufacturers of high value
products, particularly in outsourced facilities.

Risks

Examples of these risks include:

 In less-trusted manufacturing environments, insiders can potentially access valuable
intellectual property, and authorize production overruns to build counterfeits. From a
security perspective, they can manipulate device identities and corrupt embedded
firmware and product configurations to stage wide-ranging attacks.
 Attackers can modify electronic documents, instructions, transactions and records to
affect supply chain processes, legal claims, or the outcomes of decisions unless rigorous
integrity tests are put in place.
 Organizations that cannot adequately protect outsourced manufacturing operations or
online services will not only suffer direct financial losses but will also limit their
flexibility to manage their business efficiently, potentially damaging their competitive
position.
 Fraud and theft not only damage customer perceptions and experiences, but they also run
the risk of attracting the attention of regulators and compromising commercial and legal
agreements with third parties such as content owners.

Controlling Fraud and Protecting Intellectual Property: Thales e-Security Solutions

Products and services from Thales e-Security can help many different types of organizations
reduce the risk of fraud and theft of intellectual property. Cryptography can play a vital role in
ensuring the confidentiality of information, particularly as it is exposed in hostile environments,
and can be used to verify the integrity and authenticity of almost any form of electronic
document or message. In some cases cryptographic protection, particularly in the form of
encryption, can be easily deployed in a completely transparent way. Network level encryption
using the Datacryptor family of encryption platforms can be used to protect virtually any form of
backbone network connection and is particularly valuable in protecting virtual private networks
(VPNs) to remote manufacturing or logistics locations.



Other forms of protection, specifically those that introduce the use of digital identities and digital
signatures, rely on public key operations and typically depend on an underlying public key
infrastructure (PKI). In some cases commercial applications support PKI-based techniques as
standard, whereas in-house applications may need to be modified to support this more
sophisticated but more secure approach. In all cases the protection of keys within a
PKI and its associated applications needs to be strongly enforced and tightly managed. In this
context the nShield hardware security module (HSM) is a perfect fit and benefits from pre-
qualified integration with a host of leading commercial applications.
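To make the signing and verification operations referred to above concrete, independently of any
particular HSM or vendor product, the sketch below uses the third-party Python cryptography
package (an assumption of this example rather than something named in the text). In production
the private key would be generated and held inside an HSM rather than in application memory.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Generate a key pair; in practice the private key stays inside an HSM.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"Purchase order 1234: 500 units, deliver to plant A"

    # Sign the document with RSA-PSS over SHA-256.
    signature = private_key.sign(
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Verification fails if either the document or the signature is altered.
    try:
        public_key.verify(
            signature,
            document,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("Signature valid: integrity and origin confirmed.")
    except InvalidSignature:
        print("Signature invalid: the document was altered or not signed by this key.")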

Looking beyond even key management, organizations also need to protect the application
processes that actually use those keys, for example to approve the issuance of an embedded
digital ID for a manufactured device, approve the loading of secure firmware, signing of a
transaction, or counting of a vote. In remote and often untrusted locations these processes can be
made secure only through advanced levels of physical and logical security. The CodeSafe
capability of nShield HSMs enables high-tech manufacturers and software providers to create
tamper-resistant processes that protect their critical processes, business models, and intellectual
property, reducing the risk of abuses and counterfeiting. With CodeSafe, organizations can
secure sensitive processes (such as identity management or metering) behind a physically
tamper-resistant barrier. As a result, manufacturers can be more confident in their ability to
outsource securely, while software providers can maximize revenue by enforcing license
agreements through secure metering capabilities.

Benefits:

 Encrypt information to ensure confidentiality as it flows over networks, as it is stored,
and as it is used—either within the corporate datacenter or at remote locations.
 Digitally sign documents, transactions, and messages to create a mechanism that can
easily validate their integrity and authenticity.
 Comply with regional digital signing laws through the use of security certified HSMs to
establish legally sound documents.
 Efficiently generate cryptographic keys and digital credentials to support high volume
production processes with high assurance credential management capabilities for secure
device authentication.
 Create secure outsourcing environments by establishing tamper-resistant, trusted
environments to protect critical application processes such as software loading, license
provisioning, and identity management.
 Strengthen critical web infrastructure with leading-edge DNS security capabilities
(DNSSEC) to reduce the risk of web site spoofing and service disruption.

1. Criminal offences (counterfeiting and piracy)

Infringement of trademarks and copyrights can be criminal offences, as well as being actionable
in civil law. A range of criminal provisions are set out in the relevant Acts, and other offences
such as those under the Fraud Act 2006 may also be applied. These criminal offences are most
often associated with organized crime groups who are dealing for profit in fake branded goods or
pirated products. However, these offences can also occur in legitimate business, for example if
an employee uses the workplace to produce and/or sell quantities of fake DVDs or branded
goods to colleagues or outside the office.

1.1. What is criminal intellectual property (IP) rights infringement?

Criminal IP offences are also known as “IP crime” or “counterfeiting” and “piracy”.
Counterfeiting can be defined as the manufacture, importation, distribution and sale of products
which falsely carry the trade mark of a genuine brand without permission and for gain or loss to
another. Piracy, which includes copying, distribution, importation etc. of infringing works, does
not always require direct profits from sales - wider and indirect benefits may be enough along
with inflicting financial loss onto the rights holder. For example possession of an infringing copy
of a work protected by copyright in the course of your business may be a criminal offence under
section 107 (1)(c) of the Copyright, Designs and Patents Act 1988.

Not all cases that fall within the criminal law provisions will be dealt with as criminal offences,
and in many cases business-to-business disputes are tackled by the civil law. Further
information is available on what the law is and in the guide to offences.

1.2. What does infringement mean

“Infringement” is a legal term for an act that breaks a law. IP rights are infringed when
a product, creation or invention protected by IP laws is exploited, copied or otherwise used
without the proper authorization, permission or allowance from the person who owns
those rights or their representative.

It can range from using technology protected by a patent to selling counterfeit
medicines/software or copying a film and making it available online.

All of these acts will constitute a civil infringement but some copyright and trade mark
infringements may also be a criminal offence such as the sale of counterfeits including clothing.

1.3. How will action be taken against you

Trading standards are primarily responsible for enforcing the criminal IP laws, with support from
the police, and with investigative assistance from the IP rights owners. Private criminal
investigations and prosecutions may also be launched by the right owners in some cases.

Criminal IP offences may be taking place in your workplace in a variety of ways. These include:

 employees selling copies of protected works or supplying fake goods within the working
environment
 company servers and equipment being used to make available (i.e. uploading) infringing
content to the internet with the knowledge of management
 using the work intranet to offer for sale infringing products to colleagues
 external visitors entering your premises, to sell counterfeit and pirated items
 using unlicensed software on business computer systems with the knowledge of
management

Not only can IP crime make you and your business liable to a potential fine of up to £50,000 and
a custodial sentence of up to 10 years, but counterfeiting and piracy can also affect your business
security and reputation, threaten your IT infrastructure and risk the health and safety of your
staff and consumers.

2 Risks for business

IP rights infringement, and in particular IP crime, threatens legitimate businesses and their staff,
and undermines consumer confidence. Your business may face a number of risks if you do not
take appropriate steps to tackle IP crime within your working environment.

Failure to address the problem could leave you and your business liable and at risk to criminal
and/or civil action. Under civil law you may be subject to court action and have to pay damages.
Criminal action may lead to unlimited fines, or a custodial sentence (which could be up to a
maximum of 10 years). You may also be vulnerable to threats from computer viruses and
malware.

You need to think not only about the way your business is conducted, but also to be aware that
the behaviour of your staff and their actions at work may incur liability for the organisation as
a whole.

2.1. Legal liability

Activities which result in IP rights being infringed can raise both civil and criminal law
liabilities. In some cases these activities may relate to something done directly by the business.
In other instances they may relate to an independent action of a member of staff at work.

2.2. Security risks

There are many security risks to a business from IP crime. These include the infiltration of
viruses and malware which can aid identity theft, threaten system security and slow down IT
networks.

2.3. Reputational risks

Good businesses attract respect and the trust of future partners. Adverse publicity relating to any
civil or criminal court action could affect how other businesses view you and how they choose to
deal with you.



2.4. Resource implications

IP crime can impact on the productivity of your business. Resource implications, such as staff
neglecting work tasks to carry out illegal activities, and IT system failure due to malware
problems, can have a detrimental effect.

3. Potential problem areas

IP rights are unfamiliar to many and can be complicated. One item can be protected by a number
of different IP rights, which can be infringed in different ways. A music CD will have copyright
in the music, so-called “mechanical” rights in the recording, design rights in the cover, and well-
known brands often register their names as trademarks.

In order to protect your business, and avoid serious legal and security risks, it is important:

 to understand how IP rights infringements can occur


 to have a strategy for avoiding them, and
 to know how to address such a problem if it arises

To assist in identifying instances where IP rights infringement can occur, a range of activities
and examples have been identified. Advice is available on steps to help you deal with an IP
rights infringement in your business.

3.1. Business activities

A business can infringe the IP rights of others by not having the correct license to support the
activities that take place within the business.

3.2. Staff activities

Staff infringing IP rights at work can impact productivity, put your systems at risk from malware
and put you and your business at risk of legal liability for their actions.

3.3. People visiting your workplace

Letting traders onto your premises to sell items to your staff could leave your business facing
legal liability. It can also compromise your site security plans.

There are many more potential problem areas; therefore it is vital that you and your business
understand how these problems might arise, so you can take steps to avoid them.

4. Dealing with infringement

The needs of businesses will vary. What is right for a factory unit or a small office may not suit
larger more complex organisations. The common thread is that doing nothing is not a sensible
option given the risks it can pose for you and your business. Whether your business is small or
large there is a range of actions you can take to make sure that IP rights infringement is not
occurring within your business environment.

Preventative steps will help to safeguard you and your business, but once infringing activities
have been identified, a fast and effective response is essential. You therefore need to be prepared,
even if you are not currently aware of any such problems in your business.

Clear processes and procedures will help you to embed respect for IP with managers and staff,
creating the right company ethos and ensuring that you identify potential problem areas and
manage them properly.

Staff and managers need to understand what IP is, how IP rights can be infringed and the risks
this can pose - both for them and for the business. Staff in corporate functions, such as Human
Resources (HR), Information Technology (IT), finance and procurement have a particularly
important role to play in spreading information and good practice.

4.1. Preventative procedures and policies

Guidance is available on the procedures and processes you and your business can adopt to
prevent infringement occurring. Information includes: HR policies, license management and
processes for site visits. Advice on what to do if you identify any criminal IP offences relating to
IP rights infringement taking place in your business is also covered.

4.2. Raising awareness within your business

Practical tools have been developed to help you educate staff and management about the
importance of IP and how to comply with the relevant law. These include sample slide packs to
help raise awareness and improve understanding.

5 Civil Infringement

The infringement of an IP right is a civil matter in the case of patents, trademarks, designs and
copyright. In the case of trademarks and copyright the act may also constitute a criminal IP
offence.

There are many potential problem areas; therefore it is vital that you and your business take
action to avoid these problems. Advice and guidance on dealing with IP rights infringement is
available.

5.1. How to avoid infringement

It is important that you and your business take preventative steps to avoid infringing the IP rights
of others by seeking permission - which usually means obtaining a license for the activity.



5.2. How will an IP rights owner take action against you?

If you are believed to be infringing IP rights, the owner may wish to take action through the civil
courts; other methods can also be used, such as mediation, the use of “cease and desist” letters or
by seeking to use other services in resolving disputes.

You may be liable for damages relating to any infringement.

6. Copyright infringement

Copyright owners generally have the right to authorize or prohibit any of the following things in
relation to their works:

 Copying the work in any way. For example, photocopying, reproducing a printed page by
handwriting, typing or scanning into a computer, or making a copy of recorded music
 issuing copies of the work to the public
 Renting or lending copies of the work to the public. However, some lending of copyright
works falls within the Public Lending Right Scheme and this lending does not infringe
copyright
 Performing, showing or playing the work in public. Obvious examples are performing
plays and music, playing sound recordings and showing films or videos in public. Letting
a broadcast be seen or heard in public also involves performance of music and other
copyright material contained in the broadcast
 Broadcasting the work or other communication to the public by electronic transmission.
This includes putting copyright material on the internet or using it in an on demand
service where members of the public choose the time that the work is sent to them
 making an adaptation of the work, such as by translating a literary or dramatic work,
transcribing a musical work and converting a computer program into a different computer
language or code

Copyright is infringed when any of the above acts are done without permission, whether directly
or indirectly and whether the whole or a substantial part of a work is used, unless what is done
falls within the scope of exceptions to copyright permitting certain minor uses.

Copyright is essentially a private right, so decisions about how to enforce your right (that is, what
to do when your copyright work is used without your permission) are generally for you to take.

Deliberate infringement of copyright on a commercial scale may be a criminal offence.

7. Patent infringement

Infringing a patent means manufacturing, using, selling or importing a patented product or
process without the patent owner’s permission.

The owner of a patent can take legal action against you and claim damages if you infringe their
patent.



7.1. How to avoid infringing

Patent applicants have to provide a full description of the invention. You can ask for an opinion
to check if what you want to do would infringe a particular patent. If it would infringe, you may
be able to agree terms with the owner, or even buy the patent from them.

If you are infringing get professional advice quickly from a patent attorney or solicitor, because
the owner can sue you.

7.2. What if someone sues you for infringing?

There are two basic types of defence if someone claims you are infringing their patent:

 You are not infringing - what you are doing does not infringe their patent claims, or
 The patent is invalid - you can take legal action to challenge the validity of the patent. If you
win, their patent may be cancelled (revoked).

The loser usually has to pay both sides’ costs, so think hard before starting legal action. If
someone intends to sue you for infringement, you can try to reach agreement with them on using
their patent. Get professional advice from a patent attorney or solicitor, but do not do or say
anything yourself.

8. Design infringement

By registering a design the proprietor obtains the exclusive right for 25 years (provided renewal
fees are paid every 5 years) to make, offer, put on the market, import or export the design, or
stock the product for the above purposes.

These rights are infringed by a third party who does any of the above with the design, for
commercial gain.

8.1. How to avoid infringing

The Intellectual Property Office (IPO) cannot advise you on whether your design would infringe
an existing design. If you are concerned that you may be infringing, you may wish to obtain
professional advice from a patent attorney, trade mark attorney or a solicitor.

If you are infringing you should be aware that the owner may be able to sue you. The legal
practitioner may also be able to advise you on agreeing, if it is possible, some form of terms
between you and the owner of the registered design (such as licensing the right to use the design
or buying it from them).

8.2. What if someone sues you for infringing?

There are two basic types of defence if someone claims you are infringing their design:

 you are not infringing - what you are doing does not infringe their design, or
 The design is invalid - you can take legal action to challenge the validity of the design. If
you win, their design may be cancelled (invalidated). The loser usually has to pay the
legal costs of both sides, so think hard before starting legal action. If someone intends to
sue you for infringement, you can try to reach agreement with them on using their design.

8.3. I think someone else may be infringing, what should I do

Get professional advice. You may be able to get a court order to force the infringer to cease
trading. You should then consider whether to negotiate or to take legal action for compensation.
However, infringement actions must be taken to the High Court of England and Wales, the High
Court of Northern Ireland or the Court of Session in Scotland. The IPO does not handle such
actions.

8.4. How to avoid infringement

It is important that you and your business take preventative steps to avoid infringing the IP rights
of others by seeking permission - which usually means obtaining a license for the activity.

8.5. How will an IP rights owner take action against you?

If you are believed to be infringing IP rights, the owner may wish to take action through the civil
courts; other methods can also be used, such as mediation, the use of “cease and desist” letters or
by seeking to use other services in resolving disputes.

You may be liable for damages relating to any infringement.

9. Trade mark infringement

If you use an identical or similar trade mark for identical or similar goods and services to those of
a registered trade mark, you may be infringing the registered mark if your use creates a likelihood
of confusion on the part of the public. This includes the case where, because of the similarities
between the marks, the public are led to the mistaken belief that the trade marks, although
different, identify the goods or services of one and the same trader.

Where the registered mark has a significant reputation, infringement may also arise from the use
of the same or a similar mark which, although not causing confusion, damages or takes unfair
advantage of the reputation of the registered mark. This can occasionally arise from the use of
the same or similar mark for goods or services which are dissimilar to those covered by the
registration of the registered mark.

9.1. What about unregistered trade marks

There is no available remedy for trade mark infringement if the earlier trade mark is
unregistered. Some unregistered trademarks may be protected under Common Law and this is
known as Passing off. However, whether or not they are protected will depend on the particular
circumstances, in particular:



 Whether, and to what extent, the owner of the unregistered trade mark was trading under the
name at the date of commencement of the use of the later mark;
 Whether the two marks are sufficiently similar, having regard to their fields of trade, so as to
be likely to confuse and deceive (whether or not intentionally) a substantial number of persons
into thinking that the junior user’s goods and services are those of the senior user;
 The extent of the damage that such confusion would cause to the goodwill in the senior
user’s business.

9.2. I think that I may be infringing, what should I do?

Get legal advice. There may be a number of potential courses of action or defenses open to you,
but this will very much depend on the particular circumstances of your case.

Some traders who think they may be infringing an earlier trade mark choose to cease trading
under the offending sign; others choose to approach the earlier trade mark owner and attempt to
negotiate a way forward that suits both parties, which may include a co-existence agreement.

If you decide that you are not infringing, or you have a good defence, you may decide to stand
your ground or even to sue the trade mark holder for making unjustified threats. In the worst case
scenario, you may have to change your trade mark and re-brand your products or services.

9.3. I think that someone else may be infringing, what should I do?

Get legal advice as the most suitable course of action will depend on the particular circumstances
of your case.

One potential option open to you is to write to the infringer. However you must be satisfied that
the earlier trade mark that you own and the activities of the infringer justify this. This is because
the law also protects traders from unjustifiable threats of trade mark infringement.

You may be able to negotiate a settlement which suits both parties, which may involve a co-
existence agreement. Another option is that you may be able to get a court order to force the
infringer to cease trading and pay compensation for damages. However, infringement actions
must be taken to the High Court or in Scotland, the Court of Session. We do not handle such
actions.

10. What is a coexistence agreement?

A coexistence agreement is a legal agreement whereby two parties agree to trade in the same or
similar market using an identical or similar trade mark.

The agreement is drawn up between parties and sets the parameters for each to use their trade
mark without the fear of infringement or legal action from the other(s).

The coexistence agreement sets out the terms and conditions the parties have agreed in order to
allow each other to undertake their respective business activities.

Whilst coexistence agreements may take many forms, and may also cover designs, copyright
and patents, entering into a formal, binding coexistence agreement reduces the likelihood of the
parties becoming involved in a costly and lengthy legal dispute in the future.

The specific details of a coexistence agreement are a matter only for the parties involved to
negotiate and the IPO cannot become a party to the negotiations.

11. Organisations representing copyright owners

Many groups of copyright owners are represented by a collecting society. A collecting society
will be able to agree licenses with users on behalf of owners and will collect any royalties the
owners are owed. In many cases a collecting society will offer a blanket license for all the works
by owners it represents, for example for music to be played in a shop or restaurant.

There are many collecting societies that operate for various types of copyright material:

 printed material
 artistic works and characters
 broadcast material
 TV listings
 film

12. The copyright tribunal

The Copyright Tribunal is an independent tribunal established by the Copyright, Designs and
Patents Act 1988. Its main role is to adjudicate in commercial licensing disputes between
collecting societies and users of copyright material in their business. It does not deal with
copyright infringement cases or with criminal “piracy” of copyright works. Copyright
infringement can be dealt with in the civil courts such as the High Court (Chancery Division),
the Intellectual Property Enterprise Court and certain county courts where there is also a
Chancery District Registry. Criminal matters are dealt with in the criminal courts. Where parties
are unable to reach agreement in commercial licensing disputes they might also wish to consider,
as an alternative to the Copyright Tribunal, mediation services.

13. Professional advice

Legal professionals who specialize in IP are useful in helping you to understand, obtain and
defend your IP rights. Details of professionals in your area can be obtained from any of the
following organisations:

 Institute of Trade Mark Attorneys (ITMA)


 Chartered Institute of Patent Attorneys (CIPA)
 Law Society - Can provide details of suitable solicitors in your area
 Bar Council - Can provide details of barristers licensed for public access

14. Annual IP crime report

The latest IP Crime Report 2012/13 was published on 29 July 2013. The report highlights current
and emerging threats surrounding counterfeiting and piracy, including those conducted via the
internet. The report also contains statistical data and enforcement activities from UK law
enforcement agencies such as trading standards, police and HM Revenue and Customs along
with industry bodies.

15. Reporting intellectual property crime

If you have concerns or are aware of any person who may be involved in IP crime, you may
report this to your local trading standards service - the leading authority enforcing
IP legislation - via the Citizens Advice Bureau, and/or through the anonymous reporting systems
of the charity CrimeStoppers and Action Fraud.

People involved in IP crime are often involved in other types of crime, such as benefit
fraud, drugs and people trafficking. Therefore, it is imperative that you report any instance of IP
crime that you are aware of to the enforcement authorities.

 Information systems ethical and social concerns

Ethical and Social Issues in Information Systems

Technology can be a double-edged sword. It can be the source of many benefits, but it can also
create new opportunities for invading your privacy and for enabling the reckless use of personal
information in a variety of decisions about you.

Understanding Ethical and Social Issues Related to Systems


In the past 10 years, we have witnessed, arguably, one of the most ethically challenging periods
for U.S. and global business. In today’s new legal environment, managers who violate the law
and are convicted will most likely spend time in prison. Ethics refers to the principles of right
and wrong that individuals, acting as free moral agents, use to make choices to guide their
behaviors. When using information systems, it is essential to ask, “What is the ethical and
socially responsible course of action?”

A Model for Thinking about Ethical, Social and Political Issues


Ethical, social, and political issues are closely linked. The ethical dilemma you may face as a
manager of information systems typically is reflected in social and political debate.

Fig. The Relationship between Ethical, Social, and Political Issues in an Information Society

Five Moral Dimensions of the Information Age


The major ethical, social, and political issues raised by information systems include the
following moral dimensions:

Information rights and obligations: What information rights do individuals and organizations
possess with respect to themselves? What can they protect?

Property rights and obligations: How will traditional intellectual property rights be protected
in a digital society in which tracing and accounting for ownership is difficult and ignoring such
property rights is so easy?

Accountability and control: Who can and will be held accountable and liable for the harm
done to individual and collective information and property rights?

System quality: What standards of data and system quality should we demand to protect
individual rights and the safety of society?

Quality of Life: What values should be preserved in an information- and knowledge-based
society?

Key Technology Trends that Raise Ethical Issues

Profiling – the use of computers to combine data from multiple sources and create electronic
dossiers of detailed information on individuals.

Nonobvious relationship awareness (NORA) – a more powerful profiling technology that can
take information about people from many disparate sources, such as employment applications,
telephone records, customer listings, and “wanted” lists, and correlate the relationships to find
obscure, hidden connections that might help identify criminals or terrorists.

Fig. Nonobvious relationship awareness (NORA)
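
To make the idea concrete, the following is a minimal sketch in Python of the kind of cross-source
correlation described above. It is not any vendor's actual NORA product; the record sets and the
choice of a phone number as the shared identifier are purely illustrative assumptions.

from collections import defaultdict

# Hypothetical, hard-coded record sets standing in for disparate data sources.
employment = [{"name": "A. Smith", "phone": "555-0101"}]
customers = [{"name": "Alice S.", "phone": "555-0101"}]
watchlist = [{"alias": "A.S.", "phone": "555-0101"}]

# Index every record from every source by a shared identifier (the phone number).
by_phone = defaultdict(list)
for source, records in [("employment", employment),
                        ("customers", customers),
                        ("watchlist", watchlist)]:
    for record in records:
        by_phone[record["phone"]].append((source, record))

# A phone number appearing in more than one source is a "nonobvious" link.
for phone, hits in by_phone.items():
    if len({source for source, _ in hits}) > 1:
        print(phone, "links records across sources:", hits)

Real profiling systems work at a far larger scale and use fuzzy matching on names and addresses,
but the underlying step of joining records on shared identifiers is the same.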

Ethics in an Information Society


Basic Concepts: Responsibility, Accountability, and Liability

Ethical choices are decisions made by individuals who are responsible for the consequences of
their actions. Responsibility is a key element and means that you accept the potential costs,
duties, and obligations for the decisions you make. Accountability is a feature of systems and
social institutions and means mechanisms are in place to determine who took responsible action,
and who is responsible. Liability is a feature of political systems in which a body of laws is in
place that permits individuals to recover the damages done to them by other actors, systems, or
organizations. Due process is a related feature of law-governed societies and is a process in
which laws are known and understood, and there is an ability to appeal to higher authorities to
ensure that the laws are applied correctly.

The Moral Dimensions of Information Systems
Information Rights: Privacy and Freedom in the Internet Age

Privacy is the claim of individuals to be left alone, free from surveillance or interference from
other individuals or organizations, including the state. Most American and European privacy law
is based on a regime called Fair Information Practices (FIP) first set forth in a report written in
1973 by a federal government advisory committee (U.S. Department of Health, Education, and
Welfare, 1973).

The European Directive on Data Protection

In Europe, privacy protection is much more stringent than in the United States. Unlike the United
States, European countries do not allow businesses to use personally identifiable information
without consumers’ prior consent. Informed consent can be defined as consent given with
knowledge of all the facts needed to make a rational decision.

Working with the European Commission, the U.S. Department of Commerce developed a safe
harbor framework for U.S. firms. A safe harbor is a private self-regulating policy and
enforcement mechanism that meets the objectives of government regulators and legislation but
does not involve government regulation or enforcement.

Internet Challenges to Privacy

Internet technology has posed new challenges for the protection of individual privacy.
Information sent over this vast network of networks may pass through many different computer
systems before it reaches its final destination. Each of these systems is capable of monitoring,
capturing, and storing communications that pass through it.

Cookies are small text files deposited on a computer's hard drive when a user visits a
website. Cookies identify the visitor’s web browser software and track visits to the website. Web
beacons, also called web bugs, are tiny objects invisibly embedded in e-mail messages and Web
pages that are designed to monitor the behavior of the user visiting a web site or sending e-mail.
Spyware can secretly install itself on an Internet user’s computer by piggybacking on larger
applications. Once installed, the spyware calls out to Web sites to send banner ads and other
unsolicited material to the user, and it can also report the user’s movements on the Internet to
other computers.
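
As a rough illustration of how a tracking cookie works, the sketch below uses only Python's
standard library. The cookie name visitor_id and the one-year lifetime are illustrative
assumptions, not any real site's configuration.

from http.cookies import SimpleCookie
import uuid

# First visit: the site issues a unique, long-lived identifier to the browser.
issued = SimpleCookie()
issued["visitor_id"] = uuid.uuid4().hex
issued["visitor_id"]["path"] = "/"
issued["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # keep the cookie for about a year
print(issued.output())  # emits a "Set-Cookie: visitor_id=..." header line

# Later visit: the browser sends the cookie back, so the site (or an embedded
# third party) can recognise the same visitor and log the new page view.
returned = SimpleCookie()
returned.load("visitor_id=" + issued["visitor_id"].value)
print("Recognised returning visitor:", returned["visitor_id"].value)

Web beacons typically rely on the same mechanism: the tiny embedded object is fetched from a
tracking server, which reads the identifier returned with the request.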

Property Rights: Intellectual Property


Intellectual property is considered to be intangible property created by individuals or
corporations. Information technology has made it difficult to protect intellectual property
because computerized information can be so easily copied or distributed on networks.
Intellectual property is subject to a variety of protections under three different legal traditions:
trade secrets, copyright, and patent law.

Trade Secrets
Any intellectual work product – a formula, device, pattern, or compilation of data – used for a
business purpose can be classified as a trade secret, provided it is not based on information in the
public domain.

Copyright
Copyright is a statutory grant that protects creators of intellectual property from having their
work copied by others for any purpose during the life of the author plus an additional 70 years
after the author’s death.

Patents
A patent grants the owner an exclusive monopoly on the ideas behind an invention for 20 years.
The congressional intent behind patent law was to ensure that inventors of new machines,
devices, or methods receive the full financial and other rewards of their labor and yet make
widespread use of the invention possible by providing detailed diagrams for those wishing to use
the idea under license from the patent’s owner.

System Quality: Data Quality and System Errors


Three principal sources of poor system performance are (1) software bugs and errors, (2)
hardware or facility failures caused by natural or other causes, and (3) poor input data quality.
The software industry has not yet arrived at testing standards for producing software of
acceptable but not perfect performance.

Quality of Life: Equity, Access, and Boundaries


Balancing Power: Center versus Periphery

Lower-level employees may be empowered to make minor decisions, but the key policy
decisions may remain as centralized as in the past.

Rapidity of Change: Reduced Response Time to Competition
Information systems have helped to create much more efficient national and international
markets. The now-more-efficient global marketplace has reduced the normal social buffers that
permitted businesses many years to adjust to competition. We stand the risk of developing a
“just-in-time society” with “just-in-time jobs” and “just-in-time” workplaces, families, and
vacations.

Maintaining Boundaries: Family, Work, and Leisure


The danger of ubiquitous computing, telecommuting, nomad computing, and the “do anything
anywhere” computing environment is that it is actually coming true. The traditional boundaries
that separate work from family and just plain leisure have been weakened. The work umbrella
now extends far beyond the eight-hour day.

Dependence and Vulnerability


Today our businesses, governments, schools, and private associations, such as churches are
incredibly dependent on information systems and are, therefore, highly vulnerable if these
systems fail. The absence of standards and the criticality of some system applications will
probably call forth demands for national standards and perhaps regulatory oversight.

Computer Crime and Abuse


New technologies, including computers, create new opportunities for committing crimes by
creating new valuable items to steal, new ways to steal them, and new ways to harm others.
Computer crime is the commission of illegal acts through the use of a computer or against a computer
computer system. Simply accessing a computer system without authorization or with intent to do
harm, even by accident, is now a federal crime.

Computer abuse is the commission of acts involving a computer that may not be illegal but that
are considered unethical. The popularity of the Internet and e-mail has turned one form of
computer abuse – spamming – into a serious problem for both individuals and businesses. Spam
is junk e-mail sent by an organization or individual to a mass audience of Internet users who
have expressed no interest in the product or service being marketed.

Employment: Trickle-Down Technology and Reengineering Job Loss

Reengineering work is typically hailed in the information systems community as a major benefit
of new information technology. It is much less frequently noted that redesigning business
processes could potentially cause millions of mid-level managers and clerical workers to lose
their jobs. One economist has raised the possibility that we will create a society run by a small
“high tech elite of corporate professionals…in a nation of permanently unemployed” (Rifkin,
1993). Careful planning and sensitivity to employee needs can help companies redesign work to
minimize job losses.

Equity and Access: Increasing Racial and Social Class Cleavages

Several studies have found that certain ethnic and income groups in the United States are less
likely to have computers or online Internet access even though computer ownership and Internet
access have soared in the past five years. A similar digital divide exists in U.S. schools, with

schools in high-poverty areas less likely to have computers, high-quality educational technology
programs, or internet access availability for their students. Public interest groups want to narrow
this digital divide by making digital information services – including the Internet – available to
virtually everyone, just as basic telephone service is now.

Health Risks: RSI, CVS, and Technostress

The most common occupational disease today is repetitive stress injury (RSI). RSI occurs when
muscle groups are forced through repetitive actions often with high-impact loads (such as tennis)
or tens of thousands of repetitions under low-impact loads (such as working at a computer
keyboard).

The single largest source of RSI is computer keyboards. The most common kind of computer-
related RSI is carpal tunnel syndrome (CTS), in which pressure on the median nerve through the
wrist’s bony structure, called a carpal tunnel, produces pain. Millions of workers have been
diagnosed with carpal tunnel syndrome. Computer vision syndrome (CVS) refers to any
eyestrain condition related to display screen use in desktop computers, laptops, e-readers, smart-
phones, and hand-held video games. Its symptoms, which are usually temporary, include
headaches, blurred vision, and dry and irritated eyes.

The newest computer-related malady is technostress, which is stress induced by computer use.
Its symptoms include aggravation, hostility toward humans, impatience, and fatigue.
Technostress is thought to be related to high levels of job turnover in the computer industry, high
levels of early retirement from computer-intense occupations, and elevated levels of drug and
alcohol abuse.

Summary
Technology can be a double-edged sword. It can be the source of many benefits, but it can also
create new opportunities for invading your privacy and for enabling the reckless use of personal
information in a variety of decisions about you. The computer has become a part of our lives –
personally as well as socially, culturally, and politically. It is unlikely that the issues and our
choices will become easier as information technology continues to transform our world. The
growth of the Internet and the information economy suggests that all the ethical and social issues
we have described will be heightened further as we move into the first digital century.

 Telecommuting and ethical issues of the worker

Workplace Issues in Telecommuting

A Utilitarian Analysis of Telecommuting

A recent decision by Yahoo CEO Marissa Mayer to implement a ban on telecommuting --
precluding employees from working from home -- may have detrimental effects on both worker
productivity and morale, according to faculty experts.

“To become the absolute best place to work, communication and collaboration will be important,
so we need to be working side-by-side,” Mayer wrote in a memo to employees. “Speed and
quality are often sacrificed when [employees] work from home.”

Just three days prior to the memo’s distribution, Nicholas Bloom, professor of economics,
published a study called “Does Working from Home Work? Evidence from a Chinese
Experiment.” The study found that employees who worked from home enjoyed a 13 percent
increase in productivity compared to their office-bound peers, and has since been extensively
cited in articles contesting Mayer’s approach.

“It’s very far out there to say that no one [at all] can work at home,” Bloom said. “I can see two
reasons [for the extreme action]. One [is] to ‘reset’ everything and reconnect people to the office.
The second is that it’s a cheap way to downsize—to make people quit.”

I don’t know about the motivation “to make people quit,” but I can sympathize with the idea of
getting people to reconnect. We have become too dependent on electronic devices in our lives –
smartphones, tablets, laptops, and desktops. It seems as though few people want to talk directly
with others. We vent our feelings through the impersonal world of Facebook and Twitter and say
things – sometimes hurtful things – that we would not say in person and to someone’s face.

I like to analyze ethical issues using a utilitarian analysis, which calls for evaluating benefits and
harms of alternative actions. Starting with the benefits, telecommuting supports alternative
lifestyles, especially two-wage-earner families with children. The quality of life can improve through
work-life balance decisions, and children benefit by having a parent around at times when the child
would otherwise be in day care.

Telecommuting also opens up opportunities for the disabled to be more productive members of
the workforce by utilizing the skills they have developed. In other words, there is a human
element to telecommuting that seems more important than the fact that there is little “face
time.” Face-to-face meetings can still occur when needed, through advance planning.

A variety of concerns have been raised about telecommuting including the following:

 Is the home-based worker doing productive work?


 How should time worked be measured?

 How should the company establish that the home-based employee works a 30-hour week?
 Should employers inspect home offices for potential OSHA violations?

Researchers have found that while teleworking gives workers more flexibility to manage work and life, it
also creates distractions, particularly when a home office is not clearly defined. Experts say
working from the kitchen table is not the best as distractions in the home can make it difficult to
focus on work.

From a corporate culture standpoint, certain positions are more conducive to telework,
including professional specialty positions; executive, administrative and managerial roles; and sales
and administrative support, including clerical work. The services industry employs more
telecommuters than any other industry. Teleworkers and employers both agree it takes discipline
to telecommute. Tact and communications skills are important because of the loss of face-to-face
contact with clients, coworkers and bosses. This is especially true with email because the intent
of messages can be lost in the interpretation without the ability to see the nonverbal cues.

From an ethical viewpoint, a conclusion can be drawn that teleworking has advantages for
employers and employees for the reasons stated earlier. As technology continues to improve and
the need to reduce costs remains at the forefront of improved profits and earnings, more
companies will begin to look for ways to implement telework programs. Companies that have
succeeded use telework as a competitive advantage to recruit and retain the best talent, as a cost
efficiency measure to improve profit margins, and as a cost effective way to do business.
Yahoo's policy may be bucking a trend in this regard and it might cost them good employees in
the long run.

Public-sector agencies, like their counterparts in the private sector, are embracing the idea of
telework. President Barack Obama signed the Telework Enhancement Act of 2010, which
required Federal agencies to improve their use of telework as a strategic management tool. As
early as 2003, the federal government began experimenting with telework when 130 employees
from nine federal departments and agencies participated in a free telecenter program offered by
the General Services Administration. The GSA surveyed the workers after a 60-day pilot
program and found 75 percent of those that participated chose to continue teleworking.

Teleworkers can avoid ethical dilemmas by exceeding performance measurements and
contributing to the success of the company. As the concept of telework grows and spreads across
industries where it is feasible to occur, the ethical issues dissipate and create cultural
environments that foster trust, autonomy and efficient use of time and technology. Where work is
done becomes secondary to how work is done, which encourages a utilitarian theory of ethics.

It has been said that ethics is all about what you do when no one is looking. This applies to
telecommuting in particular since issues of supervision and what one does while working on a
job create challenges for those who monitor behavior.

 Codes of ethics for IT professionals

Code of Ethics

I acknowledge:

 That I have an obligation to management, therefore, I shall promote the understanding of
information processing methods and procedures to management using every resource at
my command.
 That I have an obligation to my fellow members, therefore, I shall uphold the high ideals
of AITP as outlined in the Association Bylaws. Further, I shall cooperate with my fellow
members and shall treat them with honesty and respect at all times.
 That I have an obligation to society and will participate to the best of my ability in the
dissemination of knowledge pertaining to the general development and understanding of
information processing. Further, I shall not use knowledge of a confidential nature to
further my personal interest, nor shall I violate the privacy and confidentiality of
information entrusted to me or to which I may gain access.
 That I have an obligation to my College or University, therefore, I shall uphold its ethical
and moral principles.
 That I have an obligation to my employer whose trust I hold, therefore, I shall endeavor
to discharge this obligation to the best of my ability, to guard my employer's interests,
and to advise him or her wisely and honestly.
 That I have an obligation to my country, therefore, in my personal, business, and social
contacts, I shall uphold my nation and shall honor the chosen way of life of my fellow
citizens.
 I accept these obligations as a personal responsibility and as a member of this
Association. I shall actively discharge these obligations and I dedicate myself to that end.

Standard of Conduct

These standards expand on the Code of Ethics by providing specific statements of behavior in
support of each element of the Code. They are not objectives to be strived for; they are rules that
no true professional will violate. It is first of all expected that an information processing
professional will abide by the appropriate laws of their country and community. The following
standards address tenets that apply to the profession.

In recognition of my obligation to management I shall:

 Keep my personal knowledge up-to-date and insure that proper expertise is available
when needed.
 Share my knowledge with others and present factual and objective information to
management to the best of my ability.
 Accept full responsibility for work that I perform.
 Not misuse the authority entrusted to me.
 Not misrepresent or withhold information concerning the capabilities of equipment,
software or systems.

 Not take advantage of the lack of knowledge or inexperience on the part of others.

In recognition of my obligation to my fellow members and the profession I shall:

 Be honest in all my professional relationships.


 Take appropriate action in regard to any illegal or unethical practices that come to my
attention. However, I will bring charges against any person only when I have reasonable
basis for believing in the truth of the allegations and without any regard to personal
interest.
 Endeavor to share my special knowledge.
 Cooperate with others in achieving understanding and in identifying problems.
 Not use or take credit for the work of others without specific acknowledgement and
authorization.
 Not take advantage of the lack of knowledge or inexperience on the part of others for
personal gain.

In recognition of my obligation to society I shall:

 Protect the privacy and confidentiality of all information entrusted to me.


 Use my skill and knowledge to inform the public in all areas of my expertise.
 To the best of my ability, insure that the products of my work are used in a socially
responsible way.
 Support, respect, and abide by the appropriate local, state, provincial, and federal laws.
 Never misrepresent or withhold information that is germane to a problem or situation of
public concern nor will I allow any such known information to remain unchallenged.
 Not use knowledge of a confidential or personal nature in any unauthorized manner or to
achieve personal gain.

In recognition of my obligation to my employer I shall:

 Make every effort to ensure that I have the most current knowledge and that the proper
expertise is available when needed.
 Avoid conflict of interest and insure that my employer is aware of any potential conflicts.
 Present a fair, honest, and objective viewpoint.
 Protect the proper interests of my employer at all times.
 Protect the privacy and confidentiality of all information entrusted to me.
 Not misrepresent or withhold information that is germane to the situation.
 Not attempt to use the resources of my employer for personal gain or for any purpose
without proper approval.
 Not exploit the weakness of a computer system for personal gain or personal satisfaction.

Codes of ethics in computing
Four notable examples of ethics codes for IT professionals are listed below:
RFC 1087
In January 1989, the Internet Architecture Board (IAB), in RFC 1087, defined an activity as
unethical and unacceptable if it:
1. Seeks to gain unauthorized access to the resources of the Internet.
2. Disrupts the intended use of the Internet.
3. Wastes resources (people, capacity, and computer) through such actions.
4. Destroys the integrity of computer-based information, or
5. Compromises the privacy of users.

The Code of Fair Information Practices


The Code of Fair Information Practices is based on five principles outlining the requirements for
record-keeping systems. These principles were set out in 1973 by the U.S. Department of
Health, Education and Welfare.
1. There must be no personal data record-keeping systems whose very existence is secret.
2. There must be a way for a person to find out what information about the person is in a
record and how it is used.
3. There must be a way for a person to prevent information about the person that was
obtained for one purpose from being used or made available for other purposes without
the person's consent.
4. There must be a way for a person to correct or amend a record of identifiable information
about the person.
5. Any organization creating, maintaining, using, or disseminating records of identifiable
personal data must assure the reliability of the data for their intended use and must take
precautions to prevent misuses of the data.

Ten Commandments of Computer Ethics

The Computer Ethics Institute, a nonprofit organization whose mission is to advance technology
by ethical means, defined these rules in 1992 as a guide to computer ethics:
1. Thou shalt not use a computer to harm other people.
2. Thou shalt not interfere with other people's computer work.
3. Thou shalt not snoop around in other people's computer files.
4. Thou shalt not use a computer to steal.
5. Thou shalt not use a computer to bear false witness.
6. Thou shalt not copy or use proprietary software for which you have not paid.
7. Thou shalt not use other people's computer resources without authorization or proper
compensation.
8. Thou shalt not appropriate other people's intellectual output.
9. Thou shalt think about the social consequences of the program you are writing or the
system you are designing.
10. Thou shalt always use a computer in ways that ensure consideration and respect for your
fellow humans.

(ISC)² code of ethics
(ISC)², an organization committed to the certification of computer security professionals, has further
defined its own code of ethics, summarized as:
1. Act honestly, justly, responsibly, and legally, and protect the commonwealth.
2. Work diligently and provide competent services and advance the security profession.
3. Encourage the growth of research – teach, mentor, and value the certification.
4. Discourage unsafe practices, and preserve and strengthen the integrity of public
infrastructures.
5. Observe and abide by all contracts, expressed or implied, and give prudent advice.
6. Avoid any conflict of interest, respect the trust that others put in you, and take on only
those jobs you are qualified to perform.
7. Stay current on skills, and do not become involved with activities that could injure the
reputation of other security professionals.

 Professional ethics and values on the web and Internet

The types of activities which are taking place on the Net can be analysed as follows:

 Communications, previously through e-mail, but increasingly through telephony using
Internet Protocol (IP) networks
 The provision of information, whether through databases to which access is normally
limited or through Web sites which are open to all Internet users with a suitable browser
 E-commerce, whether it is business to consumer (B2C) or – currently four times the size –
business to business (B2B)
 E-Government whereby Government departments interact with citizens, from the simple
provision of information to the completion of forms, through to various transactions.

What are the implications of this range of services for the ethics debate?

1. The Internet is not one network but many – indeed it is a network of networks. It does not
provide one type of service offering but many – and this range will increase. These
services have many different characteristics and the ethics debate has to take account of
this – how we approach chat rooms may not be how we approach newsgroups,
especially where children are concerned.
2. The Internet has many actors with different interests. Infrastructure companies like Cisco
or Oracle may have little or no involvement in content. Microsoft may start by ‘simply’
providing a browser (Explorer) and then go into the portal business (MSN). Not all
Internet Service Providers (ISPs) provide access to all newsgroups and most chat rooms
are not hosted by ISPs. If one is attempting to bring a sense of ethics to the Internet in any
particular instance, it is essential to know who has the control and the responsibility.
3. There is still poor understanding of the issues. On the one hand, those who
campaign for more ‘control’ of the Internet often have little understanding of the
technological complexities. Typically they do not know how newsgroups and chat rooms
are hosted and many politicians do not know the difference between a newsgroup and a

Web site. On the other hand, many providers of Internet infrastructure and services have
little understanding of, let alone sympathy for, the concerns of users. Frequently
complaints about material or requests for meetings are dealt with in a cavalier fashion or
even ignored.
4. Increasingly the debate about the content of the Internet is not national but global, not by
specialists but by the general populace. There is a real need for this debate to be
stimulated and structured and for it to lead to ‘solutions’ which are focused, practical and
urgent.

IS THERE A PLACE FOR ETHICS?

In considering whether there is a place for ethics on the Internet, we need to have an understanding
of what such a grand word as ‘ethics’ means in this context. I suggest that it means four things:

1. Acceptance that the Internet is not a value-free zone

This means that the World Wide Web is not the wild Web, but instead a place where
values in the broadest sense should take a part in shaping content and services. This is
recognition that the Internet is not something apart from civil society, but increasingly a
fundamental component of it.

2. Application of off-line laws to the on-line world

This means that we do not invent a new set of values for the Internet but, for all the
practical problems, endeavor to apply the law which we have evolved for the physical
space to the world of cyberspace. These laws might cover issues like child pornography,
race hate, libel, copyright and consumer protection.

3. Sensitivity to national and local cultures

This means recognizing that, while originally most Internet users were white, male
Americans, now the Internet belongs to all. As a pervasively global phenomenon, it
cannot be subject to one set of values like a local newspaper or national television station;
somehow we have to accommodate a multiplicity of value systems.

4. Responsiveness to customer or user opinion

This means recognizing that users of the Internet – and even non-users – are entitled to
have a view on how it works. At the technical level, this is well understood – bodies like
the Internet Engineering Task Force (IETF), the Internet Corporation for Assigned
Names and Numbers (ICANN) and the World Wide Web Consortium (W3C) endeavor to
understand and reflect user views. However, at no level do we have similar mechanisms
for capturing user opinions on content and access to it.

Now that we have a better understanding of what ethics means in the context of the Internet, we
need to address the question: whose responsibility is ethics on the net? The answer is that
responsibility should be widely spread.

 Government is the democratic mechanism for deciding what activity is unacceptable –
and therefore should be criminalized – in a particular society. As far as practical, these
same laws should be applied to the Internet. Not many new laws – hacking is one
example – are necessary.
 Once laws have been made, they should be enforced – in cyberspace as much as in the real
world – yet, in many jurisdictions, the police themselves have too little technical expertise and
resource.
 Internet service providers have to accept that they are not the same as the
telecommunications operator or the postal service which deliver private one-to-one
messages. Although, given the nature of the Internet, they cannot possibly be expected to
pre-check content, once they receive a notification or a complaint about something they
are carrying or hosting, they have to take a view.
 Equally, the operators of services on the Internet have to take account of how that service
might reasonably be expected to be used. For instance, if a Web hosting company carries
a site providing information on bomb making or suicide assistance, they cannot claim to
have no responsibility if that information is used. Or, if a chat room is used by a pedophile
to groom a young girl before he manages to meet and abuse her, the operator of the chat
room cannot deny any responsibility. This is not a matter of legal liability but of moral
responsibility.
 Of course, Governments, law enforcement, ISPs and service operators can only do so
much - which is why we have to empower end users. Consumers should be given the
knowledge and the tools to apply their own ethical codes to use of the Internet by
themselves and their families. Parents and teachers have a special responsibility in this
regard.
 Finally, we need a compelling recognition that children must have special protection. Use
of the Internet is not like watching television: the device is not shared in real time with
other members of the family in a public space like the living room and broadcasting
conventions like the ‘watershed’ (no adult content before 9pm) do not apply. We need
new defence mechanisms.

In seeking to apply a sense of ethics to cyberspace, there are some major problems but also some
useful solutions.

Among the problems are:

 Jurisdictional competence: Laws are nation-based but cyberspace is global. How does
one apply up to 170 separate and different legal systems to the Internet?
 Technological complexities: The Internet is a complex technical network and one cannot
simply apply ‘old’ regulatory conventions from the worlds of publishing or broadcasting.
 The ‘geeks’ vs the ‘suits’: As many Internet-related companies have grown, there is
now an internal tension between the old-timers, with their vast technical knowledge, and

the newcomers, who are more likely to be marketing people much more aware of
consumer concerns.
 Populist campaigns: The Internet is still so new and so mysterious for many that it is
still relatively easy for a populist campaign to be whipped up which exaggerates the
dangers of Internet content and/or minimizes the technical complexities of dealing with
it. We must be sensitive to consumer concerns, but the agenda cannot be determined by
ill-informed politicians looking for votes or newspapers seeking to boost circulation.

Among the solutions are:

 Modernization of laws: Governments need to consider whether pre-Internet laws need
updating to take account of new crimes such as cyberstalking or grooming in chat
rooms.
 More high tech crime fighters: Law enforcement agencies need more people with
greater technical training and resource to tackle increasingly sophisticated cyber
criminals such as pedophile rings. One example is the recent creation of the National
High Tech Crime Unit in the UK.
 ‘Notice and take down’ mechanisms: We need organisations to which Internet users can
report allegedly criminal content in the confident knowledge that this hotline is equipped
to judge the legality and identify the hosting of material so that, if it is illegal and if it is
in their jurisdictional area, they can issue a notice to the relevant ISP to remove it. A
good example of such an operation is the Internet Watch Foundation in the UK.
 Labelling and filtering: We can best empower end users by greater labelling or rating of
Internet content and greater use of more sophisticated filtering software. The Internet
Content Rating Association (ICRA) has made considerable progress in developing and
promoting a genuinely global, culturally independent labelling system. A wide range of
companies provide filtering software which operates on different principles. In this way,
households can make their own decisions based on their own cultural or ethical values (a
minimal filtering sketch follows this list).
 Walled gardens: For young children or as a transitional stage to full Internet access, one
could use a ‘walled garden’ which restricts access to those sites pre-selected by a
particular provider, typically with a child-friendly brand.
 Better supervision of children: All those with responsibility for children – especially
parents, guardians, teachers and carers – need to become better aware of some of the
problems of Internet use by children and the range of solutions which are available. They
cannot rely, though, on technical solutions – regular conversation with, and observation
of, the child is essential.
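
The following is a minimal sketch in Python of the filtering idea referred to in the list above. The
blocked host names are hypothetical placeholders; real filtering products rely on content labels,
large category databases and far more sophisticated matching.

from urllib.parse import urlparse

# Hypothetical household blocklist; real filters ship and update large category lists.
BLOCKLIST = {"unsuitable-example.test", "gambling-example.test"}

def is_allowed(url: str) -> bool:
    """Return True if the URL's host is not on the household's blocklist."""
    host = (urlparse(url).hostname or "").lower()
    return host not in BLOCKLIST

print(is_allowed("http://unsuitable-example.test/page"))  # False - request would be filtered
print(is_allowed("https://www.example.org/news"))         # True  - request allowed

A ‘walled garden’ works the other way around: instead of a blocklist, access is permitted only to
hosts on a pre-selected allowlist.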

So, how will all this come about?

 We need to give Internet users more relevant information. This should start at the point at
which one purchases a PC or other Internet-enabled device. There should then be further
information in both appropriate physical places – like school rooms – and relevant
cyberspaces – like child-focused chat rooms.
 We need a more informed debate through education and awareness campaigns. We
cannot leave the terrain to civil libertarian ‘purists’, who too often see the Internet as a

value-free or (as they would put it) censorship-free zone, or to the scare merchants who
would have us believe that the Internet has more filth than facts.
 Ideally, there should be some sort of organisational focus for this debate and the
promotion of advice, education and awareness. In the UK, the Home Office has recently
established a Task Force chaired by Lord Bassam; there is the Internet Crime Forum
which brings together law enforcement, children’s organisations and others and there is
the Internet Watch Foundation, which started as a hotline to combat criminal content but is
now developing a wide-ranging education and awareness programme.
 In a sense, the passage of time and greater familiarity with the new Internet medium may
– almost of itself – ease some of the difficulties. In some respects, we are experiencing
the kind of reactions seen in the early days of the telegraph or television and we will learn
to adjust to the new challenges and opportunities.

 Objectivity and integrity in computing


Integrity and objectivity: In the performance of any professional service, a member shall
maintain objectivity and integrity, shall be free of conflicts of interest, and shall not knowingly
misrepresent facts or subordinate his or her judgment to others.

Integrity and Objectivity

1. Knowing misrepresentations in the preparation of financial statements or records. A
member shall be considered to have knowingly misrepresented facts in violation of rules
when he or she knowingly—

a. Makes, or permits or directs another to make, materially false and misleading entries in
an entity’s financial statements or records; or
b. Fails to correct an entity’s financial statements or records that are materially false and
misleading when he or she has the authority to record an entry; or
c. Signs, or permits or directs another to sign, a document containing materially false and
misleading information.

2. Conflicts of interest. A conflict of interest may occur if a member performs a professional
service for a client or employer and the member or his or her firm has a relationship with
another person, entity, product, or service that could, in the member's professional judgment,
be viewed by the client, employer, or other appropriate parties as impairing the member's
objectivity. If the member believes that the professional service can be performed with
objectivity, and the relationship is disclosed to and consent is obtained from such client,
employer, or other appropriate parties, the rule shall not operate to prohibit the performance
of the professional service.
Certain professional engagements, such as audits, reviews, and other attest services, require
independence. Independence impairments under rule 101 [ET section 101.01], its
interpretations, and rulings cannot be eliminated by such disclosure and consent.

The following are examples, not all-inclusive, of situations that should cause a member to
consider whether or not the client, employer, or other appropriate parties could view the
relationship as impairing the member's objectivity:

 A member has been asked to perform litigation services for the plaintiff in connection
with a lawsuit filed against a client of the member's firm.
 A member has provided tax or personal financial planning (PFP) services for a married
couple who are undergoing a divorce, and the member has been asked to provide the
services for both parties during the divorce proceedings.
 In connection with a PFP engagement, a member plans to suggest that the client invest in
a business in which he or she has a financial interest.
 A member provides tax or PFP services for several members of a family who may have
opposing interests.
 A member has a significant financial interest, is a member of management, or is in a
position of influence in a company that is a major competitor of a client for which the
member performs management consulting services.
 A member serves on a city's board of tax appeals, which considers matters involving
several of the member's tax clients.
 A member has been approached to provide services in connection with the purchase of
real estate from a client of the member's firm.
 A member refers a PFP or tax client to an insurance broker or other service provider,
which refers clients to the member under an exclusive arrangement to do so.
 A member recommends or refers a client to a service bureau in which the member or
partner(s) in the member's firm hold material financial interest(s).

The above examples are not intended to be all-inclusive.

3. Obligations of a member to his or her employer's external accountant. A member must
maintain objectivity and integrity in the performance of a professional
service. In dealing with his or her employer's external accountant, a member must be candid
and not knowingly misrepresent facts or knowingly fail to disclose material facts. This would
include, for example, responding to specific inquiries for which his or her employer's
external accountant requests written representation.
4. Subordination of judgment by a member. Rule 102 prohibits a member from knowingly
misrepresenting facts or subordinating his or her judgment when performing professional
services. Under this rule, if a member and his or her supervisor have a disagreement or
dispute relating to the preparation of financial statements or the recording of transactions, the
member should take the following steps to ensure that the situation does not constitute a
subordination of judgment:

1. The member should consider whether (a) the entry or the failure to record a transaction in
the records, or (b) the financial statement presentation or the nature or omission of
disclosure in the financial statements, as proposed by the supervisor, represents the use of
an acceptable alternative and does not materially misrepresent the facts. If, after
appropriate research or consultation, the member concludes that the matter has

authoritative support and/or does not result in a material misrepresentation, the member
need do nothing further.
2. If the member concludes that the financial statements or records could be materially
misstated, the member should make his or her concerns known to the appropriate higher
level(s) of management within the organization (for example, the supervisor's immediate
superior, senior management, the audit committee or equivalent, the board of directors,
the company's owners). The member should consider documenting his or her
understanding of the facts, the accounting principles involved, the application of those
principles to the facts, and the parties with whom these matters were discussed.
3. If, after discussing his or her concerns with the appropriate person(s) in the organization,
the member concludes that appropriate action was not taken, he or she should consider
his or her continuing relationship with the employer. The member also should consider
any responsibility that may exist to communicate to third parties, such as regulatory
authorities or the employer's (former employer's) external accountant. In this connection,
the member may wish to consult with his or her legal counsel.
4. The member should at all times be cognizant of his or her obligations under interpretation
102-3 [ET section 102.04].
5. Applicability of rule 102 to members performing educational services. Educational
services (for example, teaching full- or part-time at a university, teaching a continuing
professional education course, or engaging in research and scholarship) are professional
services as defined in ET section 92.11, and are therefore subject to rule 102 [ET section
102.01]. Rule 102 [ET section 102.01] provides that the member shall maintain
objectivity and integrity, shall be free of conflicts of interest, and shall not knowingly
misrepresent facts or subordinate his or her judgment to others.
6. Professional services involving client advocacy. A member or a member's firm may be
requested by a client—

1. To perform tax or consulting services engagements that involve acting as an advocate for
the client.
2. To act as an advocate in support of the client's position on accounting or financial
reporting issues, either within the firm or outside the firm with standard setters,
regulators, or others.

Services provided or actions taken pursuant to such types of client requests are professional
services [ET section 92.11] governed by the Code of Professional Conduct and shall be
performed in compliance with Rule 201, General Standards [ET section 201.01], Rule 202,
Compliance With Standards [ET section 202.01], and Rule 203, Accounting Principles [ET
section 203.01], and interpretations thereof, as applicable. Furthermore, in the performance of
any professional service, a member shall comply with rule 102 [ET section 102.01], which
requires maintaining objectivity and integrity and prohibits subordination of judgment to others.
When performing professional services requiring independence, a member shall also comply
with rule 101 [ET section 101.01] of the Code of Professional Conduct.

Moreover, there is a possibility that some requested professional services involving client
advocacy may appear to stretch the bounds of performance standards, may go beyond sound and
reasonable professional practice, or may compromise credibility, and thereby pose an

unacceptable risk of impairing the reputation of the member and his or her firm with respect to
independence, integrity, and objectivity. In such circumstances, the member and the member's
firm should consider whether it is appropriate to perform the service.

 The role of professional Societies in enforcing professional standards in Computing
