
1. Testing Technical Controls
 A technical control is a security control implemented through the use of an IT asset.
This asset is usually, but not always, some sort of software or hardware that is
configured in a particular way.

1.1. Vulnerability Testing


NOTE: Before carrying out vulnerability testing, a written agreement from management is
required! This protects the tester against prosecution for doing their job and ensures there
are no misunderstandings by providing in writing what the tester should—and should not—do.

 A vulnerability test is an examination of a system for the purpose of identifying,
defining, and ranking its vulnerabilities.
 Vulnerability testing, whether conducted manually, through automated tools, or a
combination, necessitates individuals or consultants with a deep security background
and a high level of trustworthiness. Automated vulnerability scanning tools, while
valuable, may generate results that can be misinterpreted as false positives or highlight
vulnerabilities that may not be significant in a specific environment or are adequately
addressed elsewhere. Additionally, there could be cases where two seemingly minor
vulnerabilities, when combined, pose a critical threat. Therefore, a comprehensive
approach, involving skilled and trustworthy personnel, is essential for accurate
vulnerability testing and interpretation.
 The goals of the assessment are to
 Evaluate the true security posture of an environment [fewer false positives].
 Identify as many vulnerabilities as possible, with honest evaluations and
prioritizations of each.
 Test how systems react to certain circumstances and attacks, to learn not only what
the known vulnerabilities are [such as this version of the database, that version of
the operating system, or a user ID with no password set] but also how the unique
elements of the environment might be abused [SQL injection attacks, buffer
overflows, and process design flaws that facilitate social engineering].

1.1.1. Different Types of Assessments


 Management must understand that as the environment changes, new vulnerabilities can
arise. Management should also recognize the various assessment types, each capable of
revealing different weaknesses.
 The three main types of assessments discussed are
 Personnel testing
 Physical testing
 System/network testing
Assessment Description
Personnel Testing  Involves reviewing employee tasks to identify vulnerabilities in
standard practices and procedures.
 Demonstrates social engineering attacks and highlights the value of
training users to detect and resist such attacks.
 Reviews employee policies and procedures to address security risks that
cannot be mitigated through physical and logical controls, utilizing
administrative controls as a final measure.
Physical Testing  Focuses on evaluating facility and perimeter protection mechanisms.
 Examines aspects such as the functionality of automatic door closures,
alarms for open doors, and interior protection mechanisms for critical
areas.
 Considers threats like dumpster diving to ensure proper destruction of
sensitive information.
 Addresses protection mechanisms against various threats, including
manmade, natural, and technical factors, such as fire suppression
systems and flood protection measures.
System and Network Testing  Involves automated scanning tools to identify known
vulnerabilities in systems.
 Some tools may attempt to exploit vulnerabilities, contingent on
management approval regarding performance impact and the risk of
disruption.
 It’s important to conduct security assessments regularly due to the dynamic nature of
environments. Lower-priority and well-protected areas may be assessed less frequently
[once or twice a year], while high-priority and vulnerable targets, like e-commerce
servers, should undergo continuous scanning.
 The use of automated tools is recommended, and more than one tool or different tools
should be used, since no single tool can identify every known vulnerability. Vendors
update their tools at different rates, and their databases may include vulnerabilities in
varying orders. It is advised to update the vulnerability database of each tool just
before usage.
 Networks consist of heterogeneous devices, each with its own potential vulnerabilities,
as depicted in the picture below. Therefore, leveraging a diverse team or set of tools
improves the likelihood of identifying blind spots in security assessments.
1.1.2. Commonly Exploited Vulnerabilities
Kernel Flaws
Description  These occur below the user-level interface and deep inside the
OS.
 Any flaw in the kernel, if exploited, gives the attacker the most
powerful level of control over the system.
Countermeasures  Apply security patches to the OS to keep the vulnerability
window as small as possible.

Buffer Overflows
Description  A buffer overflow attack is a type of cyberattack that exploits a
vulnerability in software to execute malicious code. It occurs
when a program attempts to write more data to a buffer than the
buffer can hold. This can overwrite adjacent memory locations,
potentially altering program instructions or data. Attackers can
craft specific input data to trigger a buffer overflow, allowing
them to inject their own code and gain control of the program.
Countermeasures  Good programming practices and developer education
 Automated source code scanners
 Enhanced programming libraries and strongly typed languages
that disallow buffer overflows
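The bounds-checking that such libraries and languages perform can be illustrated with a small sketch. This uses Python's `ctypes` fixed-size buffer purely as an illustration of a buffer that rejects oversized writes instead of silently overwriting adjacent memory; in a language like C, the programmer must enforce this discipline manually.

```python
import ctypes

# A fixed 8-byte buffer; room is reserved for the NUL terminator.
buf = ctypes.create_string_buffer(8)
buf.value = b"1234567"  # fits: 7 bytes plus NUL

def try_overflow(buffer, data):
    """Attempt to write `data`; a bounds-checked buffer rejects oversized input."""
    try:
        buffer.value = data
        return "written"
    except ValueError:  # raised when the data exceeds the buffer size
        return "rejected"
```

Calling `try_overflow(buf, b"A" * 64)` returns "rejected" because the write is refused up front, which is exactly the behavior an unchecked C buffer lacks.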

Symbolic Links aka SymLinks


Description  Symbolic links are special files that act as pointers to other files or
directories. They are similar to shortcuts in Windows or aliases in
macOS. Instead of containing the actual data, a symbolic link
contains a path to the target file or directory. This means that
when you access a symbolic link, you are actually accessing the
target file or directory.
 An attacker can use symlinks to redirect programs to access files
or directories that they control. This could allow the attacker to
read or modify sensitive data, or even execute malicious code.
 For example, if an attacker creates a symlink called "password"
that points to the system's password file, any program that tries
to read or modify the password file will actually be accessing the
attacker's symlink. This could allow the attacker to steal or modify
the passwords in the password file.
Countermeasures  Programs, and especially scripts, must be written to ensure that
the full path to the file cannot be circumvented.
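One way to implement this countermeasure is to refuse to follow a symbolic link when opening a sensitive file. A minimal sketch, assuming a Unix-like system that supports the POSIX `O_NOFOLLOW` flag:

```python
import errno
import os

def open_no_follow(path):
    """Open `path` read-only, refusing to follow a symlink at the final component."""
    try:
        return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    except OSError as e:
        if e.errno == errno.ELOOP:  # the final component was a symbolic link
            raise PermissionError(f"refusing to follow symlink: {path}") from e
        raise
```

A program that opens its password file this way cannot be redirected by an attacker-planted symlink named like the expected file.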

File Descriptor Attacks


Description  File descriptors are small integers that are used to identify open
files and other resources. When a program opens a file, the
operating system assigns it a file descriptor. The program can
then use the file descriptor to read, write, or close the file.
 An attacker can exploit vulnerabilities in the way file descriptors
are handled to gain unauthorized access to a system or to execute
malicious code. For example, an attacker could create a large
number of file descriptors to exhaust the system's supply of file
descriptors, making it difficult or impossible for legitimate
programs to open files. Or, an attacker could exploit a
vulnerability in a program that handles file descriptors to
overwrite data or execute malicious code.
Countermeasures  Good programming practices and developer education
 Automated source code scanners
 Application security testing
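The descriptor-reuse behavior that such attacks abuse can be demonstrated directly. POSIX guarantees that `open()` returns the lowest-numbered free descriptor, so a stale descriptor number held by a program may suddenly refer to a completely different resource. A minimal sketch, assuming a Unix-like system:

```python
import os

# Open two descriptors, then free the first.
fd1 = os.open(os.devnull, os.O_RDONLY)
fd2 = os.open(os.devnull, os.O_RDONLY)
os.close(fd1)

# POSIX reuses the lowest free descriptor number,
# so the new open receives fd1's old number.
fd3 = os.open(os.devnull, os.O_RDONLY)
reused = (fd3 == fd1)

os.close(fd2)
os.close(fd3)
```

A program that keeps using `fd1` after closing it would now be operating on whatever resource `fd3` points to, which is the root cause of many descriptor-handling bugs.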

File & Directory Permissions


Description  If a system administrator makes a mistake that results in
decreasing the security of the permissions on a critical file, such
as making a password database accessible to regular users, an
attacker can take advantage of this to add an unauthorized user
to the password database or an untrusted directory to the DLL
search path.
Countermeasures  File integrity checkers, which should also check expected file
and directory permissions, can detect such problems in a timely
fashion, hopefully before an attacker notices and exploits them.
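The permission-checking component of a file integrity checker can be sketched as a baseline comparison. The paths and expected modes below are illustrative placeholders, not a real product's configuration:

```python
import os
import stat

def check_permissions(baseline):
    """Compare current permission bits against an expected baseline.

    `baseline` maps each path to its expected mode (e.g. 0o640).
    Returns the list of paths whose permissions have drifted.
    """
    drifted = []
    for path, expected in baseline.items():
        actual = stat.S_IMODE(os.stat(path).st_mode)
        if actual != expected:
            drifted.append(path)
    return drifted
```

Run periodically, such a check would flag a password database that had been made world-readable before an attacker has much time to exploit it.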

Race Conditions
Description  Race condition attacks are a type of cyberattack that exploits
vulnerabilities in the way software processes multiple requests or
tasks simultaneously. These attacks occur when two or more
threads or processes attempt to access shared data at the same
time, and at least one of them modifies the data. This can lead to
unpredictable and often undesirable outcomes because the final
state of the data depends on the relative timing of these
operations.
 For example, consider a website that allows users to transfer
money between their accounts. If two users attempt to transfer
money from the same account at the same time, a race condition
could occur if the software is not designed to handle this situation
correctly. In this case, the software might end up transferring
money to both users, or it might not transfer money to either
user.
Countermeasures  Good programming practices and developer education
 Automated source code scanners
 Application security testing
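The account-transfer example above boils down to an unguarded read-modify-write on shared state. A minimal sketch of the standard fix, using a mutual-exclusion lock so concurrent updates cannot interleave:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount, times):
    """Safely add to the shared balance; the lock makes each
    read-modify-write atomic with respect to other threads."""
    global balance
    for _ in range(times):
        with lock:
            balance += amount

# Two concurrent depositors; without the lock, updates could be lost
# when both threads read the same old balance before writing.
threads = [threading.Thread(target=deposit, args=(1, 100_000)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After both threads finish, the balance is exactly 200,000; removing the lock makes the final value depend on thread timing, which is the race condition.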

 In a nutshell, vulnerability scanners provide the following capabilities:


 The identification of active hosts on the network
 The identification of active and vulnerable services [ports] on hosts
 The identification of operating systems
 The identification of vulnerabilities associated with discovered OS & application
 The identification of misconfigured settings
 Test for compliance with host applications’ usage/security policies
 The establishment of a foundation for penetration testing
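The second capability above, identifying active services, reduces at its core to a TCP connect scan. The sketch below is a minimal illustration; real scanners such as commercial vulnerability scanners add service fingerprinting, OS detection, and vulnerability databases on top of this:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("192.0.2.10", range(1, 1025))` would list the well-known ports answering on that host (the address is a documentation placeholder).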

1.2. Penetration Testing [or Pen Testing/ PT]


1.2.1. Introduction and Overview
 PT is the process of simulating attacks on a network and its systems at the request of
the owner or senior management.
 Penetration testing uses a set of procedures and tools designed to test and possibly
bypass the security controls of a system. Its goal is to measure an organization’s level of
resistance to an attack and to uncover any exploitable weaknesses within the
environment. Organizations need to determine the effectiveness of their security
measures and not just trust the promises of the security vendors.
 A penetration test emulates the same methods attackers would use. Attackers can be
clever, creative, and resourceful in their techniques, so penetration test attacks should
align with the newest hacking techniques along with strong foundational testing
methods. The test should look at each and every computer in the environment, as
shown in the figure below, because an attacker will not necessarily scan one or two
computers only and call it a day.
 The type of penetration test that should be used depends on the organization, its
security objectives, and the management’s goals. Some organizations perform periodic
penetration tests on themselves using different types of tools. Other organizations ask a
third party to perform the vulnerability and penetration tests to provide a more
objective view.
 Penetration tests can evaluate web servers, DNS servers, router configurations,
workstation vulnerabilities, access to sensitive information, open ports, and available
services’ properties that a real attacker might use to compromise the organization’s
overall security. Some tests can be quite intrusive and disruptive. The timeframe for the
tests should be agreed upon so productivity is not affected and personnel can bring
systems back online if necessary.

NOTE: Penetration tests are not necessarily restricted to information technology, but may
include physical security as well as personnel security. Ultimately, the purpose is to
compromise one or more controls, which could be technical, physical, or administrative.

1.2.2. Goal of Penetration Testing


 The goal of penetration testing is to identify vulnerabilities and estimate the true
protection the security mechanisms within the environment are providing.
 Security professionals should obtain an authorization letter that includes the extent of
the testing authorized, and this letter or memo should be available to members of the
team during the testing activity. This type of letter is commonly referred to as a Get Out
of Jail Free Card. Contact information for key personnel should also be available, along
with a call tree in the event something does not go as planned and a system must be
recovered.

NOTE: A “Get Out of Jail Free Card” is a document you can present to someone who thinks
you are up to something malicious, when in fact you are carrying out an approved test.
More than that, it’s also the legal agreement between you and your customer that
protects you from liability and prosecution.

1.2.3. Penetration Test Types


 The penetration testing team can have varying degrees of knowledge about the
penetration target before the tests are actually carried out:
Knowledge Level Description
Zero Knowledge The team does not have any knowledge of the target and must start
from ground zero.
Partial Knowledge The team has some information about the target.
Full Knowledge The team has full knowledge of the target.
 Tests can be conducted externally [from a remote location] or internally [meaning the
tester is within the network]. Combining both can help better understand the full scope
of threats from either domain [internal and external].
 Penetration tests may be blind, double-blind, or targeted.
Test Type Description
Blind Test  A blind test is one in which the assessors only have publicly available
data to work with and the network security staff is aware that the
testing will occur.
 Part of the planning for this type of test involves determining what
actions, if any, the defenders are allowed to take. Stopping every
detected attack will slow down the pen testing team and may not show
the depths they could’ve reached without forewarning to the staff.
Double-Blind Test  A double-blind test [stealth assessment] is also a blind test to the
assessors, as mentioned previously, but in this case the network
security staff is not notified.
 This enables the test to evaluate the network’s security level and the
staff’s responses, log monitoring, and escalation processes, and is a
more realistic demonstration of the likely success or failure of an attack.
Targeted  Targeted tests can involve external consultants and internal staff
Test carrying out focused tests on specific areas of interest.
 For example, before a new application is rolled out, the team might test
it for vulnerabilities before installing it into production. Another example
is to focus specifically on systems that carry out e-commerce
transactions and not the other daily activities of the organization.

1.2.4. Penetration Testing Process


 When performing a penetration test, the team goes through a five-step process:
No. Step Description
1 Discovery Footprinting and gathering information about the target
2 Enumeration Performing port scans and resource identification methods
3 Vulnerability Identifying vulnerabilities in identified systems and resources
Mapping
4 Exploitation Attempting to gain unauthorized access by exploiting
vulnerabilities
5 Report To Delivering to management documentation of test findings along
Management with suggested countermeasures

1.2.5. Output of Penetration Testing


 The result of a penetration test is a report given to management that describes
 Identified vulnerabilities and the severity of those vulnerabilities, along with
descriptions of how they were exploited by the testers.
 Suggestions on how to deal with the vulnerabilities properly. From there, it is
up to management to determine how to address the vulnerabilities and what
countermeasures to implement.
 It is critical that senior management be aware of any risks involved in performing a
penetration test before it gives the authorization for one. In rare instances, a system or
application may be taken down inadvertently using the tools and techniques employed
during the test.

1.2.6. Vulnerability Assessment vs Penetration Testing


Differences
Definition
 Penetration Testing: A process of actively attempting to exploit vulnerabilities in a
system to find and assess security weaknesses.
 Vulnerability Assessment: A process of identifying, classifying, and prioritizing
vulnerabilities in a system.
Goal
 Penetration Testing: To uncover exploitable vulnerabilities that could be used by
attackers to compromise a system.
 Vulnerability Assessment: To identify and prioritize vulnerabilities so that they can
be remediated.
Scope
 Penetration Testing: Typically focused on specific systems or applications.
 Vulnerability Assessment: Can be used to assess the security posture of an entire
organization.
Methodology
 Penetration Testing: Uses a variety of techniques, such as scanning, fuzzing, and
social engineering.
 Vulnerability Assessment: Typically uses automated tools to scan for vulnerabilities.
Output
 Penetration Testing: A report that identifies vulnerabilities and provides
recommendations for remediation.
 Vulnerability Assessment: A report that lists vulnerabilities and their severity.
Frequency
 Penetration Testing: Typically conducted on an ad-hoc basis.
 Vulnerability Assessment: Can be conducted regularly or as needed.
Cost
 Penetration Testing: More expensive than vulnerability assessment.
 Vulnerability Assessment: Less expensive than penetration testing.
 Penetration testing is more hands-on: Penetration testers will attempt to exploit
vulnerabilities using real-world techniques. Vulnerability assessment is more automated,
and vulnerability scanners will not attempt to exploit vulnerabilities.
 Penetration testing is more intrusive: Penetration testers may need to install software
on the target system or modify system configurations. Vulnerability assessment is less
intrusive, and vulnerability scanners will not typically make any changes to the target
system.
 Penetration testing is more time-consuming: Penetration testing can be a time-
consuming process, as penetration testers must carefully plan and execute their attacks.
Vulnerability assessment is typically less time-consuming.

Vulnerability and Penetration Testing: What Color Is Your Box?


 Vulnerability testing and penetration testing come in boxes of at least three colors:
black, white, and gray. The color, of course, is metaphorical, but security professionals
need to be aware of the three types. None is clearly superior to the others in all
situations, so it is up to us to choose the right approach for our purposes.
Box Description
Black  Black box testing treats the system being tested as completely opaque.
Box This means that the tester has no a priori knowledge of the internal design
Testing or features of the system. All knowledge will come to the tester only
through the assessment itself.
 This approach simulates an external attacker best and may yield insights
into information leaks that can give an adversary better information on
attack vectors.
 The disadvantage of black box testing is that it probably won’t cover all of
the internal controls since some of them are unlikely to be discovered in the
course of the audit. Another issue is that, with no knowledge of the innards
of the system, the test team may inadvertently target a subsystem that is
critical to daily operations.
 Advantages:
 Provides an external perspective: Black-box testing provides an external
perspective on the software, which can help to identify defects that
might not be apparent to developers who are familiar with the code.
 Can be used to test early in the development process: Black-box testing
can be used to test software early in the development process, when
the source code may not be fully developed or documented.
 Can be automated: Black-box testing can be automated using tools that
can generate test cases and execute them.

White  White box testing affords the pen tester complete knowledge of the inner
Box workings of the system even before the first scan is performed.
Testing  This approach allows the test team to target specific internal controls and
features and should yield a more complete assessment of the system.
 The downside is that white box testing may not be representative of the
behaviors of an external attacker, though it may be a more accurate
depiction of an insider threat.
 Advantages:
 Can identify code-level errors: White-box testing can identify code-level
errors that might not be apparent through black-box testing.
 Can be used to test complex algorithms: White-box testing can be used
to test complex algorithms and data structures.
 Can be used to test for specific code coverage: White-box testing can be
used to test for specific code coverage, such as statement coverage or
branch coverage.
Gray  Gray box testing meets somewhere between the other two approaches
Box [black and white box].
Testing  Some, but not all, information on the internal workings is provided to the
test team. This helps guide their tactics toward areas we want to have
thoroughly tested, while also allowing for a degree of realism in terms of
discovering other features of the system. This approach mitigates the issues
with both white and black box testing.
 Advantages:
 Can identify integration defects: Gray-box testing can identify
integration defects that might not be apparent through black-box
testing or white-box testing.
 Can identify security vulnerabilities: Gray-box testing can identify
security vulnerabilities that might not be apparent through black-box
testing or white-box testing.
 Can be used to test early and late in the development process: Gray-box
testing can be used to test software early in the development process,
when the source code may not be fully developed or documented, and
late in the development process, when the software is integrated with
other systems.

Knowledge of internal structure
 Black-box testing: No
 White-box testing: Yes
 Gray-box testing: Partial
Focus
 Black-box testing: Functional defects, usability issues
 White-box testing: Structural defects, code-level errors
 Gray-box testing: Integration defects, security vulnerabilities
Techniques
 Black-box testing: Equivalence partitioning, boundary value analysis, decision coverage
 White-box testing: Code inspection, code coverage, unit testing
 Gray-box testing: A combination of black-box and white-box techniques

1.3. Red Teaming


 Red teaming is the practice of emulating a specific threat actor [or type of threat actor]
with a particular set of objectives. Whereas pen testing answers the question “How
many ways can I get in?”, red teaming answers the question “How can I get in and
accomplish this objective?”
 A red team operation occurs in the following steps:
i. Begins by determining the adversary to be emulated and a set of objectives.
ii. Conducts reconnaissance to understand how the systems work and to locate the
team’s objectives.
iii. Draws up a plan on how to accomplish the objectives while remaining undetected.
iv. Launches the attack on the actual system and tries to reach its objectives.
 There’s high cost associated with red teaming, a comprehensive security testing
approach, making it feasible only for well-resourced organizations. A more practical
alternative is a hybrid approach that is more focused than traditional penetration testing
but less intense than red teaming. Many organizations, including small ones, opt for
establishing an internal red team. This team periodically assesses various aspects of the
business, such as information systems, business processes, or marketing campaigns, by
adopting an adversarial perspective to identify potential vulnerabilities and weaknesses.
The primary goal is to proactively evaluate and enhance security measures within the
organization with the available resources.

EXAM TIP: Don’t worry about differentiating penetration testing and red teaming during the
exam. If the term “red team” shows up on your test, it will most likely describe the group of
people who conduct both penetration tests and red teaming.

1.4. Breach Attack Simulations [BAS]


 Both penetration testing and red teaming provide only a snapshot of an organization's
defenses at a specific moment. Success against penetration testers does not guarantee
resilience against current threats due to the dynamic nature of security risks. The need
for automated testing is suggested as a complement to human testing to address these
limitations.
 Breach and attack simulations [BAS] are automated systems that launch simulated
attacks against a target environment and then generate reports on their findings.
 For example, a ransomware simulation might use “defanged” malware that looks and
propagates just like the real thing but, when successful, will only encrypt a sample file on
a target host as a proof of concept. Its signature should be picked up by your network
detection and response [NDR] or your endpoint detection and response [EDR] solutions.
Its communications with a command-and-control [C2] system via the Internet will follow
the same processes that the real thing would. In other words, each simulation is very
realistic and meant to test your ability to detect and respond to it.
 BAS is typically offered as a Software as a Service [SaaS] solution. All the tools,
automation, and reporting occur in the provider's cloud. Additionally, BAS agents can be
deployed in the target environment to enhance coverage, especially in scenarios where
an adversary breaches the environment using a zero-day exploit or other evasive
mechanisms. The goal is to assess the effectiveness of the organization's
defense-in-depth strategy under assumed breach scenarios.

1.5. Log Reviews


 A log review is the examination of system log files to detect security events or to verify
the effectiveness of security controls.
 Log review plays a crucial role in security incident detection and evaluating the
effectiveness of security controls. Meaningful log reviews depend on defining
appropriate log events, guided by industry best practices and the organization's risk
management process. Continuous adaptation to the evolving threat landscape is
emphasized for effective log reviews.
 Network Time Protocol [NTP] ensures synchronized timestamps, crucial for accurate
event analysis. Centralized log storage is advocated for facilitating event correlation,
incident investigation, and enhancing security against log tampering.
 Efficient log archiving is deemed crucial due to the substantial volume of log data
generated, often reaching thousands or millions of events daily. The need for efficient
filtering is emphasized, recognizing that seemingly unimportant events may provide
valuable clues for security incident analysis. Log retention policies are recommended to
balance long-term retention with data filtering.
 Log analysis and management solutions are available both commercially and as open
source. Security information and event management [SIEM] systems centralize,
correlate, analyze, and retain event data. SIEM systems play a role in generating
automated alerts to highlight potential security incidents, with security specialists
investigating alerts to determine the need for further action. Minimizing false positives
and false negatives is essential for effective SIEM utilization.
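As a small example of the kind of correlation a SIEM automates, the sketch below counts failed-login events per source address across syslog-style lines. The message format and the alert threshold are assumptions for illustration, not any particular SIEM's behavior:

```python
from collections import Counter

def flag_brute_force(lines, threshold=5):
    """Flag source IPs with `threshold` or more failed logins.

    Assumes sshd-style messages where the source address
    follows the token 'from'.
    """
    failures = Counter()
    for line in lines:
        if "Failed password" in line:
            tokens = line.split()
            ip = tokens[tokens.index("from") + 1]
            failures[ip] += 1
    return sorted(ip for ip, n in failures.items() if n >= threshold)
```

Feeding this a day's authentication log surfaces the addresses worth investigating, which is exactly the filtering problem the paragraph above describes.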

1.5.1. Prevention of Log Tampering


 Log files are often among the first artifacts that attackers will use to attempt to hide
their actions. Knowing this, it is up to us as security professionals to do what we can to
make it infeasible, or at least very difficult, for attackers to successfully tamper with our
log files.
 The following measures can be adopted to prevent tampering with logs:
Preventive Description
Measure
Remote Logging When attackers compromise a device, they often gain sufficient privileges
to modify or erase the log files on that device. Putting the log files on a
separate box requires the attackers to target that box too, which at the
very least buys you some time to notice the intrusion.
Simplex Some high-security environments use one-way [or simplex]
Communication communications between the reporting devices and the central log
repository. This is easily accomplished by severing the “receive” pairs on an
Ethernet cable. The term data diode is sometimes used to refer to this
approach to physically ensuring a one-way path.
Replication It is never a good idea to keep a single copy of such an important resource
as the consolidated log entries. By making multiple copies and keeping
them in different locations, you make it harder for attackers to alter the log
files.
Write-once media If one of the locations to which you back up your log files can be written to
only once, you make it impossible for attackers to tamper with that copy of
the data. Of course, they can still try to physically steal the media, but now
you force them to move into the physical domain, which many attackers
[particularly ones overseas] will not do.
Cryptographic Hash Chaining  A powerful technique for ensuring that modified or deleted
events are easily noticed is cryptographic hash chaining. In this
technique, each event has appended to it the cryptographic hash
[e.g., SHA-256] of the preceding event. This creates a chain that
can attest to the completeness and the integrity of every event in it.
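A minimal sketch of hash chaining, assuming SHA-256 and string event payloads; modifying or deleting an earlier event breaks every hash that follows it:

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_event(log, event):
    """Append `event`, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    digest = hashlib.sha256((prev + event).encode()).hexdigest()
    log.append({"event": event, "hash": digest})

def verify_chain(log):
    """Recompute every link; False means tampering or deletion."""
    prev = GENESIS
    for entry in log:
        expected = hashlib.sha256((prev + entry["event"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

An attacker who edits one event must recompute every subsequent hash, which is infeasible if the chain head is also stored on write-once media or a remote host.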

1.6. Synthetic Transactions


 Many of our information systems operate on the basis of transactions, where users,
typically individuals, initiate actions ranging from web page requests to significant
financial transactions. These transactions, processed by various servers, fulfill the user's
request and constitute real transactions. A distinction is made between real
transactions initiated by users and synthetic transactions generated by scripts.
 Synthetic transactions are scripted events that mimic the behaviors of real users and
allow security professionals to systematically test the performance of critical services.
 The benefit of synthetic transactions is that they allow us to systematically test the
behavior and performance of critical services. A practical example is using a script to
periodically check the functionality of a home page, ensuring it is operational. This
proactive approach helps identify issues such as web server hacking or distributed
denial-of-service [DDoS] attacks early on, allowing for timely investigation.
 Synthetic transactions can do more than determine service availability. They can
measure performance parameters like response time, aiding in the detection of
network congestion or server overutilization. Additionally, synthetic transactions can
assist in testing new services by replicating typical end-user behaviors, ensuring proper
system functionality. Finally, these transactions can be written to behave as malicious
users by, for example, attempting a cross-site scripting [XSS] attack and ensuring your
controls are effective. This is an effective way of testing software from the outside.
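A home-page check of the kind described above can be scripted in a few lines. The URL and latency budget below are placeholders; a production monitor would also check page content and alert through a proper channel:

```python
import time
import urllib.request

def check_page(url, max_seconds=2.0):
    """Synthetic transaction: fetch `url` and time the round trip.

    Returns (healthy, elapsed) where healthy means HTTP 200
    received within the latency budget.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=max_seconds) as resp:
            ok = resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, etc.
        ok = False
    elapsed = time.monotonic() - start
    return ok and elapsed <= max_seconds, elapsed
```

Run from a scheduler (e.g., cron) every few minutes, this catches both outages (healthy is False) and creeping slowdowns (elapsed trends upward) before users complain.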

1.6.1. Real User Monitoring [RUM] vs Synthetic Transactions


Real User Monitoring
 Real user monitoring [RUM] is a passive way to monitor the interactions of real
users with a web application or system.
 It uses agents to capture metrics such as delay, jitter, and errors from the user’s
perspective.
 RUM differs from synthetic transactions in that it uses real people instead of
scripted commands. While RUM more accurately captures the actual user
experience, it tends to produce noisy data [e.g., incomplete transactions due to
users changing their minds or losing mobile connectivity] and thus may require
more back-end analysis. It also lacks the elements of predictability and regularity,
which could mean that a problem won’t be detected during low utilization periods.
Synthetic Transactions
 Synthetic transactions are very predictable and can be very regular, because their
behaviors are scripted.
 They can also detect rare occurrences more reliably than waiting for a user to
actually trigger that behavior.
 Synthetic transactions also have the advantage of not having to wait for a user to
become dissatisfied or encounter a problem, which makes them a more proactive
approach.
 It is important to note that RUM and synthetic transactions are different ways of
achieving the same goal. Neither approach is the better one in all cases, so it is
common to see both employed contemporaneously.

1.7. Code Reviews


 A code review is a systematic examination of the instructions that comprise a piece of
software, performed by someone other than the author of that code.
 This approach is a hallmark of mature software development processes. In fact, in many
organizations, developers are not allowed to push out their software modules until
someone else has signed off on them after doing a code review. Think of this as
proofreading an important document before you send it to an important person. If you
try to proofread it yourself, you will probably not catch all those embarrassing typos and grammatical errors as easily as someone else checking it for you would.
 Step-by-step approach to code reviews:
Step 1: Checking for adherence to coding standards
 Verify that the author followed the team's style guide or documented coding standards.
 Check for consistent indentation, naming conventions, and other formatting guidelines.
 Ensure that the code uses appropriate libraries and functions, as specified in the standards.
Step 2: Identifying unnecessary code
 Look for uncalled or unneeded functions or procedures that contribute to "code bloat."
 Identify excessively complex modules that should be restructured or divided into multiple routines.
 Seek opportunities to simplify the code by eliminating redundant or unnecessary code blocks.
Step 3: Refactoring repeated code
 Identify blocks of repeated code that could be refactored into reusable components.
 Create external library functions or modules to encapsulate the reusable code.
 Replace the repetitive code with calls to the newly created reusable components.
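The refactoring step can be illustrated with a small before-and-after sketch in Python. The function names and the email check are hypothetical, invented purely to show the pattern:

```python
# Hypothetical refactoring example: the strip/lowercase/"@" validation
# below was originally copy-pasted into each function; it is now a
# single reusable component that both call sites invoke.

def normalize_email(email):
    """Reusable routine extracted from the repeated call-site code."""
    email = email.strip().lower()
    if "@" not in email:
        raise ValueError("invalid email address")
    return email

def create_user(email):
    # Previously duplicated the validation block; now calls the shared routine.
    return {"action": "create", "email": normalize_email(email)}

def update_user(email):
    # Same here: repetition replaced with a call to the reusable component.
    return {"action": "update", "email": normalize_email(email)}
```

Beyond reducing code bloat, a single shared routine means a validation bug needs to be found and fixed in only one place.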

 Developers frequently embed code stubs and test routines into their developmental
software, which can serve as prime examples of unnecessary and hazardous procedures.
Incidents where developers have left test code, including occasionally hardcoded
credentials, in final software versions have been all too common. Upon discovering this
vulnerability, adversaries can effortlessly exploit the software and circumvent security
controls. This issue is particularly insidious because developers sometimes comment out the test code during final testing, rather than deleting it, in case the tests fail and they need to revisit and modify it. They may make a mental note to return to the file and remove this hazardous code, but they may eventually forget to do so, leaving it for attackers to exploit if they gain access to the source code.
 Defensive programming is a crucial practice for software development teams to adopt. It
entails anticipating potential issues and implementing measures to prevent them.
 A key aspect of defensive programming is treating all inputs with suspicion,
regardless of their source, until they are verified as legitimate. User input validation
can be a complex process, as it requires understanding the context and expectations
associated with the input. For instance, if you anticipate a numerical value, you need
to define the acceptable range and determine whether this range might change over
time. Addressing these questions is essential for validating input accuracy. Many
security vulnerabilities stem from a lack of input validation. By implementing defensive programming techniques, you can significantly reduce the risk of such vulnerabilities being exploited.
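As a sketch of this idea, the following Python function treats a numeric quantity field as untrusted until it has passed type, format, and range checks. The 1-to-100 range is an invented assumption standing in for a real business rule:

```python
def parse_quantity(raw, min_value=1, max_value=100):
    """Defensively validate an order-quantity field.

    Treat the input as suspect until proven otherwise: check the type,
    the format, and the acceptable range. The default 1-100 range is
    illustrative and would come from requirements in practice.
    """
    if not isinstance(raw, str):
        raise TypeError("expected the raw input as a string")
    raw = raw.strip()
    if not raw.isdigit():          # rejects '', '-5', '3.2', '12abc'
        raise ValueError("quantity must be a whole number")
    value = int(raw)
    if not (min_value <= value <= max_value):
        raise ValueError(f"quantity must be between {min_value} and {max_value}")
    return value
```

Note that the function whitelists what is acceptable rather than blacklisting known-bad inputs, which is the safer default in defensive programming.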

1.7.1. A Code Review Process


Step 1: Identify the code to be reviewed [usually a specific function or file].
Step 2: The team leader organizes the inspection and ensures that everyone has access to the correct version of the source code, along with all supporting artifacts.
Step 3: Everyone on the team reads through the code and makes notes.
Step 4: A designated team member collates all the obvious errors offline [not in a meeting] so they don’t have to be discussed during the inspection meeting [which would be a waste of time].
Step 5: If everyone agrees the code is ready for inspection, the meeting goes ahead.
Step 6: The team leader displays the code [with line numbers] via an overhead projector so everyone can read through it. Everyone discusses bugs, design issues, and anything else that comes up about the code. A scribe [not the author of the code] writes everything down.
Step 7: At the end of the meeting, everyone agrees on a “disposition” for the code:
 Passed: Code is good to go.
 Passed with rework: Code is good so long as the small changes are fixed.
 Reinspect: Fix the problems and have another inspection.
Step 8: If the disposition of the code in step 7 was “passed with rework,” the team leader checks off the bugs that the scribe wrote down and makes sure they’re all fixed.
Step 9: If the disposition of the code in step 7 was “reinspect,” the team leader goes back to step 2 and starts over again.
1.8. Code Testing
 Before transitioning software from development to production, it is crucial to ensure
that it adheres to our security policies. This entails verifying that data in transit is
encrypted, authentication and authorization controls are robust, sensitive data is not
stored in unencrypted temporary files, and there are no unauthorized connections to
external resources.
 The incentives between software developers and security personnel differ significantly.
Developers are primarily focused on implementing features, while security practitioners
prioritize system protection. To address this divide, mature organizations establish a
standardized process for certifying the security of software systems before deployment.
This process typically culminates in a senior manager authorizing or accrediting the
system based on the certification results.

1.9. Misuse Case Testing


 Use cases are structured scenarios that describe the required functionality of an
information system. They are typically depicted using Unified Modeling Language
[UML] use case diagrams. In the figure below, a customer attempts to place an order and may be prompted to log in if she hasn’t already done so, but she will always be asked to provide her credit card information.
 A misuse case is a use case that describes how a threat actor can misuse the system. Misuse cases are typically depicted with shaded ovals and are connected to legitimate use cases with arrows labeled <<threaten>>.

 Misuse case testing is a process of ensuring that the system's security controls are
effective in mitigating the risks identified in the risk management process. This process
helps to ensure that security is incorporated into the design of the system from the
outset.

1.10. Test Coverage


 Test coverage is a measure of how comprehensively a set of tests covers the
functionality of a software system.
 Test coverage is typically expressed as a percentage.
 In the context of information security, test coverage is particularly important for
ensuring that the system is adequately protected against potential attacks.
 For example: Suppose you have 100 security controls in your organization. Testing all of
them in one assessment or audit may be too disruptive or expensive [or both], so you
schedule smaller evaluations throughout the year. Each quarter, for instance, you run an
assessment with tests for one quarter of the controls. In this situation, your quarterly
test coverage is 25 percent but your annual coverage is 100 percent.
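The quarterly example can be expressed as a simple calculation. This sketch invents 100 control IDs purely for illustration:

```python
def coverage_percent(tested, total):
    """Percentage of controls exercised by one or more assessments."""
    if total == 0:
        raise ValueError("no controls defined")
    return 100.0 * len(set(tested)) / total  # set() ignores duplicate tests

# 100 controls, assessed in four quarterly batches of 25 each
controls = [f"CTRL-{i:03d}" for i in range(100)]
quarters = [controls[i:i + 25] for i in range(0, 100, 25)]

q1_coverage = coverage_percent(quarters[0], len(controls))      # one quarter: 25.0
annual = coverage_percent(
    [c for q in quarters for c in q], len(controls)             # full year: 100.0
)
```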
 The higher the test coverage, the more confident you can be that the system has been
thoroughly tested and is less likely to contain security vulnerabilities. However, it is
important to note that there is no such thing as 100% test coverage. There will always
be some code that is not exercised by the test suite, and it is impossible to test every
possible input to a system.
 In addition to test coverage, there are several other factors that can affect the security
of a system, such as the quality of the code, the effectiveness of the security controls,
and the presence of known vulnerabilities. Here are some examples of how test
coverage can be used to improve information security:
 Identifying potential vulnerabilities: By identifying code that is not covered by tests,
security testers can focus their efforts on areas that may be more vulnerable to
attack.
 Prioritizing test efforts: If certain parts of the code are more critical to the security of
the system, testers can prioritize testing those areas to ensure that they are
adequately covered.
 Measuring the effectiveness of security controls: By testing how the system reacts
to different attack scenarios, security testers can assess the effectiveness of the
security controls in place.
 Tracking progress over time: By tracking test coverage over time, security teams can
monitor the progress of their testing efforts and identify areas where they need to
improve.

1.11. Interface Testing


 Graphical User Interface [GUI] is only one kind of interface. In essence, an interface is an
exchange point for data between systems and/or users.
 Different examples of interfaces:
 Your system’s Network Interface Card [NIC], which is the exchange point for data
between your system and another system.
An application programming interface [API], which is a set of points at which a software system [application] exchanges information with another software system [such as a library].
 Interface testing is the systematic evaluation of a given set of these exchange points for
data between systems and/or users. This assessment should include both known good
exchanges and known bad exchanges in order to ensure the system behaves correctly at
both ends of the spectrum. The real rub is in finding test cases that are somewhere in
between. In software testing, these are called boundary conditions because they lie at
the boundary that separates the good from the bad.
 For example, if a given packet should contain a payload of no more than 1024 bytes,
how would the system behave when presented with 1024 bytes plus one bit [or byte] of
data? What about exactly 1024 bytes? What about 1024 bytes minus one bit [or byte] of
data? As you can see, the idea is to flirt with the line that separates the good from the
bad and see what happens when we get really close to it.
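The 1024-byte example translates directly into boundary test cases. The accept_payload function below is a hypothetical stand-in for the real interface under test:

```python
MAX_PAYLOAD = 1024  # bytes, per the hypothetical spec in the example above

def accept_payload(payload: bytes) -> bool:
    """Stand-in for the interface under test: reject oversized payloads."""
    return len(payload) <= MAX_PAYLOAD

# Boundary cases: just under, exactly at, and just over the limit
cases = {
    "under": bytes(MAX_PAYLOAD - 1),
    "at": bytes(MAX_PAYLOAD),
    "over": bytes(MAX_PAYLOAD + 1),
}
results = {name: accept_payload(p) for name, p in cases.items()}
# Expected: "under" and "at" accepted, "over" rejected
```

Documented cases like these would then live in an automated test engine so the boundary is re-checked every time the system evolves.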
 There are many other test cases we could consider, but the most important lesson here
is that the primary task of interface testing is to dream up all the test cases ahead of
time, document them, and then insert them into an automated test engine. This way
you can ensure that as the system evolves, a specific interface is always tested against the right set of test cases.
 Interface testing is a special case of integration testing, which is the assessment of how different parts of a system interact with each other.

1.12. Compliance Checks


 It’s important to evaluate the efficacy of technical controls, which should be the primary focus of security testing. However, many organizations must adhere to specific
regulations or standards, whether mandated or voluntarily adopted. Additionally, testing
is crucial for verifying the effectiveness of controls outlined in an information security
management system [ISMS]. In all these scenarios, the testing techniques discussed can
be employed to demonstrate compliance.
 Compliance checks are point-in-time verifications that specific security controls are
implemented and performing as expected.
 For example, your organization may process payment card transactions for its
customers and, as such, be required by PCI DSS to run annual penetration tests and
quarterly vulnerability scans by an approved vendor. Your compliance checks would
be the reports for each of these tests.

2. Conducting Security Audits


 Compliance checks are point-in-time tests, while audits cover a longer period of time.
Audits are not about whether a control is in place today, but whether it has been in
place for a specific period. Audits can use compliance checks as evidence of compliance.
 Audit Process:

Step 1: Determine the goals
 Establishing clear goals is the most important step in planning a security audit. An audit could be driven by:
 Regulatory requirements
 Compliance requirements
 A significant change to the architecture of the information system [e.g., a merger of organizations]
 New developments in the threats facing the organization
Steps 2 & 3: Involve the right business leaders and determine the scope
 Once the goals of an audit are established, the scope of the audit should be determined in coordination with business unit managers. The scope must define the following:
 Which subnets and systems are we going to test?
 Are we going to look at user artifacts, such as passwords, files, and log entries, or at user behaviors, such as their response to social engineering attempts?
 Which information will we assess for confidentiality, integrity, and availability?
 What are the privacy implications of our audit?
 How will we evaluate our processes, and to what extent?
Step 4: Choose the audit team
 The audit team should be composed of individuals with the necessary skills and experience, and should be independent of the systems and processes being audited.
Step 5: Plan the audit
 Once the audit team is selected, the next step is to plan the audit. The audit plan should include the following:
 The goals of the audit
 The scope of the audit
 The timeframe for the audit
 The resources required for the audit
 The communication plan for the audit
 The risk mitigation plan for the audit
Step 6: Conduct the audit
 After planning, conduct the audit while sticking to the plan and documenting any changes.
Step 7: Document the results
 Document the results, because the wealth of information generated is both valuable and volatile. Documentation is essential throughout the entire audit process.
 The large amount of data and information generated during an audit is invaluable for benchmarking the effectiveness of controls and identifying trends.
 Detailed documentation allows security staff to investigate unexpected or unexplainable results and conduct root cause analysis.
 Capturing all information facilitates the production of reports for target audiences without the risk of missing important data points.
Step 8: Communicate the results
 Communicate the results to the right leaders to achieve and sustain a strong security posture. The manner in which we communicate results to executives is very different from the manner in which we communicate results to IT team members. Many times, a security audit is ultimately unsuccessful because the team has not been able to communicate effectively with the key stakeholders.

NOTE: In certain cases, such as regulatory compliance, the parameters of the audit may be
dictated and performed by an external team of auditors. This means that the role of the
organization is mostly limited to preparing for the audit by ensuring all required resources
are available to the audit team.

Domain 6 – Security Assessment and Testing


Chapter 17 – Measuring Security
1. Quantifying Security
 Assessing the security posture is crucial for ensuring the effectiveness of your
organization's security measures. An organization needs metrics to evaluate the security
posture. Using inaccurate metrics can lead to misguided decisions and worsen your
security posture.
 ISO has developed a standard, ISO/IEC 27004, to guide organizations in developing and
utilizing effective security metrics. ISO/IEC 27004 outlines a process for measuring the
performance of security controls and processes, enabling continuous improvement in an
organization's security posture.

1.1. Key terms to understand:


Term Description
Factor  A factor is an attribute of an ISMS that has a value that can change over
time.
OR
A factor is a characteristic or attribute that influences the security of an
information system.
 Factors can be internal or external to the organization.
 Internal factors may include the organization's security policies and procedures, the training and awareness of its employees, and the security of its IT infrastructure [e.g., the number of alerts generated by an IDS or the number of incidents investigated by the IR team].
 External factors may include the threat landscape, the regulatory environment, and the organization's supply chain.
Measurement  A measurement is the act of quantifying a factor.
OR
A measurement is a quantitative observation of a factor at a particular
point in time.
 Measurements are used to track the effectiveness of security controls
and processes, and to identify trends and patterns that may indicate
potential security risks.
 Examples of measurements in information security include the number
of security incidents, the mean time to respond to a security incident,
and the percentage of employees who have completed security
awareness training.
Baseline  A baseline is a reference point against which future measurements can
be compared.
 Baselines are used to establish a starting point for tracking progress and
to identify areas for improvement.
 Examples of baselines in information security include the number of
security vulnerabilities, the average time to patch security
vulnerabilities, and the percentage of systems that are up to date on
security patches.
Metric  A metric is a quantifiable measure of a factor that is used to track
progress toward a goal.
OR
A metric is a derived value that is generated by comparing multiple
measurements against each other or against a baseline.
 Metrics are used to assess the effectiveness of security controls and
processes, and to identify areas for improvement.
 Examples of metrics in information security include the number of
security incidents per year, the mean time to respond to a security
incident, and the percentage of employees who have completed
security awareness training.
Indicator  An indicator is a qualitative measure of a factor that provides
information about the state of an information system. Indicators are
used to identify trends and patterns that may indicate potential security
risks. Examples of indicators in information security include the number
of security alerts, the severity of security incidents, and the number of
employee complaints about security issues.
OR
An indicator is a particularly important metric that describes a key
element of the effectiveness of an ISMS.
 Example: Let's say an organization is concerned about the number of
security incidents it is experiencing. The organization could use the
following factors to track its progress in reducing the number of security
incidents:

 Summarizing:
 Factor: Number of security incidents
 Measurement: Count of the security incidents that occur each month
 Baseline: The number of security incidents that occurred in the previous month
 Metric: Percentage decrease in the number of security incidents from the previous month
 Indicator: The severity of security incidents
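The summary above maps to a small calculation. The incident counts are invented for illustration:

```python
def pct_decrease(baseline, measurement):
    """Metric: percentage decrease of a measurement relative to its baseline."""
    if baseline == 0:
        raise ValueError("baseline must be nonzero")
    return 100.0 * (baseline - measurement) / baseline

incidents_last_month = 40   # baseline (illustrative number)
incidents_this_month = 30   # this month's measurement of the factor
metric = pct_decrease(incidents_last_month, incidents_this_month)  # 25.0% decrease
```

A negative result would indicate that incidents increased relative to the baseline, which is exactly the kind of trend the indicator is meant to surface.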
1.2. Security Metrics
 Security metrics are often viewed as tedious and irrelevant, with organizations focusing
on easily measurable metrics like ticket closure rates. This approach fails to provide
valuable insights and can even harm security efforts, as prioritizing ticket closure can
lead to missed evidence of ongoing attacks.
 Effective security metrics should tell a story, catering to the specific needs of different
audiences.
 Board members are interested in strategic threats and opportunities, while business
managers prioritize business unit protection.
 Security operations leaders focus on team performance. Each audience requires
tailored security metrics to effectively engage them.
 6 characteristics of a good metric that you should consider:
 Relevant: The metric should be directly tied to your security goals and indirectly tied
to your organization’s business goals.
 Quantifiable: A good metric should be relatively easy to measure with the tools at your disposal.
 Actionable: The best metrics inform your immediate actions, justify your requests,
and directly lead to improved outcomes.
 Robust: Will it be relevant in a year? Can you track it over many years? A good
metric must allow you to track your situation over time to detect trends. It should
capture information whose value endures.
 Simple: Does it make intuitive sense? Will all stakeholders “get it?” Will they know
what was measured and why it matters to them? If you can’t explain the metric in a
simple sentence, it probably is not a good one.
 Comparative: Can it be evaluated against something else? Good metrics are the
result of comparing measurements to each other or to some baseline or standard.
This is why the best metrics are ratios, or percentages, or changes over time.

NOTE: Another commonly used approach is to ensure metrics are SMART [specific,
measurable, achievable, relevant, and time-bound].

1.2.1. Risk Metrics


 Assess the likelihood and potential impact of security threats.
 Risk metrics capture the organizational risk and how it is changing over time.
 These are the metrics you’re most likely to use if you are communicating with an
executive audience, because these metrics are not technical. They are also forward-
looking because they address things that may happen in the future. For these reasons,
risk metrics are the best ones to support strategic analyses.
 Examples:
 The percentage of change in your aggregated residual risks
 The percentage of change in your current worst-case risk
 The ratio of organizational security incidents to those reported by comparable
organizations.
1.2.2. Preparedness Metrics
 These metrics indicate how prepared an organization is when dealing with security incidents.
 Examples:
 Monthly change in the mean time to patch a system
 Percentage of systems that are fully patched
 Percentage of staff that is up to date on security awareness training
 Ratio of privileged accounts to nonprivileged accounts
 Annual change in vendor security rating [i.e., how prepared is your organization’s
supply chain]

1.2.3. Performance Metrics


 Measure the effectiveness of security controls and processes.
 If risk metrics are fairly strategic and preparedness metrics are more operational,
performance metrics are as tactical as they come.
 They measure how good your team and systems are at detecting, blocking, and
responding to security incidents. In other words, performance metrics tell you how good
you are at defeating your adversaries day in and day out
 Examples:
 Number of alerts analyzed this week/month compared to last week/month
 Number of security incidents declared this week/month compared to last
week/month
 Percent change in mean time to detect [MTTD]
 Percent change in mean time to resolve [MTTR]
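As a sketch, the percent change in MTTD can be computed directly from incident timestamps. The incident records below are invented for illustration:

```python
from datetime import datetime, timedelta
from statistics import mean

def mttd_hours(incidents):
    """Mean time to detect: average gap between occurrence and detection."""
    return mean((d - o).total_seconds() / 3600 for o, d in incidents)

def pct_change(previous, current):
    """Percent change of the current period against the previous one."""
    return 100.0 * (current - previous) / previous

# Invented incident records: (occurred, detected) timestamp pairs
t0 = datetime(2024, 1, 1)
last_month = [(t0, t0 + timedelta(hours=10)), (t0, t0 + timedelta(hours=14))]
this_month = [(t0, t0 + timedelta(hours=6)), (t0, t0 + timedelta(hours=10))]

change = pct_change(mttd_hours(last_month), mttd_hours(this_month))
# MTTD fell from 12 to 8 hours: roughly a 33 percent improvement
```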

1.3. KPIs, KGIs and KRIs


KPIs measure how well things are going now, while KRIs measure how badly things could go
in the future.

1.3.1. Key Performance Indicators [KPIs]


 KPIs are quantitative metrics that measure the effectiveness of security controls and
processes in achieving specific goals. They are used to track progress towards these
goals, identify areas for improvement, and make informed decisions about security
programs.
 KPIs typically focus on measuring the following aspects of security posture:
 Security Incident Response: KPIs such as mean time to detection [MTTD], mean time
to resolution [MTTR], and the number of security incidents per year provide insights
into the organization's ability to detect, respond to, and resolve security incidents
effectively.
 Vulnerability Management: KPIs such as vulnerability density, mean time to
remediate [MTTR], and percentage of systems patched measure the organization's
effectiveness in identifying, prioritizing, and remediating vulnerabilities.
 Security Awareness and Training: KPIs such as employee security awareness training
completion rate and phishing simulation click-through rates assess the effectiveness
of security awareness programs in educating employees and reducing their
susceptibility to phishing attacks.
 Compliance: KPIs such as percentage of systems compliant with security standards
and the number of audit findings measure the organization's adherence to industry
regulations and internal security policies.

1.3.2. Key Goal Indicators [KGIs]


 KGIs are measures that tell management, after the fact, whether an IT process has achieved its business requirements.

1.3.3. Key Risk Indicators [KRIs]


 KRIs are qualitative metrics that measure the likelihood and potential impact of security
risks. They are used to identify and prioritize risks so that they can be mitigated and
prevented.
 In the context of InfoSec, KRIs typically focus on assessing the following aspects of risk:
 Threat Landscape: KRIs such as the number of new vulnerabilities discovered, the
frequency of cyberattacks, and the emergence of new attack vectors provide insights
into the evolving threat landscape and potential threats to the organization.
 Asset Valuation: KRIs such as the value of sensitive data assets, the criticality of IT
systems, and the potential financial impact of a security breach help prioritize risk
mitigation efforts.
 Vulnerability Assessment and Penetration Testing [VAPT] Results: KRIs such as the
severity of identified vulnerabilities, the potential exploitability of vulnerabilities, and
the potential impact of exploited vulnerabilities guide remediation efforts and risk
mitigation strategies.
 Compliance Gaps: KRIs such as the number of non-compliant systems, the severity
of compliance violations, and the potential regulatory consequences of non-
compliance identify areas where risk mitigation efforts are needed.

1.3.4. Relationship between KPIs and KRIs


 KPIs and KRIs are complementary metrics that provide a comprehensive view of an
organization's security posture.
 KPIs measure the effectiveness of current security controls and processes, while KRIs
assess the likelihood and potential impact of future risks.
 By tracking both KPIs and KRIs, organizations can identify and prioritize risks, measure
the effectiveness of their security controls, and make informed decisions about their
security programs.
 In essence, KPIs provide a snapshot of the current state of security, while KRIs offer a
forward-looking perspective on potential risks. By using both types of metrics together,
organizations can achieve a balanced approach to information security management,
ensuring both the effectiveness of current controls and the proactive mitigation of
future threats.
1.3.5. Summary
Purpose
 KPIs: Measure the effectiveness of security controls and processes
 KRIs: Measure the likelihood and potential impact of security risks
Data Type
 KPIs: Quantitative
 KRIs: Qualitative
Usage
 KPIs: Track progress towards security goals, identify areas for improvement, and make informed decisions about security programs
 KRIs: Identify and prioritize risks, guide resource allocation, and inform risk mitigation strategies
Examples
 KPIs: Number of security incidents, mean time to detection [MTTD], percentage of phishing emails identified and blocked
 KRIs: Vulnerability density, mean time to remediate [MTTR], annualized loss expectancy [ALE]

NOTE: KPIs and KRIs are used to measure progress toward attainment of strategic business
goals.

2. Security Process Data


 To assess the effectiveness of security controls, collect security process data from
administrative and technical processes. Administrative controls are more pervasive and
less visible than technical controls, making them targets for sophisticated threat actors.
Collect data from a variety of administrative processes to assess current posture and
improve it over time.

2.1. Account Management


 A preferred technique of attackers is to become “normal” privileged users of the systems they compromise as soon as possible. They can accomplish this in at least three ways: compromising an existing privileged account, creating a new privileged account, or elevating the privileges of a regular user account. The first approach can be mitigated through the use of strong authentication [e.g., strong passwords or, better yet, multifactor authentication] and by having administrators use privileged accounts only for specific tasks. The second and third approaches can be mitigated by paying close attention to the creation, modification, or misuse of user accounts. These controls all fall into the category of account management.

NOTE: Privileged user accounts pose significant risk to the organization and should be
carefully managed and controlled.

2.1.1. Adding Accounts


 When new employees arrive, organizations should implement a well-defined onboarding
process that ensures they understand their duties and responsibilities, are assigned the
required organizational assets, and have access to the necessary information. While the
specifics of this process may vary, there are some universal administrative controls that
should be in place.
 All new users should be required to read and acknowledge [typically by signing] all
relevant policies.
 Every organization should have an acceptable use policy [AUP] that outlines the
acceptable use of company computers and other resources. The AUP should
specifically prohibit activities such as viewing pornography, sending hate mail, or
hacking other computers.
 Organizations should also audit user accounts to ensure that all employees are
aware of the AUP and other applicable policies. This can be done by comparing a list
of all users with the files containing the signed AUPs. Cross-checking AUPs and user
accounts can also verify that HR and IT are communicating effectively.
 Organization should establish clear policies for password management, account
expiration dates, and user access permissions. These policies should be updated
regularly to reflect the changing information needs of individual users.
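The cross-check between user accounts and signed AUPs described above is essentially a set comparison, as this sketch shows; the usernames are invented:

```python
def aup_gaps(active_accounts, signed_aups):
    """Cross-check user accounts against signed AUP records.

    Returns accounts with no signed AUP on file, and signed AUPs with no
    matching account (a possible HR/IT communication gap).
    """
    accounts, signed = set(active_accounts), set(signed_aups)
    return sorted(accounts - signed), sorted(signed - accounts)

missing_aup, orphaned_aup = aup_gaps(
    ["alice", "bob", "carol"],   # accounts from the directory (illustrative)
    ["alice", "carol", "dave"],  # users with signed AUPs on file
)
# missing_aup -> ["bob"]; orphaned_aup -> ["dave"]
```

Either nonempty result is an audit finding: a user working without an acknowledged AUP, or a record for someone who no longer has an account.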

2.1.2. Modifying Accounts


 It is important to have a controlled and documented process for adding, removing, or
modifying permissions for users. Organizations that are mature in their security
processes have a change control process in place to address user privileges. While many
auditors focus on who has administrative privileges in the organization, there are many
custom sets of permissions that approach the level of an admin account. It is important,
then, to have and test processes by which elevated privileges are issued.

2.1.3. Suspending accounts


 It’s important to monitor accounts and suspend unused ones in order to protect the organization's network from unauthorized access. Accounts may become unneeded for a variety of reasons, such as the termination of an employee or the expiration of an account's default expiration date. Once an account is identified as being
unneeded, it should be suspended until the employee returns or the term of the
retention policy is met. Testing the administrative controls on suspended accounts
involves looking at each account or taking a representative sample and comparing it
with the status of its owner according to HR records. It is important that accounts are
deleted only in strict accordance with the data retention policy.

2.2. Security Training and Security Awareness Training


2.2.1. Security Training vs Security Awareness
Security Training
 Security training is the process of teaching a skill or set of skills that enables people to perform specific security functions better.
 It is typically provided to security personnel.
 It teaches specific security skills that can help protect the organization from cyberattacks.
 The effectiveness of security training programs can be assessed by measuring the performance of individuals on specific security functions before and after the training.

Security Awareness
 Security awareness training is the process of exposing people to security issues so that they are able to recognize and respond to them better.
 It is provided to every member of the organization.
 It helps people recognize and respond to security threats.
 The effectiveness of security awareness training programs is more difficult to assess. One key measure is the degree to which people change their behaviors when presented with certain security situations.
 Common components of security awareness training programs include training on social engineering, password management, and phishing.

EXAM TIP: Security awareness [and the training required to attain it] is one of the most
critical controls in any ISMS. Expect exam questions on this topic.

2.2.2. Social Engineering


 Social engineering, in the context of information security, is the process of manipulating
individuals so that they perform actions that violate security protocols. Whether the
action is divulging a password, letting someone into the building, or simply clicking a link,
it has been carefully designed by the adversaries to help them exploit our information
systems. A common misconception is that social engineering is an art of improvisation.
While improvising may help the attacker better respond to challenges, the reality is that
most effective social engineering is painstakingly designed against a particular target,
sometimes a specific individual.

Common Social-Engineering Attacks


Attack Type Description
Phishing  Phishing is a type of social engineering attack that aims to
trick users into revealing personal information or clicking on
malicious links. Phishers often send emails or text messages
that appear to be from a legitimate source, such as a bank
or credit card company. These messages will often contain a
link or attachment that, when clicked, will take the user to a
fake website that looks like the real website. Once the user
enters their personal information on the fake website, the
phisher can steal it.
 Example - A phisher might send an email that appears to be
from PayPal, asking the user to update their password. The
email will contain a link that takes the user to a fake PayPal
website. Once the user enters their password on the fake
website, the phisher can steal it.
Spear-Phishing  Spear-phishing is a more targeted form of phishing that aims
to trick specific individuals or organizations. Spear-phishers
will often gather information about their targets before
sending them a phishing email. This information might
include the target's name, job title, and company. The
spear-phishing email will then be tailored to the target,
making it more likely that the target will fall for the scam.
 Example - A spear-phisher might send an email to a CEO of a
company, asking them to approve a wire transfer. The email
will contain the name of the CEO and the company, and it
will look like it was sent from a legitimate source. The spear-
phisher might also have gathered information about the
CEO's recent business dealings, which they can use to make
the email seem even more convincing.
Whaling  Whaling is a type of spear-phishing that targets high-profile
individuals, such as CEOs, executives, and government
officials. Whalers often use very sophisticated techniques to
trick their targets, such as using fake websites and social
media accounts.
 Example - A whaler might create a fake social media account
that looks like it belongs to a CEO. The whaler will then use
this account to send messages to the CEO's friends and
colleagues, asking them to transfer money to a fraudulent
account.
Smishing / SMS-phishing  SMS-phishing, or smishing, is a social engineering
attack conducted specifically through SMS messages. In this attack,
scammers attempt to lure the user into clicking on a link
which directs them to a malicious site. Once on the site, the
victim is then prompted to download malicious software
and content.
 A smishing attack requires little effort for threat actors and
is often carried out by simply purchasing a spoofed number
and setting up the malicious link.
Drive-By Download  A drive-by download is a type of malware attack that infects
a computer when the user visits a malicious website. The
user does not need to click on anything to get infected;
simply visiting the website is enough. Drive-by download
attacks are often carried out by exploiting vulnerabilities in
web browsers or plugins.
 Example - A user visits a website that has been hacked. The
website contains a piece of malware that takes advantage of
a vulnerability in the user's web browser. The malware is
automatically downloaded to the user's computer, and it
can then steal personal information or install other
malware.
Pretexting  Pretexting is a type of social engineering attack in which the
attacker uses a false story to trick the victim into revealing
personal information or performing an action. The attacker
will often pose as someone from a legitimate organization,
such as a bank or government agency.
 Example - An attacker calls a user and pretends to be from a
bank. The attacker tells the user that their account has
been compromised and that they need to verify their
personal information. The user, believing that they are
speaking to a legitimate bank employee, provides the
attacker with their personal information. The attacker
then uses this information to steal money from the user's
account.
Baiting  Baiting is a type of social engineering attack wherein
scammers make false promises to users in order to lure
them into revealing personal information or installing
malware on the system.
 Baiting scams can be in the form of tempting ads or online
promotions, such as free game or movie downloads, music
streaming or phone upgrades. The attacker hopes that the
password the target uses to claim the offer is one they have
also used on other sites, which can allow the hacker to
access the victim’s data or sell the information to other
criminals on the dark web.
 Baiting can also be in a physical form, most commonly via a
malware-infected flash drive. The attacker would leave the
infected flash drive in an area where the victim is most likely
to see it. This would prompt the victim to insert the flash
drive into the computer to find out who it belongs to. In the
meantime, malware is installed automatically.
Business Email  Business Email Compromise [BEC] is a social engineering
Compromise [BEC] tactic where the attacker poses as a trustworthy executive
who is authorized to deal with financial matters within the
organization.
 In this attack scenario, the scammer closely monitors the
executive’s behavior and uses spoofing to create a fake
email account. Through impersonation, the attacker sends
an email requesting their subordinates make wire transfers,
change banking details and carry out other money-related
tasks.
 BEC can result in huge financial losses for companies. Unlike
other cyber scams, these attacks do not rely on malicious
URLs or malware that can be caught by cybersecurity tools,
like firewalls or endpoint detection and response [EDR]
systems. Rather, BEC attacks are carried out strictly by
personal behavior, which is often harder to monitor and
manage, especially in large organizations.
Quid Pro Quo  A quid pro quo attack involves the attacker requesting
sensitive information from the victim in exchange for a
desirable service.
 For example, the attacker may pose as an IT support
technician and call a computer user to address a common IT
issue, such as slow network speeds or system patching to
acquire the user’s login credentials. Once the credentials are
exchanged, this information is used to gain access to other
sensitive data stored on the device and its applications, or it
is sold on the dark web.
Honey Trap  A honeytrap attack is a social engineering technique that
specifically targets individuals looking for love on online
dating websites or social media. The criminal befriends the
victim by creating a fictional persona and setting up a fake
online profile. Over time, the criminal takes advantage of
the relationship and tricks the victim into giving them
money, extracting personal information, or installing
malware.
Tailgating/Piggybacking  Tailgating, also known as piggybacking, is a physical breach
whereby an attacker gains access to a physical facility by
asking the person entering ahead of them to hold the door
or grant them access. The attacker may impersonate a
delivery driver or other plausible identity to increase their
chances. Once inside the facility, the criminal can use their
time to conduct reconnaissance, steal unattended devices or
access confidential files.
 Tailgating can also include allowing an unauthorized person
to borrow an employee’s laptop or other device so that the
user can install malware.

2.3. Disaster Recovery and Business Continuity


 Business continuity planning and emergency response procedures are crucial for
organizations to maintain operations during disruptions or crises.
 The acceptable downtime varies depending on the nature of the organization, but it's
essential to have procedures in place to minimize disruptions and ensure business
continuity. Emergency response procedures are the first line of defense, and training
and drills are essential for effective execution.

NOTE:
 Business continuity is the term used to describe the processes enacted by an
organization to ensure that its vital business processes remain unaffected or can be
quickly restored following a serious incident. Business continuity looks holistically at the
entire organization.
 A subset of this effort, called disaster recovery, focuses on restoring the information
systems after a disastrous event.

EXAM TIP: Protection of human life is always the top priority in situations where it is
threatened.

 Human life should be prioritized over the safety of material possessions during emergencies.
Emergency procedures should focus on safely evacuating personnel, and all personnel
should be familiar with designated exits and gathering spots. These gathering spots
should be chosen considering seasonal weather conditions. A designated person in each
group should ensure everyone is accounted for, while another individual should be
responsible for notifying authorities such as the police, security guards, fire department,
emergency rescue, and management. Proper training will better equip employees to
handle emergencies and prevent impulsive actions like simply running towards the
nearest exit.
 If the situation is not immediately life-threatening, designated staff should follow a
specific order of operations to shut down systems in an orderly fashion and remove
critical data files or resources for safekeeping during evacuation. This order is crucial to
avoid causing more harm than good by skipping or adding steps.
 When things have settled down, one or more individuals will likely need to interact with
external entities such as the press, customers, shareholders, and civic officials. These
representatives should be prepared to provide a uniform and reasonable response that
explains the circumstances, the organization's response to the disaster, and what
customers and others can expect moving forward. It's important to present this
information quickly to prevent the spread of false rumors. At least one person should be
designated as a media liaison to ensure accurate messaging.
 Organizations also need to address the potential for looting, vandalism, and fraud in the
aftermath of a disaster, when they are most vulnerable. Careful planning, such as
providing sufficient security personnel on site, can help mitigate these risks and provide
the necessary level of protection.
 Data collection for assessing disaster recovery and business continuity processes should
ideally be conducted before any real emergencies arise, but the best data is often
captured during an actual emergency situation. Debriefings, also known as hot washes,
should be held immediately after any real or training events while memories are still
fresh. These discussions should cover what happened, how it was handled, what went
well, and how improvements can be made in the future. Hot wash notes and after-
action review [AAR] reports are valuable sources of security process data for disaster
recovery and business continuity.

2.3.1. Hot Washes and After-Action Review [AAR]


 Hot washes and after-action reviews [AARs] are two important tools for improving
information security business continuity.
 Hot washes are informal discussions held immediately after an incident or event to
gather information and identify lessons learned. They are typically led by a facilitator
who asks participants to share their experiences and observations. Hot washes can be
helpful for identifying immediate areas for improvement and for preventing similar
incidents from happening again.
 AARs are more formal reviews that are conducted after an incident or event has been
thoroughly investigated. They are typically led by a team of experts who review all of the
available information and identify the root cause of the incident or event. AARs also
provide recommendations for preventing similar incidents from happening again and for
improving the organization's overall security posture.
 Both hot washes and AARs can be valuable tools for improving information security
business continuity. Hot washes can help organizations to identify and address
immediate problems, while AARs can help organizations to identify and address systemic
problems.
 Key differences between hot washes and AARs:
 Purpose: A hot wash gathers information and identifies lessons learned; an AAR
identifies the root cause of an incident or event and provides recommendations
for prevention.
 Timing: A hot wash is held immediately after an incident or event; an AAR is
conducted after the incident or event has been thoroughly investigated.
 Formality: A hot wash is informal; an AAR is formal.
 Participants: A hot wash is typically led by a facilitator and includes participants
who were involved in the incident or event; an AAR is typically led by a team of
experts who review all of the available information.
 Outputs: A hot wash produces verbal feedback; an AAR produces a written report.

2.3.2. Benefits of using hot washes and AARs in information security business
continuity:
 Hot washes and AARs are valuable tools that can be used to improve information
security business continuity. By using these tools, organizations can identify and address
problems before they escalate, and they can improve their overall security posture.
 The following highlights the benefits of using hot washes and AARs:
i. Improved communication: Hot washes and AARs can help to improve
communication between different departments and teams within an organization.
ii. Increased awareness: Hot washes and AARs can help to increase awareness of
security risks and vulnerabilities.
iii. Enhanced decision-making: Hot washes and AARs can provide valuable information
that can be used to improve decision-making.
iv. Reduced downtime: Hot washes and AARs can help to reduce downtime following
an incident or event.

3. Reporting
 Report writing is an essential but often disliked task for security professionals. While
they prefer hands-on work, they must be able to effectively communicate their findings
to both technical and non-technical audiences. True security professionals understand
the role of information security within the organization's broader context and can tailor
their communication accordingly. Technical reports are important but may not resonate
with decision-makers who are not inherently technical. To have a business impact,
reports must be both technically sound and written in the language of business.

3.1. Analyzing Results


 Before writing a security report, it's crucial to analyze the results to provide actionable
insights and recommendations.
 The analysis process involves three steps:
Step 1: What? Gather, organize, and study the data to identify relevant facts. For
example, you might find that 12 servers are not running the latest software release and
three have vulnerabilities being exploited in the wild.
Step 2: So what? Determine the business impact of the facts. Consider the broader
organizational context. For instance, the 12 servers might be critical for business
operations and have compensatory controls that mitigate the risk.
Step 3: Now what? Identify actionable recommendations based on the analysis. In the
example, you might decide to keep an extra-close eye on the unpatched servers for a
few weeks before making a decision about updating them.
Effective security reporting involves moving from facts to actionable information that
helps maintain or improve an organization's security posture.

3.1.1. Remediation
 Vulnerability assessments often uncover issues beyond software defects. Many
vulnerabilities stem from misconfigured systems, inadequate policies, flawed business
processes, or untrained personnel. Addressing these vulnerabilities requires
collaboration beyond IT or security teams. Even mundane system patches need careful
coordination with all affected departments. Vulnerability remediation should involve all
stakeholders, including those outside the IT or security realm. Effective remediation
necessitates the sound analyses described earlier. Gaining support from all levels of the
organization is crucial, so educating stakeholders about the findings, their impact, and
necessary actions is essential. Remediation efforts may impact business operations, so
contingency plans and exceptional case handling strategies are vital.

3.1.2. Exception Handling


 In certain situations, patching vulnerabilities is not feasible within a reasonable
timeframe. For instance, medical devices may require extensive and costly
recertification processes before patching. In such cases, implementing compensatory
controls, documenting the exception, and periodically revisiting the vulnerability for
future remediation is necessary. An example of compensatory controls could be micro-
segmenting the medical device within its own VLAN, restricting communication to a
single authorized device via a specific port and protocol, and shielding it behind a
firewall.

NOTE:
 The language of your audience
 You cannot be an effective communicator if you don’t know your audience. Learning
to speak the language[s] of those you are trying to inform, advise, or lead is
absolutely critical. It has been said that accounting is the language of business, which
means you can generally do well communicating in terms of the financial impacts of
your findings. The fact that risks are expressed as the probability of a certain amount
of loss should make this fairly easy as long as you have some sort of risk
management program in place.
 Still, in order to up your game, you want to be able to communicate in the language
of the various disciplines that make up a business. Human resource leaders will care
most about issues like staff turnover and organizational culture. Your marketing [or
public affairs] team will be focused on what external parties think about your
organization. Product managers will be very reluctant to support proposals that can
slow down their delivery tempo. We could go on, but the point is that, while the
facts and analyses must be unassailable, you should always try to communicate
them in the language of…whoever it is you’re trying to persuade.

 Ethical Disclosure
 Occasionally, security assessments reveal vulnerabilities that affect other
organizations. These vulnerabilities may be discovered during code reviews,
penetration tests, or other security assessments. Upon discovering such
vulnerabilities, an ethical obligation arises to promptly disclose them to the
appropriate parties. If the vulnerability affects your own product, inform your
customers and partners as soon as possible. If the vulnerability affects a third-party
product, notify the vendor or manufacturer immediately to allow for timely
patching. Ethical disclosure aims to inform affected parties swiftly to enable patch
development before threat actors exploit the vulnerability.

3.2. Writing Technical Reports


 Following the analysis of assessment results, documentation is the next step. A technical
report should not be merely the output of an automated scanning tool or a generic
checklist with yes and no boxes. Many so-called auditors simply initiate a scanning tool,
wait for it to complete its task, and then print a report devoid of the analysis mentioned
earlier.
 An effective technical report presents a compelling and engaging narrative for its
intended audience. It is challenging to create one without a thorough understanding of
its readers, especially the most influential ones. Your ultimate goal is to persuade them
to take the necessary actions to balance risks and business functions for the
organization's betterment. Simultaneously, anticipate potential objections that could
derail the discussion. Above all, maintain absolute honesty and base all conclusions
directly on empirical facts. To enhance your credibility, always include relevant raw data,
technical details, and automated reports in an appendix.
 The following are key elements of a good technical audit report:
Element Description
Executive  You should always consider that some readers may not be able
Summary to devote more
than a few minutes to your report. Preface it with a hard-hitting
summary of key
take-aways.
Background  Explain why you conducted the experiment/test/assessment/
audit in the first place. Describe the scope of the event, which
should be tied to the reason for doing it in the first place.
 This is a good place to list any relevant references such as
policies, industry standards, regulations, or statutes.
Methodology  As most of us learned in our science classes, experiments [and
audits] must be repeatable. Describe the process by which you
conducted the study.
 This is also a good section in which to list the personnel who
participated, dates, times, locations, and any parts of the system
that were excluded [and why].
Findings  You should group your findings to make them easier to search
and read for your audience.
 If the readers are mostly senior managers, you may want to
group your findings by business impact. Technologists may prefer
groupings by class of system. Each finding should include the
answer to “so what?” from your analysis.
Recommendations  This section should mirror the organization of your findings and
provide the “now what?” from your analysis. This is the
actionable part of the report, so you should make it compelling.
When writing it, you should consider how each key reader will
react to your recommendations. For instance, if you know the
CFO is reluctant to make new capital investments, then you could
frame expensive recommendations in terms of operational costs
instead.
Appendices  You should include as much raw data as possible, but you
certainly want to include enough to justify your
recommendations. Pay attention to how you organize the
appendices so that readers can easily find whatever data they
may be looking for.

3.3. Executive Summaries


 Technical audit reports may be informative for IT professionals, but they are often
ineffective in communicating key findings and recommendations to business leaders. To
write impactful reports, it is crucial to translate technical jargon into language that is
approachable and meaningful to senior leadership. This involves explaining audit
findings in terms of risk exposure, quantifying the financial impact of recommended
changes, and considering the lifetime costs of implementing proposed controls.
 One way to quantify risk is to express it in monetary terms. Risk can be calculated by
multiplying the value of an asset by the probability of its loss. For instance, if customer
data is valued at $1 million and there is a 10% chance of a data breach, the risk of this
breach would be $100,000.
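This calculation is simple enough to sketch directly; the function and variable names below are illustrative, and the figures simply restate the example above:

```python
# Risk in monetary terms: asset value multiplied by the probability of loss.
def risk_exposure(asset_value, loss_probability):
    return asset_value * loss_probability

# Example from the text: $1 million of customer data, 10% chance of a breach.
# round() avoids floating-point noise in the displayed dollar figure.
print(round(risk_exposure(1_000_000, 0.10)))  # 100000
```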
 There are three primary approaches to valuing assets:
Approach Description
Cost  This approach considers the cost of acquiring or replacing the asset.
Approach  For example, if a threat intelligence report cost the organization
$10,000, the cost approach would assign that value to the asset.
Income  This approach considers the asset's expected contribution to the firm's
Approach revenue stream.
 It utilizes the formula: value = expected income / capitalization rate.
 For instance, if the $10,000 threat intelligence report generated $1,000
in net income last year and is projected to generate $2,000 this year, its
present value would be $20,000.
Market  This approach involves determining the market value of similar assets.
Approach It requires transparency in the marketplace to understand the prices
other organizations are paying for similar assets.
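The income approach formula can be sketched as follows. Note that the 10% capitalization rate is an inference from the figures in the table [$2,000 / 0.10 = $20,000], not a value the text states explicitly:

```python
# Income approach: value = expected income / capitalization rate
def income_approach_value(expected_income, capitalization_rate):
    return expected_income / capitalization_rate

# Example from the table: $2,000 projected net income at an assumed 10% cap rate.
print(round(income_approach_value(2_000, 0.10)))  # 20000
```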

 When proposing security controls, it is essential to consider their lifecycle costs and
compare them to the risks they mitigate. If the cost of implementing a control
[$180,000] is less than the risk it mitigates [$1,000,000], it is generally advisable to
implement the control. However, controls are not perfect and may fail. Therefore, it is
important to factor in the likelihood of a control's effectiveness.
 For example, if a control has an 80% success rate and the organization has a 10% chance
of being attacked, the residual risk would be 2% of $1,000,000, or $20,000. This residual
risk should be added to the control's cost [$180,000] to determine the total effective
cost [$200,000].
1. Given values:
Control's success rate = 80%
Organization's chance of being attacked = 10%
Asset value = $1,000,000
Initial control's cost = $180,000

2. Calculate the residual risk: the residual risk is the product of the control's
failure rate and the organization's chance of being attacked.
Residual Risk = [1 − 0.80] × 0.10 = 0.20 × 0.10 = 0.02

3. Calculate the residual risk in dollars: multiply the residual risk by the total
asset value.
Residual Risk in Dollars = 0.02 × $1,000,000 = $20,000

4. Calculate the total effective cost: add the residual risk in dollars to the
initial control's cost.
Total Effective Cost = $180,000 + $20,000 = $200,000

So, according to the given information and calculations, the total effective cost,
including the residual risk, would be $200,000.
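The same arithmetic can be sketched in code; the function name and parameter names are illustrative only:

```python
# Hedged sketch: total effective cost of a control, factoring in the chance
# that the control fails when an attack occurs.
def total_effective_cost(asset_value, attack_probability,
                         control_success_rate, control_cost):
    # Residual risk: the control fails AND an attack occurs
    residual_risk = (1 - control_success_rate) * attack_probability
    residual_risk_dollars = residual_risk * asset_value
    return control_cost + residual_risk_dollars

# Figures from the worked example above; round() trims floating-point noise.
cost = total_effective_cost(asset_value=1_000_000,
                            attack_probability=0.10,
                            control_success_rate=0.80,
                            control_cost=180_000)
print(round(cost))  # 200000
```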
 By addressing these considerations and presenting information in a clear and concise
manner, security professionals can create impactful audit reports that resonate with
senior leaders and drive informed decision-making.

4. Management Review and Approval


 A management review is a formal meeting of senior organizational leaders to assess the
effectiveness of management systems, particularly the ISMS. It is based on the Plan-Do-
Check-Act loop, where the Plan phase sets goals and drives policies, the Do phase
focuses on security operations, the Check phase involves reviewing and assessing
performance, and the Act phase entails making adjustments to continuously improve
the organization's security posture.
 The management review takes a holistic view of the organization and makes strategic
decisions, requiring involvement from all key decision-makers to provide legitimacy and
power to the ISMS. Effective communication with senior executives involves using
business language and conveying information in a concise manner.
 Here's a simplified breakdown of the management review process:
 Plan: Set goals and objectives for the ISMS.
 Do: Implement the ISMS and carry out security operations.
 Check: Review and assess the performance of the ISMS.
 Act: Make adjustments to continuously improve the ISMS.

 By following this cycle and communicating effectively with senior leadership,
organizations can ensure the ongoing effectiveness of their ISMS and maintain a strong
security posture.
4.1. Before the Management Review
 The frequency of management reviews should be determined based on the maturity of
the management system and the organization. More frequent reviews are
recommended for less mature systems and organizations. Scheduling should consider
the availability of key leaders and aim to establish an operational rhythm that supports
senior-level decision-making. The review cycle should also align with the implementation
timeframe of decisions made in previous reviews to ensure informed decision-making
based on the outcomes of prior actions.

4.2. Reviewing Inputs


 Inputs to the management review come from various sources, including:
 Audit Results: Reports from both external and internal audits provide valuable
insights into the effectiveness of the ISMS.
 Action Items: Review the list of unresolved issues and action items from the
previous management review to ensure timely completion.
 Customer Feedback: Gather feedback from customers through surveys, social media
analysis, or real user monitoring [RUM] to understand their satisfaction and identify
areas for improvement.
 Recommendations for Improvement: Present a set of high-level recommendations
for improving the ISMS based on the analysis of all inputs.
 Range of Options: Provide senior leaders with a range of options, including
maintaining the status quo, implementing a comprehensive solution, or choosing
from intermediate options with varying levels of risk, resource requirements, and
business appeal.
 Objective Evaluative Criteria: Present objective criteria for evaluating each option,
such as life-cycle cost, impact on existing systems, training requirements, and
complexity.
 By carefully considering and presenting these inputs, the ISMS team can effectively set
the stage for senior leaders to make informed decisions that will enhance the
organization's overall security posture.

4.3. Management Approval


 Senior leaders review the inputs, ask clarifying questions, and ultimately decide to
approve, reject, or defer the recommendations. The extent of debate reflects the ISMS
team's effectiveness in presenting well-supported arguments aligned with business
goals. Leadership's decisions are a testament to the team's persuasiveness.
 Typically, senior management will either fully approve, approve with modifications,
reject, or send the ISMS team back for additional data or redesign of options. Regardless
of the outcome, a list of deliverables for the next management review is established.
The review concludes with a review of open and action items, assigning ownership and
deadlines. These items become inputs for the next management review, forming an
ongoing cycle.
