
Cybersecurity

Operations
Course Objectives
• Understand defense in-depth approach
• Understand different cybersecurity operations
management
• Identify physical security measures that can aid
cybersecurity
• Identify and understand vulnerabilities and
misconfigurations in security defenses
• Design high availability systems and understand
how to maintain them
• Understand attack frameworks that can aid
incident detection and response
Outline
• Defense in depth
• Cybersecurity operations management
• Physical security
• Security Assessment
• Cybersecurity resilience
• Penetration testing
Defense In-Depth
The defense in depth approach uses multiple security mechanisms and controls
layered throughout the organization’s infrastructure. Providing multiple defenses to
protect the confidentiality, integrity and availability of the organization’s network
and data creates greater levels of security.

Layering
To make sure data and information remain available, organizations apply layering.
An example is an organization storing its top-secret documents on a
password-protected server in a locked building that is surrounded by an electric
fence.
A layered approach provides the most comprehensive protection because, even if
cybercriminals attack one layer, they still must contend with several more defenses.
Ideally, each layer should be more complicated to overcome!
Limiting
Limiting access to data and information reduces the possibility of a security threat.
An organization should have the right tools and settings, such as file permissions, in
place to limit access, as well as the right procedural measures, which define specific
steps for doing anything that can affect security. For example, a limiting procedure
which requires employees to always consult sensitive documents in a room which
has CCTV, ensures that they would never remove such documents from the
premises.
Diversity
If all defense layers were the same, it would not be very difficult for cybercriminals
to succeed in an attack. The layers must be different so that if one layer is
penetrated, the same technique will not work on all the others and compromise
the whole system. Furthermore, an organization will normally use
different encryption algorithms and authentication systems to protect data in
different states.
To accomplish the goal of diversity in defenses, organizations can use security
products by different companies as different factors of authentication, such as a
swipe card from one company and a fingerprint reader manufactured by a different
company — as well as varied security measures, such as time-delay locks on
cabinets and supervision by a security staff member upon unlocking it.
Obscurity
Obscuring information can also protect data and information. An organization
should not reveal any information that cybercriminals can use to identify which
Operating System (OS) a server is running, or the type or make of equipment or
software it uses.
Error messages or system information should not contain any details that a
cybercriminal could use to determine what vulnerabilities are present. Concealing
certain types of information makes it more difficult for cybercriminals to attack.
Simplicity
Complexity does not necessarily guarantee security. If an organization implements
complex systems that are hard to understand and troubleshoot, this may backfire.
If employees do not understand how to configure a solution properly, for example
because setting up their account involves an unnecessarily complex process, it may
be just as easy for cybercriminals to compromise those systems.
Cybersecurity
Operations
Management
Operational Management
Good operational management is key to maintaining a high level of security
across organizations, so @Larry must be aware of some important
management processes and procedures.

Configuration Management
Configuration management refers to identifying, controlling and auditing the
implementation and any changes made to a system’s established baseline.
The baseline configuration includes all the configuration settings for a system
and provides the foundation for all similar systems.
For example, those responsible for deploying Windows workstations to users
must install the required applications and set up the system settings according
to a documented configuration. This is the baseline configuration for Windows
workstations within this organization.
Documented configuration resources might include the following:
• Network maps, cabling and wiring diagrams, application configuration
specifications.
• Standard naming conventions used for computers.
• IP schema to track IP addresses.

Hardening
Hardening the operating system is an important part of making sure that systems
have secure configurations. Configuring log files along with auditing, changing
default account names and passwords, and implementing account policies and file-
level access control are all used to create a secure OS.
Log Files
A log records all events as they occur. Log entries make up a log file, with each
log entry containing all the information related to a specific event. Accurate and
complete logs are very important in cybersecurity.
Management of computer security log data should determine the procedures
for the following:
• Generating log files.
• Transmitting log files.
• Storing log files.
• Analyzing log data.
• Disposing of log data.
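These procedures can be sketched with Python's standard logging module; the file name, size limit and retention count below are illustrative assumptions, not a prescribed policy.

```python
import logging
import logging.handlers
import os
import tempfile

# Sketch of generating and storing log entries with size-based rotation,
# so old log data is eventually disposed of automatically.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "security.log")  # hypothetical file name

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
# Keep at most 3 rotated files of ~4 KB each (assumed retention policy).
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=4096, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user alice authenticated successfully")   # generate an entry
logger.warning("failed login attempt for user bob")

handler.flush()
with open(log_path) as f:
    entries = f.read().splitlines()  # analyze: each line is one log entry
print(len(entries))
```

In a real deployment, transmitting the files to a central log server and analyzing them would be handled by dedicated tooling rather than a script like this.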
Operating Systems Logs
Operating system logs record events linked to actions performed by the
operating system. System events include the following:
• Client requests and server responses such as successful user
authentications.
• Usage information that contains the number and size of transactions in each
period.
Application Security Logs
Organizations use network-based and system-based security software to detect
malicious activity.
These logs are useful for performing auditing analysis and identifying trends
and long-term problems. Logs also enable an organization to provide
documentation for accountability.
Protocol Analyzer
Packet analyzers, otherwise known as packet sniffers, intercept and log
network traffic.
The packet analyzer captures each packet, looks at the values of various fields
in the packet and analyzes its content. It can capture network traffic on both
wired and wireless networks.
Packet analyzers perform the following functions:
• Traffic logging.
• Network problem analysis.
• Detection of network misuse.
• Detection of network intrusion attempts.
• Isolation of exploited systems.
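To illustrate what "looking at the values of various fields in the packet" means in practice, here is a sketch that unpacks a hand-built IPv4 header with Python's struct module; the bytes are invented for the example, not real captured traffic.

```python
import struct

# A 20-byte IPv4 header assembled by hand for illustration.
raw_header = bytes([
    0x45, 0x00, 0x00, 0x54,  # version/IHL, DSCP, total length (84)
    0x00, 0x00, 0x40, 0x00,  # identification, flags/fragment offset
    0x40, 0x01, 0x00, 0x00,  # TTL (64), protocol (1 = ICMP), checksum
    0xC0, 0x00, 0x02, 0x01,  # source IP 192.0.2.1
    0xC0, 0x00, 0x02, 0x64,  # destination IP 192.0.2.100
])

# Unpack the header fields, just as a packet analyzer does per packet.
version_ihl, _dscp, total_len, _ident, _flags, ttl, proto, _csum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", raw_header)

version = version_ihl >> 4
src_ip = ".".join(str(b) for b in src)
dst_ip = ".".join(str(b) for b in dst)
print(version, ttl, proto, src_ip, dst_ip)
```

Real analyzers such as Wireshark capture live frames from the network interface and decode many protocol layers; the field-by-field inspection is the same idea.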
Physical Security
@Larry has video surveillance set up at each of its offices, but there are many other
physical security measures that organizations can use. Let’s explore these further.

Physical Barrier
• Perimeter fence system
• Security gate system
• Bollards (short posts used to stop vehicle intrusions)
• Vehicle entry barriers
• Guard shelters
• Fencing

Fencing
A fence is a barrier that encloses secure areas and designates boundaries. When
designing a perimeter fencing system, height guidelines apply according to the
required level of security.
Biometrics
Biometrics are the physiological or behavioral characteristics of an individual, and
identifying individuals and granting access based on biometrics is one of the safest
security practices. Biometric characteristics include face, fingerprint, hand
geometry, iris, retina, signature and voice.
The popularity and use of biometric systems have increased because of the rise in
security breaches and transaction fraud. Biometrics can ensure confidential
financial transactions and personal data privacy — a well-known example being
smartphones which use fingerprint readers to unlock the device and access apps,
including online banking and payment systems.
When selecting biometric systems, there are several important factors to consider,
including:
• Accuracy.
• Speed or throughput rate.
• Acceptability to users.
• Uniqueness of the biometric organ and action.
• Resistance to counterfeiting.
• Reliability.
• Data storage requirements.
• Enrollment time.
• Intrusiveness of the scan.
The most important of these factors is accuracy, which is expressed in error types
and rates.
The first error type is Type I errors, or false rejections. A Type I error rejects a
person who is registered and is an authorized user. In access control, where the main objective
is to keep cybercriminals away, false rejection is the least important error. It means
that someone who should gain access is not granted access.
However, in many biometric applications, particularly retail or banking, false
rejections can have a very negative impact on business due to a transaction or sale
being lost.
False acceptance is a Type II error. Type II errors allow entry to people who should
not have entry, meaning a cybercriminal can potentially gain access. For this reason,
Type II errors are normally considered the most important error for a biometric
access control system.
The false acceptance rate is also an important concept here. Stated as a percentage, it is
the rate at which a system accepts unenrolled individuals or imposters as authentic
users – so the rate of Type II errors per total instances of granting permission.
Surveillance
Guards
Security guards are a great solution for access control requiring an
instantaneous and appropriate response. However, there are numerous
disadvantages to using security guards, including cost and the inability to
monitor and record high volumes of traffic. The use of guards also introduces
the risk of human error.
In highly secure information system facilities, guards control access to the
organization’s sensitive areas. The benefit of using guards here is that they can
adapt more than automated systems. Guards can learn and distinguish many
different conditions and situations and make decisions on the spot.
Video and Electronic Surveillance
Video and electronic surveillance can supplement and, in some cases, replace
security guards. The benefits of video and electronic surveillance include the
ability to monitor areas when no other persons are present, the ability to record
and log surveillance videos and data for long periods, plus being able to link to
motion detection technology and notifications where appropriate.
@Larry already has surveillance cameras installed in the reception area, which
accurately capture events before a situation escalates. Another major advantage
is that surveillance cameras can be positioned to provide clear viewpoints and
they are far more economical when monitoring the entire perimeter of a facility.
Security
Assessments
Vulnerability Scanner
A vulnerability scanner assesses computers, computer systems, networks or
applications for weaknesses. It helps to automate security auditing by scanning
the network for security risks and producing a prioritized list to address
vulnerabilities.
A vulnerability scanner searches for the following types of vulnerabilities:
• Use of default passwords or common passwords.
• Missing patches.
• Open ports.
• Misconfigurations in operating systems and software.
• Active IP addresses, including any unexpected devices connected.
Commonly used vulnerability scanners on the market include Nessus, Retina,
Core Impact and GFI LanGuard.
Their functions include:
• Performing compliance auditing.
• Providing patches and updates.
• Identifying misconfigurations.
• Supporting mobile and wireless devices.
• Tracking malware.
• Identifying sensitive data.
Types of Scan
When evaluating a vulnerability scanner, look at how it is rated for accuracy,
reliability, scalability and reporting. You can choose a software-based or cloud-
based vulnerability scanner.

Categories
Vulnerability scanners fall into the following categories:
• Network scanners probe hosts for open ports, enumerate information
about users and groups and look for known vulnerabilities on the network.
• Application scanners access application source code to test an application
from the inside.
• Web application scanners identify vulnerabilities in web applications.
Intrusive and Credentials Scans
Intrusive scans try to exploit vulnerabilities and may even crash the target,
while a non-intrusive scan will try not to cause harm to the target.
In a credentialed scan, usernames and passwords provide authorized access to
a system, allowing the scanner to harvest more information. Non-credentialed
scans are less invasive and give an outsider’s point of view.
You need to review all logs and configurations to take care of any vulnerabilities
that require attention.
Command Line Diagnostic Tools
There are several command line tools that can be used to assess the security
position of an organization such as @Larry.
• ipconfig displays TCP/IP settings: IP address, subnet mask, default gateway,
DNS and MAC information (ifconfig is the Mac/Linux equivalent).
• ping tests network connectivity by sending an ICMP request to a host and
determines whether a route is available to a host.
• arp provides a table that maps known MAC addresses to their associated IP
addresses and is a fast way to find an end device’s MAC address.
• tracert traces the route a packet takes to a destination and records the hops
along the way, helping locate where a packet is getting hung up (traceroute is
the Mac/Linux equivalent).
• nslookup queries a DNS server to help troubleshoot the DNS database (dig is
the Mac/Linux equivalent).
• netstat displays all the ports that a computer is listening on and can determine
active connections.
• nbtstat helps to troubleshoot NetBIOS name resolution problems in a Windows
system.
• nmap is used in security auditing. It locates network hosts, detects operating
systems and identifies services.
• netcat gathers information from TCP and UDP network connections and can be
used for port scanning, monitoring, banner grabbing and file copying.
• hping assembles and analyzes packets and is used for port scanning, path
discovery, OS fingerprinting and firewall testing.
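A minimal sketch of the TCP port probing that tools such as nmap or netcat perform; a throwaway listener on 127.0.0.1 stands in for a target host so the example is self-contained and harmless.

```python
import socket

# Start a local listener so there is a known-open port to probe.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

result_open = is_open("127.0.0.1", open_port)    # listener is up
listener.close()
result_closed = is_open("127.0.0.1", open_port)  # port now refused
print(result_open, result_closed)
```

Real scanners add service detection, OS fingerprinting and timing controls on top of this basic connect test; never probe hosts you are not authorized to test.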
Security Automation: SIEM
Security Information and Event Management (SIEM) systems use log collectors to
aggregate log data from sources such as security devices, network devices, servers
and applications. Logs can generate many events in a day, so SIEM systems help to
reduce event volume by combining similar events to reduce the event data load.
SIEM identifies deviations from the norm and then takes the appropriate action.
The goals of a SIEM system for security monitoring are:
• Identify internal and external threats.
• Monitor activity and resource usage.
• Conduct compliance reporting for audits.
• Support incident response.
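The event-volume reduction described above can be sketched by grouping similar events into aggregate records; the sample events below are invented for illustration.

```python
from collections import Counter

# Raw events as (source, event_type) pairs, as a log collector might see them.
events = [
    ("10.0.0.5", "failed_login"),
    ("10.0.0.5", "failed_login"),
    ("10.0.0.5", "failed_login"),
    ("10.0.0.9", "port_scan"),
    ("10.0.0.5", "failed_login"),
]

# Combine similar events into one aggregate record with a count,
# reducing the event data load the analyst has to review.
aggregated = Counter(events)
for (source, event_type), count in aggregated.items():
    print(f"{source} {event_type} x{count}")
print(len(events), "->", len(aggregated))
```

A production SIEM also correlates events across sources and time windows before flagging deviations from the norm.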
Security Automation: SOAR
Security Orchestration, Automation and Response (SOAR) tools allow an organization to
collect data about security threats from various sources and respond to low-level
events without human intervention. SOAR has three important capabilities:
• Threat and vulnerability management
• Security incident response
• Security operations automation
An organization can integrate SOAR into its SIEM solution.
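A toy illustration of SOAR-style automation: a playbook maps known low-level event types to automated actions, and anything unrecognized is escalated to a human analyst. The event names and actions are assumptions made for the example.

```python
# Hypothetical playbook: low-level events handled without human intervention.
PLAYBOOK = {
    "phishing_email": "quarantine message and open incident ticket",
    "malware_hash_match": "isolate host and open incident ticket",
}

def respond(event_type: str) -> str:
    """Return the automated response for an event, or escalate it."""
    action = PLAYBOOK.get(event_type)
    if action is not None:
        return f"automated: {action}"
    return "escalate: route to human analyst"

print(respond("phishing_email"))
print(respond("zero_day_exploit"))
```

Real SOAR platforms express playbooks as multi-step workflows with approvals and integrations, but the triage logic follows this pattern.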
Cybersecurity
Resilience
High Availability
High availability describes systems designed to avoid downtime as much as
possible. The continuous availability of information systems is imperative, not only
to organizations but to modern life, as we are all using and relying on computer
and information systems more than ever before.
High availability systems typically are based on three design principles.

Eliminating Single Points of Failure


The first principle that defines high availability systems starts with identifying all
system devices and components whose failure would result in system-wide failure.
Methods to eliminate single points of failure include hot standby devices,
redundant components and multiple connections or pathways.
Providing for Reliable Crossover
Redundant power supplies, backup power systems and backup communications
systems all provide for reliable crossover.

Real-Time Failure Detection


The third principle is active device and system monitoring to detect many types of
events including system and device failures. Monitoring systems may even trigger
the backup system in the case of failure.
Five Nines
One of the most popular high availability goals is often called ‘five nines’ which gets
its name from its aim to achieve an availability rate of 99.999%, which is five nines
in a row. In practice, this means that downtime is less than 5.26 minutes per year.
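The five nines figure can be checked with a short calculation converting an availability target into allowed downtime per year (using a 365-day year):

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

def downtime_minutes(availability_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% -> {downtime_minutes(target):.2f} min/year")
```

At 99.999% the result is about 5.26 minutes per year, matching the figure quoted above; each added nine cuts the downtime budget by a factor of ten.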

Standardized Systems
Systems standardization provides for systems that use the same components. As a
result, parts inventories are easier to maintain and it is easy to swap
components, even during an emergency.

Clustering
Multiple devices grouped together provide a service that, to users, appears to be a
single entity. If one device in a cluster fails, the other devices remain available and
can step in.

Shared Component Systems


Systems are built so that a complete system can stand in for one that failed.
Single Points of Failure
Single points of failure are weak links in the chain that can cause disruption of the
organization's operations. A single point of failure is any part of the operation of the
organization whose failure means complete failure of the entire system — in other
words, if it fails, the entire system fails.
A single point of failure can be a specific piece of hardware, a process, a specific
piece of data, or even an essential utility. The organization can also build redundant
components into the operation to take over the process should one of these points
fail.
N+1 Redundancy
N+1 redundancy helps ensure system availability in the event of a component
failure. It means that components (N) need to have at least one backup component
(+1).
Although a system using N+1 architecture contains redundant equipment, it is not a
fully redundant system.
In a network, N+1 redundancy means that the system design can withstand the
loss of one of each component.
For example, a data center includes servers, power supplies, switches and routers.
The +1 is the additional component or system that is standing by, ready when
needed. N+1 redundancy in a data center that consists of the above elements
means that we have a server, a power supply, a switch and a router on standby,
ready to come online if something happens to the main server, the main power
source, switch or router.
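Why the +1 helps can be shown with a simple availability calculation. This sketch assumes components fail independently, and the 99% per-component figure is an assumption for illustration.

```python
# With one backup (N+1), the function fails only if the primary and the
# backup are both down at the same time (assuming independent failures).
component_availability = 0.99   # assumed availability of one component

def availability_with_copies(a: float, copies: int) -> float:
    """Availability when any one of 'copies' identical components suffices."""
    return 1 - (1 - a) ** copies

print(f"single component: {availability_with_copies(component_availability, 1):.4%}")
print(f"N+1 (one backup): {availability_with_copies(component_availability, 2):.4%}")
```

One backup raises a 99% component to 99.99% for that function, which is why N+1 is a common baseline even though it is not a fully redundant system.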
RAID
How does it work?
RAID takes data that is normally stored on a single disk and spreads it out
among several drives. If any single disk is lost, the user can recover data from
the other disks where the data also resides.
RAID can also increase the speed of data recovery as multiple drives will be
faster retrieving requested data than one disk doing the same.
RAID Data Storage
A RAID solution can be either hardware-based or software-based. A hardware-
based solution requires a specialized hardware controller on the system that
contains the RAID drives, while software RAID is managed by utility software in
the OS.
The following terms describe the various ways RAID can store data in the array
of disks.
• Mirroring — Stores data, then duplicates and stores the same on a second
drive.
• Striping — Writes data across multiple drives so that consecutive segments
are stored on different drives.
• Parity — More precisely, striping with parity. After striping, checksums are
generated to check that no errors exist in the striped data. These checksums
are stored on a third drive.
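Striping with parity relies on XOR: a minimal sketch showing that a lost data block can be rebuilt from the surviving block and the parity block. The five-byte "blocks" are a simplification of real stripe units.

```python
# Parity is the XOR of the data blocks on the other drives.
block_a = b"HELLO"   # data on drive A
block_b = b"WORLD"   # data on drive B
parity = bytes(x ^ y for x, y in zip(block_a, block_b))  # parity drive

# Simulate losing drive B and rebuilding it from A and the parity drive:
# A XOR (A XOR B) == B.
recovered_b = bytes(x ^ p for x, p in zip(block_a, parity))
print(recovered_b)
```

The same XOR identity generalizes to more drives, which is why a RAID 5 array survives the loss of any single disk.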
Router Redundancy
To avoid single points of failure, an organization can choose to install an
additional standby router.
• A redundancy protocol determines which router should take the active role
in forwarding traffic: the forwarding router or the standby router. Each is
configured with a physical IP address and a virtual router IP address. End
devices use the virtual IP address as the default gateway, which in this
example is 192.0.2.100.
• The forwarding router and the standby router use their physical IP
addresses to send periodic messages. The purpose of these messages is to
make sure both are still online and available.
• If the standby router stops receiving these periodic messages from the
forwarding router, it realizes it is the only router available and assumes the
forwarding role for itself. Meanwhile, because the PCs on the network still
communicate with the virtual router at 192.0.2.100, they stay online despite
everything that has happened, since the virtual router now forwards to what
was previously the standby router.
• The ability of a network to dynamically recover from the failure of a device
acting as a default gateway is known as first-hop redundancy, as we’ve
seen in this scenario.
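The heartbeat-and-takeover behavior described above can be sketched as follows. The timeout value is an assumption, and timestamps are injected so the example is deterministic; real first-hop redundancy protocols such as HSRP or VRRP are considerably more involved.

```python
HEARTBEAT_TIMEOUT = 3.0  # seconds without a message before failover (assumed)

class StandbyRouter:
    """Toy model of a standby router watching for periodic messages."""
    def __init__(self):
        self.last_heartbeat = 0.0
        self.forwarding = False

    def receive_heartbeat(self, now: float):
        self.last_heartbeat = now

    def check(self, now: float):
        # If heartbeats stop, assume the forwarding role (and the virtual IP).
        if not self.forwarding and now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.forwarding = True

standby = StandbyRouter()
standby.receive_heartbeat(now=0.0)
standby.check(now=2.0)
still_standby = not standby.forwarding   # heartbeat recent, no failover yet
standby.check(now=6.0)                   # heartbeats stopped, take over
print(still_standby, standby.forwarding)
```

Because end devices keep using the virtual IP address, the takeover is invisible to them, which is exactly the point of first-hop redundancy.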
Location Redundancy
Synchronous Replication
• Synchronizes both locations in real time.
• Requires high bandwidth.
• Locations must be close together to reduce latency.
Asynchronous Replication
• Not synchronized in real time but close to it.
• Requires less bandwidth.
• Sites can be further apart because latency is less of an issue.
Point-in-time Replication
• Updates the backup data location periodically, at certain points in time.
• More bandwidth conservative because it does not require a constant connection.
The correct balance between cost and availability will determine the correct choice
for an organization.
Resilient Design
Application Resilience
Application resilience is an application’s ability to react to component problems while
still functioning. Resiliency of application infrastructure means avoiding the loss of
customers, employee morale or business due to an application failure.
There are three availability solutions to address application resilience.
• Fault tolerant hardware: A system designed by building multiples of all critical
components into the same computer.
• Cluster architecture: A group of servers acting like a single system.
• Backup and restore: Copying files for the purpose of being able to restore
them if data loss occurs.
System and Data Backup
A data backup stores a copy of the information from a computer to backup media.
When such media is removable, the operator then stores this backup media in a safe
place.
Backing up data is one of the most effective ways of protecting against data loss. If
the hardware fails, the user can restore the data from the backup once the system is
functional again, or even when moving to a new system.
A sound security policy should include regular data backups. Backups are usually
stored off-site to protect the data if anything happens to the main facility.
Frequency
Backups can take a long time. Sometimes, it is easier to make a full backup monthly
or weekly and then do frequent partial backups of any data that has changed since
the last full backup.
Storage
For extra security, transport backups to an approved off-site storage location on a
daily, weekly or monthly rotation, as required by the security policy.
Security
Protect backups with passwords. The operator will enter the password before
restoring the data from the backup media.
Validation
Always validate backups to ensure the integrity of the data.
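Validation can be sketched as a checksum recorded when the backup is written and recomputed before restoring; the file name and contents below are hypothetical.

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a stand-in backup file (hypothetical contents).
backup_dir = tempfile.mkdtemp()
backup_path = os.path.join(backup_dir, "payroll.bak")
with open(backup_path, "wb") as f:
    f.write(b"critical business records")

recorded = sha256_of(backup_path)              # stored alongside the backup
verified = sha256_of(backup_path) == recorded  # recomputed before restore
print("backup intact:", verified)
```

Any mismatch between the recorded and recomputed digests means the backup media was corrupted or tampered with and should not be restored from.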
Designing High Availability
System
High availability incorporates three major principles to achieve the goal of
uninterrupted access to data and services.
Elimination or Reduction of Single Points of Failure
It is important to understand the ways to address a single point of failure. A single
point of failure can be a central router or switch, a network service and even a
highly skilled IT staff member.
System Resiliency
System resiliency refers to the capability to maintain availability of data and
operational processing despite attacks or disrupting events.
Fault Tolerance
Fault tolerance enables a system to continue to operate if one or more of its
components fail. Data mirroring is one example of fault tolerance.
Power
A critical issue in protecting information systems is electrical power systems and
power considerations. A continuous supply of electrical power is essential for today’s
massive server and data storage facilities.
Here are some general rules in building effective electrical supply systems:
• Data centers should be on a different power supply from the rest of the building.
• Use redundant power sources — two or more feeds coming from two or more
electrical substations.
• Implement power conditioning.
• Backup power systems are often required.
• Uninterruptible power supply (UPS) should be available to gracefully shut down
systems.
Power Excess
Spike: momentary high voltage
Surge: prolonged high voltage

Power Loss
Fault: momentary loss of power
Blackout: complete loss of power

Power Degradation
Sag/Dip: momentary low voltage
Brownout: prolonged low voltage
Inrush Current: initial surge of power
Managing Threats to Physical
Facilities
Organizations can implement various measures to manage threats to the physical
facilities. For example:
• Access Control and Closed-Circuit TV (CCTV - Video Surveillance) coverage at all
entrances.
• Policies and procedures for guests visiting the facility.
• Building security testing, including using both digital and physical means to
covertly gain access.
• Badge encryption for entry access.
• Disaster recovery planning.
• Business continuity planning.
• Regular security awareness training.
• Asset tagging system.
Penetration Testing
Kill Chain
• Reconnaissance: The attacker gathers information about the target.
• Weaponization: The attacker creates an exploit and malicious payload to send
to the target.
• Delivery: The attacker sends the exploit and malicious payload to the target,
for example by email.
• Exploitation: The exploit is executed.
• Installation: A backdoor access point is installed on the target.
• Command and Control: Remote control of the target is gained through a
command-and-control channel or server taking advantage of the backdoor
access.
• Action: The attacker performs malicious actions like information theft or
deletion, or executes additional attacks on other devices from within the
network by working through the kill chain stages again.
Other Frameworks
MITRE ATT&CK Framework
The MITRE ATT&CK framework focuses on different attack techniques to understand
attack planning and defend against both device and operating system attacks.
Organizations can use this framework to provide a point of reference for incident
response.

The Diamond Model of Intrusion Analysis


The diamond model of intrusion analysis represents the four basic components of
every malicious activity (adversary, capability, infrastructure and victim) as a
diamond shape. The four points are connected to represent the relationships
present between any two points, for example using a capability over infrastructure
against a victim.
Meta-Features:
• Timestamp
• Phase
• Result
• Direction
• Methodology
• Resources
Penetration Testing
Penetration testing, or pen testing, is a way of testing the areas of weaknesses in
systems by using various malicious techniques. A penetration test simulates methods
that an attacker would use to gain unauthorized access to a network and
compromise the systems and allows an organization to understand how well it would
tolerate a real attack.
It’s important to note that pen testing is not the same as vulnerability testing, which
only identifies potential problems. Pen testing involves hacking a website, network
or server with an organization’s permission to try to gain access to resources using
various methods that real-life black hat hackers would use.
One of the primary reasons why an organization would use pen testing is to find and
fix vulnerabilities before the cybercriminals do. Penetration testing is a technique
used in ethical hacking.
• Black box testing is the least time consuming and the least expensive.
When conducting black box testing, the specialist has no knowledge of the
inner workings of the system and attempts to attack it from the viewpoint of a
regular user.
• Gray box testing is a combination of black box and white box testing. The
specialist will have some limited knowledge about the system, so it is a
partially known environment, which gives some advantage to these hacking
attempts.
• White box testing is the most time consuming and the most expensive
because it is carried out by a specialist with knowledge of how the system
works. It is therefore a known environment when they attempt to hack into it,
emulating a malicious attack by an insider or by someone who has managed
to gain such information beforehand, at the recon stage.
Penetration Testing Phases
Planning
Establishes the rules of engagement for conducting the test.
Discovery
Conducting reconnaissance on the target to gain information. This can include:
• Passive techniques, which do not require active engagement with the targeted system and
are referred to as footprinting; for instance, you might look at the organization’s website
or other public sources for information.
• Active reconnaissance, such as port scanning, which requires active engagement with the
target.
Attack
At this phase, you seek to gain access or penetrate the system using the information gathered
in the previous phase.
Reporting
At this phase, the tester delivers to the organization detailed documentation that includes the
vulnerabilities identified, actions taken and the results.
Penetration Testing: Teaming
Some organizations create competing teams to conduct penetration exercises that
are longer than a penetration test.
For instance, in such a scenario, there can be three or four teams:
• Red Team: the adversary, trying to attack the system while remaining
unnoticed.
• Blue Team: the defenders, who try to thwart the efforts of the red team.
• White Team: a neutral team that defines the goals and rules and oversees the
exercise.
• Purple Team: sometimes there is also a purple team, where members of the
red and blue teams work together to identify vulnerabilities and explore ways to
improve controls.
Other organizations may conduct a bug bounty program instead.
