
CCNA Cyber Ops – SECOPS

1.0 Endpoint Threat Analysis and Computer Forensics

1.1 Interpret the output report of a malware analysis tool such as AMP Threat Grid and Cuckoo
Sandbox.

1.2 Describe these terms as they are defined in the Common Vulnerability Scoring System (CVSS 3.0):

a. Attack Vector (AV): This metric reflects the context by which vulnerability exploitation is possible.
This metric value (and consequently the Base score) will be larger the more remote (logically, and
physically) an attacker can be in order to exploit the vulnerable component. The assumption is that
the number of potential attackers for a vulnerability that could be exploited from across the Internet
is larger than the number of potential attackers that could exploit a vulnerability requiring physical
access to a device, and therefore warrants a greater score. The list of possible values is presented in
Figure 1.

Figure 1: Attack Vector

b. Attack Complexity (AC): This metric describes the conditions beyond the attacker’s control that
must exist in order to exploit the vulnerability. As described below, such conditions may require the
collection of more information about the target, the presence of certain system configuration
settings, or computational exceptions. Importantly, the assessment of this metric excludes any
requirements for user interaction in order to exploit the vulnerability (such conditions are captured
in the User Interaction metric). This metric value is largest for the least complex attacks. The list of
possible values is presented in Figure 2.

Figure 2: Attack Complexity


c. Privileges Required (PR): This metric describes the level of privileges an attacker must possess
before successfully exploiting the vulnerability. This metric is greatest if no privileges are required.
The list of possible values is presented in Figure 3.

Figure 3: Privileges Required

d. User interaction (UI): This metric captures the requirement for a user, other than the attacker, to
participate in the successful compromise of the vulnerable component. This metric determines
whether the vulnerability can be exploited solely at the will of the attacker, or whether a separate
user (or user-initiated process) must participate in some manner. This metric value is greatest when
no user interaction is required. The list of possible values is presented in Figure 4.

Figure 4: User Interaction


e. Scope (S): An important property captured by CVSS v3.0 is the ability for a vulnerability in one
software component to impact resources beyond its means or privileges. This consequence is
represented by the metric Authorization Scope, or simply Scope. Formally, Scope refers to the
collection of privileges defined by a computing authority (e.g., an application, an operating system,
or a sandbox environment) when granting access to computing resources (e.g., files, CPU, memory,
etc.). These privileges are assigned based on some method of identification and authorization. In
some cases, the authorization may be simple or loosely controlled based on predefined rules or
standards. For example, in the case of Ethernet traffic sent to a network switch, the switch accepts
traffic that arrives on its ports and is an authority that controls the traffic flow to other switch ports.
When the vulnerability of a software component governed by one authorization scope can affect
resources governed by another authorization scope, a Scope change has occurred. Intuitively, one
may think of a scope change as breaking out of a sandbox, and an example would be a vulnerability
in a virtual machine that enables an attacker to delete files on the host OS (perhaps even its own
VM). In this example, there are two separate authorization authorities: one that defines and
enforces privileges for the virtual machine and its users, and one that defines and enforces privileges
for the host system within which the virtual machine runs. A scope change would not occur, for
example, with a vulnerability in Microsoft Word that allows an attacker to compromise all system
files of the host OS, because the same authority enforces privileges of the user’s instance of Word,
and the host’s system files. The Base score is greater when a scope change has occurred. The list of
possible values is presented in Figure 5.

Figure 5: Scope

1.3 Describe these terms as they are defined in the Common Vulnerability Scoring System (CVSS 3.0):

a. Confidentiality Impact (C): This metric measures the impact to the confidentiality of the
information resources managed by a software component due to a successfully exploited
vulnerability. Confidentiality refers to limiting information access and disclosure to only authorized
users, as well as preventing access by, or disclosure to, unauthorized ones. The list of possible values
is presented in Figure 6. This metric value increases with the degree of loss to the impacted
component.

Figure 6: Confidentiality Impact


b. Integrity Impact (I): This metric measures the impact on the integrity of a successfully exploited
vulnerability. Integrity refers to the trustworthiness and veracity of information. The list of possible
values is presented in Figure 7. This metric value increases with the consequence to the impacted
component.

Figure 7: Integrity Impact

c. Availability Impact (A): This metric measures the impact to the availability of the impacted
component resulting from a successfully exploited vulnerability. While the Confidentiality and
Integrity impact metrics apply to the loss of confidentiality or integrity of data (e.g., information,
files) used by the impacted component, this metric refers to the loss of availability of the impacted
component itself, such as a networked service (e.g., web, database, email). Since availability refers
to the accessibility of information resources, attacks that consume network bandwidth, processor
cycles, or disk space all impact the availability of an impacted component. The list of possible values
is presented in Figure 8. This metric value increases with the consequence to the impacted
component.

Figure 8: Availability Impact
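
To make the metrics in sections 1.2 and 1.3 concrete, here is a minimal Python sketch of the CVSS v3.0 base-score formula, using the metric weights published in the specification; the vector at the end is just an illustrative example.

```python
import math

# Metric weights from the CVSS v3.0 specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR  = {False: {"N": 0.85, "L": 0.62, "H": 0.27},      # scope unchanged
       True:  {"N": 0.85, "L": 0.68, "H": 0.50}}      # scope changed
UI  = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}                # C/I/A impact

def roundup(x: float) -> float:
    # CVSS rounds scores up to one decimal place.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[scope_changed][pr] * UI[ui]
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    return roundup(min(1.08 * total if scope_changed else total, 10))

# Example vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", False, "H", "H", "H"))
```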


1.4 Define these items as they pertain to the Microsoft Windows file system

a. FAT32: an updated version of the FAT (File Allocation Table) file system, which Microsoft created in 1977. FAT is both a file system architecture and a family of industry-standard file systems that use it. With FAT32 you are limited to 2 TB partitions and a 4 GB maximum file size.

b. NTFS (New Technology File System): a proprietary journaling file system developed by Microsoft and the default file system of the Windows NT family.

c. Alternate data streams: Alternate Data Streams (ADS) are a file attribute found only on the NTFS file system. A normal directory listing does not show them; you need a tool such as Sysinternals streams (or dir /r) to view this data. Here is an example.
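
A minimal sketch in Python, assuming a Windows host with an NTFS volume; the file name hidden.txt and the stream name secret are hypothetical.

```python
# Create a file and attach an Alternate Data Stream to it.
# Requires Windows + NTFS; the stream is addressed as <file>:<stream>.
with open("hidden.txt", "w") as f:
    f.write("visible contents")

with open("hidden.txt:secret", "w") as f:
    f.write("data hidden in an alternate stream")

# The stream does not appear in a normal directory listing,
# but it can be read back by name.
with open("hidden.txt:secret") as f:
    print(f.read())
```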

d. MACE: NTFS keeps track of lots of time stamps. Each file has a time stamp for ‘Create’, ‘Modify’,
‘Access’, and ‘Entry Modified’. The latter refers to the time when the MFT entry itself was modified.
These four values are commonly abbreviated as the ‘MACE’ values. Note that other attributes in
each MFT record may also contain timestamps that are of forensic value.

1. MFT (Master File Table): The NTFS file system contains a file called the master file table, or MFT.
There is at least one entry in the MFT for every file on an NTFS file system volume, including the MFT
itself. All information about a file, including its size, time and date stamps, permissions, and data
content, is stored either in MFT entries, or in space outside the MFT that is described by MFT
entries.

e. EFI: The EFI system partition (ESP) is a partition on a data storage device (usually a hard disk drive
or solid-state drive) that is used by computers adhering to the Unified Extensible Firmware Interface
(UEFI).

1. UEFI: The Unified Extensible Firmware Interface (UEFI) is a specification that defines a software
interface between an operating system and platform firmware.
f. Free space: the unallocated space in a file system, i.e., blocks not currently assigned to any file or directory.
g. Timestamps on a file system: file properties that record date and time information, such as when a file was created, modified, or last accessed.

1.5 Define these terms as they pertain to the Linux file system

a. EXT4: The ext4 or fourth extended filesystem is a journaling file system for Linux, developed as the
successor to ext3.

b. Journaling: A journaling file system is a file system that keeps track of changes not yet committed
to the file system’s main part by recording the intentions of such changes in a data structure known
as a “journal”, which is usually a circular log.

c. Master Boot Record (MBR): is a special type of boot sector at the very beginning of partitioned
computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-
compatible systems and beyond. The MBR holds the information on how the logical partitions,
containing file systems, are organized on that medium. The MBR also contains executable code to
function as a loader for the installed operating system—usually by passing control over to the
loader’s second stage, or in conjunction with each partition’s volume boot record (VBR). This MBR
code is usually referred to as a boot loader.

d. Swap filesystem: Swap space in Linux is used when the amount of physical memory (RAM) is full. If
the system needs more memory resources and the RAM is full, inactive pages in memory are moved
to the swap space. While swap space can help machines with a small amount of RAM, it should not
be considered a replacement for more RAM. Swap space is located on hard drives, which have a
slower access time than physical memory.

e. MAC: in the context of the Linux file system, MAC usually refers to the Modify, Access, and Change timestamps that the file system records for every file (the three times reported by the stat command). The same acronym also names a cryptographic message authentication code: in cryptography, a MAC, sometimes known as a tag, is a short piece of information used to authenticate a message, in other words, to confirm that the message came from the stated sender (its authenticity) and has not been changed. The MAC value protects both a message's data integrity and its authenticity by allowing verifiers (who also possess the secret key) to detect any changes to the message content. A sketch of a cryptographic MAC follows.
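
As a minimal sketch of the cryptographic MAC described above, the snippet below uses Python's standard hmac and hashlib modules; the key and message are illustrative.

```python
import hashlib
import hmac

key = b"shared-secret-key"            # known only to sender and verifier
message = b"transfer 100 to alice"

# Sender computes the tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verifier recomputes the tag and compares in constant time; any change
# to the message (or a wrong key) produces a non-matching tag.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag, hmac.compare_digest(tag, expected))
```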

1.6 Compare and contrast three types of evidence

a. Best evidence: Original, unaltered evidence. In court, this is preferred over secondary evidence. The best evidence rule is a legal principle that holds an original copy of a document as superior evidence.

b. Corroborative evidence: (or corroboration) is evidence that supports a proposition already supported by initial evidence, therefore confirming the original proposition.

c. Indirect evidence (circumstantial): circumstantial evidence is evidence that relies on an inference to connect it to a conclusion of fact, like a fingerprint at the scene of a crime. By contrast, direct evidence supports the truth of an assertion directly, i.e., without need for any additional evidence or inference.

1.7 Compare and contrast two types of image (both refer to Integrity, see above)

a. Altered disk image: A system image with a compromised integrity.


b. Unaltered disk image: an image that has not been tampered with and that produces the same result as the original when a hash algorithm such as MD5 is applied, as the sketch below illustrates.
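
As a minimal sketch of this comparison, the Python snippet below (file names are hypothetical) hashes a disk image in chunks and checks whether a copy matches the original.

```python
import hashlib

def image_digest(path: str, algo: str = "md5", chunk: int = 1 << 20) -> str:
    """Hash a disk image in fixed-size chunks so large files never
    have to fit in memory at once."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Matching digests indicate an unaltered image; any difference
# means the copy has been altered.
if image_digest("original.img") == image_digest("copy.img"):
    print("image is unaltered")
else:
    print("image has been altered")
```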

1.8 Describe the role of attribution (“action of bestowing or assigning”) in an investigation. Cyber attribution is the process of tracking, identifying, and laying blame on the perpetrator of a cyberattack or other hacking exploit.

a. Assets: In information security, computer security and network security, an asset is any data,
device, or other component of the environment that supports information-related activities.

b. Threat actor: the person or entity responsible for the cyberattack (defined further in 2.3.a).


CCNA Cyber Ops – 2.0 Security Concepts

2.1 Describe the principles of the defense in depth strategy: Defense in depth is the coordinated use
of multiple security countermeasures to protect the integrity of the information assets in an
enterprise. The strategy is based on the military principle that it is more difficult for an enemy to
defeat a complex and multi-layered defense system than to penetrate a single barrier. Defense in
depth can be divided into three areas: Physical, Technical, and Administrative.

Physical controls are anything that physically limits or prevents access to IT systems, for example fences, guards, dogs, and CCTV systems.

Technical controls are hardware or software whose purpose is to protect systems and resources.
Examples of technical controls would be disk encryption, fingerprint readers, and Windows Active
Directory. Hardware technical controls differ from physical controls in that they prevent access to
the contents of a system, but not the physical systems themselves.

Administrative controls are an organization’s policies and procedures. Their purpose is to ensure
that there is proper guidance available in regards to security and that regulations are met. They
include things such as hiring practices, data handling procedures, and security requirements.

2.2 Compare and contrast these concepts

2.2.a Risk: the potential that a given threat will exploit vulnerabilities of an asset or group of assets
and thereby cause harm to the organization. It is measured in terms of a combination of the
probability of occurrence of an event and its consequence.
Risk = Likelihood * Impact
2.2.b Threat: In computer security, a threat is a possible danger that might exploit a vulnerability to
breach security and therefore cause possible harm.
2.2.c Vulnerability: In computer security, a vulnerability is a weakness which allows an attacker to
reduce a system’s information assurance. A vulnerability is the intersection of three elements: a
system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.
2.2.d Exploit: An exploit is a piece of software, a chunk of data, or a sequence of commands that
takes advantage of a bug or vulnerability in order to cause an unintended or unanticipated behavior
to occur on computer software, hardware, or something electronic (usually computerized). Such
behavior frequently includes things like gaining control of a computer system, allowing privilege
escalation, or a denial-of-service (DoS or related DDoS) attack.
2.3 Describe these terms

2.3.a Threat actor: A threat actor, or malicious actor, is a person or entity that is responsible for an
event or incident that impacts, or has the potential to impact, the safety or security of another
entity. Most often, the term is used to describe the individuals and groups that perform malicious
acts against organizations of various types and sizes. From a threat intelligence perspective, threat
actors are often categorized as unintentional or intentional and external or internal.
2.3.b Run book automation (RBA): Runbook automation (RBA) is the ability to define, build,
orchestrate, manage, and report on workflows that support system and network operational
processes. A runbook workflow can potentially interact with all types of infrastructure elements,
such as applications, databases, and hardware.
2.3.c Chain of custody (evidentiary): Chain of custody (CoC), in legal contexts, refers to the
chronological documentation or paper trail, showing the seizure, custody, control, transfer, analysis,
and disposition of physical or electronic evidence. It is essential that any items of evidence can be
traced from the crime scene to the courtroom, and everywhere in between. This is known as maintaining the ‘chain of custody’ or ‘continuity of evidence’. You must have the ability to prove that
a particular piece of evidence was at a particular place, at a particular time and in a particular
condition. This applies to the physical hardware as well as the information being retrieved from that
hardware. If the chain of custody is broken, the forensic investigation may be fatally compromised.
This is where proper management of the evidence is important.
2.3.d Reverse engineering: Reverse engineering is taking apart an object to see how it works in order
to duplicate or enhance the object. The practice, taken from older industries, is now frequently used
in computer hardware and software. Software reverse engineering involves reversing a program’s
machine code (the string of 0s and 1s that are sent to the logic processor) back into the source code
that it was written in, using program language statements.
2.3.e Sliding window anomaly detection: The time span used to collect data to build your traffic
profile is called the profiling time window (PTW). The PTW is a sliding window; that is, if your PTW is
one week (the default), your traffic profile includes connection data collected over the last week.
You can change the PTW to be as short as an hour or as long as several weeks. A traffic profile is
based on connection data collected over a time span that you specify. After you create a traffic
profile, you can detect abnormal network traffic by evaluating new traffic against your profile, which
presumably represents normal network traffic.
2.3.f PII: Personally identifiable information (PII), or sensitive personal information (SPI), as used in
information security and privacy laws, is information that can be used on its own or with other
information to identify, contact, or locate a single person, or to identify an individual in context.
2.3.g PHI: Protected health information (PHI) under US law is any information about health status,
provision of healthcare, or payment for health care that is created or collected by a “Covered Entity”
(or a Business Associate of a Covered Entity), and can be linked to a specific individual.
2.4 Describe these security terms

2.4.a Principle of least privilege: In information security, computer science, and other fields, the
principle of least privilege (also known as the principle of minimal privilege or the principle of least
authority) requires that in a particular abstraction layer of a computing environment, every module
(such as a process, a user, or a program, depending on the subject) must be able to access only the
information and resources that are necessary for its legitimate purpose.
2.4.b Risk scoring/risk weighting: First, gather information about the threat agent involved, the attack that will be used, the vulnerability involved, and the impact of a successful exploit on the business. Then assign a score or weight to the risk; this value is used in the risk assessment (see the sketch after this list).
2.4.c Risk reduction: The application of one or more measures to reduce the likelihood of an
unwanted occurrence and/or lessen its consequences.
2.4.d Risk assessment: is the process of assessing the probabilities and consequences of risk events if
they are realized. The results of this assessment are then used to prioritize risks to establish a most-
to-least-critical importance ranking. Ranking risks in terms of their criticality or importance provides
insights to the project’s management on where resources may be needed to manage or mitigate the
realization of high probability/high consequence risk events.
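
As a minimal sketch of Risk = Likelihood * Impact (2.2.a) and the scoring and ranking described in 2.4.b and 2.4.d, the snippet below uses hypothetical risks on made-up 1-5 scales.

```python
# Hypothetical risk register; likelihood and impact on a 1-5 scale.
risks = {
    "unpatched web server": {"likelihood": 4, "impact": 5},
    "stolen backup tape":   {"likelihood": 2, "impact": 4},
    "guest Wi-Fi misuse":   {"likelihood": 3, "impact": 2},
}

# Score each risk and rank most-to-least critical.
ranked = sorted(risks.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                reverse=True)

for name, r in ranked:
    print(f"{name}: risk score {r['likelihood'] * r['impact']}")
```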
2.5 Compare and contrast these access control models: Access control is basically identifying a
person doing a specific job, authenticating them by looking at their identification, then giving that
person only the key to the door or computer that they need access to and nothing more. In the
world of information security, one would look at this as granting an individual permission to get onto
a network via a username and password, allowing them access to files, computers, or other
hardware or software the person requires, and ensuring they have the right level of permission (i.e.
read only) to do their job.

2.5.a Discretionary access control: this access control model is based on the resource owner’s discretion. The owner of the resource can grant access rights to that resource to other users at their discretion.
2.5.b Mandatory access control: in this model, users and owners do not enjoy the privilege of deciding who can access their files; the operating system is the decision maker, overriding the user’s wishes. Every subject (user) and object (resource) is classified and assigned a security label. The security labels of the subject and the object, along with the security policy, determine whether the subject can access the object. The rules for how subjects access objects are made by the security officer, configured by the administrator, enforced by the operating system, and supported by security technologies.
2.5.c Nondiscretionary access control: the Role-Based Access Control (RBAC) model provides access control based on the subject’s role in the organization. So, instead of assigning John permissions as a security manager, the position of security manager already has permissions assigned to it.
2.6 Compare and contrast these terms

2.6.a Network and host antivirus: a network antivirus prevents unknown programs and processes from accessing the system. A host antivirus is computer software used to prevent, detect, and remove malicious software once it has reached a system.
2.6.b Agentless and agent-based protections: Agentless monitoring is deployed in one of two ways: using a remote API exposed by the platform or service being monitored, or directly analyzing network packets flowing between service components. In either case, no special deployment of agents is required. In agent-based protection, the monitored endpoint requires an installation of the software agent. Monitoring with agents has the cost of installation, configuration (proportionate to the number of managed elements), platform support needs, and dependencies. You also need to worry about patching.
2.6.c Security Information and Event Management (SIEM) and Log Collection: SIEM provides real-
time analysis of security alerts generated by network hardware and applications. In log collection,
the events from the assets on the network, such as servers, switches, routers, storage arrays,
operating systems, and firewalls are saved to a location for further analysis.
2.6.d Log management (LM): comprises an approach to dealing with large volumes of computer-
generated log messages (also known as audit records, audit trails, event-logs, etc.). Log Management
generally covers:
Log collection
Centralized log aggregation
Long-term log storage and retention
Log rotation
Log analysis (in real-time and in bulk after storage)
Log search and reporting.
2.7 Describe these concepts

2.7.a Asset management (ITAM): It is the set of business practices that join financial, contractual and
inventory functions to support life cycle management and strategic decision making for the IT
environment. Assets include all elements of software and hardware that are found in the business
environment.
2.7.b Configuration management: It is a systems engineering process for establishing and
maintaining consistency of a product’s performance, functional, and physical attributes with its
requirements, design, and operational information throughout its life. Attackers are looking for
systems that have default settings that are immediately vulnerable. Once an attacker exploits a
system, they start making changes. These two reasons are why Security Configuration Management
(SCM) is so important. SCM can not only identify misconfigurations that make your systems
vulnerable but also identify “unusual” changes to critical files or registry keys.
2.7.c Mobile device management: Mobile device management (MDM) is an industry term for the
administration of mobile devices, such as smartphones, tablet computers, laptops and desktop
computers. MDM is usually implemented with the use of a third party product that has management
features for particular vendors of mobile devices. Mobile Device Management (MDM) servers
secure, monitor, manage and support mobile devices deployed across mobile operators, service
providers, and enterprises. MDM servers consist of a policy server that controls the use of some
applications on a mobile device (for example, an e-mail application) in the deployed environment.
However, the network is the only entity that can provide granular access to endpoints based on
ACLs, SGTs, etc. To do its job, Cisco ISE queries the MDM servers for the necessary device attributes
to ensure it is then able to provide network access control for those devices.


2.7.d Patch management: A patch is a piece of software designed to update a computer program or
its supporting data, to fix or improve it. This includes fixing security vulnerabilities and other bugs,
with such patches usually called bugfixes or bug fixes, and improving the usability or performance.
Patch management is a strategy for managing patches or upgrades for software applications and
technologies. A patch management plan can help a business or organization handle these changes
efficiently.
2.7.e Vulnerability management: In computer security, a vulnerability is a weakness which allows an
attacker to reduce a system’s information assurance. Vulnerability management is the “cyclical
practice of identifying, classifying, remediating, and mitigating vulnerabilities”, especially in software
and firmware. Vulnerability management is integral to computer security and network security.
CCNA Cyber Ops – 3.0 Cryptography
3.1 Describe the uses of a hash algorithm

A hash function is any function that can be used to map data of arbitrary size to data of fixed size.
The values returned by a hash function are called hash values, hash codes, digests, or simply hashes.

A cryptographic hash function is a special class of hash function that has certain properties which
make it suitable for use in cryptography. It is a mathematical algorithm that maps data of arbitrary
size to a bit string of a fixed size (a hash function) which is designed to also be a one-way function,
that is, a function which is infeasible to invert. The only way to recreate the input data from an ideal
cryptographic hash function’s output is to attempt a brute-force search of possible inputs to see if
they produce a match. The input data is often called the message, and the output (the hash value or
hash) is often called the message digest or simply the digest.
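
A minimal Python sketch of these properties using the standard hashlib module: input of any size maps to a fixed-size digest, and even a one-character change in the message produces a completely different hash.

```python
import hashlib

for message in (b"The quick brown fox", b"The quick brown fox."):
    digest = hashlib.sha256(message).hexdigest()
    # Every SHA-256 digest is 256 bits (64 hex characters),
    # regardless of the input size.
    print(message, "->", digest)
```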

3.2 Describe the uses of encryption algorithms

Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also
be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect
duplicate data or uniquely identify files, and as checksums to detect accidental data corruption.
Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital)
fingerprints, checksums, or just hash values, even though all these terms stand for more general
functions with rather different properties and purposes.

3.3 Compare and contrast symmetric and asymmetric encryption algorithms: Symmetric-key
algorithms are algorithms for cryptography that use the same cryptographic keys for encryption of
plaintext and decryption of ciphertext. Public key cryptography, or asymmetric cryptography, is any
cryptographic system that uses pairs of keys: public keys which may be disseminated widely, and
private keys which are known only to the owner. This accomplishes two functions: authentication,
which is when the public key is used to verify that a holder of the paired private key sent the
message, and encryption, whereby only the holder of the paired private key can decrypt the
message encrypted with the public key.
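
The contrast can be sketched in Python with the third-party cryptography package (pip install cryptography); the message and key names are illustrative, and this is a sketch rather than a production design.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"attack at dawn"

# Symmetric: one shared key both encrypts and decrypts.
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
assert f.decrypt(f.encrypt(message)) == message

# Asymmetric: anyone can encrypt with the public key,
# only the private-key holder can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message
print("both schemes round-tripped the message")
```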

3.4 Describe the processes of digital signature creation and verification

A digital signature is a mathematical scheme for demonstrating the authenticity of digital messages
or documents. A valid digital signature gives a recipient reason to believe that the message was
created by a known sender (authentication), that the sender cannot deny having sent the message
(non-repudiation), and that the message was not altered in transit (integrity).

Digital signatures are based on public key cryptography, also known as asymmetric cryptography.
Using a public key algorithm such as RSA, one can generate two keys that are mathematically linked:
one private and one public. To create a digital signature, signing software (such as an email program)
creates a one-way hash of the electronic data to be signed. The private key is then used to encrypt
the hash. The encrypted hash — along with other information, such as the hashing algorithm — is
the digital signature. The reason for encrypting the hash instead of the entire message or document
is that a hash function can convert an arbitrary input into a fixed length value, which is usually much
shorter. This saves time since hashing is much faster than signing.
The value of the hash is unique to the hashed data. Any change in the data, even changing or
deleting a single character, results in a different value. This attribute enables others to validate the
integrity of the data by using the signer’s public key to decrypt the hash. If the decrypted hash
matches a second computed hash of the same data, it proves that the data hasn’t changed since it
was signed. If the two hashes don’t match, the data has either been tampered with in some way
(integrity) or the signature was created with a private key that doesn’t correspond to the public key
presented by the signer (authentication).
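
A minimal sketch of signing and verification, again assuming the third-party cryptography package; sign() hashes the data and applies the private key internally, and verify() raises InvalidSignature when the data or key does not match.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"quarterly report v1"
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification with altered data fails, demonstrating integrity checking.
try:
    public_key.verify(signature, b"quarterly report v2",
                      padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("data altered or signed with a different key")
```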

3.5 Describe the operation of a PKI

A public key infrastructure (PKI) is a set of roles, policies, and procedures needed to create, manage,
distribute, use, store, and revoke digital certificates and manage public-key encryption.

In the following video the concept of digital signatures is explained in a simple way:

https://youtu.be/GSIDS_lvRv4

3.6 Describe the security impact of these commonly used hash algorithms

3.6.a MD5: The MD5 algorithm is a widely used hash function producing a 128-bit hash value.
Although MD5 was initially designed to be used as a cryptographic hash function, it has been found
to suffer from extensive vulnerabilities. It can still be used as a checksum to verify data integrity, but
only against unintentional corruption.
3.6.b SHA-1: Secure Hash Algorithm 1 is a cryptographic hash function designed by the United States
National Security Agency and is a U.S. Federal Information Processing Standard published by the
United States NIST. SHA-1 produces a 160-bit (20-byte) hash value known as a message digest. A
SHA-1 hash value is typically rendered as a hexadecimal number, 40 digits long. SHA-1 is no longer
considered secure against well-funded opponents.
3.6.c SHA-2: Secure Hash Algorithm 2 is a set of cryptographic hash functions designed by the
National Security Agency (NSA). SHA-2 includes significant changes from its predecessor, SHA-1. The
SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512
bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256.
3.6.c.1 SHA-256
3.6.c.2 SHA-512
3.6.d SHA-3: the newest member of the Secure Hash Algorithm family, standardized by NIST in 2015 and based internally on the Keccak algorithm (digest sizes for these functions are compared in the sketch below).
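
A quick way to compare the digest sizes of the algorithms above is Python's standard hashlib module.

```python
import hashlib

# MD5 (128-bit), SHA-1 (160-bit), and members of the SHA-2/SHA-3 families.
for name in ("md5", "sha1", "sha256", "sha512", "sha3_256"):
    digest = hashlib.new(name, b"hello").digest()
    print(f"{name:8s} {len(digest) * 8:4d}-bit digest")
```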
3.7 Describe the security impact of these commonly used encryption algorithms and secure
communications protocols

3.7.a DES: Data Encryption Standard is a symmetric-key algorithm for the encryption of electronic
data. Although now considered insecure, it was highly influential in the advancement of modern
cryptography.
3.7.b 3DES: Triple DES, officially the Triple Data Encryption Algorithm (TDEA or Triple DEA), is a
symmetric-key block cipher, which applies the Data Encryption Standard (DES) cipher algorithm
three times to each data block.
3.7.c AES: The Advanced Encryption Standard, also known by its original name Rijndael, is a
specification for the encryption of electronic data established by the U.S. National Institute of
Standards and Technology (NIST) in 2001. AES is based on a design principle known as a substitution-
permutation network, a combination of both substitution and permutation, and is fast in both
software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a
variant of Rijndael which has a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By
contrast, the Rijndael specification per se is specified with block and key sizes that may be any
multiple of 32 bits, both with a minimum of 128 and a maximum of 256 bits.
3.7.d AES256-CTR: AES-256 is a symmetric encryption algorithm that has become ubiquitous, due to the acceptance of the algorithm by the U.S. and Canadian governments as a standard for encrypting transited data and data at rest. Because of the length of the key (256 bits) and the number of rounds (14), a brute-force attack against it would take an impractically long time.
Block cipher mode of operation (ECB, CBC, OFB, CTR, and CFB): in cryptography, a mode of operation is an algorithm that uses a block cipher to provide an information service such as confidentiality or authenticity. CTR (counter) mode, used in AES256-CTR, turns the block cipher into a stream cipher by encrypting successive values of a counter.

3.7.e RSA: RSA is one of the first practical public-key cryptosystems and is widely used for secure
data transmission. In such a cryptosystem, the encryption key is public and differs from the
decryption key which is kept secret. In RSA, this asymmetry is based on the practical difficulty of
factoring the product of two large prime numbers, the factoring problem. RSA is made of the initial
letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who first publicly described
the algorithm in 1977.
3.7.f DSA: The Digital Signature Algorithm (DSA) is a Federal Information Processing Standard for
digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in
August 1991 for use in their Digital Signature Standard (DSS) and adopted as FIPS 186 in 1993.
3.7.g SSH: Secure Shell (SSH) is a cryptographic network protocol for operating network services
securely over an unsecured network. The best known example application is for remote login to
computer systems by users.
3.7.h SSL/TLS: Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), both
frequently referred to as “SSL”, are cryptographic protocols that provide communications security
over a computer network.
3.8 Describe how the success or failure of a cryptographic exchange impacts security investigation

The key exchange problem is how to exchange whatever keys or other information are needed so
that no one else can obtain a copy. Historically, this required trusted couriers, diplomatic bags, or
some other secure channel. With the advent of public key / private key cipher algorithms (i.e., asymmetric ciphers), the encrypting key (a.k.a. the public key of a pair) could be made public, since (at least for high quality algorithms) no one without the decrypting key (a.k.a. the private key of that pair) could decrypt the message.

In terms of a security investigation, first take the case of a failed exchange between the authorized parties. If the exchange fails, authentication, non-repudiation, and integrity are all affected; an investigation cannot rely on them, and the systems are left vulnerable. If the exchange is successful there is no problem, but the question could also be referring to the attack itself. If the attacker’s traffic is protected and the exchange between the system and the attacker is successful, then the investigation is going to be much harder, because the investigator will have limited access to the facts of the attack, such as where it came from or the actual code of the malware involved.

3.9 Describe these items in regards to SSL/TLS

3.9.a Cipher-suite: a concept used in the Transport Layer Security (TLS) / Secure Sockets Layer (SSL) network protocols. Before TLS version 1.3, a cipher suite is a named combination of authentication, encryption, message authentication code (MAC), and key exchange algorithms used to negotiate the security settings. The format of cipher suites changed with TLS 1.3; in TLS 1.3, cipher suites are only used to negotiate the encryption and HMAC algorithms. When a TLS connection is established, a handshake, known as the TLS Handshake Protocol, occurs. Within this handshake, a client hello (ClientHello) and a server hello (ServerHello) message are passed. First, the client sends a list of the cipher suites that it supports, in order of preference. Then the server replies with the cipher suite that it has selected from the client’s list (see the sketch after this list). To test which TLS ciphers a server supports, an SSL/TLS scanner may be used.
3.9.b X.509 certificates: In cryptography, X.509 is an important standard for a public key
infrastructure (PKI) to manage digital certificates and public-key encryption and a key part of the
Transport Layer Security protocol used to secure both web and email communication. An ITU-T
standard, X.509 specifies formats for public key certificates, certificate revocation lists, attribute
certificates, and a certification path validation algorithm.
3.9.c Key exchange: Key exchange (also known as “key establishment”) is any method in
cryptography by which cryptographic keys are exchanged between two parties, allowing the use of a
cryptographic algorithm.
3.9.d Protocol version: TLS 1.0, TLS 1.1, TLS 1.2, TLS 1.3.
3.9.e PKCS: stands for “Public Key Cryptography Standards”. These are a group of public-key
cryptography standards devised and published by RSA Security Inc, starting in the early 1990s. The
company published the standards to promote the use of the cryptography techniques to which they
had patents, such as the RSA algorithm, the Schnorr signature algorithm, and several others.
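
A minimal sketch of inspecting the negotiated protocol version (3.9.d) and cipher suite (3.9.a) with Python's standard ssl module; example.com is an illustrative host.

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake has completed by this point.
        print("protocol:", tls.version())     # e.g. 'TLSv1.2'
        print("cipher suite:", tls.cipher())  # (name, protocol, secret bits)
```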
CCNA Cyber Ops – 4.0 Host-Based Analysis
4.1 Define these terms as they pertain to Microsoft Windows

4.1.a Processes: A process is an executing program.


4.1.b Thread: is the basic unit to which the operating system allocates processor time.
4.1.c Memory allocation: The task of fulfilling an allocation request consists of locating a block of
unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large
pool of memory called the heap or free store.
4.1.d Windows Registry: Windows stores its configuration information in a database called the
registry. The registry contains profiles for each user of the computer and information about system
hardware, installed programs, and property settings. Windows continually references this information during its operation.
4.1.e WMI: Windows Management Instrumentation (WMI) is a set of specifications from Microsoft
for consolidating the management of devices and applications in a network from Windows
computing systems. WMI is the Microsoft implementation of Web-Based Enterprise Management
(WBEM), which is built on the Common Information Model (CIM), a computer industry standard for
defining device and application characteristics so that system administrators and management
programs can control devices and applications from multiple manufacturers or sources in the same
way.
4.1.f Handles: An object is a data structure that represents a system resource, such as a file, thread,
or graphic image. An application cannot directly access object data or the system resource that an
object represents. Instead, an application must obtain an object handle, which it can use to examine
or modify the system resource. Each handle has an entry in an internally maintained table. These
entries contain the addresses of the resources and the means to identify the resource type.
4.1.g Services: Microsoft Windows services, formerly known as NT services, enable you to create
long-running executable applications that run in their own Windows sessions. These services can be
automatically started when the computer boots, can be paused and restarted, and do not show any
user interface. These features make services ideal for use on a server or whenever you need long-
running functionality that does not interfere with other users who are working on the same
computer. You can also run services in the security context of a specific user account that is different
from the logged-on user or the default computer account. For more information about services and
Windows sessions, see the Windows SDK documentation in the MSDN Library. A Windows service is
a computer program that operates in the background.
4.2 Define these terms as they pertain to Linux

4.2.a Processes: An instance of a program that is being executed. Each process has a unique PID,
which is that process’s entry in the kernel’s process table.
4.2.b Fork: creates a new process by duplicating the calling process. The new process is referred to as the child process; the calling process is referred to as the parent process (see the sketch after this list).
4.2.c Permissions: a system to control the ability of the users and processes to view or make changes
to the contents of the filesystem.
4.2.d Symlink: a symbolic link is a file that contains a reference to another file or directory in the form of an absolute or relative path and that affects pathname resolution.
4.2.e Daemon: In multitasking computer operating systems, a daemon is a computer program that
runs as a background process, rather than being under the direct control of an interactive user.
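
A minimal sketch of fork() using Python's os module; this runs only on POSIX systems, where os.fork() is available.

```python
import os

pid = os.fork()  # duplicate the calling process

if pid == 0:
    # Child process: a copy of the parent with its own PID.
    print(f"child  pid={os.getpid()} parent={os.getppid()}")
    os._exit(0)
else:
    # Parent process: fork() returned the child's PID.
    os.waitpid(pid, 0)
    print(f"parent pid={os.getpid()} forked child {pid}")
```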

4.3 Describe the functionality of these endpoint technologies in regards to security monitoring

4.3.a Host-based intrusion detection: Intrusion detection (or prevention) software installed on the
endpoints as opposed to the network.
4.3.b Antimalware and antivirus: Let’s start with the differences between “viruses” and “malware.”
Viruses are a specific type of malware (designed to replicate and spread), while malware is a broad
term used to describe all sorts of unwanted or malicious code. Malware can include viruses,
spyware, adware, nagware, trojans, worms, and more.
4.3.c Host-based firewall: A host-based firewall is a piece of software running on a single host that
can restrict incoming and outgoing network activity for that host only. They can prevent a host from
becoming infected and stop infected hosts from spreading malware to other hosts.
4.3.d Application-level whitelisting/blacklisting: In Windows, it is possible to configure two different
methods that determine whether an application should be allowed to run. The first method, known
as blacklisting, is when you allow all applications to run by default except for those you specifically
do not allow. The other and more secure method is called whitelisting, which blocks every
application from running by default, except for those you explicitly allow.
4.3.e Systems-based sandboxing (such as Chrome, Java, Adobe reader): Sandboxing is a technique
for creating confined execution environments to protect sensitive resources from illegal access. A
sandbox, as a container, limits or reduces the level of access its applications have.

4.4 Interpret these operating system log data to identify an event

4.4.a Windows security event logs: Event logs are special files that record significant events on your
computer, such as when a user logs on to the computer or when a program encounters an error.
Whenever these types of events occur, Windows records the event in an event log that you can read
by using Event Viewer. The Security log is designed for use by the system. However, users can read
and clear the Security log if they have been granted the SE_SECURITY_NAME privilege (the “manage
auditing and security log” user right).
4.4.b Unix-based syslog: Syslog is a way for network devices to send event messages to a logging
server – usually known as a Syslog server. The Syslog protocol is supported by a wide range of
devices and can be used to log different types of events.

4.4.c Apache access logs: In order to effectively manage a web server, it is necessary to get feedback about the activity and performance of the server as well as any problems that may be occurring. The Apache HTTP Server provides very comprehensive and flexible logging capabilities (a parsing sketch follows this list).
4.4.d IIS access logs: IIS uses a flexible and efficient logging architecture. When a loggable event,
usually an HTTP transaction, occurs, IIS calls the selected logging module, which then writes to one
of the logs stored in %SystemRoot%\system32\Logfiles\<service_name>.
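
As a minimal sketch of interpreting an Apache access log, the snippet below parses one hypothetical combined-format entry with a regular expression.

```python
import re

# A single hypothetical line in Apache's combined log format.
line = ('203.0.113.9 - - [12/Mar/2017:10:02:41 +0000] '
        '"GET /admin HTTP/1.1" 403 199 "-" "Mozilla/5.0"')

pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

m = pattern.match(line)
if m:
    # A 403 on /admin from an unknown IP might be worth investigating.
    print(m.group("ip"), m.group("method"), m.group("path"),
          m.group("status"))
```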
CCNA Cyber Ops – 5.0 Security Monitoring

5.1 Identify the types of data provided by these technologies

5.1.a TCP Dump: tcpdump is a command-line packet analyzer that captures and displays network traffic on an interface.


5.1.b NetFlow: NetFlow provides valuable information about network users and applications, peak
usage times, and traffic routing. The basic output of NetFlow is a flow record.
5.1.c Next-Gen firewall: Cisco Firepower NGFW appliances combine a proven network firewall with the industry’s most effective next-gen IPS and advanced malware protection.
5.1.d Traditional stateful firewall: is a network firewall that tracks the operating state and
characteristics of network connections traversing it.
5.1.e Application visibility and control: The Cisco Application Visibility and Control (AVC) solution is a
suite of services in Cisco network devices that provides application-level classification, monitoring,
and traffic control, to:
Improve business-critical application performance
Support capacity management and planning
Reduce network operating costs
5.1.f Web content filtering: A Web filter is a program that can screen an incoming Web page to determine whether some or all of it should not be displayed to the user. The data here comes in the form of a URL, whether typed into the browser or reached by clicking a link.
5.1.g Email content filtering: Cisco Email Security protects against ransomware, business email
compromise, spoofing, and phishing. It uses advanced threat intelligence and a multilayered
approach to protect inbound messages and sensitive outbound data. The data or message here
comes in the form of an email.
5.2 Describe these types of data used in security monitoring

5.2.a Full packet capture: A packet consists of control information and user data, which is also
known as the payload. Control information provides data for delivering the payload, for example:
source and destination network addresses, error detection codes, and sequencing information.
Typically, control information is found in packet headers and trailers. Full packet capture is the collection of these actual packets by storing network traffic.
5.2.b Session data: Session data is the summary of the communication between two network
devices. Also known as a conversation or a flow, this summary data is one of the most flexible and
useful forms of NSM (Network Security Monitoring) data.
5.2.c Transaction data: application-specific records generated from network traffic. Transaction data logs deeper connection-level information, which may span multiple packets within a connection, and requires predefined templates for protocol formatting. It is commonly used for logging HTTP header/request information, SMTP command data, and the like.
5.2.d Statistical data: Overall summaries or profiles of network traffic.
5.2.e Extracted content: high-level data streams, such as files, images, and web pages, rebuilt from network traffic. In a typical NSM deployment, this data would be captured through a network tap or switch. It is distinct from full content, which refers to the unfiltered collection of packets.
5.2.f Alert data: Judgments made by tools that inspect network traffic. Typically the result of finely-tuned signatures matching against packet content, and similar in nature to transaction data. This information, rather than being for logging purposes, is intended to indicate discrete events which might be attacks.
5.3 Describe these concepts as they relate to security monitoring

5.3.a Access control list (ACL): specifies which users or system processes are granted access to
objects, as well as what operations are allowed on given objects. Each entry in a typical ACL specifies
a subject and an operation. IP ACLs control whether routed packets are forwarded or blocked at the
router interface. Your router examines each packet in order to determine whether to forward or
drop the packet based on the criteria that you specify within the ACL. A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files.
5.3.b NAT/PAT: NAT (Network Address Translation) replaces a private IP address with a public IP
address, translating the private addresses in the internal private network into legal, routable
addresses that can be used on the public Internet. With dynamic Port Address Translation (PAT), a group of real IP addresses is mapped to a single IP address using unique source ports of that IP address.
5.3.c Tunneling: Tunneling is a technique that enables remote access users to connect to a variety of
network resources (Corporate Home Gateways or an Internet Service Provider) through a public data
network. In general, tunnels established through the public network are point-to-point (though a
multipoint tunnel is possible) and link a remote user to some resource at the far end of the tunnel.
Major tunneling protocols (i.e., Layer 2 Tunneling Protocol (L2TP), Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Forwarding (L2F)) encapsulate Layer 2 traffic from the remote user and send it
across the public network to the far end of the tunnel where it is de-encapsulated and sent to its
destination. The most significant benefit of Tunneling is that it allows for the creation of VPNs over
public data networks to provide cost savings for both end users, who do not have to create
dedicated networks, and for Service Providers, who can leverage their network investments across
many VPN customers.
5.3.d TOR (The Onion Router): Tor aims to conceal its users’ identities and their online activity from
surveillance and traffic analysis by separating identification and routing. It is an implementation of
onion routing, which encrypts and then randomly bounces communications through a network of
relays run by volunteers around the globe.
5.3.e Encryption: is the process of encoding messages or information in such a way that only
authorized parties can access it.
5.3.f P2P (Peer to Peer): in computing or networking is a distributed application architecture that
partitions tasks or workloads between peers.
5.3.g Encapsulation: is a method of designing modular communication protocols in which logically
separate functions in the network are abstracted from their underlying structures by inclusion or
information hiding within higher level objects.
5.3.h Load balancing: When a router learns multiple routes to a specific network via multiple routing
processes (or routing protocols, such as RIP, RIPv2, IGRP, EIGRP, and OSPF), it installs the route with
the lowest administrative distance in the routing table. In a more general sense, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives.

5.4 Describe these NextGen IPS event types


5.4.a Connection event: Connection events are the records of any connection that occurs in a
monitored network.
5.4.b Intrusion event: generated when the system recognizes a packet that is potentially malicious.
5.4.c Host or endpoint event: events that happen on the endpoints connected to your network.
5.4.d Network discovery event: Discovery events alert you to the activity on your network and
provide you with the information you need to respond appropriately. They are triggered by the
changes that your managed devices detect in the network segments they monitor.
5.4.e NetFlow event: significant events in the life of a flow, such as creation, teardown, and flows denied by an access rule.
5.5 Describe the function of these protocols in the context of security monitoring

5.5.a DNS: is a globally distributed, scalable, hierarchical, and dynamic database that provides a
mapping between hostnames, IP addresses (both IPv4 and IPv6), text records, mail exchange
information (MX records), name server information (NS records), and security key information
defined in Resource Records (RRs). DNS primarily translates hostnames to IP addresses or IP
addresses to hostnames. Flaws in the implementation of the DNS protocol allow it to be exploited and used for malicious activities like DoS and DDoS attacks.
5.5.b NTP: Network Time Protocol (NTP) is a protocol designed to time-synchronize devices within a network. It is very valuable to have correct time settings in event logging systems, so that the analysis of events across devices is accurate.
5.5.c SMTP/POP/IMAP: Email servers, and the way clients connect to them, heavily influence how monitoring and intrusion prevention are configured. The server that provides the service must be hardened, and the connection and download methods should be secured with the mechanisms described earlier in this post.
5.5.d HTTP/HTTPS: The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed,
collaborative, and hypermedia information systems. HTTP is the foundation of data communication
for the World Wide Web. HTTPS (also called HTTP over TLS, HTTP over SSL, and HTTP Secure) is a
protocol for secure communication over a computer network which is widely used on the Internet.
HTTPS consists of communication over Hypertext Transfer Protocol (HTTP) within a connection
encrypted by Transport Layer Security, or its predecessor, Secure Sockets Layer. The main
motivation for HTTPS is authentication of the visited website and protection of the privacy and
integrity of the exchanged data.
CCNA Cyber Ops – 6.0 Attack Methods
6.1 Compare and contrast an attack surface and vulnerability: The attack surface of a software
environment is the sum of the different points (the “attack vectors”) where an unauthorized user
(the “attacker”) can try to enter data to or extract data from an environment. A vulnerability is a
weakness which allows an attacker to reduce a system’s information assurance. Vulnerability is the
intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and
attacker capability to exploit the flaw.

6.2 Describe these network attacks

6.2.a Denial of service: (DoS attack) is a cyber-attack where the perpetrator seeks to make a machine
or network resource unavailable to its intended users by temporarily or indefinitely disrupting
services of a host connected to the Internet.
6.2.b Distributed denial of service: A distributed denial-of-service (DDoS) is a cyber-attack where the
perpetrator uses more than one, often thousands of, unique IP addresses.
6.2.c Man-in-the-middle: an attack where the attacker secretly relays and possibly alters the
communication between two parties who believe they are directly communicating with each other.
6.3 Describe these web application attacks

6.3.a SQL injection: is a code injection technique, used to attack data-driven applications, in which nefarious SQL statements are inserted into an entry field for execution (e.g., to dump the database contents to the attacker); see the sketch after this list.
6.3.b Command injections: Command injection is an attack in which the goal is the execution of
arbitrary commands on the host operating system via a vulnerable application. Command injection
attacks are possible when an application passes unsafe user supplied data (forms, cookies, HTTP
headers etc.) to a system shell.
6.3.c Cross-site scripting: (XSS) attacks are a type of injection, in which malicious scripts are injected
into otherwise benign and trusted web sites.
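
A minimal sketch of 6.3.a using Python's built-in sqlite3 module and a hypothetical users table, contrasting a vulnerable string-built query with a safe parameterized one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: user input concatenated directly into the SQL statement,
# so the payload rewrites the WHERE clause and dumps every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print("injected query returned:", rows)

# SAFE: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)
```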
6.4 Describe these attacks

6.4.a Social engineering: An attack based on deceiving end users or administrators at a target site.
Social engineering attacks are typically carried out by email or by contacting users by phone and
impersonating an authorized user, in an attempt to gain unauthorized access to a system or
application.
6.4.b Phishing: Phishing is misrepresentation where the criminal uses social engineering to appear as
a trusted identity.
6.4.c Evasion methods: bypassing an information security device in order to deliver an exploit,
attack, or another form of malware to a target network or system, without detection.
6.5 Describe these endpoint-based attacks

6.5.a Buffer overflows: is an anomaly where a program, while writing data to a buffer, overruns the
buffer’s boundary and overwrites adjacent memory locations.
6.5.b Command and control (C2): the term refers to the influence an attacker has over a
compromised computer system that they control.
6.5.c Malware: short for malicious software, is any software used to disrupt computer or mobile
operations, gather sensitive information, gain access to private computer systems, or display
unwanted advertising.
6.5.d Rootkit: is a collection of computer software, typically malicious, designed to enable access to a
computer or areas of its software that would not otherwise be allowed (for example, to an
unauthorized user) and often masks its existence or the existence of other software.
6.5.e Port scanning: probing a server or host for open ports.
6.5.f Host profiling: Identifying groups of Internet hosts with a similar behavior or configuration.
6.6 Describe these evasion methods

6.6.a Encryption and tunneling: One common method of evasion used by attackers is to avoid
detection simply by encrypting the packets or putting them in a secure tunnel.
6.6.b Resource exhaustion: A common method of evasion used by attackers is extreme resource consumption; it does not matter whether such a denial is directed against the device or the personnel managing the device. Specialized tools can be used to create a large number of alarms that consume the resources of the IPS device and prevent attacks from being logged.
6.6.c Traffic fragmentation: Fragmentation of traffic was one of the early network IPS evasion
techniques used to attempt to bypass the network IPS sensor.
6.6.d Protocol-level misinterpretation: Attackers also evade detection by causing the network IPS
sensor to misinterpret the end-to-end meaning of network protocols.
6.6.e Traffic substitution and insertion: is when the attacker attempts to substitute payload data with other data in a different format but with the same meaning. A network IPS sensor may miss such malicious payloads if it looks for data in a particular format and doesn’t recognize the true meaning of the data.
6.6.f Pivot: refers to a method used by penetration testers (and attackers) that uses a compromised system to attack other systems on the same network, evading restrictions such as firewall configurations that may prohibit direct access to all machines.
6.7 Define privilege escalation

Privilege Escalation is the act of exploiting a bug, design flaw or configuration oversight in an
operating system or software application to gain elevated access to resources that are normally
protected from an application or user.

6.8 Compare and contrast remote exploit and a local exploit

A remote exploit works over a network and exploits the security vulnerability without any prior
access to the vulnerable system. A local exploit requires prior access to the vulnerable system and
usually increases the privileges of the person running the exploit past those granted by the system
administrator.
