
3.0 Security Operations and Monitoring


3.1 Given a scenario, analyze data as part of security monitoring activities

Data Analysis Methods


• Trend analysis
-Trend analysis is the study of data patterns over time to determine how, when, and why they change.
-Focuses on predicting behaviors based on existing data.
-Trend analysis is typically used to identify large-scale changes from the norm, and it is more likely to be useful for a network than for a single PC.
- Trend analysis can help to identify future outcomes such as network congestion based on usage patterns and observed growth. It is not used
as frequently as a security analysis method but can be useful to help guarantee availability of services by ensuring that they are capable of
handling an organization's growth or increasing needs.

--Temporal trends = show patterns related to time. E.g., a breach happens late on a Friday night, when traffic is low and not many people are working; most likely the
incident will not be detected until 3 days later.

--Spatial trends = patterns that exist in a specific place/region. It is a common practice, for instance, to give staff members a “burner” laptop when they travel to
certain countries. This device is not allowed to connect to the corporate network, stores a limited set of files, and is digitally wiped immediately upon
the user’s return. This practice is the result of observing a trend of sophisticated compromises of devices traveling to particular countries. Another
example would be the increasing connection of devices to free Wi-Fi networks at local coffee shops.

*trend analysis helps predict future events, and historical analysis helps compare new observations to past ones.

• Heuristics
-Heuristic analysis focuses on behaviors, allowing a tool using it to identify malware behaviors instead of looking for a specific package.
-It is used to detect threats based on their behavior. Unlike signature detection, heuristic detection can detect unknown threats since it focuses on what
the threat does rather than attempting to match it to a known fingerprint.
-Antimalware tools often use heuristic analysis to analyze suspected malware and detect unknown malware.

• Anomaly
-Anomaly analysis looks for differences from established patterns or expected behaviors (baseline). Anomaly detection requires knowledge of what
“normal” is in order to build a baseline model and identify deviations from it. IDSs and IPSs often use anomaly detection as part of their detection methods.

• Endpoint Security / Endpoint Data Analysis


(1)Malware
-Recognizing malicious software and its associated behaviors is critical in protecting endpoints.
- In-depth malware analysis is a complex skillset, but analysts should know about the approaches that are commonly used to analyze malware.

- Reverse engineering
- Reverse engineering malware requires using tools like disassemblers, debuggers, monitoring tools, unpackers, and code and binary analysis tools to
pull apart malware packages to determine what they do, who wrote them, and other details of their functionality and construction.
- common types of tools used for reverse engineering:
(i) Debuggers - allow you to run programs in a controlled environment, modifying variables and how the program is running, including adding
stop points and monitoring what it is doing.
(ii) Disassemblers - used to convert machine code into assembly language, whereas decompilers attempt to convert machine code into a high-level
language like C or Java.
(iii) Unpackers and packer identifiers - used to identify what packing and encryption techniques were used to obfuscate a program and then to
undo the packing process.
(iv) System monitoring tools - used to monitor impact to a system like changes to the filesystem, registry, or other settings or configuration
changes.

(2)Memory Analysis
-For Windows systems, the Resource Monitor (resmon) can be a useful built-in tool -- shows processes, PIDs, and memory in use.
-For Linux, use the 'top' or 'ps' command-line tools.
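
*Illustration only - a minimal Python sketch of a quick per-process memory snapshot, similar in spirit to what resmon or top shows. Assumes the third-party psutil library is installed; the "top 10" cutoff is arbitrary.

import psutil  # third-party: pip install psutil

# Collect resident memory usage (RSS) for every process we can read.
procs = []
for p in psutil.process_iter(['pid', 'name', 'memory_info']):
    mem = p.info['memory_info']
    if mem is not None:
        procs.append((mem.rss, p.info['pid'], p.info['name'] or ''))

# Print the ten largest memory consumers.
for rss, pid, name in sorted(procs, reverse=True)[:10]:
    print(f"{pid:>7} {name:<30} {rss / (1024 * 1024):8.1f} MiB")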

(3)System and application behavior


- Understanding typical system and application behavior helps security professionals compare known-good behavior to suspected malicious or
abnormal behavior.
- Known-good behavior = established by monitoring normal behaviors over a period of time.
- Anomalous behavior = behavior that doesn't match the normal behavior of the system or network.
- Exploit techniques = Knowing which exploit techniques are most commonly used and how to find artifacts of those exploits helps analysts
detect compromises. Detecting these exploit techniques in action requires a combination of active
detection tools, logging and analysis, and forensic and incident response tools for those occasions when something gets
through your defenses.

* Tools like Amazon's AWS Inspector check for expected behaviors and settings and then flag when they aren't correct.

(4)File system monitoring
-Monitoring filesystems can help detect unauthorized or unexpected changes.
-Tools like Tripwire, OSSEC, and commercial host intrusion detection system (HIDS) tools are used to monitor and report on filesystem changes.
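
*Illustration only - a minimal Python sketch of the baseline-and-compare idea behind file integrity monitoring tools like Tripwire. The monitored path and baseline file name are placeholders, and real HIDS tools do far more (ownership, permissions, real-time hooks, reporting).

import hashlib
import json
import os

def hash_tree(root):
    # Return {path: sha256 hex digest} for every file under root.
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

current = hash_tree('/etc')                      # placeholder path to monitor
if os.path.exists('baseline.json'):
    with open('baseline.json') as f:
        baseline = json.load(f)
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print('CHANGED or NEW:', path)       # candidate for investigation
else:
    with open('baseline.json', 'w') as f:
        json.dump(current, f)                    # first run: record the baseline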

(5)User and entity behavior analytics (UEBA)


- UEBA enables analysts to detect anomalous behavior quickly when there are deviations from normal patterns, without the need for predefined rules
(Machine Learning techniques & Statistical Analysis technique used)
-UEBA will collect information related to user behavior and trends on a network, then it will create reliable baselines of user behavior
patterns.
-From this point, UEBA can then continuously monitor future network behavior and alert upon deviations from the norm.
-UEBA can be implemented as part of a SIEM or endpoint tool or as a separate application. The key differentiator for UEBA tools is the focus on user
and entity behaviors and a focus on anomalous behavior based on previous normal behavior combined with typical indicators of compromise.
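
*Illustration only - an over-simplified Python sketch of the baseline-and-deviation idea behind UEBA. Real UEBA products build per-user/per-entity models with machine learning; the sample counts and the 3-standard-deviation cutoff here are made up.

import statistics

# Learned baseline: files a user downloads per day over the last two weeks.
history = [12, 9, 15, 11, 13, 10, 14, 12, 11, 13]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

today = 140  # today's observed count for the same user
if abs(today - mean) > 3 * stdev:
    print(f"Anomalous download volume: {today} vs. baseline around {mean:.0f}")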

• Network Data Analysis


Security practitioners need to know how to analyze network traffic, including the contents of packets that are sent, the destinations that traffic is sent
to or where traffic is coming from, the overall flow of traffic, and the individual packets and protocols in use.

- Uniform Resource Locator (URL) and Domain name system (DNS) analysis
-URLs (uniform resource locators) are used to point web browsers and other tools to their destinations. That means that you need to know how to
identify suspect URLs and the domains that they may point to.
-Manual analysis starts with a review of the URL itself. Does it appear to be a legitimate domain name, or does it have suspect elements like a
deceptive hostname or domain name or an uncommon top-level domain (TLD)?
-Google's Safe Browsing tool (safebrowsing.google.com) is one example of a tool that analyzes URLs. It also provides information about malicious
content hosted on domains, allowing domain administrators to receive notifications if their domains are hosting malware.

-Domain generation algorithm (DGA)


-A subset of URL and domain name analysis. DGA is used as part of malware packages to dynamically generate domain names from a known seed.
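
*Illustration only - a tiny Python sketch of how a DGA can derive many pseudo-random domains from a known seed plus the current date. This is a teaching example, not any real malware family's algorithm; knowing the seed and algorithm lets defenders pre-compute and block or sinkhole the same domains.

import hashlib
from datetime import date

def generate_domains(seed, count=5, tld='.com'):
    # Deterministic: the same seed and date always yield the same domains.
    domains = []
    for i in range(count):
        data = f'{seed}-{date.today()}-{i}'.encode()
        domains.append(hashlib.md5(data).hexdigest()[:12] + tld)
    return domains

print(generate_domains('example-seed'))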

- Network Flow analysis


-To capture data about which host talked to which host, via what protocol and which port, and how much data was transferred. Flow analysis can be
helpful for detecting abnormal traffic patterns.
-NetFlow tools - can be used to analyze what is happening on your network.
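
*Illustration only - a short Python sketch of summarizing flow records to find top talkers, assuming flows were exported to a CSV with src, dst, port, and bytes columns (the file name and column names are assumptions, not a NetFlow standard).

import csv
from collections import Counter

bytes_per_pair = Counter()
with open('flows.csv', newline='') as f:       # assumed export format: src,dst,port,bytes
    for row in csv.DictReader(f):
        bytes_per_pair[(row['src'], row['dst'])] += int(row['bytes'])

# Sudden new host pairs or unusually large volumes stand out quickly here.
for (src, dst), total in bytes_per_pair.most_common(10):
    print(f'{src} -> {dst}: {total} bytes')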

- Packet and protocol analysis


-performed using automated tools like IPS and IDS systems, as well as with manual analysis tools like Wireshark. As a security analyst, you will need to
know the basics of using Wireshark or similar packet analyzer tools, as well as common protocols like DHCP, DNS, HTTP, and others.
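
*Illustration only - a brief Python sketch using the third-party scapy library to get a protocol breakdown from a capture file, the kind of first pass an analyst might do before opening Wireshark. The capture file name is a placeholder.

from collections import Counter
from scapy.all import rdpcap, DNS, TCP, UDP    # third-party: pip install scapy

counts = Counter()
for pkt in rdpcap('capture.pcap'):
    if pkt.haslayer(DNS):
        counts['DNS'] += 1
    elif pkt.haslayer(TCP):
        counts['TCP (other)'] += 1
    elif pkt.haslayer(UDP):
        counts['UDP (other)'] += 1
    else:
        counts['other'] += 1

print(counts)   # an unexpected protocol mix is a cue for deeper manual analysis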

- Malware
-Identifying malware on your network through packet and protocol analysis relies on a strong knowledge of what traffic should look like and what
behaviors and content are abnormal.
-Finding malware traffic when you can't see the content of the packets due to encryption can be more challenging. In cases where packets are
encrypted, you may have to rely on behavior-based analysis by looking at traffic patterns that are indicative of malware like visiting known-bad sites,
sending unexpected traffic on uncommon ports, or other abnormal behaviors.

• Log review
Security analysts need to know what logs exist by default on systems, how to access them, how to find information about the content of those logs,
and how to interpret that content.

- Event logs
- Windows event log can be viewed directly on workstations using the Event Viewer.
- By default, Windows includes Application, Security, Setup, and System logs, which can all be useful for analysts.

- Syslog
-Used by Linux, typically in the /var/log directory.
-E.g., the auth.log file shows sudo events and other enabled authentication logging activity on a Linux server, supporting auditing and analysis of login events.
Do research.
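
*Illustration only - a quick Python sketch of pulling failed SSH logins out of auth.log and counting them per source address. The log path and message format are typical for Debian/Ubuntu; adjust for your distribution.

import re
from collections import Counter

failed_by_ip = Counter()
pattern = re.compile(r'Failed password for (?:invalid user )?(\S+) from (\S+)')

with open('/var/log/auth.log') as log:
    for line in log:
        match = pattern.search(line)
        if match:
            user, ip = match.groups()
            failed_by_ip[ip] += 1

# Repeated failures from one address often indicate brute forcing.
for ip, count in failed_by_ip.most_common(5):
    print(ip, count)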

- Firewall logs
-They typically identify the source and destination IP address, the port and protocol, and what action was taken on the traffic.

- Web application firewall (WAF)


- Many WAF systems have default rulesets that look for attacks that match the OWASP Top 10 or other common application security
risks, allowing administrators to quickly enable a common ruleset.

- Proxy logs
-Proxies are often used to either centralize access traffic or to filter traffic. Thus, proxy logs will contain the source and destination IP address, the
source and destination port, the requested resource, the date and time, and often the content type and HTTP referrer as well as details about the
content, such as the amount of traffic that was sent.
-When analyzing proxy logs, you should look for data such as the target host IP, HTTP request method, unusual user agents, and protocol versions.

- Intrusion detection system (IDS)/ Intrusion prevention system (IPS)


-IDS and IPS systems rely on rules to identify unwanted traffic. That means that when a rule is triggered on an IDS and IPS, the logs will contain
information about the rule that was activated and information about the traffic that was captured and analyzed to trigger the rule.
-An IDS generates alerts when a threat is detected (it does not block on the spot; it sends an alert to admins for the next action), while an IPS actively blocks the threat.

*SNORT - Open source NIDS/NIPS system. Snort rules have 2 parts: header and options.
Header (action such as alert or drop, protocol, IPs, port numbers, direction), while Options (what to look for in the content and the message to display to the user).
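
*Illustration only - a made-up Snort-style rule and a few lines of Python that split it into its header and options, mirroring the two parts described above. The rule is an example for study, not from a production ruleset.

# Header = everything before the parentheses; options live inside them.
rule = ('alert tcp $EXTERNAL_NET any -> $HOME_NET 80 '
        '(msg:"Possible directory traversal"; content:"../"; sid:1000001; rev:1;)')

header, _, options = rule.partition('(')
print('Header :', header.strip())   # action, protocol, source/destination IPs and ports, direction
print('Options:', [opt.strip() for opt in options.rstrip(');').split(';') if opt.strip()])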

• Impact analysis
The results of an attack are referred to as impact.

- Organization impact vs. localized impact


-An organization impact = one that affects mission essential functions, meaning that the organization cannot operate as intended. Along with the
scope, the duration of the impact will have a substantial effect on costs.

-A localized impact = means that the scope is limited to a single department, small user group, or one or two systems.

- Immediate vs. total


-Immediate impact = refers to direct costs incurred because of an incident, such as downtime, asset damage, and fees and penalties.
-Total impact = refers to costs that arise following an incident, including damage to the company's reputation.

• Security information and event management (SIEM) review


- Rule writing
-Rule writing for most SIEM devices focuses on correlation rules, which look at activities or events and then match them to unwanted behaviors.
-Tells your SIEM system which sequences of events could be indicative of anomalies that may suggest security weaknesses or a cyberattack.
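
*Illustration only - a toy Python sketch of what a correlation rule expresses: several failed logins followed by a success from the same source within a short window. The events, field names, threshold, and window are all made up; a real SIEM would evaluate this against its normalized event stream.

from datetime import datetime, timedelta

# Pre-parsed login events, as a SIEM would normalize them from raw logs.
events = [
    {'time': datetime(2023, 1, 1, 2, 0, 0),  'src': '10.0.0.5', 'result': 'fail'},
    {'time': datetime(2023, 1, 1, 2, 0, 20), 'src': '10.0.0.5', 'result': 'fail'},
    {'time': datetime(2023, 1, 1, 2, 0, 40), 'src': '10.0.0.5', 'result': 'fail'},
    {'time': datetime(2023, 1, 1, 2, 1, 0),  'src': '10.0.0.5', 'result': 'success'},
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 3

for event in events:
    if event['result'] != 'success':
        continue
    recent_fails = [e for e in events
                    if e['src'] == event['src'] and e['result'] == 'fail'
                    and timedelta(0) <= event['time'] - e['time'] <= WINDOW]
    if len(recent_fails) >= THRESHOLD:
        print(f"ALERT: possible brute force then success from {event['src']} at {event['time']}")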

- Known-bad Internet protocol (IP)


-Known IPs from IP reputation lists with suspected malicious behavior -- can be used to create correlation rules.
-Block traffic if an internal host tries to reach an IP on these known-bad lists.

- Dashboard
-SIEM systems typically provide the ability to create a dashboard, which shows the status of rules, data sources, actions taken, and other critical
operational and analytic information that the SIEM provides.

• Query writing
-The ability to query for terms of interest when searching data is a core function of any data aggregation platform.

- String search
- The platform’s features and functions are often heavily driven by searches.
-Search languages - Splunk Search Processing Language (SPL), Kibana Query Language (KQL), and Apache Lucene.
-Each of these languages lets analysts go from simple string searches or queries for terms of interest to more advanced search techniques using
Boolean logic.

- Script
-Depending on the platform used, you may be able to search and then perform automated actions such as alert delivery via scripting.
-The most commonly supported types include shell, batch, Perl, and Python scripts.
-When creating automation scripts, use the appropriate working directories, configure the environment correctly, and ensure that arguments are
passed correctly. E.g., use a script to initiate searches and retrieve results automatically, as in the sketch below.
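
*Illustration only - a hedged Python sketch of scripting a search against an aggregation platform's REST API with the third-party requests library. The URL, endpoint, request fields, and token are placeholders, not any real product's API; consult your platform's documentation for its actual search interface.

import requests  # third-party: pip install requests

SEARCH_URL = 'https://siem.example.internal/api/search'   # placeholder endpoint
TOKEN = 'REDACTED'                                         # placeholder API token

# Kick off a search for a term of interest and retrieve the results.
response = requests.post(
    SEARCH_URL,
    headers={'Authorization': f'Bearer {TOKEN}'},
    json={'query': 'failed password', 'earliest': '-24h'},   # placeholder query fields
    timeout=30,
)
response.raise_for_status()
for hit in response.json().get('results', []):
    print(hit)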

- Piping
-Passing data using built-system functionalities such as piping and redirection can be used to test functionality quickly or for low-volume
processing.
-Piping is a useful function in that it enables the standard output (stdout) of a command to be connected to standard in (stdin) of another
command.
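
*Illustration only - a small Python sketch of the same stdout-to-stdin idea using subprocess, equivalent to piping grep into wc -l at a shell prompt. The log path is a placeholder.

import subprocess

# Equivalent of: grep "Failed password" /var/log/auth.log | wc -l
grep = subprocess.Popen(['grep', 'Failed password', '/var/log/auth.log'],
                        stdout=subprocess.PIPE)
count = subprocess.Popen(['wc', '-l'], stdin=grep.stdout, stdout=subprocess.PIPE)
grep.stdout.close()                     # allow grep to get SIGPIPE if wc exits early
output, _ = count.communicate()
print('Failed logins:', output.decode().strip())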

• E-mail analysis
- Malicious payload
-Attackers attach malicious files to email or conceal malware inside documents, ZIP files, or PDF files.
-Attackers embed a malicious script or macro into a legitimate looking document and try to trick the user into enabling functionality to get their
malware in the door.

- Domain Keys Identified Mail (DKIM)


-The DomainKeys Identified Mail (DKIM) standard was introduced as a way for e-mail senders to provide a method for recipients to verify messages.
-It specifically offers three services: identity verification, identification of an identity as known or unknown, and determination of an
identity as trusted or untrusted.
-DKIM uses a pair of keys, one private and one public, to verify messages. The organization’s public key is published to DNS records, which will later be
queried for and used by recipients.
-When sending a message using DKIM, the sender includes a special signature header in all outgoing messages. The DKIM header will include a
hash of the e-mail header, a hash of some portion of the body, and information about the function used to compute the hash, as shown here:
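(Illustrative DKIM-Signature header; the selector, domain, and hash/signature values are placeholders.)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com; s=selector1;
 h=from:to:subject:date; bh=<base64 body hash>; b=<base64 signature>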

-Upon receiving a message, the destination server will look up the previously published public key and use this key to verify the message.
-With this process, DKIM can effectively protect against spam and spoofing, and it can also alert recipients to the possibility of message tampering.
-Importantly, DKIM is not intended to give insight into the intent of the sender, protect against tampering after verification, or prescribe any actions
for the recipient to take in the event of a verification failure.
-Do research

- Sender Policy Framework (SPF)


The Sender Policy Framework (SPF) enables domain owners to prevent e-mail spoofing using their domains by leveraging DNS functionality.
-An SPF TXT record lists the authorized mail servers associated with a domain.
-Before a message is fully received by a recipient server, that server will verify the sender’s SPF information in DNS records. Once this is verified, the
entirety of the message can be downloaded.
-If a message is sent from a server that’s not in that TXT record, the recipient’s server can categorize that e-mail as suspicious and mark it for further
analysis.
[Figure: TXT records from DNS lookup of comptia.org highlighting the SPF entry]
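
*Illustration only - a short Python sketch of manually checking a domain's SPF record with the third-party dnspython library (dnspython 2.x; older versions use dns.resolver.query instead of resolve). The domain is just an example.

import dns.resolver  # third-party: pip install dnspython

# Pull the domain's TXT records and print the SPF entry (it starts with "v=spf1").
for rdata in dns.resolver.resolve('comptia.org', 'TXT'):
    text = rdata.to_text().strip('"')
    if text.startswith('v=spf1'):
        print(text)   # lists the servers authorized to send mail for the domain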

- Domain-based Message Authentication, Reporting, and Conformance (DMARC)


-DMARC is an e-mail authentication protocol designed to give e-mail domain owners the ability to prevent spoofing and reduce spam that appears
to originate from their domain. Like SPF, an entry is created in the domain owner’s DNS record. DMARC can be used to tell receiving servers how to
handle messages that appear to be spoofed using a legitimate domain.
-DMARC uses SPF and DKIM to verify that messages are authentic, so it’s important that both SPF and DKIM are correctly configured for the
DMARC policy to work properly.
-Once the DMARC DNS entry is published, any e-mail server receiving a message that appears to be from the domain can check against DNS records,
authenticated via SPF or DKIM.
-The results are passed to the DMARC module along with the message author’s domain. Messages that fail SPF, DKIM, or domain tests may invoke the
organization’s DMARC policy. DMARC also makes it possible to record the results of these checks in an aggregate report, which is usually sent to domain
owners on a daily basis.
-This allows for DMARC policies to be improved or other changes in infrastructure to be made. As with SPF, DMARC TXT information can be manually
queried for using any number of DNS utilities.

[Figure: TXT records from DNS lookup of comptia.org highlighting the DMARC entry]
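
An illustrative DMARC TXT record, published at _dmarc.<domain> (the policy and report mailbox values are placeholders):
v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100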

DKIM, SPF, DMARC Summary


SPF
-Helps prevent spoofing by verifying the sender’s IP address against a DNS record (DNS TXT record) containing information about the IPs/servers
allowed to send emails from a specific domain.
-SPF can prevent domain spoofing.

DKIM
-Shows that the email belongs to a specific organization.
-Adds a digital signature to the header of your email message, which email servers then check to ensure that the email content hasn’t changed.
Like SPF, a DKIM record exists in the DNS (DNS TXT record).
-DKIM uses an encryption algorithm to create a pair of electronic keys -- a public and a private key -- that handles this “trust”. The private key
remains on the server it was created on, which is your mail server. The public key is what’s placed in the DNS TXT record.

DMARC
-DMARC (Domain-based Message Authentication, Reporting & Conformance) defines how the recipient’s mail server should process incoming emails
if they don’t pass the authentication check (either SPF, DKIM, or both).
-Ties the first two protocols together with a consistent set of policies.
-Basically, if there’s a DKIM signature and the sending server is found in the SPF records, the email is sent to the recipient’s inbox. If the
message fails authentication, it’s processed according to the selected DMARC policy: none, reject, or quarantine.

- Phishing
-In a social engineering campaign, an attacker uses deception, often influenced by the profile they’ve built about the target, to
manipulate the target into performing an act that may not be in their best interest.
-Despite the most advanced technical countermeasures, the human element remains the most vulnerable part of the network.

- Forwarding
-Users provide the most useful information to a security team by forwarding an e-mail in its entirety, with headers and body intact, rather than just
copying and pasting the text within the e-mail, or by attaching multiple e-mails in a forwarded message.

- Digital signature
- A digital signature provides verification of the sender’s authenticity, message integrity, and nonrepudiation (the assurance that a sender
cannot deny having sent a message).
-This kind of signature requires the presence of public and private cryptographic keys.
- S/MIME and PGP can both provide authentication, message integrity, and nonrepudiation. In practice, S/MIME is often used in commercial settings,
while PGP tends to be used by individuals.

- E-mail signature block


-Signature block content is simply additional text that is automatically or manually inserted with messages to enable users to share contact
information. It offers no security advantage.

- Embedded links
-Some security devices perform real-time analysis of inbound messages for the presence of URLs and domains and modify the messages so that links
are either disabled or redirected to a valid domain.

- Impersonation
-Impersonation attacks are highly targeted efforts designed to trick victims into performing actions such as wiring money to attacker accounts.
-By pretending to be a CEO, for example, an attacker may use tailored language to convince her targets to perform the requested task without thinking
twice.
-Key staff must be aware of current attacker trends and take the required training to resist them.

- Header
-An e-mail header is the portion of a message that contains details about the sender, the route taken, and the recipient.
-Analysts can use this information to detect spoofed or suspicious e-mails that have made it past filters.
-Note the SPF and DKIM verdicts are also captured in the header information along with various server addresses.
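
*Illustration only - a brief Python sketch of pulling the interesting fields out of a saved raw message with the standard library email module. The file name is a placeholder.

from email import message_from_string

with open('suspicious.eml') as f:
    msg = message_from_string(f.read())

print('From:', msg.get('From'))
print('Return-Path:', msg.get('Return-Path'))
print('Authentication-Results:', msg.get('Authentication-Results'))   # SPF/DKIM/DMARC verdicts
for hop in msg.get_all('Received', []):
    print('Received:', hop.split('\n')[0])   # the route the message took, newest hop first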
