
LOG

ANALYSIS
(PART 3)
“Log analysis is the process of interpreting and reviewing computer-generated event logs to proactively identify bugs, security threats, or other risks.”
Log analysis is typically done within a Log
Management System, a software solution
that gathers, sorts and stores log data and
event logs from a variety of sources.
e.g.: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog.

Log analysis generally follows these steps:


1. Data Collection
2. Data Indexing
3. Analysis
4. Monitoring
5. Reports
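The five steps above can be sketched as a minimal pipeline. This is an illustrative toy, not the internals of any real log management system; the function names, the JSON log format, and the alert threshold are all assumptions made for the example:

```python
import json
from collections import Counter

def collect(raw_lines):
    # 1. Data Collection: parse raw log lines into structured records
    return [json.loads(line) for line in raw_lines]

def index(records):
    # 2. Data Indexing: group records by a field for fast lookup
    idx = {}
    for rec in records:
        idx.setdefault(rec["level"], []).append(rec)
    return idx

def analyze(idx):
    # 3. Analysis: count events per severity level
    return Counter({level: len(recs) for level, recs in idx.items()})

def monitor(stats, threshold=2):
    # 4. Monitoring: flag levels whose count reaches a threshold
    return [level for level, n in stats.items() if n >= threshold]

def report(alerts):
    # 5. Reports: summarize findings for a human reader
    return f"{len(alerts)} level(s) need attention: {', '.join(alerts)}"

raw = [
    '{"level": "ERROR", "msg": "disk full"}',
    '{"level": "ERROR", "msg": "timeout"}',
    '{"level": "INFO", "msg": "started"}',
]
print(report(monitor(analyze(index(collect(raw))))))
# prints: 1 level(s) need attention: ERROR
```

Real systems replace each stage with far heavier machinery (shippers, indexers, dashboards), but the data flow is the same.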
LOG COLLECTION
METHODS
Agents
Syslog
Log Forwarders
Centralized Logging Solutions
Windows Event Forwarding (WEF)
Packet Sniffing
AGENTS
Software components installed on
individual systems that collect and
forward logs to a central repository.

Real-time collection
Lightweight
Suitable for remote or disconnected
systems
Requires installation on each system
Potential resource consumption
SYSLOG
A standard protocol for forwarding log
messages within an IP network.

Standardized
Supports UDP and TCP
Widely used in UNIX-based systems
Limited security features in the
original syslog protocol.
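As a sketch, Python's standard library can emit syslog messages with `logging.handlers.SysLogHandler`. The collector address `("localhost", 514)` and the app name are assumptions; adjust them for your environment:

```python
import logging
import logging.handlers

logger = logging.getLogger("demo-app")
logger.setLevel(logging.INFO)

# SysLogHandler defaults to UDP; syslog traditionally listens on port 514.
# Note: the original syslog protocol has no encryption or authentication,
# so production setups often layer TLS on top.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("demo-app: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user login succeeded")
logger.error("user login failed")
```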
LOG FORWARDERS
Specialized tools that collect logs from
various sources and forward them to
centralized log management systems.

Aggregation of logs from multiple sources
Enhanced security features
Configuration and maintenance
overhead
CENTRALIZED LOGGING
SOLUTIONS
Comprehensive platforms that centralize
log storage, analysis, and visualization.
e.g.: ELK Stack, Splunk, Graylog.

Scalability
Powerful search and analysis
capabilities
Visualization tools
Cost (some solutions may be expensive)
Resource-intensive
WINDOWS EVENT
FORWARDING
Windows-specific mechanism for
collecting and forwarding event logs.

Built-in Windows feature
Supports subscriptions and custom event forwarding
Limited to Windows environments
PACKET SNIFFING
Capturing and analyzing network
traffic to extract log-like information.

Provides insights into network-level activities
Limited application-layer details
Potential ethical and legal considerations
FACTORS TO CONSIDER IN
LOG COLLECTION
Scalability: The chosen method
should scale with the volume of
log data.
Security: Ensure the
confidentiality and integrity of
collected logs.
Compatibility: Compatible with
the log sources and analysis tools
used.
LOG
SOURCES
A log source is a data source that
creates an event log.
LOG COLLECTION
SOURCES
Windows Event Logs
Linux/Unix Syslogs
Routers and Switches
Firewalls
Apache Access and Error Logs
Nginx Access and Error Logs
Java Application Logs
.NET Application Logs
Database Server Logs
AWS CloudWatch Logs
Azure Monitor Logs
Syslog (Aggregation)
Splunk, ELK Stack (Elasticsearch, Logstash, Kibana)
Docker Logs
Kubernetes Events and Logs
Application-specific Logs
Active Directory Logs
LDAP Logs
Custom Application Logs
Terraform Logs
Antivirus and Anti-malware Logs
VMware/Hyper-V Logs
Logs from MDM Systems
LOG
ANALYSIS
TECHNIQUES
SEARCH AND FILTERING
Regular Expressions (Regex)
Utilize regular expressions to
search and filter log entries
based on patterns and criteria.

For example: Searching for IP addresses, specific error codes, or user names.
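As a sketch, Python's `re` module can pull IPv4 addresses and error codes out of raw log lines. The log lines and patterns below are invented for illustration:

```python
import re

log_lines = [
    "2024-01-15 10:32:01 ERROR 500 request from 192.168.1.10 failed",
    "2024-01-15 10:32:05 INFO 200 request from 10.0.0.7 ok",
    "2024-01-15 10:33:12 ERROR 403 request from 192.168.1.10 denied",
]

# Match dotted-quad IPv4 addresses and three-digit ERROR codes
ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
error_pattern = re.compile(r"\bERROR (\d{3})\b")

for line in log_lines:
    errors = error_pattern.findall(line)
    if errors:
        ip = ip_pattern.findall(line)[0]
        print(f"error code {errors[0]} from {ip}")
# prints:
# error code 500 from 192.168.1.10
# error code 403 from 192.168.1.10
```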
SEARCH AND FILTERING
Command-Line Tools
Leverage tools like grep and awk
for efficient log searching and
filtering in the command-line
interface.

For example: Using grep to filter lines containing specific keywords.
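For instance, `grep -i "error" app.log` prints every line containing "error", case-insensitively. The same filter can be reproduced in Python when scripting is preferred over the shell; the log content and file name here are made up:

```python
import io

# Stand-in for a real log file such as app.log
app_log = io.StringIO(
    "10:01 INFO service started\n"
    "10:02 ERROR connection refused\n"
    "10:03 WARN retrying\n"
    "10:04 error: disk quota exceeded\n"
)

# Equivalent of: grep -i "error" app.log
matches = [line.rstrip() for line in app_log if "error" in line.lower()]
for line in matches:
    print(line)
# prints:
# 10:02 ERROR connection refused
# 10:04 error: disk quota exceeded
```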
CORRELATION
Analysts can combine logs from multiple sources
to help decode an event not readily visible with
data from just a single log.

It is useful during and after cyber-attacks, where correlation between logs from network devices, servers, firewalls, and storage systems can surface data relevant to the attack and reveal patterns that were not apparent from a single log.

Align events based on timestamps to understand the sequence of activities.

Identify and link related events from different log sources to establish a comprehensive view of an incident.
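A minimal sketch of timestamp-based correlation: merge records from two hypothetical sources into one timeline, then link events that fall within a short window of each other. The sources, formats, and 10-second window are assumptions for the example:

```python
from datetime import datetime, timedelta

firewall_log = [
    ("2024-01-15 10:30:00", "firewall", "blocked inbound 203.0.113.5"),
    ("2024-01-15 10:30:02", "firewall", "allowed inbound 203.0.113.5"),
]
server_log = [
    ("2024-01-15 10:30:03", "server", "failed login for admin from 203.0.113.5"),
    ("2024-01-15 10:45:00", "server", "scheduled backup started"),
]

def ts(event):
    return datetime.strptime(event[0], "%Y-%m-%d %H:%M:%S")

# Align events from both sources on a single timeline
timeline = sorted(firewall_log + server_log, key=ts)

# Link events occurring within 10 seconds of the previous one
window = timedelta(seconds=10)
clusters, current = [], [timeline[0]]
for event in timeline[1:]:
    if ts(event) - ts(current[-1]) <= window:
        current.append(event)
    else:
        clusters.append(current)
        current = [event]
clusters.append(current)

for cluster in clusters:
    print([f"{src}: {msg}" for _, src, msg in cluster])
```

Here the firewall and server entries around 10:30 cluster together, hinting at a single incident, while the backup event stands alone.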
ANOMALY DETECTION
Establish baseline behavior and use
statistical methods to detect deviations
indicating potential anomalies.

e.g.: Identifying a sudden spike in network traffic or a significant increase in failed login attempts.

Apply machine learning algorithms to learn normal patterns and detect deviations.

e.g.: Training a model to recognize unusual patterns in user behavior.
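A simple statistical sketch: establish a baseline of failed-login counts per hour, then flag any hour that exceeds the mean by more than three standard deviations. The counts are invented, and real detectors use richer features:

```python
import statistics

# Hourly failed-login counts: baseline period, then a recent observation
baseline = [3, 5, 4, 6, 5, 4, 3, 5, 6, 4, 5, 4]
recent = 42

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
threshold = mean + 3 * stdev  # flag deviations beyond 3 sigma

if recent > threshold:
    print(f"anomaly: {recent} failed logins (threshold {threshold:.1f})")
else:
    print("within normal range")
# prints: anomaly: 42 failed logins (threshold 7.5)
```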
NORMALIZATION
Converting diverse log data into a standard format helps ensure that comparisons can be made and that data can be stored and indexed centrally, regardless of log source.
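A sketch of normalization: parse two differently formatted sources, an Apache-style access line and a JSON application log (both invented for the example), into one common record shape so they can be compared and indexed together:

```python
import json
import re

def normalize_apache(line):
    # e.g. '192.168.1.10 - - [15/Jan/2024:10:30:00] "GET /index.html" 200'
    m = re.match(r'(\S+) - - \[([^\]]+)\] "(\S+) (\S+)" (\d+)', line)
    return {
        "source": "apache",
        "timestamp": m.group(2),
        "client_ip": m.group(1),
        "event": f"{m.group(3)} {m.group(4)}",
        "status": int(m.group(5)),
    }

def normalize_app_json(line):
    rec = json.loads(line)
    return {
        "source": "app",
        "timestamp": rec["ts"],
        "client_ip": rec.get("ip"),
        "event": rec["message"],
        "status": rec.get("code"),
    }

records = [
    normalize_apache('192.168.1.10 - - [15/Jan/2024:10:30:00] "GET /index.html" 200'),
    normalize_app_json('{"ts": "15/Jan/2024:10:30:05", "ip": "192.168.1.10", '
                       '"message": "login ok", "code": 0}'),
]
for rec in records:
    print(rec["source"], rec["timestamp"], rec["event"])
```

Once both sources share the same keys, cross-source correlation and central indexing become straightforward.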
FACTORS TO CONSIDER IN
LOG ANALYSIS
Understand the context in which
logs were generated for accurate
analysis.
Implement measures to reduce false
positives in anomaly detection.
Utilize scripting and automation to
streamline repetitive analysis tasks.
Document analysis methodologies
and findings for future reference.
READY TO PUT THEORY INTO ACTION?
