
Wazuh-Elastic Stack Training

Deck 1

Emiliano Fontana
May 2021
Introduction

Deck 1, Slide 2
Wazuh

● Built on the OSSEC project (GPLv2 fork)
○ publicly recommended by the original author of OSSEC
● Over 6 years of aggressive development
● Massive expansions to legacy OSSEC functionality
● Integration with other major tools and services
● Constant improvement and support

Deck 1, Slide 3
What is Wazuh?

● A situational awareness tool for your electronic assets


● An important resource for achieving regulatory security
compliance

Main Components
● Log Collection
● Log Analysis (customizable set of over 3000 HIDS rules)
● File Integrity Monitoring
● Host-based anomaly detection
● Security compliance scanning for known vulnerabilities
● Real time alerting (e-mail, SMS, Slack, etc)
● Active Response (a HIDS-driven IPS implementation)

Deck 1, Slide 4
What is Wazuh?
Agents available for many diverse platforms

● Linux (Debian, CentOS, RedHat, SUSE, Amazon Linux, etc)


● BSD (FreeBSD, OpenBSD, NetBSD)
● Solaris (10 & 11)
● AIX (5.3 or greater)
● MacOS
● Windows
● HP-UX (11v3)

Note
For downloading and installing the latest Wazuh and related
packages: https://wazuh.com/start/

Deck 1, Slide 5
Wazuh Architecture

https://documentation.wazuh.com

Deck 1, Slide 6
Wazuh Architecture

Deck 1, Slide 7
Main Components

https://documentation.wazuh.com

Deck 1, Slide 8
Wazuh Processes

● Each process is executed with limited privileges


○ Processes are run in a chroot environment where feasible.
○ Processes are executed as unprivileged users where
feasible.
● Wazuh processes on Linux systems are controlled using the
relevant tool (e.g. systemctl, service, initctl)
● C:\Program Files (x86)\ossec-agent\win32ui.exe is the Windows
management tool for controlling the Wazuh service on Windows
agents.

Deck 1, Slide 9
Network Communication

● Agent-Manager connections are compressed and encrypted
with per-agent pre-shared keys (AES) over TCP or UDP port 1514.
● Remoted can directly accept TCP and/or UDP port 514
messages from syslog-sending devices.
● For more robust centralized syslog collection, syslog
server(s) can be used on agent(s) or manager.
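
The manager-side syslog listener mentioned above is enabled with a
<remote> block in ossec.conf; a minimal sketch, where the
allowed-ips network is a placeholder you would adapt:

```xml
<!-- ossec.conf on the manager: accept syslog from network devices -->
<remote>
  <connection>syslog</connection>
  <port>514</port>
  <protocol>udp</protocol>
  <!-- placeholder network; only listed sources are accepted -->
  <allowed-ips>192.168.1.0/24</allowed-ips>
</remote>
```

Only sources matching <allowed-ips> are accepted, so list each syslog
sender's address or network.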
Deck 1, Slide 10
Wazuh Secure Communication

● All Wazuh communications are authenticated and encrypted using
AES or TLS.
● Wazuh manager worker nodes use TLS to sync config and state
data with the manager master node.
● Each agent is assigned its own crypto key for reporting to the
manager.
● While significant privilege separation and isolation have been
built, it is still wise to further harden the Wazuh server since so
many other systems will rely on and be influenced by it,
particularly if remote commands are enabled.

Deck 1, Slide 11
Flood protection

The Leaky Bucket

● Variable rate input


● Fixed max rate out
● Various thresholds
● Flood & Recovery alerts
● Very configurable

https://documentation.wazuh.com
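
The bucket is tuned in the agent's ossec.conf via the <client_buffer>
block; a sketch with illustrative values, not recommended settings:

```xml
<!-- ossec.conf on the agent: leaky-bucket event buffer (values illustrative) -->
<client_buffer>
  <disabled>no</disabled>
  <queue_size>5000</queue_size>              <!-- bucket capacity, in events -->
  <events_per_second>500</events_per_second> <!-- fixed max drain rate -->
</client_buffer>
```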

Deck 1, Slide 12
Leaky bucket buffer flooding scenario with alerts and final recovery

Deck 1, Slide 13
Lab Exercise 1a
Wazuh Server Configuration

Deck 1, Slide 14
Lab Exercise 1a
Wazuh Server Configuration

Lab Objective
Do some basic configuration of the Wazuh Manager and
authenticate with and query the Wazuh API for the first time.

Deck 1, Slide 15
Lab Exercise 1b
Wazuh Web UI

Deck 1, Slide 16
Lab Exercise 1b
Wazuh Web UI

Lab Objective
Briefly explore the Wazuh Web UI and the Kibana environment
where it is housed.

Deck 1, Slide 17
Agent registration

Deck 1, Slide 18
Authd registration service

● Agents must have a registration allocated in the Wazuh system


before they can report in.
● Authd on the Wazuh manager's master node services agents'
registration requests.
● Registration is unauthenticated by default; at minimum, password
protection is recommended.
● Certificate based self-registration authentication is also
possible.
● Agents have multiple methods they can use to request a
registration.

Deck 1, Slide 19
Agents initiating registration

1. Agent auto-enrollment (default)


2. Using agent-auth tool
3. With agent installer via deployment variables
4. Requesting registration directly from Wazuh API (rare)
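
For auto-enrollment (method 1), the agent side is driven by the
<enrollment> settings inside <client> in ossec.conf. A minimal sketch;
the manager address is a placeholder, and the <enrollment> block assumes
a recent Wazuh 4.x agent:

```xml
<client>
  <server>
    <address>manager.company.org</address>  <!-- placeholder manager address -->
  </server>
  <enrollment>
    <enabled>yes</enabled>
    <!-- optional: password file protecting authd registration -->
    <authorization_pass_path>etc/authd.pass</authorization_pass_path>
  </enrollment>
</client>
```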

Deck 1, Slide 20
Lab Exercise 1c
Auto enrollment

Deck 1, Slide 21
Lab Exercise 1c
Auto enrollment

Lab Objective
Register your linux-agent with your Wazuh manager using auto
enrollment.

Deck 1, Slide 22
Lab Exercise 1d
Deployment variables

Deck 1, Slide 23
Lab Exercise 1d
Deployment variables

Lab Objective
Install and register your windows-agent with deployment variables.

Deck 1, Slide 24
Lab Exercise 1e
agent-auth tool

Deck 1, Slide 25
Lab Exercise 1e
agent-auth tool

Lab Objective
Register your elastic-agent with the agent-auth tool.

https://documentation.wazuh.com

Deck 1, Slide 26
Remotely upgrading
Wazuh agents

https://documentation.wazuh.com

Deck 1, Slide 27
Ways to remotely upgrade Wazuh agents
Instead of manually upgrading Wazuh directly on agent systems, or
relying on yum/apt repositories (which bring the risk of agents being
prematurely upgraded to a newer version than what is on your
managers), you can push Wazuh agent upgrades out from your Wazuh
managers to connected agents, even to remote ones.

● Wazuh API
○ automatically routes upgrade tasks to the right managers
○ up to 100 agents queued up at a time (must be connected)
● agent_upgrade (legacy)
○ a CLI tool to upgrade agent(s) in single-manager setups
○ not for use with Wazuh managed cloud, or manager clusters
● Limitations
○ not manageable in Wazuh web interface (yet)
○ scripting needed for upgrades of hundreds or more agents
○ upgrade tasks not queued up for disconnected agents
Deck 1, Slide 28
Lab Exercise 1f
Agent remote upgrade

Deck 1, Slide 29
Lab Exercise 1f
Agent remote upgrade

Upgrade an agent from the manager


From the Wazuh Manager, push an upgrade to the outdated
Wazuh Agent on the elastic system. Do this with the
agent_upgrade tool and then via the Wazuh API.

Deck 1, Slide 30
General configuration

https://documentation.wazuh.com

Deck 1, Slide 31
ossec.conf

● primary configuration file on managers and agents


● location
○ /var/ossec/etc/ossec.conf
○ C:\Program Files (x86)\ossec-agent\ossec.conf
● ossec.conf controls the core components of Wazuh
○ log analysis
○ file integrity monitoring (syscheck)
○ rootkit detection
○ active-response
○ loads the decoders & the rules.xml files
○ controls notifications (e.g. e-mail)

Deck 1, Slide 32
internal_options.conf

Low-level config file for managers and agents

● Location:
○ /var/ossec/etc/internal_options.conf
■ shows all options, but is overwritten by Wazuh upgrades
○ /var/ossec/etc/local_internal_options.conf
■ copy items from internal_options.conf here to customize
● internal options are for
○ controlling debug level for specific daemons
○ enabling/disabling grouping of email alerts
○ enabling/disabling full subject line in email alerts
○ enabling/disabling remote commands
○ various other obscure settings generally best left alone
● Handle with care!
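
A typical local_internal_options.conf customization might look like the
sketch below. The option names follow the internal_options.conf naming
convention; verify the exact keys against the internal_options.conf
shipped with your Wazuh version:

```
# /var/ossec/etc/local_internal_options.conf
# Raise remoted debug verbosity (0 = off)
remoted.debug=2
# Allow remote commands in centrally pushed agent.conf (off by default)
logcollector.remote_commands=1
```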
Deck 1, Slide 33
Agent configuration

https://documentation.wazuh.com

Deck 1, Slide 34
agent.conf

● Location
○ /var/ossec/etc/shared/*GROUP*/agent.conf
○ Multiple possible *GROUP* locations, each servicing a
different group of agents. Agents can be in multiple groups
○ Controlled by agent_groups command on the manager.
Default group is called default
● Agents pull it from the Wazuh manager, quickly fetching new
versions and automatically restarting to apply them.
● agent.conf should never be edited on the agent side as
changes will quickly be overwritten with the manager’s version.
● Specific agent config sections are possible on a per-OS,
per-profile, and per-agent basis, allowing great flexibility.
● Editable from the Web Interface

Deck 1, Slide 35
Agent groups and profiles

Important tools for organizing the different configuration settings you
will need to use on different groups/types of agents.

● agent groups
● configuration profiles

Deck 1, Slide 36
Agent configuration profiles
In an agent's ossec.conf, the <config-profile> line can include
multiple profiles separated by a comma and space.

Example ossec.conf on agent


<client>
<config-profile>rhel, rhel7</config-profile>
<server>
<address>siem.company.org</address>
</server>
</client>

Example agent.conf on manager (in sca agent group)


<agent_config profile="rhel7">
<sca>
<policies>
<policy>cis_rhel7_linux_rcl.yml</policy>
</policies>
</sca>
</agent_config>

Deck 1, Slide 37
agent.conf large example
agent.conf (on manager)
<agent_config>
...
</agent_config>

<agent_config os="Linux">
...
</agent_config>

<agent_config os="Windows">
...
</agent_config>

<agent_config profile="rhel7">
...
</agent_config>

<agent_config profile="ubuntu18.04">
...
</agent_config>

<agent_config name="alpha">
...
</agent_config>

Deck 1, Slide 38
Lab Exercise 1g
Centralized agent configuration

Deck 1, Slide 39
Lab Exercise 1g
Centralized agent configuration

Practice centralized agent configuration


Configure two agent groups, each with a multi-level agent.conf.
Confirm agents are getting and using the config content relevant to
them within their group.

Deck 1, Slide 40
Mass deployment

Mass deployment discussion


Things to consider as you plan a mass deployment of Wazuh
agents. See lab guide.

Deck 1, Slide 41
Log Analysis

https://documentation.wazuh.com

Deck 1, Slide 42
Log Analysis with Wazuh

Wazuh’s log analysis engine is capable of

● extracting important fields from a log message


● identifying & evaluating the content of a log message
● categorizing it by matching specific rules
● and consequently generating an alert from it.

Deck 1, Slide 43
Log Flow (agent/server)

● ossec-logcollector on the agent collects the logs


● ossec-analysisd on the manager analyzes the log entries
● ossec-maild sends out alerts
● ossec-execd used for Active Response

Deck 1, Slide 44
Stages of Log Analysis
● Log collection on agents as defined in <localfile> sections
○ These define a <log_format> that informs pre-decoding.
○ For json logs, these also can define one or more additional
fields to mark the json logs to clearly indicate the log type
or source, to inform rule-based analysis.
● Pre-decoding
○ extracts basic fields based on the <log_format> value of the
source, like program_name from the syslog header
● Decoding
○ extracts program-specific fields like srcip or username
● Rule-based analysis of the decoded log
○ One or more instances and types of matching criteria can be
applied against individual log fields or the whole log.
○ Matching of field values against CDB lists also supported.
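
As a sketch of the decoding stage, a pair of illustrative custom
decoders might look like the following. The decoder names here are
invented for illustration; the stock Wazuh ruleset already ships real
sshd decoders:

```xml
<!-- Illustrative decoders only; Wazuh ships real sshd decoders -->
<decoder name="example-sshd">
  <program_name>^sshd</program_name>
</decoder>

<decoder name="example-sshd-failed">
  <parent>example-sshd</parent>
  <regex>^Failed password for (\S+) from (\S+) port (\d+)</regex>
  <order>dstuser, srcip, srcport</order>
</decoder>
```

The first decoder matches on the pre-decoded program_name; the child
decoder then extracts dstuser, srcip, and srcport, as seen in the
decoding example on the following slides.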
Deck 1, Slide 45
Example <localfile> sections
<localfile>
<log_format>syslog</log_format>
<location>/var/log/messages</location>
</localfile>

<localfile>
<log_format>eventchannel</log_format>
<location>Application</location>
</localfile>

<localfile>
<log_format>json</log_format>
<location>/var/log/suricata/eve-*.json</location>
<label key="@source">suricata</label>
</localfile>

Deck 1, Slide 46
Various log samples

pam / squid / apache log samples

2016-03-15T15:22:10.078830+01:00 tron su:pam_unix(su-l:auth):authentication
failure;logname=tm uid=500 euid=0 tty=pts/0 ruser=tm rhost= user=root

1265939281.764 1 172.16.167.228 TCP_DENIED/403 734 POST
http://lbcore1.metacafe.com/test/SystemInfoManager.php - NONE/- text/html

[Sun Mar 06 08:52:16 2016] [error] [client 187.172.181.57] Invalid URI in request
GET: index.php HTTP/1.0

Deck 1, Slide 47
Example 1/2

Log
Dec 5 00:08:49 manager6 sshd[25467]: Failed password for root from
113.195.145.13 port 19044 ssh2

Pre-decoding

● hostname: manager6
● program_name: sshd
● log: Failed password for root from 113.195.145.13 port 19044 ssh2

Deck 1, Slide 48
Example 2/2

Log
Dec 5 00:08:49 manager6 sshd[25467]: Failed password for root from
113.195.145.13 port 19044 ssh2

Decoding

● decoder: sshd
● dstuser: root
● srcip: 113.195.145.13
● srcport: 19044

Deck 1, Slide 49
Logging alerts to alerts.json
When <jsonout_output> is enabled in the manager's ossec.conf,
alerts are recorded as JSON records in alerts.json. These are
normally shipped by Filebeat to Elasticsearch or by Splunk
Universal Forwarder to Splunk.

/var/ossec/logs/alerts/alerts.json
{"timestamp":"2020-12-07T21:44:37.313+0000","rule":{"level":5,"description":"sshd: Reverse lookup error (bad ISP
or attack).","id":"5702","firedtimes":58,"mail":false,"groups":["syslog","sshd"],"pci_dss":["11.4"],"gpg13":
["4.12"],"gdpr":["IV_35.7.d"],"nist_800_53":["SI.4"],"tsc":["CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":
{"id":"000","name":"manager1"},"manager":{"name":"manager1"},"id":"1607377477.3731351","cluster":
{"name":"wazuh","node":"master"},"full_log":"Dec 7 21:44:37 ip-10-0-1-1 sshd[22566]: reverse mapping checking
getaddrinfo for 190.202.147.253.estatic.cantv.net [190.202.147.253] failed - POSSIBLE BREAK-IN
ATTEMPT!","predecoder":{"program_name":"sshd","timestamp":"Dec 7 21:44:37","hostname":"ip-10-0-1-
1"},"decoder":{"parent":"sshd","name":"sshd"},"data":{"srcip":"190.202.147.253"},"location":"/var/log/secure"}
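
As a quick illustration (not part of Wazuh itself), a single field such
as data.srcip can be pulled out of one of these JSON lines with ordinary
shell tools; the sample record below is a trimmed-down stand-in, and jq
is the more robust choice where it is available:

```shell
# Hypothetical one-record sample; real alerts.json lines carry many more fields.
alert='{"rule":{"id":"5702","level":5},"data":{"srcip":"190.202.147.253"}}'
# Naive string extraction of the srcip value (fine for a quick look; use jq for real work)
srcip=$(printf '%s' "$alert" | grep -o '"srcip":"[^"]*"' | cut -d'"' -f4)
echo "$srcip"    # prints 190.202.147.253
```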

Deck 1, Slide 50
Expanded alerts.json record
{
  "timestamp": "2020-12-07T21:44:37.313+0000",
  "rule": {
    "level": 5,
    "description": "sshd: Reverse lookup error (bad ISP or attack).",
    "id": "5702",
    "firedtimes": 58,
    "mail": false,
    "groups": ["syslog", "sshd"],
    "pci_dss": ["11.4"],
    "gpg13": ["4.12"],
    "gdpr": ["IV_35.7.d"],
    "nist_800_53": ["SI.4"],
    "tsc": ["CC6.1", "CC6.8", "CC7.2", "CC7.3"]
  },
  "agent": { "id": "000", "name": "manager1" },
  "manager": { "name": "manager1" },
  "id": "1607377477.3731351",
  "cluster": { "name": "wazuh", "node": "master" },
  "full_log": "Dec 7 21:44:37 ip-10-0-1-1 sshd[22566]: reverse mapping checking getaddrinfo for 190.202.147.253.estatic.cantv.net [190.202.147.253] failed - POSSIBLE BREAK-IN ATTEMPT!",
  "predecoder": {
    "program_name": "sshd",
    "timestamp": "Dec 7 21:44:37",
    "hostname": "ip-10-0-1-1"
  },
  "decoder": { "parent": "sshd", "name": "sshd" },
  "data": { "srcip": "190.202.147.253" },
  "location": "/var/log/secure"
}
Deck 1, Slide 51
Logging alerts to archives.json
Alternatively, when <logall_json> is enabled, all events are logged
to archives.json whether or not they match a rule. Such logging
may take up a great deal of space, but at times it is needed in order
to discover classes of events that should be tripping rules but are
not doing so. Consider temporarily routing archives.json to a
separate index pattern from wazuh-alerts-*, like wazuh-archives-*,
since you presumably will not want to retain the non-alert events
as long.

/var/ossec/logs/archives/archives.json
{"timestamp":"2017-12-05T02:51:36+0000","rule":{},"agent":{"id":"000",
"name":"manager6"},"manager":{"name":"manager6"},"id":"1512442296.
149532","full_log":"Dec 5 02:51:35 manager6 sshd[382]: Disconnected
from 113.195.145.13 port 48727 [preauth]","predecoder":{"program_name":
"sshd","hostname":"manager6"},"decoder":{"name":"sshd"},
"location":"/var/log/secure"}

Deck 1, Slide 52
JSON Logging Issues
Issues to consider with the alerts.json and archives.json files

● These files are rotated and compressed daily by default.


● They accumulate indefinitely unless you add a process to delete old files.
● Sample cron one-liner to remove rotated files over 7 days old daily (paths
listed explicitly, since cron's /bin/sh does not expand {a,b} braces):
0 2 * * * root find /var/ossec/logs/alerts /var/ossec/logs/archives -type f -mtime +7 -exec rm {} \;
● The archives.json file contains both alerts and non-alerting events, so if
you ship both alerts.json and archives.json to Elasticsearch you will
double-index all alerts.
● To split the routing of archives.json across multiple index patterns, the
Wazuh Filebeat module must be customized.
● It is worth keeping the more recent of these rotated files, since in rare
instances you may find it very helpful to re-feed one or more past
days' worth of alerts.json or archives.json files back to Elasticsearch.
See:
https://wazuh.com/blog/recover-your-data-using-wazuh-alert-backups/
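
The pruning predicate in a cron entry like the one above can be
exercised safely against a sandbox directory before pointing it at
/var/ossec/logs; the directory and file names below are throwaway
examples, and `touch -d` assumes GNU coreutils:

```shell
# Sketch: verify the find-based pruning logic in a sandbox first.
tmp=$(mktemp -d)
mkdir -p "$tmp/alerts" "$tmp/archives"
touch -d '10 days ago' "$tmp/alerts/alerts-2020-11-27.json.gz"  # rotated, stale
touch "$tmp/alerts/alerts.json"                                 # current file
# Same predicate as the cron entry: files older than 7 days are removed
find "$tmp/alerts" "$tmp/archives" -type f -mtime +7 -exec rm {} \;
ls "$tmp/alerts"    # only the recent file remains
```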

Deck 1, Slide 53
Log Analysis and Regulatory Compliance

● Computer-aided log analysis is a powerful tool for identifying


threats or potential problems within a vast stream of collected
log events.
● Many regulatory compliance requirements call for regular
review of security logs, which is generally neither feasible nor
sustainable without the aid of machine analysis of log events
before they are reviewed by human eyes.
● Furthermore, it can help classify which events need to be
stored in order to comply with regulatory requirements,
whether or not they represent actionable items. Without this, it
would be necessary to store all log events, at an exorbitant cost
from both a computational and storage perspective.

Deck 1, Slide 54
Why analyze logs?
Log analysis is a requirement for:

● PCI DSS compliance
● FISMA compliance
● GDPR compliance
● SOX compliance
● HIPAA compliance
● NIST 800-53 compliance
● SOC 2 Trust Service Criteria

Deck 1, Slide 55
Compliance mapping in Wazuh rules

● The ruleset maintained by Wazuh contains mappings to specific


compliance requirements.
● A list of all related Wazuh rules can be found here:
https://wazuh.com/resources/Wazuh_PCI_DSS_Guide.pdf
https://wazuh.com/resources/Wazuh_GDPR_White_Paper.pdf

PCI-tagged Wazuh rule

<rule id="5402" level="3">


<if_sid>5400</if_sid>
<regex> ; USER=root ; COMMAND=| ; USER=root ; TSID=\S+ ; COMMAND=</regex>
<description>Successful sudo to ROOT executed.</description>
<group>pci_dss_10.2.5,pci_dss_10.2.2,gpg13_7.6,gpg13_7.8,gpg13_7.13,
gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,
nist_800_53_AC.6,</group>
</rule>

Deck 1, Slide 56
Lab Exercise Set 2

Deck 1, Slide 57
Lab Exercise Set 2

2a
Generate a brute-force attack -- Repeatedly attempt to use a
wrong password with an agent. Monitor the alerts.log, watching for
the generation of the brute-force alert.

Deck 1, Slide 58
Lab Exercise Set 2

2b
Log Analysis: Analyze the log entries resulting from the previous
exercise. What is shown and what does it mean? How can you
distinguish an attack from a harmless log event?

Deck 1, Slide 59
Lab Exercise Set 2

2c
Looking up and tracing Wazuh rules for better understanding of
alerts
Deck 1, Slide 60
Elastic Stack

Deck 1, Slide 61
Elasticsearch

Elasticsearch is a highly scalable
full-text search and analytics
engine, to which data is shipped,
and which Kibana accesses as its
primary back-end data source.

https://www.elastic.co/products/elasticsearch

Deck 1, Slide 62
Kibana

Kibana is the web front-end to the


data in Elasticsearch. The Wazuh
Web UI is installed inside the
Kibana environment, adding
greatly to the standard Kibana
offerings, and tying into the
Wazuh API in addition to
Elasticsearch for a rich and
powerful end user web
experience.

https://www.elastic.co/products/kibana

Deck 1, Slide 63
Beats family (including Filebeat)

Beats is a family of data shippers.


Using Filebeat specifically,
Wazuh's alerts.json or
archives.json data can be sent to
Elasticsearch. It also can do data
transformation and enrichment,
like parsing strings, normalizing
field names, and doing geoip
lookups.

https://www.elastic.co/products/beats

Deck 1, Slide 64
Elastic Stack Integration

Deck 1, Slide 65
Elastic Stack Show and Tell

Deck 1, Slide 66
