Lab-Guide - Session 4
Wazuh 4.1.5
Elastic Stack 7.10.0
OpenDistro 1.12.0
Table of Contents
Vulnerability Detection
Lab Exercise 4c: Syscollector and Vulnerability-detector
Ensure your windows-agent is running a vulnerable package.
Configure vulnerability-detector
Look at the logs
See the alerts in Kibana
Rootkit Detection
How can Wazuh detect rootkits?
Detecting User-Mode rootkits with Wazuh
Detecting Kernel-Mode rootkits with Wazuh
User mode rootkits
Kernel mode rootkits
Prevention and mitigation of a rootkit
Lab Exercise 4d - Rootkit Detection
To configure syscheck for the Windows agent, replace the entire <syscheck> section in the
os="Windows" portion of /var/ossec/etc/shared/windows/agent.conf on the manager with
this simple configuration:
<syscheck>
<disabled>no</disabled>
<scan_on_start>yes</scan_on_start>
<frequency>300</frequency>
<directories check_all="yes" whodata="yes" report_changes="yes" tags="core,worms">C:/apple</directories>
<directories check_all="yes">C:/orange</directories>
</syscheck>
The above enables syscheck FIM on Windows agents, such that a periodic syscheck scan of
C:/orange will take place shortly after the start/restart of the Wazuh agent, and then every
300 seconds thereafter. The C:/apple directory will be monitored in real time for all
possible kinds of file changes, while the C:/orange directory will only be periodically
scanned for changes. Changes to existing text files in C:/apple will trigger an alert that
includes the actual text that was changed. When testing report_changes, do not restart the
agent to force an earlier syscheck scan: report_changes is skipped on the very first scan
after an agent start/restart, in order to avoid over-reporting text changes that may have
accumulated while the agent was not running.
Note that syscheck scans may produce many events when enabled during OS updates, as
usually many files are modified/added/removed during an update.
In the Windows agent log, you should see a couple of entries like this accounting for the
new syscheck monitoring of your two test directories:
2017/12/07 03:42:18 ossec-agent: INFO: Monitoring directory: 'C:/apple', with options perm | size | owner |
group | md5sum | sha1sum | realtime | mtime | inode.
2017/12/07 03:42:18 ossec-agent: INFO: Monitoring directory: 'C:/orange', with options perm | size | owner |
group | md5sum | sha1sum | report_changes | mtime | inode.
At this point, add, modify, and delete files in these two test directories on the Windows agent,
and watch your search results in Kibana for the query text "apple orange" (without quotes), to
see syscheck events as they appear. Notice that alerts about changes in c:\apple\ show up
promptly, while alerts about changes in c:\orange\ are not reported until the next
every-300-second syscheck scan. You can force a syscheck scan sooner by restarting the
Windows agent, but still expect to wait a couple of minutes before the scan actually runs.
Here is an example debug message in agent ossec.log showing a syscheck event being sent to
the manager. Messages marked DEBUG only appear when debug logging has been enabled.
Ubuntu systems (like linux-agent) do not have the auditd service installed by
default, and auditd is required for whodata to work, so install it now on linux-agent.
<syscheck>
<disabled>no</disabled>
<scan_on_start>yes</scan_on_start>
<frequency>300</frequency>
<directories check_all="yes" whodata="yes" report_changes="yes">/apple</directories>
<directories check_all="yes" realtime="yes">/orange</directories>
<directories check_all="yes">/pear</directories>
</syscheck>
Make changes in the /apple, /orange, and /pear directories and watch for them to be
accounted for in Kibana. Search for "syscheck".
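To generate a quick burst of each event type, you could run something like the following as root. The commands below use a stand-in base directory so they can be tried anywhere; on the real linux-agent, operate directly on /apple, /orange, and /pear instead.

```shell
# Stand-in base directory (on the actual linux-agent, target the
# monitored /apple, /orange, and /pear paths directly).
BASE="${BASE:-/tmp/fim-demo}"
mkdir -p "$BASE/apple" "$BASE/orange" "$BASE/pear"

echo "first line"  >  "$BASE/apple/notes.txt"   # add
echo "second line" >> "$BASE/apple/notes.txt"   # modify: report_changes shows the diff
echo "hello"       >  "$BASE/orange/o.txt"      # add: realtime="yes" alerts promptly
rm -f "$BASE/pear/p.txt"                        # delete: not alerted until the next 300s scan
cat "$BASE/apple/notes.txt"
```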
Watch especially for the who-data fields. They start with "syscheck.audit."
To avoid false-positive noise created by changes made to a system's
monitored files during a maintenance window, you could add a new agent
group that disables FIM. You would then assign agents to this group
when they enter a maintenance window and remove them from the group when
their maintenance window closes.
For example, you could create a new agent group called fim-disabled and set the
content of its agent.conf file to be:
<agent_config>
<syscheck>
<disabled>yes</disabled>
</syscheck>
</agent_config>
Note that when you assign an agent to the fim-disabled group, it becomes the
last agent group in the agent's group list, and thus it takes precedence over
FIM settings defined in the other groups the agent is a part of.
When the maintenance window has passed for the agents you previously added to
this group, simply remove them from the fim-disabled group. The agents will then
automatically restart and will silently create a new FIM baseline.
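Group membership can also be scripted on the manager. Below is a minimal sketch using the agent_groups tool that ships with the Wazuh manager; the agent id 002 and the -q (quiet) flag are assumptions, and the guard makes the sketch safe to run on any host.

```shell
AGENT_ID="002"                    # hypothetical agent id; list real ids with agent_control -l
GROUP="fim-disabled"
BIN="/var/ossec/bin/agent_groups"

if [ -x "$BIN" ]; then
    "$BIN" -q -a -i "$AGENT_ID" -g "$GROUP"   # enter maintenance: add agent to the group
    # ... maintenance window passes ...
    "$BIN" -q -r -i "$AGENT_ID" -g "$GROUP"   # leave maintenance: remove it again
    result="group membership updated"
else
    result="agent_groups not found; run this on the Wazuh manager"
fi
echo "$result"
```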
Alternatively, rather than disabling FIM entirely, you could create a maintenance group whose
agent.conf applies a label to its members:
<agent_config>
<labels>
<label key="state">maintenance</label>
</labels>
</agent_config>
This would cause a new field, agent.labels.state, to be added to all alerts from any
agent you put into the maintenance group, only while it is in that group. Among
other things, this would allow you to use that field as filter criteria in your integrity
monitoring searches in Kibana.
You might want to consider what other kinds of states a given server might enter
and exit over time that would be meaningful to account for via agent labels in this
way.
Of the many software packages installed on your Red Hat, CentOS, and/or Ubuntu systems,
which ones have known vulnerabilities that might impact your security posture? Wazuh helps
you answer this question with the syscollector and vulnerability-detector modules. On each
agent, syscollector can scan the system for the presence and version of all software packages.
This information is submitted to the Wazuh manager, where it is stored in an agent-specific
database for later assessment. On the Wazuh manager, vulnerability-detector maintains a fresh
copy of the desired CVE vulnerability feeds, periodically compares agent packages against the
relevant CVE databases, and generates alerts on matches.
In this lab, we will configure syscollector to run on the wazuh server and on both of the Linux
agents. We will also configure vulnerability-detector on the wazuh server to periodically scan the
collected inventory data for known vulnerable packages. We will observe relevant log messages
and vulnerability alerts in Kibana including a dashboard dedicated to this. We will also interact
with the Wazuh API to more deeply mine the inventory data, and even take a look at the
databases where it is stored.
Configure vulnerability-detector
In /var/ossec/etc/ossec.conf on the manager, find the <vulnerability-detector> section. At the
top of that section switch the <enabled> value from "no" to "yes". Lower down in the same
section, switch <enabled> from "no" to "yes" directly under <provider name="canonical"> and
<provider name="nvd">.
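After the edits, the top of the section should look roughly like the fragment below. The element names follow the stock Wazuh 4.x ossec.conf; the intervals and OS codename list shown are the shipped defaults, so your file may differ slightly.

```xml
<vulnerability-detector>
  <enabled>yes</enabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <provider name="canonical">
    <enabled>yes</enabled>
    <os>trusty</os>
    <os>xenial</os>
    <os>bionic</os>
    <os>focal</os>
    <update_interval>1h</update_interval>
  </provider>
  <provider name="nvd">
    <enabled>yes</enabled>
    <update_from_year>2010</update_from_year>
    <update_interval>1h</update_interval>
  </provider>
</vulnerability-detector>
```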
For the purposes of this class, only the Canonical and NVD feeds will be used for vulnerability
detection; the NVD feed covers Windows vulnerabilities. Debian and Red Hat feeds are
also supported but are not needed for this lab. The other OS we use in the lab environment is
Amazon Linux, which is not fully compatible with vulnerability-detector at this time: package
version mismatches between Amazon Linux and Red Hat make the Red Hat CVE source
unreliable for identifying vulnerable packages on Amazon Linux platforms.
Note that it may take 10 minutes or more for the CVE database downloads to complete after the
restart of the manager. Only after that will vulnerability detection proceed.
Up to now we have only seen the Wazuh API used by the Wazuh Kibana App to interface directly
with the Wazuh manager. However, you can also access the API directly from your own scripts
or from the command line with curl. This is especially helpful here as full software inventory data
is not stored in Elasticsearch or visible in Kibana – only the CVE match alerts are. The actual
inventory data is kept in agent-specific databases on the Wazuh manager. To see that, plus
other information collected by syscollector, you can mine the Wazuh API. Not only are software
packages inventoried, but basic hardware and operating system data is also tracked.
Run "/var/ossec/bin/agent_control -l" on the Wazuh manager to list your agents, as you will
need to query the API by agent id number:
On the wazuh manager, query the Wazuh API for scanned hardware data about agent 002.
APIUSER="wazuh"
APIPASS="wazuh"
TOKEN=$(curl -sS -u $APIUSER:$APIPASS -k -X GET \
  "https://localhost:55000/security/user/authenticate?raw=true")
curl -sS -k -X GET "https://127.0.0.1:55000/syscollector/002/hardware?pretty" \
  -H "Authorization: Bearer $TOKEN" | jq
{
"ram": {
"usage": 67,
"total": 1048176,
"free": 343996
},
"cpu": {
"cores": 1,
"mhz": 2400,
"name": "Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz"
},
"scan": {
"id": 707357457,
Now from the Wazuh Kibana App, click on the "Dev tools" in the upper right for an even more
convenient way to probe the depths of the Wazuh API. Explore possible Wazuh API
syscollector-related queries based on these examples:
GET /syscollector/002/os
GET /syscollector/001/netiface
GET /syscollector/002/netproto
GET /syscollector/000/netaddr
GET /syscollector/003/ports
GET /syscollector/003/processes?select=pid,name
GET /syscollector/002/packages
Take time to look over the online documentation about the Syscollector section of the Wazuh
API:
https://documentation.wazuh.com/4.0/user-manual/api/reference.html#tag/Syscollector
This is a powerful facility that puts all sorts of data, configuration details, and state information at
your fingertips once you know how to ask for it. With the right parameters, not only can you
query these data sources, but you can select, filter, sort, and limit the results.
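As a concrete sketch, these modifiers are passed as ordinary query-string parameters. The example below just assembles and prints such a URL (the host, agent id, and parameter values are arbitrary); with a live manager you would feed it to curl with the Bearer token as shown earlier.

```shell
BASE="https://127.0.0.1:55000"
AGENT="002"
# select specific fields, sort descending by name, cap the result count
QUERY="select=name,version&sort=-name&limit=5"
URL="$BASE/syscollector/$AGENT/packages?$QUERY"
echo "$URL"
# With a live manager and a valid token:
#   curl -sS -k -H "Authorization: Bearer $TOKEN" "$URL" | jq
```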
Make sure to take a look at the Vulnerabilities dashboard (WAZUH -> Modules ->
Vulnerabilities). Also drill into a specific Wazuh agent in the Wazuh Web UI and from there click
on Inventory to get to a dashboard that makes a nice presentation of the broader set of items
inventoried on that agent.
A rootkit is a program that can hide itself, as well as running processes, files, or network
connections, from the host where it is running. The malicious program can change the access
rights of files and directories, and its aim is to run "incognito", meaning in the background, for
as long as possible. The intruder's goal is to retain full access to the victim's system
at all times. The only precondition is that a rootkit must be installed with root privileges
in the first place, which may be the biggest challenge for the attacker. We distinguish
between two kinds of rootkits: user mode and kernel mode. Rootkits typically target
directories that contain binaries or system configuration, such as:
● /bin
● /usr/bin
● /sbin
● /usr/sbin
● /etc
● /boot
Essentially, every directory that contains binaries could harbor a rootkit if those binaries are
modified with malicious code. The /etc directory contains all the config files that are
necessary to run programs. The /var filesystem is not included because it is usually where
system-wide log files are stored; monitoring it would cause a lot of alerts because log files
change very often. Every time a file changes it automatically generates a new checksum.
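The checksum idea is easy to see with any hashing tool; here is a quick illustration using md5sum on a throwaway temp file:

```shell
f=$(mktemp)
echo "original content" > "$f"
sum1=$(md5sum "$f" | cut -d' ' -f1)   # baseline checksum
echo "one appended line" >> "$f"
sum2=$(md5sum "$f" | cut -d' ' -f1)   # checksum after the change
if [ "$sum1" != "$sum2" ]; then
    echo "checksum changed"
fi
rm -f "$f"
```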
The default syscheck interval is 43200 seconds, which translates to every 12 hours. The
interval is measured in seconds and is easily configurable in the Wazuh config. Wazuh's
rootkit detection module looks specifically for traces of rootkits, malware, and trojans on
the configured systems.
Two other files are involved in detecting a rootkit with Wazuh. The rootcheck
module, which works similarly to the aforementioned syscheck module,
parses the following text files during each rootcheck analysis.
Below is an example of what a typical rootkit detection configuration looks like:
<rootcheck>
  <frequency>3600</frequency>
  <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
  <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
</rootcheck>
# T.R.K rootkit
usr/bin/soucemask ! TRK rootkit ::/rootkits/trk.php
usr/bin/sourcemask ! TRK rootkit ::/rootkits/trk.php
# Volc Rootkit
usr/lib/volc ! Volc Rootkit ::
usr/bin/volc ! Volc Rootkit ::
ps !/dev/ttyo|\.1proc|proc\.h|bash|^/bin/sh!
netstat !bash|^/bin/sh|/dev/[^aik]|/prof|grep|addr\.h!
1. Read rootkit_files.txt, which contains a database of rootkits and files commonly
used by them. Rootcheck will try stat(), fopen(), and opendir() on each specified
file. All of these system calls are used because some kernel-level rootkits hide
files from particular system calls; the more system calls we try, the better the
detection. This method works much like an anti-virus signature database that
needs to be updated constantly. The chance of false positives is small, but false
negatives can be produced by modified rootkits.
3. Scan the /dev directory looking for anomalies. /dev should only contain device
files and the MAKEDEV script. Many rootkits use /dev to hide files, so this
technique can detect even non-public rootkits.
4. Scan the whole filesystem looking for unusual files and permission problems. Files
owned by root but writable by others are very dangerous, and the rootkit
detection will look for them. Setuid files and hidden directories and files will also
be inspected.
5. Look for the presence of hidden processes. We use getsid() and kill() to check
whether each pid is in use. If a pid is in use but "ps" cannot see it, this indicates
a kernel-level rootkit or a trojaned version of "ps". We also verify that the results
of kill() and getsid() agree with each other.
6. Look for the presence of hidden ports. We use bind() to check every tcp and udp
port on the system. If we can’t bind to the port (it’s being used), but netstat does not
show it, we probably have a rootkit installed.
7. Scan all interfaces on the system and look for the ones with “promisc” mode
enabled. If the interface is in promiscuous mode, the output of “ifconfig” should
show that. If not, we probably have a rootkit installed.
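The hidden-process check in step 5 can be sketched in shell. The real rootcheck is C code and also exercises getsid(); this simplified version only uses the kill() side, comparing kernel-visible pids against /proc, which is what "ps" reads:

```shell
hidden=0
for pid in $(seq 1 1000); do
    # kill -0 asks the kernel (via the kill() syscall) whether the pid
    # exists, without actually sending a signal. Permission failures
    # simply skip pids we cannot signal.
    if kill -0 "$pid" 2>/dev/null; then
        # A pid that is alive per kill() but absent from /proc would
        # suggest a hiding mechanism such as a kernel-mode rootkit.
        [ -d "/proc/$pid" ] || hidden=$((hidden + 1))
    fi
done
echo "possible hidden pids: $hidden"
```

On a clean system this should normally report zero hidden pids; with Diamorphine hiding a process, the count goes up.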
Wazuh performs several tests to detect rootkits, one of them is to check the hidden files in
/dev. The /dev directory should only contain device-specific files such as the primary IDE
hard disk (/dev/hda), the kernel random number generators (/dev/random and
/dev/urandom), etc. Any additional files, outside of the expected device-specific files,
should be inspected because many rootkits use /dev as a storage partition to hide files. In
the following example we have created the file .hid which is detected by Wazuh and
generates the corresponding alert.
We can see that Wazuh detected our fake hidden file by scanning the /dev filesystem for
hidden files that are not supposed to be there.
Detecting hidden processes:

Detecting                          Command    System calls
Process hidden by a trojaned ps    ps         setsid(), getpgid(), kill()
This shows that Wazuh uses system calls to detect both user-mode and kernel-mode
rootkits. Particularly interesting are the system calls setsid(), getpid(), and
kill(), as those are used to detect processes hidden from the process listing utility "ps".
Rootcheck attempts to detect a rootkit by issuing those system calls and comparing their
results with the output of "ps".
A kernel mode rootkit makes changes to the kernel with the goal of intercepting system
calls. It can manipulate information sent to and from the user mode tools. Wazuh is a user
mode application and thus also relies on data passed to it from the kernel. The rootcheck
module looks for possible intercepted system calls that may be hiding files and processes.
In the picture below we can see that Wazuh looks for the existence of hidden files used by
known rootkits.
The list of files from known rootkits that rootcheck attempts to locate can be found on the
Wazuh manager in /var/ossec/etc/shared/rootkit_files.txt
The rootcheck module tries to open each file using the system calls opendir(), chdir(),
stat(), and fopen(). We can see that rootcheck is able to open a file with the opendir() and
chdir() system calls but unable to do so with stat() or fopen(). This may be an indication
that a rootkit has intercepted the stat() and fopen() system calls, and rootcheck would
report a possible kernel-level rootkit based on that discrepancy. However, this detection
method has limited success, since a file name must be present in rootkit_files.txt or
rootcheck will fail to detect it.
We have previously mentioned that rootkits try to hide their existence by manipulating the
process listing. A user-mode rootkit would replace the original "ps" binary with a modified
one that is capable of hiding selected processes. However, this would not be very effective,
because Wazuh's syscheck module would detect the bogus binary during file integrity
checks, or it may be detected by the rootcheck module when comparing the output of the
ps utility to the output of three different system calls that query running processes. As
shown in the table above, processes hidden by a kernel-mode rootkit would be detected in
the same way.
In this exercise you will safely install a kernel-mode rootkit as a proof of concept for
Wazuh rootkit detection. This rootkit is able to hide itself from the kernel module list as well
as hide selected processes from being visible to "ps". However, Wazuh will still detect it
using the system calls setsid(), getpid(), and kill(). Looking for this kind of general
low-level hiding behavior makes Wazuh a very effective Linux rootkit detection application.
For this lab the Diamorphine kernel module has already been compiled for you and is
waiting in /tmp/diamorphine.ko to be inserted.
2: Download, build, and load the rootkit kernel module and put it to use
git clone https://github.com/wazuh/Diamorphine.git
cd Diamorphine
# newer kernels provide uaccess.h under linux/ rather than asm/
sed -i 's/asm\/uaccess\.h/linux\/uaccess.h/' diamorphine.c
make
insmod diamorphine.ko
The kernel-level rootkit "diamorphine" is now installed on this system! By default it is hidden,
so we would not be able to detect it by running "lsmod". Only with a special "kill" signal can
we make diamorphine unhide itself. In the case of Diamorphine, sending kill signal -63 to any
process, whether it exists or not, toggles whether the Diamorphine kernel module hides itself.
Try it out:

lsmod | grep diamorphine
kill -63 0
lsmod | grep diamorphine
This rootkit also allows you to hide selected processes from being seen by the "ps"
command, for example. Here we will find the pid of rsyslogd and then hide it. Your rsyslogd
pid will be different from the one in this example; substitute your correct pid for 535 below.

pidof rsyslogd
kill -31 535
ps -ef | grep rsyslogd
3: Next, configure your linux-agent to run rootcheck scans every 5 minutes, and disable
syscheck FIM and policy enforcement scans to make the lab less noisy. To accomplish this,
replace the entire <rootcheck> section in /var/ossec/etc/shared/linux/agent.conf on
the manager with the following. Also, in the <syscheck> section, set <disabled> to yes.
Then restart first the manager and then the agent with the "systemctl restart
wazuh-manager" and "systemctl restart wazuh-agent" commands.
<rootcheck>
<disabled>no</disabled>
<frequency>300</frequency>
<skip_nfs>yes</skip_nfs>
<check_unixaudit>yes</check_unixaudit>
<check_files>yes</check_files>
<check_trojans>yes</check_trojans>
<check_dev>yes</check_dev>
<check_sys>yes</check_sys>
<check_pids>yes</check_pids>
<check_ports>yes</check_ports>
<check_if>yes</check_if>
</rootcheck>
Now our next rootcheck scan should run shortly. It should notice the rsyslogd process
which we have hidden with Diamorphine and alert about it.
4: Pay close attention to ossec.log on your linux-agent. You will see the following
information being displayed when the rootcheck scan runs:
We can see that the rootcheckd process runs checks on the /dev filesystem
(check_rc_dev) and syscall checks (check_rc_sys), as well as, most interestingly for our
rootkit, checks on the process ids (check_rc_pids).
6: Remember, if you run the same "kill -31" command as before against rsyslogd, the
rsyslogd process will become visible again, and subsequent rootcheck scans will no
longer alert about it.
7: Remove the rootkit from your linux-agent since we don't need it any longer. If rmmod
fails because the module is still hidden, unhide it with the kill -63 signal (any pid works;
509 is used here) and run rmmod again.
rmmod diamorphine
kill -63 509
rmmod diamorphine