owlh team
1 What is OwlH?
2 A few topics
CHAPTER 1
What is OwlH?
OwlH is an open source solution born to help security engineers manage, visualize, analyze, and respond to threats detected by open source Network IDS such as Suricata and Bro IDS, offering:
• Management of Network IDS (Suricata and Bro IDS) nodes at scale
• Software TAP for cloud and on-premises environments
• Centralized Rule Management
• Centralized Visualization
• Network Data Collection and Big Data Storage
• Compliance Mapping and Dashboards
• Incident Response Automation
CHAPTER 2
A few topics
Security is not about a single tool. It is about a continuous process that must be able to evolve and adapt to your network, systems, and software as they do.
Also, security and cyber-security are really big and complex worlds, with a huge number of sub-worlds, regions, areas, or however you would like to call them.
OwlH is born to help with one piece of this galaxy, which we can summarize as helping to implement and maintain a Network Traffic Analysis process based on open source Network IDS solutions. But a process is not just a tool or a solution: it contains tasks, tools, and solutions, and it must evolve and adapt.
So, OwlH is about that: a platform that provides process definitions, using third-party tools or solutions, our own tools, and our task definitions to implement them successfully.
The picture in the original documentation summarizes the process we are working on.
This appliance runs the Network IDS software. OwlH supports the Suricata Network IDS and will support Bro IDS in upcoming releases. The best approach is usually to run a Network IDS node as a dedicated Network IDS, so that no services or production tools unrelated to traffic collection and analysis run on it.
The main role of this appliance is to listen to traffic, analyze the captured traffic using the provided ruleset, and send the alerts to the master node. OwlH will also include the capability to run actions in response to detected alerts.
With OwlH you can deploy this OwlH NIDS node from scratch, or you can integrate an already deployed Network IDS node. Supported platforms are Debian Stretch and CentOS 7. You can:
• deploy it as an appliance
• deploy it as a service on a running Network IDS probe
Centralized management provides an easy way to maintain your Network IDS probes. Among other things, the master node provides centralized rule management based on the Open Rules solution, probe status monitoring, configuration management, etc.
This should be an appliance; you can deploy several managers in parallel as a cluster. The OwlH master software can also run on the Wazuh Manager if you use OwlH together with Wazuh.
This introduces an easy way to integrate your Suricata output into the Wazuh world. It is a one-way integration process, from your Suricata node to your Wazuh dashboard. OwlH also helps to manage your Suricata nodes' configuration and rules, among many other things, but right now let's integrate your Suricata node with Wazuh.
As usual, please keep in touch if any clarification or help is needed.
• email our support team - support@owlh.net
• visit our mailing list - OwlH mailing list (owlh@googlegroups.com)
Main steps
We assume that you will use either a new Suricata deployment or a current one; both will work. In this procedure we rely on the Wazuh agent to do the collection work, so if your platform supports the Wazuh agent you should be able to integrate your Suricata too. The most common environments are supported by both, but please verify the Suricata and Wazuh agent requirements to find the right match.
2.3.3 Configure Suricata to store output in JSON format - EVE log configuration
By default, the Suricata configuration file suricata.yaml has EVE (Extensible Event Format) logging enabled and configured to store the output in JSON format. Still, it is a good idea to review that configuration and verify that the output file contains all the information we want.
The output file is usually /var/log/suricata/eve.json. Running

tail -f /var/log/suricata/eve.json

will help you verify that the configuration is working.
Here is a SAMPLE eve-log configuration. There are many useful configuration settings, so spend some time reviewing the Suricata documentation to find the right configuration for your needs.
outputs:
  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
      types:
        - alert:
            metadata: yes
            tagged-packets: yes
            xff:
              enabled: yes
              mode: extra-data
        - http:
            extended: yes
        - dns:
            query: yes # enable logging of DNS queries
        # ... (the sample configuration continues)
We are integrating Suricata with Wazuh, so we need the Wazuh Manager and the Elastic Stack running before finishing our configuration. At the very least we need a Wazuh Manager connected to the Elastic Stack.
Please follow the Wazuh install guide to deploy the manager and the Elastic Stack. If you already have this done, you can skip this step.
The Wazuh agent will be the transporter of our Suricata output. It provides a secure communication channel between our Suricata node, the Wazuh Manager, and the storage repository. Of course, the Wazuh agent does a lot more: it helps take care of our Suricata node's security by providing FIM, OS and audit log monitoring, and much more. Check the Wazuh agent documentation if you are not familiar with its capabilities.
To install it, please read and follow the install instructions from Wazuh, or request our help.
• email our support team - support@owlh.net
• visit our mailing list - OwlH mailing list (owlh@googlegroups.com)
By default, Wazuh uses the JSON decoder to parse any JSON log entry from a Wazuh agent. This decoder works really well, so we don't need to worry about parsing.
To create an alert from collected logs, Wazuh uses rules. Each rule has an alert level, so if a log matches a rule and the rule's alert level is equal to or higher than the alert threshold defined in the Wazuh Manager, you will get an alert.
By default, most Suricata rules have a level of 0 to prevent noisy events. We suggest modifying these values just to be sure that everything is collected; you can then adjust the alert levels as needed, and also modify the rules themselves.
If you are not familiar with decoders and rules, this may help - Wazuh decoders and rules.
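As a hedged illustration (this snippet is not from the original doc), a child rule in local_rules.xml could raise matching Suricata events to a visible level. The parent rule ID 86600 is assumed from the stock Wazuh Suricata ruleset; verify the actual IDs in your own ruleset before using it:

<group name="suricata,">
  <!-- Assumed: 86600 is the stock Wazuh grouping rule for Suricata events.
       This child rule raises them to level 5 so they generate alerts. -->
  <rule id="100200" level="5">
    <if_sid>86600</if_sid>
    <description>Suricata event raised to alert level 5</description>
  </rule>
</group>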
Remember to restart your Wazuh Manager service after any change to your configuration file, and check your Wazuh Manager logs.
We need to tell our Wazuh agent to read the Suricata output file. This is done in the ossec.conf file under the /var/ossec/etc folder (on Linux systems). Check your <ossec_config> tag and include the following lines.
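These lines match the equivalent snippet shown later in this document:

<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/suricata/eve.json</location>
</localfile>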
Remember to restart your Wazuh agent service after any change to your configuration file, and check your Wazuh agent logs.
The Suricata JSON format includes fields such as src_ip, src_port, dest_ip, and dest_port, but the Wazuh Elastic index uses srcip, srcport, dstip, and dstport.
So we will do the mapping modification in Logstash, by including the following in the Wazuh Logstash filter:
filter {
  if [data][src_ip] {
    mutate {
      add_field => [ "[data][srcip]", "%{[data][src_ip]}" ]
      remove_field => [ "[data][src_ip]" ]
    }
  }
  if [data][dest_ip] {
    mutate {
      add_field => [ "[data][dstip]", "%{[data][dest_ip]}" ]
      remove_field => [ "[data][dest_ip]" ]
    }
  }
  if [data][dest_port] {
    mutate {
      add_field => [ "[data][dstport]", "%{[data][dest_port]}" ]
      remove_field => [ "[data][dest_port]" ]
    }
  }
  if [data][src_port] {
    mutate {
      add_field => [ "[data][srcport]", "%{[data][src_port]}" ]
      remove_field => [ "[data][src_port]" ]
    }
  }
}
The Elasticsearch Wazuh index template is based on agent fields and doesn't include all the new field types that Suricata provides. This is not a real problem, as an index refresh in Kibana will let you manage Suricata data without issues. But some useful things become possible if we use the right field types, for example an amazing flow dashboard with useful traffic graphics.
These are some fields that will require template customization.
"flow": {
"properties": {
"bytes_toclient" : {
"type": "long",
"doc_values": "true"
},
"bytes_toserver": {
"type": "long",
"doc_values": "true"
}
}
},
Note: as there can be some issues when modifying Elasticsearch indices and templates, please request our help to do it. We are working on preparing a full index template and instructions.
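If you do want to try it yourself, here is a hedged sketch of the approach; the template name "wazuh" is assumed from Wazuh 3.x, so check your cluster before applying anything:

# Download the current template (note: the GET response wraps the template
# in an outer {"wazuh": ...} object; strip that key before re-uploading):
curl -XGET 'http://localhost:9200/_template/wazuh?pretty' > wazuh-template.json
# edit wazuh-template.json: add the flow fields shown above, remove the outer key
curl -XPUT 'http://localhost:9200/_template/wazuh' -H 'Content-Type: application/json' -d @wazuh-template.json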
Components
This system requires Bro up and running, of course, and the Wazuh agent installed. The OwlH instructions will help you configure both Bro and the Wazuh agent.
You can load the tuning/json-logs configuration, which tells the ASCII writer to write its output in JSON format. You must include the following line in your .bro configuration files. It can go in /etc/bro/site/local.bro, or you can follow our recommendation and write the configs in the owlh.bro file (please see below).
This modifies the output so that only JSON is stored; you won't have ASCII output.
@load tuning/json-logs
Usually, you would like to have both outputs, ASCII and JSON. You can use the add-json package (https://github.com/J-Gras/add-json) and load it in your local.bro or owlh.bro.
@load packages/add-json/add-json.bro
It is a good idea, to help the Wazuh rules do their job, to include a field that identifies what kind of log line we are analyzing. Bro output doesn't include that info per line by default, so we are going to help Wazuh by including the field 'bro_engine', which tells Wazuh what kind of log it is.
We use redef to include a custom field in the ::Info record of each protocol. Here are just a few of them; we will include more by default in upcoming releases.
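As a hedged sketch (assumed, not taken verbatim from the original), such redef statements could look like this for the connection and DNS logs:

# Assumed examples: add a logged bro_engine field with a per-protocol default
redef record Conn::Info += {
    bro_engine: string &default="CONN" &log;
};

redef record DNS::Info += {
    bro_engine: string &default="DNS" &log;
};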
We include all OwlH customizations in OwlH_*.bro files. That helps to have a clear view of what OwlH does, and we hope it simplifies configuration management.
Under /etc/bro/site we will create two files:
• owlh.bro - will include the JSON call and the @load for the bro_engine field definition.
• owlh_types.bro - will include all the redef statements.
You only need to load owlh.bro at the end of your local.bro file to include all these configurations:
@load /etc/bro/site/owlh.bro
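As a hedged sketch (inferred from the file descriptions above), /etc/bro/site/owlh.bro itself could contain just the two loads:

# owlh.bro - assumed contents: enable JSON output and load the redef file
@load tuning/json-logs
@load /etc/bro/site/owlh_types.bro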
owlh_types.bro holds the redef statements described above. Next, tell the Wazuh agent to read the Bro log files by adding the following to your ossec.conf:
<localfile>
  <log_format>syslog</log_format>
  <location>/path/to/bro/logs/*.log</location>
</localfile>
Note: if needed, you can specify individual files instead of all .log files:
<localfile>
  <log_format>syslog</log_format>
  <location>/path/to/bro/logs/weird.log</location>
</localfile>
<localfile>
  <log_format>syslog</log_format>
  <location>/path/to/bro/logs/conn.log</location>
</localfile>
The good news is that Wazuh's JSON decoder works really well, so using JSON output from Bro saves us from developing a specific decoder for its standard ASCII output.
We only need to create a few rules that identify the Bro events and forward them to ELK.
Include the following Wazuh rules in your /var/ossec/etc/rules/local_rules.xml file to manage your Zeek logs:
<group name="zeek">
  <rule id="99001" level="5">
    <field name="bro_engine">SSH</field>
    <description>Zeek: SSH Connection</description>
  </rule>
  <rule id="99002" level="5">
    <field name="bro_engine">SSL</field>
    <description>Zeek: SSL Connection</description>
  </rule>
  <rule id="99003" level="5">
    <field name="bro_engine">DNS</field>
    <description>Zeek: DNS Query</description>
  </rule>
  <rule id="99004" level="5">
    <field name="bro_engine">CONN</field>
    <description>Zeek: Connection detail</description>
  </rule>
</group>
We need to modify the Logstash filters (/etc/logstash/conf.d/) to clean the JSON records coming from Bro before they are parsed into the Wazuh alerts index. This is necessary because Bro uses the [id] field to group the network source and destination addresses and ports, and parsing would fail otherwise.
It also lets us store the IP and port data in the right fields for the Wazuh index.
filter {
  if [data][id][orig_h] {
    mutate {
      add_field => [ "[data][srcip]", "%{[data][id][orig_h]}" ]
      add_field => [ "[data][dstip]", "%{[data][id][resp_h]}" ]
      add_field => [ "[data][srcport]", "%{[data][id][orig_p]}" ]
      add_field => [ "[data][dstport]", "%{[data][id][resp_p]}" ]
      remove_field => [ "[data][id]" ]
    }
  }
}
You will need to refresh your wazuh-alerts-3.x indices to include the new Zeek fields. From your Kibana console, go to Management -> Index Patterns -> select the right wazuh-alerts index -> click the top-right refresh icon.
The OwlH team will help you define rules that identify traffic related to PCI requirements, such as unencrypted traffic between PCI-related systems, use of unknown services from the PCI network to external servers, or firewall policy violations when publishing internal services.
Please download the script that will allow you to manage your compliance mapping:

$ curl -so /tmp/owlh-suri2pci.sh https://raw.githubusercontent.com/owlh/owlhpci/master/owlh-suri2pci.sh
2.6.1 OwlH Software TAP to monitor traffic in AWS and GCLOUD environments
OwlH Software TAP (sTAP) collects full or filtered traffic from your instances and forwards it to the OwlH Master, which runs the Network IDS tool to do the analysis.
This doc describes a basic configuration using CentOS instances, the Bro and Suricata Network IDS, and Wazuh integration. (Other Linux distributions, as well as Windows, are supported.)
There are a lot of moving pieces, so feel free to ask for help at support@owlh.net. This doc tries to simplify deployment, but it will surely need some customization, as well as some understanding of the architecture.
Main steps:
• Introduction: How does it work?
• Prepare your environment
– Option: Create an administration network
– OwlH Master
– Suricata NIDS
– BRO NIDS
– Wazuh Integration
– Default configuration settings
• Register your instances
– Define Instances settings
– Configure your instance
• Enjoy it
Software TAP captures traffic on remote instances, transports the captured traffic to a central analysis platform, analyzes it, and alerts. It works in any environment, but it is especially useful when you need this visibility in a cloud environment.
Note: for clouds like AWS or Google Cloud it is a good idea to deploy your instances with two different network interfaces, so you can use the main interface for public services and the secondary one for management purposes, such as forwarding traffic from the instances to the OwlH system.
• Copy your OwlH master SSH key to the /tmp folder of your instances. Be sure it is in the right place; a sketch of the copy follows.
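A minimal sketch of that copy (assumed; the key path matches the owlh_user_key setting shown later, and user and 1.1.1.1 are placeholders):

$ scp /home/owlh/.ssh/owlhmaster.pub user@1.1.1.1:/tmp/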
Note: change the user and 1.1.1.1 as required, or follow your own deployment process to ensure that the OwlH master public key is in place on each instance.
We help to achieve better continuous monitoring by using a configuration based on a dummy network interface and running the Network IDS solutions continuously. PCAPs are injected into the dummy interface using a TCPREPLAY script, as sketched below.
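A hedged sketch of that idea (the interface name and pcap path are illustrative):

# create and bring up a dummy interface for traffic injection
ip link add owlhdummy0 type dummy
ip link set owlhdummy0 up
# replay a collected pcap into the dummy interface for the IDS to analyze
tcpreplay -i owlhdummy0 /tmp/capture.pcap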
The Suricata deployment script will help you deploy Suricata 4.0.4 from source code on a CentOS 7 box.
If you prefer a different way to deploy Suricata, please follow the Suricata documentation.
Run Suricata IDS
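A minimal sketch of the run command (the interface name is illustrative; adjust paths to your deployment):

suricata -c /etc/suricata/suricata.yaml -i owlhdummy0 -D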
The Zeek deployment script will help you deploy Bro IDS from source code on a CentOS 7 box.
If you prefer a different way to deploy Bro, please follow the Zeek documentation.
Run Zeek IDS
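A minimal sketch using BroControl (the install prefix is assumed):

/usr/local/bro/bin/broctl deploy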
The default Flock controller configuration settings look like this:

{
  "pidfile" : "/tmp/flock.pid",
  "logfile" : "/var/log/owlh/flock.log",
  "inventory" : "/etc/owlh/inventory.conf",
  "owlh_user" : "owlh",
  "owlh_user_key" : "/home/owlh/.ssh/owlhmaster",
  "max_cpu" : "25",
  "max_mem" : "25",
  "max_storage" : "80",
  "capture_time" : "60",
  "default_interface" : "ens33",
  "filter_path" : "/etc/owlh/filter.bpf",
  ...
BPF filter
You can specify which traffic to capture if you don't want to capture everything. The main and default configuration provides a filter that avoids collecting the management traffic between the OwlH master and your agents.
Remember that this filter must be deployed to every agent; be sure it is on each one of your servers.
Your BPF filter should be at least something like this:
not host 1.1.1.1 and not port 22
where 1.1.1.1 must be replaced with the OwlH master IP that connects to your server.
We will need some tools and a user on each of your servers in order to coordinate the traffic capture functionality.
• Create and configure the owlh user on your servers
The owlh user will be used by the OwlH Master Orchestrator to run traffic captures and collect pcap files. To create and configure the user, please follow this script:
#!/bin/bash
# Created 28.02.18
# v0.1 24.05.18 master@owlh.net
# NOTE: the body of this script was not preserved in this extract; the lines
# below are an assumed reconstruction based on the surrounding text.
useradd -m owlh                      # assumed: create the owlh user
mkdir -p /etc/owlh                   # assumed: folder for the BPF filter
#sudo echo "not host 10.164.0.4 and not port 22" > /etc/owlh/filter.bpf
The script also includes the tcpdump installation as part of the traffic capture setup. Please be sure tcpdump is working before you continue; this step is only needed if you don't have tcpdump installed yet.
We need to know a little bit about your network. At the very least, we need to know which servers you want to capture traffic from.
Please include all your servers in the OwlH server inventory file, /etc/owlh/inventory.json, defining them as needed:
[
  {
    "id" : "1",
    "name" : "agent-1-openrules",
    "ip" : "192.168.1.218",
    "enabled" : "true",
    "active" : "true"
  },
  {
    "id" : "2",
    "name" : "agent-2-217",
    "ip" : "192.168.1.217",
    "enabled" : "true",
    "active" : "true"
  }
]
Be sure you have at least one Wazuh Manager and the Elastic Stack working before continuing; please follow the Wazuh documentation.
Integrating the OwlH master with Wazuh is pretty easy: we only need to deploy a Wazuh agent onto the OwlH master. Follow the Wazuh agent deployment instructions for RPM packages to deploy the agent.
In summary, you will set up the repository and install the agent by running the following commands:
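The commands below are a hedged sketch based on the Wazuh 3.x install instructions; verify them against the current Wazuh documentation:

# import the Wazuh GPG key and add the 3.x yum repository
rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
cat > /etc/yum.repos.d/wazuh.repo <<'EOF'
[wazuh_repo]
name=Wazuh repository
baseurl=https://packages.wazuh.com/3.x/yum/
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
gpgcheck=1
enabled=1
EOF
# install the agent
yum install -y wazuh-agent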
Now, let's register the agent with your Wazuh Manager. If you are using authd on your manager:

# register the agent
/var/ossec/bin/agent-auth -m 1.1.1.1 -A owlhmaster

Please review the authd documentation, or find a different way to register your agent: see the Register agent documentation.
Finally, modify your ossec.conf file to monitor your Suricata output:
<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/suricata/eve.json</location>
</localfile>
2.6.11 Enjoy It

# Is everything in place? A hedged checklist (adapt paths and interface names
# to your deployment; service management may differ on your platform):
systemctl start wazuh-agent                               # start Wazuh agent
suricata -c /etc/suricata/suricata.yaml -i owlhdummy0 -D  # start Suricata
/usr/local/bro/bin/broctl deploy                          # start Zeek (Bro)
# start the Flock Controller as described in the OwlH documentation