SIEMonster V4 Starter Edition Operations Guide V1.0
If this guide is distributed with software that includes an end user agreement, this guide, as
well as the software described in it, is furnished under license and may be used or copied
only in accordance with the terms of such license. Except as permitted by any such license,
no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any
form or by any means, electronic, mechanical, recording, or otherwise, without the prior
written permission of SIEMonster. Please note that the content in this guide is protected
under copyright law even if it is not distributed with software that includes an end user
license agreement.
The content of this guide is furnished for informational use only, is subject to change without
notice, and should not be construed as a commitment by SIEMonster. SIEMonster assumes
no responsibility or liability for any errors or inaccuracies that may appear in the
informational content contained in this guide.
Please remember that existing artwork or images that you may want to include in your
project may be protected under copyright law. The unauthorized incorporation of such
material into your new work could be a violation of the rights of the copyright owner. Please
be sure to obtain any permission required from the copyright owner.
Any references to company names in sample templates are for demonstration purposes only
and are not intended to refer to any actual organization.
SIEMonster.com 2
1 Preface ......................................................................................................................... 2
1.1 What is SIEMonster ............................................................................................................................. 3
3 Infrastructure .............................................................................................................. 8
3.1 Operating System ................................................................................................................................ 8
3.2 Hardware/Virtual SPECS .................................................................................................................... 8
3.3 Networking ............................................................................................................................................ 8
3.4 Open Ports ............................................................................................................................................. 8
4 Application Components .............................................................................................. 9
4.1 Docker ...................................................................................................................................................... 9
4.2 SIEMonster App .................................................................................................................................... 9
4.3 Open Distro............................................................................................................................................ 9
4.4 The Hive ............................................................................................................................................... 10
4.5 Cortex .................................................................................................................................................... 11
4.6 MITRE ATT&CK .................................................................................................................... 12
4.7 MISP Framework ............................................................................................................................... 13
4.8 NiFi ......................................................................................................................................................... 13
4.9 Patrowl .................................................................................................................................................. 14
4.10 OpenCTI ............................................................................................................................................. 15
4.11 Alerting............................................................................................................................................... 16
4.12 Message Queuing - Kafka ........................................................................................................... 17
4.13 Performance ..................................................................................................................................... 18
4.14 Reporting........................................................................................................................................... 19
4.15 Wazuh ................................................................................................................................................. 20
4.16 Suricata............................................................................................................................................... 21
4.17 DNS Settings .................................................................................................................................... 22
4.18 Endpoint Setup................................................................................................................................ 23
4.19 Suggestions ...................................................................................................................................... 23
5 Installation ................................................................................................................25
5.1 Download ............................................................................................................................................ 25
5.2 Requirements ..................................................................................................................................... 25
5.3 VMware Workstation ....................................................................................................................... 25
5.4 Oracle VirtualBox .............................................................................................................................. 26
5.5 ESXi ........................................................................................................................................................ 27
5.6 SIEMonster first time Start-Up ..................................................................................................... 29
5.6.1 DHCP IP Address ......................................................................................................... 29
5.6.2 Static IP Address .......................................................................................................... 29
5.7 DNS Settings ...................................................................................................................................... 30
5.7.1 First Time Configuration ........................................................................................... 31
5.8 Demo Data .......................................................................................................................................... 32
5.9 Open ports .......................................................................................................................................... 33
5.10 Client Setup ...................................................................................................................................... 34
5.10.1 Microsoft Windows .................................................................................................. 34
5.10.2 Linux Machines .......................................................................................................... 34
5.10.3 Apple Mac ................................................................................................................... 34
7.4 My Profile............................................................................................................................................. 58
7.5 Superadmin Panel............................................................................................................................. 59
8 Dashboards ...............................................................................................................60
8.1 Discover ................................................................................................................................................ 60
8.1.1 Exercise: Discover the Data ...................................................................................... 65
8.2 Visualize................................................................................................................................................ 67
8.2.1 Aggregations ................................................................................................................ 67
8.2.2 Visualizations ................................................................................................................ 73
8.2.3 Exercise: Visualize the Data...................................................................................... 88
8.3 Dashboard ........................................................................................................................................... 89
8.3.1 Exercise: Creating a new Dashboard .................................................................... 90
8.4 Alerting ................................................................................................................................................. 92
8.4.1 Monitor ........................................................................................................................... 92
8.4.2 Exercise : Creating Monitors .................................................................................... 92
8.4.3 Alerting: Security Roles ............................................................................................. 97
8.4.4 Exercise: View and Acknowledge Alerts .............................................................. 97
8.4.5 Exercise: Create, Update, and Delete Monitors and Destinations ............. 98
8.4.6 Exercise: Read Only .................................................................................................. 99
8.5 Wazuh ................................................................................................................................................. 100
8.5.1 Wazuh: Security Events .......................................................................................... 100
8.5.2 Wazuh: PCI DSS......................................................................................................... 102
8.5.3 Wazuh: OSSEC ........................................................................................................... 103
8.5.4 Wazuh: GDPR ............................................................................................................. 103
8.5.5 Wazuh: Ruleset .......................................................................................................... 108
8.5.6 Wazuh: Dev Tools ..................................................................................................... 108
8.6 Dev Tools ........................................................................................................................................... 109
8.6.1 Exercise : Dev Tools ................................................................................................. 110
8.7 Management .................................................................................................................................... 113
8.7.1 Index Patterns ............................................................................................................ 113
8.7.2 Exercise : Creating an Index Pattern to Connect to Elasticsearch ........... 113
8.7.3 Managing Saved Objects ...................................................................................... 115
8.8 Security ............................................................................................................................................... 116
8.8.1 Permissions ................................................................................................................. 116
8.8.2 Action Groups............................................................................................................ 116
8.8.3 Roles.............................................................................................................................. 117
8.8.4 Exercise : Creating Role .......................................................................................... 117
8.8.5 Backend Roles ........................................................................................................... 117
8.8.6 Users ............................................................................................................................. 117
8.8.7 Exercise : Creating a User ...................................................................................... 117
8.8.9 Exercise : Role Mapping ......................................................................................... 118
11.2.1 Adding an Event ..................................................................................................... 144
11.2.2 Add Attributes to the Event ............................................................................... 146
11.2.3 Add Attachment to the Event............................................................................ 147
11.3 List Attributes ................................................................................................................................. 149
11.4 Search Attributes .......................................................................................................................... 150
13 Alerts..................................................................................................................... 160
13.1 QuickStart........................................................................................................................................ 160
13.2 Exercise: Praeco - Creating a new rule.................................................................................. 160
13.3 Praeco: Configuration ................................................................................................................. 163
13.4 Praeco: Upgrading ....................................................................................................................... 164
13.5 Praeco: Scenarios.......................................................................................................................... 165
Ease of Use ............................................................................................................................ 174
Security 174
15.2 NiFi User Interface ........................................................................................................................ 175
15.3 Exercise: Building a Dataflow ................................................................................................... 183
15.3.1 Adding a Processor ............................................................................................... 183
15.3.2 Configuring a Processor ...................................................................................... 184
15.3.3 Connecting Processors ........................................................................................ 185
15.3.4 Starting and Stopping a Processor .................................................................. 188
SIEMonster - High Level Design
1 Preface
In 2015, one of our corporate clients told us of their frustrations with the exorbitant licensing
costs of commercial Security Information and Event Management (SIEM) products. The
customer light-heartedly asked whether we could build them an open source SIEM to get
rid of these annual license fees. We thought that was a great idea and set out to develop
a SIEM product for Managed Security Service Providers (MSSPs) and Security
Professionals. This product is called SIEMonster.
SIEMonster Version 1 was released in late April 2016, followed by a commercial release in
November 2016. The release has been an astounding success, with over 100,000 downloads
of the product. We have helped individuals and companies integrate SIEMonster into small,
medium and extra-large organizations all around the world. SIEMonster, with the help of the
community and a team of developers, has been working hard since the Version 1 release,
incorporating what the community wanted to see in a SIEM as well as things we wanted to
see in the next release.
Along the way we have signed up MSSPs from around the world who have contributed to
the rollout of SIEMonster; in return they have assisted us with rollout scripts, ideas and
things we hadn't even considered. We are now proud to release the latest Version 4.0 of
SIEMonster.
Community Edition: A single server ideal for 1-100 endpoints. SIEMonster Community
Edition is a free version of SIEMonster running on CoreOS; it is fully featured, with community
support.
Professional Edition: A single server that runs locally or in the Cloud and is ideal for 1-200
endpoints. SIEMonster Starter Edition is available as a 30-day trial and can be converted into
an annual subscription. This is perfect for smaller organizations that require professional
support, and the product scales to multiple servers, increasing the endpoint count to 1,000.
Enterprise: A multi-server Cloud or local deployment that scales from 1 to 100,000+ endpoints
and can ingest from 1 to 500,000 events per second using managed Kubernetes and Kafka.
MSSP: A multi-tenancy edition of SIEMonster, installed in AWS or locally, for select customers
and Managed Security Service Providers.
1.1 What is SIEMonster
Powerful open source security tools are increasingly being released to help security
professionals perform automated tasks. But they are difficult to install, maintain and support,
and all but impossible to integrate with existing SIEM solutions.
SIEMonster is a collection of the best open source security tools, combined with our own
development as professional hackers, to provide a SIEM for everyone. We showcase the
latest and greatest tools for security professionals. Not only that, but we have built the
platform on Kubernetes (K8s) with managed ingestion and can reach 500K EPS in our cloud
offering. We offer white-label solutions and local installation on ESXi or bare metal at an
affordable price.
One of the most important features is our adaptability with open source modules. We can
bring in new cutting-edge modules to showcase to our customers, giving the open source
author a chance to showcase their product. We do not just bring modules in; we integrate
them with all the existing components. TheHive, an Incident Response tool, is free and open
source, but on its own it is a standalone system. Using SIEMonster you can use TheHive to
report on an incident, assign the task to someone to fix within the software, and still have
everything (case management, logs and data) under one roof. This is a unique offering and
part of who we are.
SIEMonster has integrated Wazuh, NiFi, Cortex and TheHive modules, among others, into
this latest build. We have done all the hard work for you, integrating them into the SIEMonster
suite. Now you can have a SIEM with Incident Reporting, Advanced Correlation with Threat
Intelligence, and Active Response all working together.
2 Introduction to SIEMonster Community Edition
SIEMonster Community Edition Version 4 is built on the best supportable components and
on custom development drawn from a wish list of the SIEMonster community. This training
document covers the architecture and the features that make up SIEMonster, so that all
security professionals can run a SIEM in their organization with no budget.
SIEMonster Community Edition is built on CoreOS running Docker. The product is available
for VMware, ESXi, Hyper-V and bare metal.
• MISP Framework
• MITRE ATT&CK
• Wazuh HIDS system with Kibana plugin and OpenSCAP options & simplified agent
registration process
• All new dashboard with options for 2FA, site administration with user role-based access and
faster load times
• Data Correlation UI, community rulesets and dashboards, and free community and open
source plugins that make up the SIEM.
• Incorporate your existing Vulnerability Scans into the Dashboard, (OpenVAS, Nexpose,
Metasploit, Burp, Nessus etc.)
SIEMonster welcomes you to try out our fully functional SIEM solution. If you wish to
purchase the product with support, please contact sales at https://www.siemonster.com.
2.1 Scope
This document covers the software and hardware infrastructure components of the
SIEMonster Community Edition Security Operations Centre product, along with the
operations guide, including how-to guides.
2.2 Audience
This document is intended for technical representatives of companies and SOC owners, as
well as security analysts and professionals. The audience of this document is expected to
have a thorough knowledge of security, software and server architecture.
The relevant parts are included here for convenience and may of course be subject to
change. They will be updated when notification is received from the relevant owners.
2.3 SIEMonster Community Edition Build Overview
Below is a high-level diagram of the Infrastructure components.
2.4 SIEMonster Community Edition Portal Front End
3 Infrastructure
This section covers the Operating System, storage and RAM requirements, networking, and
the open ports for ingestion for the Community Edition.
CPU: 8 vCPU
RAM: 32 GB
3.3 Networking
• DHCP enabled for initial system load, Manual IP setup after install
3.4 Open Ports
External services open the following ports for ingestion, i.e. the ports clients send their data
to:
Service: Kafka Receiver (Beats family)
Port: TCP 9094
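For example, a Beats-family client could ship its logs to this port with a Kafka output in its configuration. This is a minimal sketch; the hostname and topic name below are placeholders, not values shipped with the product.

```yaml
# filebeat.yml on the client machine (illustrative values)
output.kafka:
  hosts: ["siemonster.example.com:9094"]
  topic: "beats"
```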
4 Application Components
This section contains the application components and descriptions of the build.
4.1 Docker
SIEMonster Community Edition runs on Docker. The Starter, Enterprise and MSSP editions
run on Kubernetes for infinite and automatic scalability.
4.2 SIEMonster App
• Changeable themes
• SMTP and Slack notifications for password retrieval and authentication failures
4.3 Open Distro
Open Distro for Elasticsearch protects your cluster by providing a comprehensive set of
advanced security features, including a number of authentication options (such as Active
Directory and OpenID), encryption in flight, fine-grained access control, detailed audit
logging, advanced compliance features, and more.
Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting
system, enabling you to monitor your data and send notifications automatically to your
stakeholders. With an intuitive Kibana interface and powerful API, it is easy to set up and
manage alerts. Build specific alert conditions using Elasticsearch's query and scripting
capabilities. Alerts help teams reduce response times for operational and security events.
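To make the monitor concept concrete, here is a sketch of the JSON body such a monitor expects, built in Python. The index pattern, query and threshold are illustrative assumptions, not part of the product; POST the result to `_opendistro/_alerting/monitors` on your cluster.

```python
import json

def build_monitor(name, index, threshold):
    """Assemble an Open Distro alerting monitor body.

    The monitor runs a match-all search every minute and fires when the
    hit count exceeds `threshold`. Index and query are placeholders.
    """
    return {
        "type": "monitor",
        "name": name,
        "enabled": True,
        "schedule": {"period": {"interval": 1, "unit": "MINUTES"}},
        "inputs": [{
            "search": {
                "indices": [index],
                "query": {"size": 0, "query": {"match_all": {}}},
            }
        }],
        "triggers": [{
            "name": name + "-trigger",
            "severity": "1",
            "condition": {"script": {
                "source": "ctx.results[0].hits.total.value > %d" % threshold,
                "lang": "painless",
            }},
            "actions": [],  # attach a destination here to notify stakeholders
        }],
    }

body = build_monitor("failed-logins", "logstash-*", 10)
print(json.dumps(body, indent=2))
```

Verify the exact field names against the alerting API documentation for the Open Distro version bundled with your build.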
Open Distro for Elasticsearch makes it easy for users who are already comfortable with SQL
to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems.
SQL offers more than 40 functions, data types, and commands including join support and
direct export to CSV.
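As a sketch of the SQL feature, the body below could be POSTed to the cluster's `_opendistro/_sql` endpoint. The index and field names are hypothetical and must be adjusted to your own mappings.

```python
import json

# Hypothetical query: count HTTP 401 responses per source IP.
payload = {
    "query": (
        "SELECT source_ip, COUNT(*) AS hits "
        "FROM logstash-* "
        "WHERE response = 401 "
        "GROUP BY source_ip"
    )
}

# POST this to the cluster, e.g.:
#   curl -XPOST https://<host>:9200/_opendistro/_sql \
#        -H 'Content-Type: application/json' -d "$(python this_script.py)"
print(json.dumps(payload))
```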
4.4 The Hive
• Pre-configured alert templates can be easily customized to fit specific use cases. Triggered
alerts are sent directly to The Hive with all relevant data extracted
• Cortex Mailer Responder, which allows you to e-mail the case information and IoCs
• Submission of observables from cases and alerts to third-party IOC services
Cortex integration
• Analyzers can be launched against observables to get more details about a given
observable
• Responders can be launched against case, tasks, observables, logs, and alerts to
execute an action
4.5 Cortex
Cortex solves two common problems frequently encountered by SOCs, CSIRTs and security
researchers in the course of threat intelligence, digital forensics and incident response:
• How to analyze observables they have collected, at scale, by querying a single tool instead
of several?
• How to actively respond to threats and interact with the constituency and other teams?
Cortex can analyze (and triage) observables at scale using more than 100 analyzers. You can
actively respond to threats and interact with your constituency and other parties thanks to
Cortex responders. Within the SIEMonster platform, Cortex is pre-integrated with TheHive
and MISP to get you up and running.
Analyzers and Responders are autonomous applications managed by and run through the
Cortex core engine. Analyzers allow analysts and security researchers to analyze observables
and IOCs such as domain names, IP addresses, hashes, files, URLs at scale.
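Launching an analyzer against an observable boils down to a small REST call. The helper below sketches the job body for `POST /api/analyzer/<analyzer-id>/run` on the Cortex API; the observable value is illustrative, and the field names should be verified against the documentation for your Cortex version.

```python
def build_analyzer_job(data, data_type, tlp=2):
    """Job body for POST /api/analyzer/<analyzer-id>/run on Cortex.

    `data` is the observable value and `data_type` one of Cortex's
    observable types (e.g. "ip", "domain", "hash"). TLP defaults to
    AMBER (2). Send with an 'Authorization: Bearer <api-key>' header.
    """
    return {"data": data, "dataType": data_type, "tlp": tlp}

# Example: ask an analyzer about a suspicious IP (placeholder value).
job = build_analyzer_job("203.0.113.7", "ip")
print(job)
```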
4.6 MITRE ATT&CK
• A custom XML configuration is set up on Windows agents to translate process activity into
MITRE ATT&CK™ vectors, so that specific events can be easily queried by the SOC analyst.
This also applies to alerts based on these types of events, of which there are many pre-canned
templates out of the box.
Dashboards are also provided for forensic analysis of MITRE ATT&CK™ correlations.
4.7 MISP Framework
The Malware Information Sharing Platform (MISP) is a threat intelligence platform for
sharing, storing and correlating Indicators of Compromise from targeted attacks, threat
intelligence, financial fraud information, vulnerability information or even counter-terrorism
information. MISP is used today in multiple organizations not only to store, share and
collaborate on cyber security indicators and malware analysis, but also to use the IoCs and
information to detect and prevent attacks, fraud and threats against ICT infrastructures,
organizations and people.
Integration within the SIEMonster platform is preconfigured for TheHive, OpenCTI and
Cortex. Feeds for threat intel can be configured from many of the available free sources, as
well as from subscription sources if required.
With a focus on automation and standards, MISP provides you with a powerful REST API,
extensibility via misp-modules, and additional libraries such as PyMISP.
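The REST API mentioned above accepts events as JSON. The helper below sketches a minimal event body of the kind sent to `POST /events`; the IOC value is a placeholder, and the field names should be checked against the MISP automation documentation for your version (PyMISP wraps the same API more conveniently).

```python
import json

def build_misp_event(info, ioc_type, ioc_value):
    """Minimal event body for POST /events on the MISP REST API.

    distribution 0 = your organisation only; threat_level_id 3 = low;
    analysis 0 = initial. The attribute below is illustrative.
    """
    return {
        "Event": {
            "info": info,
            "distribution": 0,
            "threat_level_id": 3,
            "analysis": 0,
            "Attribute": [
                {"type": ioc_type, "value": ioc_value, "to_ids": True}
            ],
        }
    }

event = build_misp_event("Suspicious login source", "ip-src", "198.51.100.9")
print(json.dumps(event))
```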
4.8 NiFi
NiFi was built to automate the flow of data between systems. While the term 'dataflow' is
used in a variety of contexts, we use it here to mean the automated and managed flow of
information between systems. This problem space has been around ever since enterprises
had more than one system, where some of the systems created data and some of the
systems consumed data. The problems and solution patterns that emerged have been
discussed and articulated extensively. A comprehensive and readily consumed form is found
in the Enterprise Integration Patterns.
Within the SIEMonster platform, NiFi is used to ingest incoming event log data from the
Kafka message queue. Various templates are provided for different endpoint types,
including but not limited to Active Directory, common firewall and VPN devices, HIDS agents
and IDS feeds.
All data flow is visualized, allowing the analyst to view log flows and metrics in real time.
Templates are also provided to assist in adding new sources, with debug options and data
sinks, before going into production.
4.9 Patrowl
PatrOwl is an advanced platform for orchestrating security operations such as penetration
testing, vulnerability assessment, code review, compliance checks, cyber threat intelligence
/ hunting, and SOC & DFIR operations. It correlates asset risk value against vulnerabilities,
bringing business intelligence and the SIEM into closer alignment. Within the SIEMonster
platform, PatrOwl is integrated with Cortex and TheHive. Assets for assessment can be
added singly or in bulk using the asset import feature.
Results are displayed in a dashboard and, with TheHive integration, new alerts can be
configured for High or Critical vulnerabilities as well as asset risk weighting correlation.
4.10 OpenCTI
OpenCTI is an open source platform that allows organizations to manage their cyber threat
intelligence knowledge and observables. It was created to structure, store, organize and
visualize technical and non-technical information about cyber threats.
The structuring of the data is performed using a knowledge schema based on the STIX2
standards. It has been designed as a modern web application, including a GraphQL API and
a UX-oriented frontend. Within the SIEMonster platform, OpenCTI is integrated with MISP,
TheHive and MITRE ATT&CK, and has a connector for CVE information. The initial
dashboard begins an immediate import of MISP observables for analysis.
4.11 Alerting
Alerting is provided by the OpenDistro Kibana interface, by Elastalert with a GUI front-end,
and via Apache NiFi, depending on the use case. More than 30 pre-canned alert types are
provided to get you up and running. Typical queries include those for anomalies,
aggregations and pattern matching, along with threat intel/MITRE correlation, Indicators of
Compromise (IOCs), NIDS signature matching and asset vulnerabilities. Alerts can be
configured to automatically create tickets in the TheHive Incident Response module and to
notify stakeholders via most common webhooks or direct email.
Many pre-canned alerts are shipped in a disabled state so you can quickly get up and
running. We also provide a Webhook-to-SMTP connector for Kibana alerts (not available as
standard) that permits the emailing of alerts.
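As an illustration, a minimal Elastalert frequency rule, the kind the GUI front-end edits, might look like the fragment below. The index pattern, query field and recipient address are placeholders, not values shipped with the product.

```yaml
# Fires when 20 matching events arrive within 5 minutes.
name: ssh-brute-force
type: frequency
index: logstash-*
num_events: 20
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: "event.action: ssh_login_failed"
alert:
- email
email:
- "soc@example.com"
```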
Elastalert GUI
4.12 Message Queuing - Kafka
• Provides durable, fast and fault-tolerant message streaming for handling real-time data
feeds.
• Compatible with Apache NiFi and the Elastic Beats family of agents.
• Options for in-flight stream data extraction and new stream creation, dependent on specific
triggers.
• Ability to set data retention periods per use case in case of upstream processing back
pressure.
Incoming events are stored initially in Apache Kafka before being processed in NiFi and then
sent to Elasticsearch. This provides a buffer in case of bursts in activity, while also providing
an endpoint-by-topic management system with options for real-time alert stream creation.
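The per-use-case retention mentioned above is set at the topic level in Kafka. A sketch with illustrative values (applied, for example, via `kafka-configs.sh --alter`):

```properties
# Keep events for 7 days
retention.ms=604800000
# No size-based cap; rely on the time limit only
retention.bytes=-1
```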
4.13 Performance
Performance and alerting metrics are visualized and actioned via Grafana, Prometheus,
Alertmanager and Cerebro, as well as Metricbeat with preloaded dashboards. Incoming log
event rates can be monitored, as well as container stats, CPU, load and disk space. Slack
endpoints can easily be set up to receive alerts for sudden spikes in activity, CPU above
90%, or less than 10% disk space remaining.
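The Slack endpoint described above is wired up through an Alertmanager receiver. A minimal sketch, in which the webhook URL and channel name are placeholders for your own values:

```yaml
# alertmanager.yml (fragment)
route:
  receiver: slack-ops
receivers:
- name: slack-ops
  slack_configs:
  - api_url: https://hooks.slack.com/services/T000/B000/XXXX
    channel: '#siem-alerts'
    title: 'SIEMonster alert'
```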
Metricbeat Metrics
4.14 Reporting
SIEMonster's internal reporting tool provides comprehensive, automated reporting straight
to your inbox. It allows automated reports to be generated and sent to the appropriate
individual on any event (for example, McAfee Anti-Virus detected a virus but did not clean
it) and these follow-up items can be sent in a report. Reports are available in PDF or XLS
format, including dashboard snapshots for visualization.
4.15 Wazuh
Wazuh is a free and open source platform for threat detection, security monitoring, incident
response and regulatory compliance. It can be used to monitor endpoints, cloud services
and containers, and to aggregate and analyze data from external sources.
Wazuh is used to collect, aggregate, index and analyze security data, helping organizations
detect intrusions, threats and behavioral anomalies.
As cyber threats are becoming more sophisticated, real-time monitoring and security
analysis are needed for fast threat detection and remediation. That is why our light-weight
agent provides the necessary monitoring and response capabilities, while our server
component provides the security intelligence and performs data analysis.
Wazuh agents scan the monitored systems looking for malware, rootkits and suspicious
anomalies. They can detect hidden files, cloaked processes or unregistered network listeners,
as well as inconsistencies in system call responses. In addition to agent capabilities, the
server component uses a signature-based approach to intrusion detection, using its regular
expression engine to analyze collected log data and look for indicators of compromise.
Wazuh agents pull software inventory data and send this information to the server, where it
is correlated with continuously updated CVE (Common Vulnerabilities and Exposures)
databases, in order to identify well-known vulnerable software. Automated vulnerability
assessment helps you find the weak spots in your critical assets and take corrective action
before attackers exploit them to sabotage your business or steal confidential data.
Wazuh is integrated into the Dashboards module of SIEMonster and there are also pre-
canned alerts configured.
4.16 Suricata
Suricata is an open source threat detection engine that was developed by the Open
Information Security Foundation (OISF). Suricata can act as an intrusion detection system
(IDS) or an intrusion prevention system (IPS), or be used for network security monitoring. It
was developed alongside the community to help simplify security processes. As a free and
robust tool, Suricata monitors network traffic using an extensive rule set and signature
language. Suricata also features Lua scripting support to monitor more complex threats.
The SIEMonster Community Edition provides a Suricata pipeline that performs packet
capture and analysis on the local network interface, acting as a host-based IDS. The resultant
data is then sent to Kafka before being ingested by Elasticsearch. The commercial
SIEMonster releases extend these capabilities in the form of network and cloud tabs and
multi-network interface monitoring.
Alerts can be easily configured for signature matches and there is also a dashboard provided
for further IDS analysis.
4.17 DNS Settings
Installation of SIEMonster within a local network requires some DNS settings to be set, either
in the hosts file of the client machine accessing the platform or by adding some entries to
the local DNS server. The client hosts file will typically be located in:
‘C:\Windows\System32\drivers\etc\hosts’ on Windows
Using the IP address of the SIEMonster appliance (192.168.0.30 in the example below), the
entries for the hosts file will be as follows:
192.168.0.30 siemonster.internal.com
192.168.0.30 webreporting.siemonster.internal.com
192.168.0.30 misp.siemonster.internal.com
192.168.0.30 cortex.siemonster.internal.com
192.168.0.30 sm-kibana.siemonster.internal.com
192.168.0.30 praeco.siemonster.internal.com
192.168.0.30 metrics.siemonster.internal.com
192.168.0.30 hive.siemonster.internal.com
192.168.0.30 nifi.siemonster.internal.com
192.168.0.30 patrowl.siemonster.internal.com
192.168.0.30 opencti.siemonster.internal.com
192.168.0.30 kafka.siemonster.internal.com
192.168.0.30 prometheus.siemonster.internal.com
192.168.0.30 alertmanager.siemonster.internal.com
192.168.0.30 cerebro.siemonster.internal.com
192.168.0.30 kafka-manager.siemonster.internal.com
Settings for a DNS server will be A record aliases; for example, if the appliance IP address is
192.168.0.30 then the settings will be:
192.168.0.30 siemonster.internal.com
192.168.0.30 *.siemonster.internal.com
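The list of hosts entries above can be generated rather than typed by hand. A minimal sketch, assuming the example appliance IP 192.168.0.30 and an output file that you then merge into the client hosts file:

```shell
#!/bin/sh
# Generate the hosts-file entries for a client machine.
# SIEM_IP is the appliance address -- replace with your own.
SIEM_IP="192.168.0.30"
OUT="hosts.additions"

# One entry per published service, plus the bare appliance name ("").
for sub in "" webreporting. misp. cortex. sm-kibana. praeco. metrics. \
           hive. nifi. patrowl. opencti. kafka. prometheus. \
           alertmanager. cerebro. kafka-manager.; do
  echo "$SIEM_IP ${sub}siemonster.internal.com"
done > "$OUT"
```

Append the resulting file to the client hosts file, or use the wildcard A record shown above on a DNS server instead.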
4.18 Endpoint Setup
To collect logs from endpoints, we recommend installing the following:
Service Ports
Wazuh (Microsoft Windows, Linux & Mac) TCP ports 1514, 1515 and 55000
Kafka Receiver (Beats family) TCP port 9094
Note: For Microsoft Windows users you will see Winlogbeat and Filebeat listed here.
Winlogbeat is designed to collect standard system logs, whereas Filebeat will collect logs
from applications such as Exchange, IIS and SQL Server. You will need to install both for
multi-purpose Windows servers.
4.19 Suggestions
Do you have any suggestions you would like to see in the next build?
SIEMonster – Build Guide
5 Installation
5.1 Download
To download the Community Edition, please visit the SIEMonster website at
http://www.siemonster.com. Proceed to the download section, complete the form with all
required details and click submit. An e-mail with the download link will be sent to you. The
download consists of a single file named “coreos.ova” and is supported by VMware ESXi,
VMware Workstation and Oracle VirtualBox.
5.2 Requirements
Please note that this pre-built virtual machine has been configured to run with the following
hardware:
Hardware
CPU 8 VCPU
RAM 32 GB
3. Click “Open” and browse to the folder location where you saved the download described in
Section 5.1.
8. Once booted, a pre-determined sequence of automated tasks will be performed that requires
no input.
9. Open a console for the virtual machine and capture the IP address that is displayed on the
screen as demonstrated below.
3. Browse to where the “ova” file was downloaded, Select the file and Click “Next”
4. Leave the default hardware settings as is and specify a location for the virtual machine to be
imported to and Click “Import”
7. Once booted, a pre-determined sequence of automated tasks will be performed that requires
no input.
8. Open a console for the virtual machine and capture the IP address that is displayed on the
screen as demonstrated below.
5.5 ESXi
To import the pre-built image, please perform the following action:
3. Select “Deploy a virtual machine from an OVF or OVA file” and Click “Next”
4. Specify the name you wish to use for the virtual machine and drag the OVA into the
drag/drop box indicated by the red arrow and Click “Next”. NOTE: On some instances of ESXi,
using a name with spaces and/or non-alphanumeric characters can cause the deployment to
fail. Please ensure to use a simplified name for the installation.
5. Select the Datastore where you would like the virtual machine stored and Click “Next”.
6. Specify the deployment options “Thin” or “Thick”* disk and Select “Power on automatically”
if you wish to do so and Click “Next”. NOTE: The Community Edition is provided with 1
Terabyte of disk space allocated. Should you choose to deploy it as “Thick” provisioned please
ensure that there is sufficient disk space in the environment.
7. You will be presented with a “Ready to complete” screen with a summary of the deployment
about to be performed. As indicated in this window, do not refresh your browser as this will
interrupt the process. Proceed by clicking “Finish”
* Thin provisioning stores the disk in the smallest size possible, only consuming what has
been stored. Thick provisioning allocates all disk space upfront.
8. A progress indicator will appear in the recent tasks window at the bottom of the interface
indicating the progress of the import, please wait for this to complete.
9. If you did not Select “Power on Automatically” as part of the deployment process, please
proceed to power on the virtual machine.
10. Once booted, a pre-determined sequence of automated tasks will be performed that requires
no input.
11. Open a console for the virtual machine and capture the IP address that is displayed on the
screen as demonstrated below.
12. Please proceed to the section with the heading “SIEMonster First Time Startup”.
• Type ifconfig ens33 and Press [Enter], confirm that the IP address presented matches the IP
address that was displayed on the console.
• Type the network configuration as per the following example exchanging the Address,
Gateway and DNS details with that of your network. Note: These entries are case sensitive
• Type ifconfig ens33 and Press [Enter]. Verify that the IP displayed matches that of the
configuration that was typed in the preceding steps
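The example network configuration referenced in the steps above is shown only as a screenshot in the original guide. As a sketch, a static configuration on a CoreOS-based appliance is typically a systemd-networkd unit such as the following; the file path, interface name ens33 and all addresses are example values to replace with your own network details (note the keywords are case sensitive):

```ini
; /etc/systemd/network/static.network (assumed path)
[Match]
Name=ens33

[Network]
Address=192.168.0.30/24
Gateway=192.168.0.1
DNS=192.168.0.1
```

After saving the file, restart systemd-networkd (or reboot) and re-run ifconfig ens33 to verify the address.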
5.7 DNS Settings
Installation of SIEMonster within a local network requires some DNS settings to be set either
in the client hosts file of the machine accessing the platform or by adding some entries into
the local DNS server. The client hosts file will typically be located in:
‘C:\Windows\System32\drivers\etc\hosts’ on Windows
Using the IP address of the SIEMonster appliance (192.168.0.30 in the example below), the
entries for the hosts file will be as follows:
192.168.0.30 siemonster.internal.com
192.168.0.30 webreporting.siemonster.internal.com
192.168.0.30 misp.siemonster.internal.com
192.168.0.30 cortex.siemonster.internal.com
192.168.0.30 sm-kibana.siemonster.internal.com
192.168.0.30 praeco.siemonster.internal.com
192.168.0.30 metrics.siemonster.internal.com
192.168.0.30 hive.siemonster.internal.com
192.168.0.30 nifi.siemonster.internal.com
192.168.0.30 patrowl.siemonster.internal.com
192.168.0.30 opencti.siemonster.internal.com
192.168.0.30 kafka.siemonster.internal.com
192.168.0.30 prometheus.siemonster.internal.com
192.168.0.30 alertmanager.siemonster.internal.com
192.168.0.30 cerebro.siemonster.internal.com
192.168.0.30 kafka-manager.siemonster.internal.com
192.168.0.30 comrade.siemonster.internal.com
Setting for a DNS server will be A record aliases, for example if the appliance IP address is
192.168.0.30 then the settings will be:
192.168.0.30 siemonster.internal.com
192.168.0.30 *.siemonster.internal.com
5.7.1 First Time Configuration
Once the preceding steps are complete (identifying or configuring the IP address and
updating the hosts entries on the workstation/server that will be performing the
configuration and maintenance for the environment), please proceed with the following steps:
• Specify the administrator e-mail address and password for the platform and Click Sign in
Optional configuration:
Should a proxy server be required please toggle the switch and specify the proxy details as
indicated.
• Add the downloaded file to a ZIP archive and e-mail it to support@siemonster.com with the
subject Offline Activation. A response file will be e-mailed back to the originating e-mail
address.
• Select the file that was received from SIEMonster and Click Open
This concludes the installation and setup portion of the solution. The setup page will
automatically redirect to the login page, where the credentials specified in the preceding
actions can be used to log in.
5.9 Open ports
External services open ports for Administration
Service Ports
External services open ports for ingestion, i.e. what clients will send their data to SIEMonster
Service Ports
Kafka Receiver (Beats family) TCP Port 9094
5.10 Client Setup
To collect logs from endpoints we recommend the following. Configuration of these files
and settings can be found in the Operation section of this guide.
• Winlogbeat
https://artifacts.elastic.co/downloads/beats/winlogbeat/winlogbeat-oss-7.4.0-windows-x86_64.zip
• Filebeat
https://www.elastic.co/downloads/past-releases/filebeat-oss-7-4-0
• Wazuh Agent
https://documentation.wazuh.com/3.9/installation-guide/packages-list/index.html
SIEMonster – Client Setup
6 Installing Agents
Now that SIEMonster is up and running, it is time to install some agents to get some
data into the SIEM. You will need to install an agent on the boxes that support agents, such
as Windows and Linux. For boxes that don’t support agents, you will need to forward syslogs
to the SIEM. To collect logs from endpoints we recommend the following. Configuration of
these files and settings can be found in this chapter of the guide.
1. Download the software Winlogbeat directly from the vendor link below and install it
https://artifacts.elastic.co/downloads/beats/winlogbeat/winlogbeat-oss-7.4.0-windows-x86_64.zip
2. Download the SIEMonster agent-pack, which contains additional modules. The zip
file contains the files you will need for your endpoint.
https://s3-us-west-2.amazonaws.com/agents.siemonster.com/agent-pack-v4-fullyloaded.zip
SHA256
4f9e9a913afc0fb23692ac1fdf39494a57fdce4f74b97b910b4e6adbe9a031e6
5. From the agent-pack, extract the files with extension .js into the pipelines folder.
o NOTE: On later versions of Windows PS no longer works. Please use the full
name powershell.
If script execution is disabled, then first use the following command from a standard
command prompt:
8. Connect to the SIEMonster platform with an SSH client using the credentials supplied
at the end of this document
9. Run the command “cat /volumes/kafka/kafka-ssl/ca/root-ca.pem”; this will output the
certificate data.
10. Select and copy the text displayed by the command (take care to start and end with the
‘-----’ lines, and not to include any extra spaces).
11. Open a text editor on your platform, paste the text that was copied and save it to
c:\certs\rootCA.pem, creating the folder c:\certs if needed.
12. Edit the winlogbeat.yml file and ensure that it matches the configuration in the
screenshot. Ping the FQDN displayed in the hosts line to ensure it can be resolved
from the client. NOTE: the forward slashes in the following screenshot are accurate
and should be kept.
13. Check the syntax is correct in this file by running the following command:
The provided configuration will also log Sysmon events. See the Section Sysmon/
MITRE ATT&CK™ Integration. The agent pack includes the required ION-Storm
Sysmon dictionary.
https://www.elastic.co/downloads/past-releases/filebeat-oss-7-4-0
https://s3-us-west-2.amazonaws.com/agents.siemonster.com/agent-pack-v4-fullyloaded.zip
SHA256
4f9e9a913afc0fb23692ac1fdf39494a57fdce4f74b97b910b4e6adbe9a031e6
2. In this example we have a Debian Operating System and have chosen the deb agent.
Transfer this zip file and the deb installer file via SCP to the target server and install
using the following command:
Once installed the filebeat service will be inactive and the configuration file can be found at
/etc/filebeat/filebeat.yml. This configuration file must be modified to suit the logs being
monitored and the FQDN of the SIEMonster server. A sample is included in the agent-pack,
which should be uncompressed. To obtain the certificate needed, please do the following:
1. Connect to the SIEMonster platform with an SSH client using the credentials supplied
at the end of this document
3. Select and copy the text displayed by the command (take care to start and end with the
‘-----’ lines, and not to include any extra spaces).
4. Open a text editor on your platform, paste the text that was copied and save it to
/etc/filebeat/root-ca.pem.
For example, to modify this for Apache logs this path may be altered to:
/var/log/apache2/access.log.
4. Next ensure the client can resolve the FQDN of the SIEMonster server.
SIEMonster CE FQDN
5. Test the connection by navigating to the /etc/filebeat folder and running the
command:
filebeat -e -c filebeat.yml
Working connection
https://documentation-dev.wazuh.com/current/index.html
OSSEC agents for Windows, Mac & Linux are installed via the OSSEC binary:
https://documentation.wazuh.com/3.9/installation-guide/packages-list/index.html
/var/ossec/bin/manage_agents
Note: Using a PuTTY session from Windows to SIEMonster will allow easier copy and paste
of generated keys than using VMware tools and copy/pasting.
OSSEC HIDS Menu
• Choose ‘A’
Add a name for the agent and an IP address that should match the one the agent will
be connecting from, e.g. CALIFORNIADC01 192.168.0.100 (Note: if the agent hosts have
dynamic IP addresses then ‘any’ can be used instead of an IP address).
• Press ‘Y’
Retrieve the agent key information by entering ‘E’ for extract and the ID for the agent. Copy
this key as it will be required for the remote agent install.
Example:
MDAxIFRlc3RBZ2V0biAxMTEuMTExLjExMS4xMTEgY2MxZjA1Y2UxNWQyNzEyNjdlMmE3MT
RlODI0MTA1YTgxNTM5ZDliN2U2ZDQ5MWYxYzBkOTU4MjRmNjU3ZmI2Zg==
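The extracted key is base64-encoded text containing the agent ID, name, IP address and a hash. As a sketch, the example key above can be inspected with any base64 decoder to confirm its structure before pasting it into the agent:

```shell
# Decode the example agent key to show its "<id> <name> <ip> <hash>" structure.
KEY="MDAxIFRlc3RBZ2V0biAxMTEuMTExLjExMS4xMTEgY2MxZjA1Y2UxNWQyNzEyNjdlMmE3MTRlODI0MTA1YTgxNTM5ZDliN2U2ZDQ5MWYxYzBkOTU4MjRmNjU3ZmI2Zg=="
echo "$KEY" | base64 -d
```

The decoded text begins with the agent ID and name as entered in manage_agents.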
https://packages.wazuh.com/3.x/windows/wazuh-agent-3.9.5-1.msi
SHA512Checksum:
3da92c3a0e8c5fde77810aa71a0b4a5c61fbea7d3a6fc39586e01c156f1fd1114830c09f68a518e2062c1933d28bd14ed8011139fa0f27a23ffad235b4482269
Edit the ossec.conf file in the ossec-agent install location, adding the IP or FQDN of the
SIEMonster appliance and changing the protocol to TCP:
Launch the agent and enter the IP address or FQDN of the CE appliance along with the key
previously presented.
Back on the SIEMonster appliance check that the agent has connected correctly, by checking
in the Wazuh Kibana application:
To install the remote agent on a Linux Debian based machine, follow the steps outlined here:
https://documentation.wazuh.com/3.9/installation-guide/installing-wazuh-agent/wazuh_agent_deb.html
Edit /var/ossec/etc/ossec.conf on the agent, add the IP of the SIEMonster appliance &
change the protocol to TCP.
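As a sketch, the relevant fragment of ossec.conf after this edit could look like the following; the appliance IP is the example value from earlier in this guide, and the element names follow the Wazuh 3.x client syntax:

```xml
<ossec_config>
  <client>
    <server>
      <!-- IP or FQDN of the SIEMonster appliance (example value) -->
      <address>192.168.0.30</address>
      <port>1514</port>
      <!-- Switch the transport from the default UDP to TCP -->
      <protocol>tcp</protocol>
    </server>
  </client>
</ossec_config>
```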
Restart Wazuh:
/var/ossec/bin/ossec-control restart
Linux & Windows agents may also be automatically registered from the command line
without any setup on the Wazuh Manager.
For Linux:
For Windows:
In production this method would be extended to use SSL certificates and/or authentication.
6.7 SYSLOG
All syslogs can be sent to the SIEMonster appliance. Network devices with remote syslog
settings should be set to the SIEMonster appliance IP address. Syslogs are accepted on UDP
port 514. For troubleshooting purposes, incoming syslogs can be found within the
SIEMonster appliance at:
/volumes/wazuh-manager/data/logs/archives/archives.json
Parsing is handled by Wazuh & Apache Nifi before forwarding to the ES cluster.
6.8 Inputs
The next step is to check for incoming events in Kibana. Assuming the index is named logs-
endpoint-winevent-DATE as preset in the Apache Nifi configuration, then the events should
be visible in the Discovery panel.
Access the Dashboards top menu or tile from the web application:
If the index has been renamed, then it should first be registered in the Management – Index
Patterns panel:
Visit the Discovery menu and select the configured index
6.9 Sysmon/Windows MITRE ATT&CK™ Integration
System Monitor (Sysmon) is a Windows system service and device driver that, once installed
on a system, remains resident across system reboots to monitor and log system activity to
the Windows event log. It provides detailed information about process creations, network
connections, and changes to file creation time.
1. Extract the Sysmon dictionary file - sysmonconfig-export.xml from the agent pack –
See section 1.2.
2. Download, extract and install Sysmon with the command sysmon64 -accepteula
-i sysmonconfig-export.xml. Link to the download page:
https://technet.microsoft.com/en-us/sysinternals/sysmon.
3. Ensure that the following lines exist in the Winlogbeat configuration by using the
supplied winlogbeat template in the agent pack.
<Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
5. MITRE ATT&CK™ vectors run on end hosts can now be searched for in Elasticsearch,
for example.
SIEMonster – Operations Guide
7 Managing Users and Roles in SIEMonster
It is critical for businesses to protect data and prevent unauthorized usage. SIEMonster
provides advanced user access management capabilities that enable you to set up users,
define permissions, and optimize security. Permissions in SIEMonster are based on Roles.
Objectives
• Create Roles
In SIEMonster, these different tasks are split between the following two user roles by default.
However, more user roles can be added to configure the user types.
• Admin
• User
Administrators
• Alerts
• Dashboards
• Analyzers
• Threat Modelling
• Incident Response
• Metrics
• Audit Discovery
• Flow Processors
• Threat Intel
• Reporting
7.1.1 Exercise: Create a User in SIEMonster
1. In the browser, type the URL for SIEMonster to open the login page. The login page
opens.
2. Enter the credentials (username and password) and click Sign In.
3. On the Home page, click on the dotted vertical line on the top right and click on the
Admin Panel.
A standard user with a User profile will not have access to any module.
7.1.2 Exercise: Create Roles in SIEMonster
1. On the Home page, click on the dotted vertical line on the top right, and click Admin
Panel. Alternatively, you can also click on the icon on the top left and click Admin
Panel.
2. Click the Roles tab, and then click admin role. Note that all the modules are enabled.
3. Click the user role and note that all the modules are disabled. These modules can be
enabled for a specific user group.
5. You are going to create a role for a security operations team. Enter the Role Name
as secOps and click OK.
6. Click to open the newly created role and enable Dashboards and Incident Response
modules.
7. Under Add Module section, enter Training in the Module Name field and
https://freshdesk.com in the Module URL field. Click Add Module.
7.2 Mailgun
Mailgun is an email automation service provided by Rackspace. It offers a complete cloud-
based email service for sending, receiving, and tracking emails sent through websites and
web applications. Mailgun features are available through an intuitive RESTful API or using
traditional email protocols like SMTP.
To use Mailgun in SIEMonster for the web application, the user needs to sign up for a free
Mailgun account. It will allow users to receive email notices from the web application
whenever a login is attempted with their email addresses.
Click the Admin Panel -> Notifiers tab to access the Mailgun settings. Mailgun can be
set up by providing the Mailgun API key, Domain Name, and Sending Email Address.
7.3 LDAP Integration
User authentication can be set up by integrating SIEMonster with LDAP services. Users not
already in the SIEMonster platform can be added when logging in with their LDAP email
address and password. When a user from Active Directory logs in, that user will be logged
in as a new user with no modules enabled. The administrators can then assign a role to the
users.
1. Click the Admin Panel > LDAP tab to access the LDAP settings. Enter the Host name
or IP address of your LDAP server
2. Enter the Port Number used for LDAP communication (389 by default)
3. Enable TLS (Transport Layer Security), which offers a secured method of sending data;
it requires a certificate that can be uploaded.
5. Click Perform Connection Test to check your connection, and then click Save LDAP
Settings.
7.4 My Profile
SIEMonster provides a comprehensive profile view of the user. To access this, click the dotted
vertical line on the top right, and click My Profile.
A Profile includes the data of a user such as the Display name, Email, Password, Two
Factor Authentication, and Past Login Attempts.
It will display a QR code on the screen that can be scanned using Google Authenticator,
Authy, or Symantec's VIP Access to generate authentication codes. Click Enable to enable
two factor authentication.
7.5 Superadmin Panel
On the Home page, click the dotted vertical line on the top right, and click Superadmin
Panel.
The Superadmin Panel can be used to set up the Inactivity timeout. If the inactivity timeout is
set to 1h, the system will automatically log a user out after one hour of inactivity.
Click the drop-down arrow under Value to change the Inactivity timeout value.
Sometimes a user may have endpoint protections on the network that prevent WebSocket
methods, in which case the alternative server method can be invoked. This can be set up by
specifying the relevant value.
8 Dashboards
The SIEMonster Kibana Dashboard is a visualization application of Elasticsearch that allows
users to visualize the incoming data and create dashboards based on that.
The SIEMonster Kibana Dashboard gives you full flexibility and functionality in how you
want your dashboards to appear for different users. This section will provide you with a good
guide on how to use the dashboards and customize them for your own organization.
8.1 Discover
Discover allows you to explore your data with Kibana’s data discovery functions. You have
access to every document in every index that matches the selected index pattern. You can
view document data, filter the search results, and submit search queries.
Time Filter
A specific time period for the search results can be defined by using the Time Picker. By
default, the time range is set to This month. However, the time picker can be used to
change the default time period.
Histogram
After you have selected a time range that contains data, you will see a histogram at the top
of the page that shows the distribution of events over time.
The time you select will automatically be applied to any page you visit, including the
Dashboard and Visualize pages. However, this behavior can be changed at the individual
page level.
An auto refresh rate can be set up by selecting a refresh interval from the list. This will
periodically resubmit your searches to retrieve the latest results.
Searches can be Saved and then used later by clicking the Open button.
Fields
All the Available fields with their data types are listed on the left side of the page. If you
hover over any field, you can click add to add that field as a column to the table on the right,
which will then show the contents of this field.
Once added, hover over the field that you want to remove and click Remove column.
To add a filter from the list of fields available:
Search for Documents
To search and filter the documents shown in the list, you can use the large search box at the
top of the page. The search box accepts query strings in a special syntax.
If you want to search content in any field, just type in the content that you want to search.
Entering anomaly in the search box and pressing enter will show you only events that contain
the term anomaly.
8.1.1 Exercise: Discover the Data
Discover allows you to explore the incoming data with Kibana’s data discovery functions.
You can submit search queries, filter the search results, and view document data. You can
also see the number of documents that match the search query and get field value statistics.
2. To display the incoming raw data, click Discover. The data to be displayed is selected by
clicking on the time range option and selecting the desired date range; for the
purpose of this exercise select This month.
In the time range tool, there are different ways to select the date range:
you can select the date from the Commonly used menu with pre-set
relative periods of time (for example Year to date, This month), or from
Recently used date ranges.
3. The histogram at the top of the page shows the distribution of documents over the
time range selected.
4. Expand one of the events to view the list of data fields used in that event. Queries are
based on these fields.
5. By default, the table shows the localized version of the time field that’s configured for
the selected index pattern. You can toggle on or off different event fields if you hover
over the field and click add.
6. In some business scenarios, it is helpful to view all the documents related to a specific
event. To show the context related to the document, expand one of the events and
click View surrounding documents.
7. Search results can also be filtered to view those documents that contain a value specified
in a filter. Click + Add filter to add a filter manually.
8.2 Visualize
Visualizations are used to aggregate and visualize the data in your Elasticsearch indices in
different ways. Kibana visualizations are based on Elasticsearch queries.
By using a series of Elasticsearch aggregations to extract and process your data, you can
create a Dashboard with charts that show the trends, spikes, and dips. Visualizations can be
based on the searches saved from Discover, or you can start with a new search query.
The next section introduces the concept of Elasticsearch Aggregations as they are the basis
of visualization.
8.2.1 Aggregations
The aggregation of the data in SIEMonster is not done by Kibana, but by the underlying
Elasticsearch. The aggregation framework provides data based on a search query and it can
build analytic information over a set of documents.
There are different types of aggregations, each with its own purpose. Aggregation can be
categorized into four types:
• Bucket Aggregation
• Metric Aggregation
• Matrix Aggregation
• Pipeline Aggregation
Bucket Aggregation
A bucket aggregation groups all documents into several buckets, each containing a subset
of the indexed documents and associated with a key. The decision of which bucket to sort a
specific document into can be based on the value of a specific field, a custom filter, or other
parameters.
1. Date Histogram
The Date Histogram aggregation requires a field of type date and an interval. It can only be
used with date values. It will then put into one bucket all the documents whose value of
the specified date field lies within the same interval.
Example:
You can construct a Date Histogram on @timestamp field of all messages with the interval
minute. In this case, there will be a bucket for each minute and each bucket will hold all
messages that have been written in that minute.
Besides common interval values like minute, hour, day, etc., there is the special value
auto. When you select the auto interval, the actual time interval will be determined by Kibana
depending on how large you want to draw the graph, so that a reasonable number of
buckets is created (not too many to pollute the graph, nor too few so the graph would
become irrelevant).
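Under the hood, the per-minute example above corresponds roughly to the following Elasticsearch request body; this is a sketch, with the aggregation name chosen for illustration:

```json
{
  "aggs": {
    "events_per_minute": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1m"
      }
    }
  }
}
```

Each returned bucket carries the interval's start timestamp as its key and a document count.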
2. Histogram
A Histogram is like a Date Histogram, but unlike a Date Histogram it can be applied to
number fields extracted from the documents. It dynamically builds fixed-sized buckets
over the values.
3. Range
The range aggregation is like a manual Histogram aggregation. You need to specify a field
of type number, but you must also specify each interval manually. This is useful if you either
want differently sized intervals or intervals that overlap.
Whenever you enter Range in Kibana, you can leave the upper or lower bound empty to
create an open range (like the above 1000-*).
This aggregation includes the from value and excludes the to value for
each range.
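As a sketch, manual intervals like these can be expressed as an Elasticsearch range aggregation; the field name rule.firedtimes is taken from the examples in this chapter, and the interval boundaries are illustrative:

```json
{
  "aggs": {
    "fired_ranges": {
      "range": {
        "field": "rule.firedtimes",
        "ranges": [
          { "to": 100 },
          { "from": 100, "to": 1000 },
          { "from": 1000 }
        ]
      }
    }
  }
}
```

The last entry, with no "to" value, is the open range described above (1000-*).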
4. Terms
Terms aggregation creates buckets by the values of a field. It is very similar to a classical SQL
GROUP BY. You need to specify a field (which can be of any type); it will create a bucket for
each of the values that exist in that field and add to it all documents that have that value.
Example:
You can run a Terms aggregation on the field geoip.country_name that holds the country
name. It will then have a bucket for each country, and each bucket will hold the documents
of all events from that country.
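The country example translates to roughly this request body (a sketch; size, which limits the number of buckets returned, is an illustrative value):

```json
{
  "aggs": {
    "per_country": {
      "terms": {
        "field": "geoip.country_name",
        "size": 10
      }
    }
  }
}
```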
The aggregation doesn’t always need to match the whole field value. If you let Elasticsearch
analyze a string field, it will by default split its value up by spaces, punctuation marks and
the like, and each part will be its own term, and as such will get its own bucket.
If you use a Term aggregation on a rule, you might assume that you would get nearly one
bucket per event, because two messages rarely are the same. But this field is analyzed in our
sample data, so you would get buckets for ssh, syslog, failure and so on and in each of these
buckets all documents, that had that Term in the text field (even though it doesn’t need to
match the text field exactly).
Elasticsearch can be configured not to analyze fields, or you can configure the analyzer that
is used, to match the behavior of a Terms aggregation to your actual needs. For example,
you could have the text field analyzed so that colons (:) and slashes (/) are not treated as
split separators. That way, a URL would be a single term instead of being split into http, the
domain, the ending, and so on.
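The GROUP BY-like behavior of a Terms aggregation can be sketched as follows (the documents and the flattened field name are made up for the example):

```python
from collections import defaultdict

# Simulates a Terms aggregation: one bucket per distinct field value,
# each bucket holding all documents that carry that value.
def terms_buckets(docs, field):
    buckets = defaultdict(list)
    for doc in docs:
        buckets[doc[field]].append(doc)
    return dict(buckets)

events = [
    {"country": "China", "rule": "ssh"},
    {"country": "Ukraine", "rule": "syslog"},
    {"country": "China", "rule": "failure"},
]
by_country = terms_buckets(events, "country")
# two buckets: "China" with two events, "Ukraine" with one
```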
5. Filters
Filters is a completely flexible (and at times slower than the others) aggregation. You specify
a filter for each bucket, and the bucket will collect all documents that match its associated
filter.
Example:
Create a Filters aggregation with one query being geoip.country_name:(Ukraine OR China)
and a second filter being rule.firedtimes:[100 TO *].
The aggregation will create two buckets: one containing all the events from Ukraine or China,
and one containing all the events with 100 or more rule fired times. It is up to you to decide
what kind of analysis to do with these two buckets.
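Unlike a Terms aggregation, the buckets of a Filters aggregation may overlap. The example above can be sketched like this (field names and sample values are illustrative):

```python
# Simulates a Filters aggregation: one named filter per bucket; a bucket
# collects every document matching its filter, so buckets may overlap.
def filters_buckets(docs, named_filters):
    return {name: [d for d in docs if pred(d)]
            for name, pred in named_filters.items()}

events = [
    {"country": "Ukraine", "firedtimes": 12},
    {"country": "China", "firedtimes": 340},
    {"country": "Brazil", "firedtimes": 150},
]
buckets = filters_buckets(events, {
    "ukraine_or_china": lambda d: d["country"] in ("Ukraine", "China"),
    "fired_100_plus": lambda d: d["firedtimes"] >= 100,
})
# the China event appears in both buckets
```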
6. Significant Terms
The Significant Terms aggregation can be used to find uncommonly common terms in a set
of documents. Given a subset of documents, this aggregation finds all the terms which
appear in this subset more often than could be expected from term occurrences in the whole
document set.
It then builds a bucket for each of the Significant Terms, containing all documents of the
subset in which this term appears. The size parameter controls how many buckets are
constructed, i.e. how many Significant Terms are calculated.
The subset on which to operate the Significant Terms aggregation can be constructed by a
filter or you can use another bucket aggregation first on all documents and then choose
Significant Terms as a sub-aggregation which is computed for the documents in each
bucket.
Example:
You can use the search field at the top to filter the documents to those with
geoip.country_name:China and then select Significant Terms as a bucket aggregation.
In order to deliver relevant results that really give insight into trends and
anomalies in your data, the Significant Terms aggregation needs
sufficiently sized subsets of documents to work on.
7. GeoHash Grid
Elasticsearch can store coordinates in a field of the special type geo_point and group points
into buckets that represent cells in a grid. The Geohash aggregation creates buckets for
values that are close to each other. You must specify a field of type geo_point and a
precision. The smaller the precision, the larger the area each bucket will cover.
Example:
You can create a Geohash aggregation on the coordinates field in the event data. This will
create buckets containing events that are close to each other. The precision determines how
close events must be to share a bucket, and therefore how many buckets are needed for
the data.
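A sketch of the corresponding aggregation body follows; the bucket name and the field name geoip.location are assumptions for the example:

```python
# Sketch of a geohash_grid aggregation body. Elasticsearch supports
# precision levels 1-12; a smaller precision produces fewer, larger cells.
geohash_request = {
    "size": 0,
    "aggs": {
        "events_by_cell": {
            "geohash_grid": {
                "field": "geoip.location",  # must be of type geo_point
                "precision": 3,
            }
        }
    },
}
```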
Metric Aggregations
After you have run a bucket aggregation on your data, you will have several buckets with
documents in them. You can now specify one Metric Aggregation to calculate a single value
for each bucket. The metric aggregation will be run on every bucket and result in one value
per bucket.
The aggregations in this family compute metrics based on values extracted in one way or
another from the documents being aggregated; the values can also be generated using
scripts.
In visualizations, the bucket aggregation is usually used to determine the "first dimension"
of the chart (e.g. for a pie chart, each bucket is one pie slice; for a bar chart, each bucket
gets its own bar). The value calculated by the metric aggregation is then displayed as the
"second dimension" (e.g. for a pie chart, the percentage it has of the whole pie; for a bar
chart, the actual height of the bar on the y-axis).
Since Metric Aggregations mostly make sense when they run on buckets, the examples of
Metric Aggregations will always include a bucket aggregation as well. But of course, you
could also use each Metric Aggregation on any other bucket aggregation; a bucket stays a
bucket.
1. Count
This is not really an aggregation. It returns the number of documents that are in each bucket
as a value for that bucket.
Example:
To calculate the number of events from a specific country, you can use a Terms aggregation
on the field geoip.country_name (which will create one bucket per country) and then run a
Count metric aggregation. Every country bucket will have the number of its events as a
result.
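The effect can be sketched in a few lines of Python (the country values are made-up sample data):

```python
from collections import Counter

# Simulates a Count metric on top of a Terms aggregation: the metric
# value for each country bucket is simply the number of documents in it.
countries = ["United States", "China", "China", "Ukraine", "China"]
events_per_country = Counter(countries)
# e.g. the "China" bucket has a count of 3
```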
2. Average/Sum
For the Average and Sum aggregations you need to specify a numeric field. The result for
each bucket will be the sum of all values in that field or the average of all values in that field
respectively.
Example:
You can use the same country buckets as above and apply an Average aggregation on the
rule fired times count field to see how many rule fired times the events in each country have
on average.
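A per-bucket average can be sketched as follows (the documents and values are made up for the example):

```python
from collections import defaultdict

# Simulates an Average metric on a firedtimes field inside country buckets:
# each bucket's result is the mean of the field across its documents.
def average_per_bucket(docs):
    sums, counts = defaultdict(float), defaultdict(int)
    for doc in docs:
        sums[doc["country"]] += doc["firedtimes"]
        counts[doc["country"]] += 1
    return {c: sums[c] / counts[c] for c in sums}

events = [
    {"country": "China", "firedtimes": 10},
    {"country": "China", "firedtimes": 30},
    {"country": "Ukraine", "firedtimes": 50},
]
avg = average_per_bucket(events)
# China averages 20.0, Ukraine 50.0
```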
3. Max/Min
Like the Average and Sum aggregations, this aggregation needs a numeric field to run on. It
will return the minimum or maximum value that can be found in any document in the bucket
for that field.
Example: If we use the country buckets and run a Maximum aggregation on the rule fired
times, we would get for each country the highest amount of rule triggers an event had in
the selected time period.
4. Unique Count
The Unique Count requires a field and counts how many unique values exist in the
documents of each bucket.
Example:
This time we will use Range buckets on the rule.firedtimes field, meaning we will have
buckets for 1-50, 50-100, and 100+ rule fired times.
If we now run a Unique Count aggregation on the geoip.country_name field, we will get, for
each rule fired times range, the number of different countries the events in that range came
from.
In the sample data this would show us that there are attackers from 8 different countries
with 1 to 50 rule fired times, from 30 different countries with 50 to 100, and from 4 different
countries with 100 or more rule fired times.
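The range-plus-unique-count combination above can be sketched like this (sample documents are made up):

```python
# Simulates a Unique Count (cardinality) metric on the country field
# inside rule.firedtimes range buckets (from inclusive, to exclusive).
events = [
    {"firedtimes": 12, "country": "China"},
    {"firedtimes": 30, "country": "Ukraine"},
    {"firedtimes": 45, "country": "China"},
    {"firedtimes": 70, "country": "Brazil"},
    {"firedtimes": 140, "country": "China"},
]
ranges = {"1-50": (1, 50), "50-100": (50, 100), "100-*": (100, None)}
unique_countries = {
    label: len({e["country"] for e in events
                if e["firedtimes"] >= lo and (hi is None or e["firedtimes"] < hi)})
    for label, (lo, hi) in ranges.items()
}
# "1-50" contains two distinct countries, "50-100" one, "100-*" one
```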
5. Percentiles
A Percentiles aggregation is a bit different, since it does not result in one value for each
bucket, but in multiple values per bucket. These can be shown as different colored lines in a
line graph.
When specifying a Percentiles aggregation, you must specify a numeric field and multiple
percentage values. For each specified percentage, the result is the value below which that
percentage of documents falls.
Example:
You specify a Percentiles aggregation on the rule fired times count field and specify the
percentile values 1, 50, and 99. This will result in three aggregated values for each bucket.
Let’s assume that we have just one bucket with events in it:
• The 1 percentile result (and e.g. the line in a line graph) will have the value 7. This
means that 1% of all the events in this bucket have a rule fired times count with 7 or
below.
• The 50-percentile result is 276, meaning that 50% of all the events in this bucket have
a rule fired times count of 276 or below.
• The 99 percentile result has a value of 17000, meaning that 99% of the events in the
bucket have a rule fired times count of 17000 or below.
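The idea of "the value below which P% of documents fall" can be sketched with a simple nearest-rank percentile; the sample values are made up, and note that Elasticsearch itself uses an approximate algorithm rather than this exact method:

```python
# A rough nearest-rank percentile: the value at the ceil(pct/100 * n)-th
# position of the sorted values. Illustrative only.
def percentile(values, pct):
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[int(rank) - 1]

fired = [1, 2, 3, 5, 8, 13, 100, 276, 500, 17000]
p50 = percentile(fired, 50)  # half of the values are at or below this
```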
8.2.2 Visualizations
SIEMonster's Kibana creates visualizations of the data returned by Elasticsearch queries;
these visualizations can then be used to build Dashboards that display related visualizations.
Markdown Widget: A simple widget that can display some markdown text. It can be used to
add help boxes or links to dashboards.
Pie Chart: Displays data as a pie with different slices for each bucket, or as a donut.
Vertical Bar Chart: A chart with vertical bars for each bucket.
Kibana always takes you back to the same visualization you were working on while
navigating between different tabs and the Visualize tab.
As a good practice, you should always save your visualization so that you do not lose it.
Creating a Visualization
When you click on Visualize in the side navigation, you are presented with a list of all the
saved visualizations, which can be edited, and an option to create a new visualization.
When you click the + button, you will need to select the visualization type and then specify
one of the following search queries to retrieve the data for your visualization.
• Click the name of a Saved Search to build the visualization from that saved search.
• To define a new search criterion, select From a New Search and choose the Index that
contains the data you want to visualize. This will open the visualization builder.
o Choose the Metric Aggregation (for example Count, Sum, Top Hit, Unique
Count) for the visualization’s Y-axis, and select the Bucket Aggregation (for
example Histogram, Filters, Range, Significant Terms) for the X-axis.
Visualization Types
In the following section, all the visualizations are described in detail with some examples.
The order is not alphabetical, but one that should make the visualizations more intuitive to
understand. All are based on the Wazuh/OSSEC alerts index. A lot of the logic that applies
to all charts is explained in the Pie Chart section, so you should read that one before the
others.
Pie chart
Visualization Editor for a Pie Chart
There are two icons on the top right of the panel of the
Data tab.
Aggregations
The slice size of a pie chart is determined by the Metrics aggregation. Click on the icon in
the Data tab and select Count from the Aggregation drop-down menu.
Split Chart is a commonly used visualization option. In a Split Chart, each bucket created
by the bucket aggregation gets its own chart. All the charts are placed beside and below
each other and make up the whole visualization. Split Slices is another visualization option,
which generates a slice for each bucket.
Example:
Kibana requires an aggregation and its parameters to add a Split Slices type.
• Expand Split Slices by clicking on the icon
• Select Terms from the Aggregation drop-down menu
• Select rule.level from the Field drop-down menu
• Click on the Apply changes icon
The result above shows that there is one pie slice per bucket (for example per rule level).
Question: How is the size of the slice in the pie determined?
Answer: This is done by the Metric aggregation, which by default is set to Count of
documents. So, the pie now shows one slice per rule.level bucket, and the size of each slice
depends on the number of events with that rule level.
Question: With a Sum metric aggregation across the rule fired times count, why are only
two slices shown in the pie chart?
Answer: This is determined by the Order and Size option in the Bucket aggregation. You
can specify how many buckets you want to see in the chart, and if you would like to see the
ones with the least (bottom) or the highest (top) values.
This order and size are linked to the Metric aggregation at the top. To demonstrate this,
switch the Metric aggregation at the top: when you expand it, you can switch the type to
Sum and the field to rule.firedtimes. You will now get a slice for each level, and its size will
be determined by the sum of the triggers per rule that fired in our time range.
By using the Size option, you can restrict results to only show the top results.
With the Order by drop-down menu, you can also specify another Metrics aggregation that
you want to use for ordering. Some graph types support multiple Metric aggregations; if
you add multiple Metrics aggregations, you will also be able to select in the Order by box
which of these you want to use for ordering.
The Order settings depend on the Metric aggregation that you have selected at the top of
the editor.
Nested aggregations on a Pie Chart
A Pie Chart can use nested bucketing. You can click the Add sub-buckets button to add
another level of bucketing. You cannot use a different visualization type in a sub-bucket. For
example, you cannot add a Split Chart inside a Split Slices visualization, because Kibana
splits the charts first and then uses the sub-aggregation on each chart.
Adding a sub-aggregation of type Split Slices will create a second ring of slices around the
first ring.
In this scenario, Kibana first aggregates via a Terms aggregation on the country code field,
so you have one bucket for each country code with all the events from that country in it.
These buckets are shown as the inner pie, and their size is determined by the selected Metric
aggregation (the Count of documents in each bucket).
Inside each bucket, Kibana now uses the nested aggregation to group by the rule.firedtimes
count in intervals of a thousand. The result is a bucket for each country code and, inside
each of these, buckets for each rule fired times interval.
The size of the inner buckets is again determined by the selected Metric aggregation,
meaning the documents in them are counted as well. In the Pie chart you can see this nested
aggregation as additional slices in the second ring.
If you want to change the bucketing order, meaning in this case that you first want to bucket
the events by their rule.firedtimes and then have buckets for each country inside these, you
can just use the arrows beside the aggregation to move it to an outer or inner level.
There are some options for the Histogram aggregation. You can set whether empty buckets
(intervals in which no documents lie) should be shown. This does not make much sense for
Pie charts: the empty buckets will appear in the legend, but by the nature of a Pie chart their
slices will be 0% large, so you cannot see them. You can also set limits for the minimum and
maximum field values that you want to use.
Click on the Save button on the top right and give your visualization a name.
Coordinate Map
A Coordinate Map is most likely the only useful way to display a Geohash aggregation. When
you create a new coordinate map, you can use Split Chart to create one map per bucket,
and use the Geo Coordinates bucket type.
You must then select a field that contains geo coordinates and a precision. The visualization
will show a circle on the map for each bucket. The circle (and bucket) size depends on the
precision you choose. The color of the circle indicates the actual value calculated by the
Metric aggregation.
Area and Line Charts
Both Area and Line charts are very similar: they are used to display data over time and allow
you to plot your data on the X and Y axes. An Area chart paints the area below the line, and
it supports different methods of overlapping and stacking for the different areas.
Add another sub-aggregation of type Split Area to create multiple colored areas in the
chart. To group by geography, add a Terms aggregation on the field
geoip.country_name.raw. You now have a chart showing the events by country.
In the Metrics and Axes options you can change the Chart Mode, which is currently set to
stacked. This option only applies to the Area chart, since in a Line chart there is no need for
stacking or overlapping areas.
The following view options are also available for these charts:
• Smooth Lines: Tick this box to curve the top boundary of the area from point to point
• Set Y-Axis Extents: Tick this box and enter values in the y-max and y-min fields to
set the Y-axis to specific values.
• Scale Y-Axis to Data Bounds: The default Y-axis bounds are zero and the maximum
value returned in the data. Tick this box to change both upper and lower bounds to
match the values returned in the data
• Order buckets by descending sum: Tick this box to enforce sorting of buckets by
descending sum in the visualization
• Show Tooltip: Tick this box to enable the display of tooltips
There are five different chart modes for Area charts:
1. Stacked
The area for each bucket is stacked upon the area below it. The total number of documents
across all buckets can be read directly from the height of the stacked areas.
2. Overlap
In the Overlap view, areas are not stacked upon each other. Every area begins at the X-axis
and is displayed semi-transparent, so all areas overlap each other. You can easily compare
the values of the different buckets against each other this way, but it is harder to read the
total value of all buckets in this mode.
3. Percentage
The height of the chart will always be 100% for the whole X-axis and only the percentage
between the different buckets will be shown.
4. Silhouette
In this chart mode, a baseline somewhere in the middle of the diagram is chosen, and all
areas evolve from that line in both directions.
5. Wiggle
Wiggle is like the Silhouette mode, but it does not keep a static baseline from which the
areas evolve in both directions. Instead, it recalculates the baseline for each value so that
the change in slope is minimized. This makes it more difficult to see relations between area
sizes and to read the total value than in the other modes.
Multiple Y-axis
Besides changing the view mode, you can also add another Metric aggregation to either
Line or Area charts. That Metric aggregation will be shown with its own color in the same
chart. Unfortunately, all Metric aggregations you add will share the same scale on the Y-axis.
That is why it makes most sense if your Metric aggregations return values of the same
magnitude (for example, one metric with values up to 100 and another with values from 1
million to 10 million will not be displayed very well, since the first metric will barely be visible
in the graph).
Vertical Bar
Stacked: Behaves the same as in the Area chart; it just stacks the bars onto each other.
Percentage: Uses 100% height bars and only shows the distribution between the different
buckets.
Grouped: The only mode that differs from the Area charts. It places the bars for each X-axis
value beside each other.
Metric
A Metric visualization simply displays the result of a Metrics aggregation. No bucketing is
done. It always applies to the whole data set currently selected (you can change the data
set by typing queries into the top box). The only view option is the font size of the displayed
number.
Markdown Widget
This is a very simple widget, which does not do anything with your data. You only have the
view options where you can specify some markdown. The markdown will be rendered in the
visualization. This can be very useful to add help texts or links to other pages to your
dashboards. The markdown you can enter is GitHub flavored markdown.
Data Table
A Data Table is a tabular output of aggregation results. It is basically the raw data that other
visualizations would render into graphs.
We get all the country buckets at the top level; they are presented in the first column of the
table. Since each of these country buckets contains multiple buckets from the nested rule
level aggregation, there are two rows for each country, i.e. one row with the country in front
for every bucket of the nested aggregation. The first two rows are both for United States,
one for each sub-bucket of the nested aggregation. The result of the metric aggregation is
shown in the last column. If you add another nested aggregation, you will see that these
tables quickly become large and confusing.
Queries in Visualizations
Queries can be entered in a specific query language in a search box at the top of the page.
This also works for visualizations. You can enter any query, and it will be used as a filter on
the data before the aggregation runs.
Debugging Visualizations
Kibana offers some debugging output for your visualizations. On the Visualize page, you
can see a small upward-pointing arrow below the visualization preview (you will also see
this on dashboards, below the visualizations). Clicking it reveals the debug panel, which has
several tabs at the top.
Table
The Table tab shows the results of the aggregation as a data table visualization. It is the raw
data the way Kibana sees it.
Request
The Request tab shows the raw JSON of the request that was sent to Elasticsearch for this
aggregation.
Response
Shows the raw JSON response body that Elasticsearch returned for the request.
Statistics
Shows statistics about the call, like the duration of the request and the query, the number
of documents that were hit, and the index that was queried.
1. Click on Visualize from the side navigation and then click the + button.
7. Try increasing the number of fields by entering 10 in the Size field and clicking the
Apply changes button.
8. In the Custom Label field, enter Timestamp and click the Apply changes button.
8.3 Dashboard
A Dashboard displays different visualizations and maps. Dashboards allow you to use a
visualization on multiple dashboards without having to copy it around. Editing a
visualization automatically changes every Dashboard that uses it. Dashboard content can
also be shared.
8.3.1 Exercise: Creating a new Dashboard
4. You can click the resize control on the lower right of a panel and drag it to the new
dimensions to resize the panel.
5. You can move these panels around by dragging them from the panel header.
6. If you want to delete a panel from the dashboard, click the gear icon on the upper
right and select Delete from dashboard.
7. Click the time picker icon in the menu bar to define the value for Refresh every. This
is especially useful when you are viewing live data coming into the system.
8. Click Share in the menu bar, select Embed code, and click Copy iFrame code. You can
now embed the Dashboard in a web application.
If you copy the link written in the src=”…” attribute and share this, your
users will not have the option to modify the dashboard. This is not a
security feature, since a user can simply remove the embed part from the
URL. However, it can be helpful if you want to share links with people
who should not modify the dashboards by mistake.
The Short URL option makes the link easier to share.
9. Once you have finished all the visualizations, click the Save button in the menu bar
to save your Dashboard.
8.4 Alerting
Open Distro for Elasticsearch allows you to monitor your data and send alerts to your
stakeholders automatically. It is easy to set up and manage, and it uses the Kibana interface
together with a powerful API.
The Alerting feature allows you to set up rules so that you can be notified when something
of interest changes in your data. Anything you can query on, you can build an alert on. The
Alerting feature notifies you when data from one or more Elasticsearch indices meets certain
conditions. For example, you might want to notify a Slack channel if your application logs
more than five HTTP 503 errors in 30 minutes, or you might want to page a developer if no
new documents have been indexed in the past 20 minutes.
8.4.1 Monitor
A job that runs on a defined schedule and queries Elasticsearch. The results of these queries
are then used as input for one or more triggers.
With Open Distro for Elasticsearch, you can easily create monitors using the Kibana UI with
a simple visual editor or with an Elasticsearch query. This gives you the flexibility to query
the data most interesting to you and receive alerts on it. For instance, if you are ingesting
access logs, you can choose to be notified when the same user logs in from multiple
locations within an hour, enabling you to proactively address possible intrusion attempts.
3. Define the frequency in the Schedule
section of the Configure Monitor
screen.
Monitors can run at a variety of fixed intervals (e.g. hourly, daily, etc.);
however, the schedule can also be customized using custom cron
expressions. Monitors use the Unix cron syntax and support five fields:
minute, hour, day of month, month, and day of week.
Example:
The following expression translates to “every Monday through Friday at 10:45 AM”:
• 45 10 * * 1-5
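How the five cron fields of that expression break down can be sketched with a minimal parser (an illustration of the field order only, not a full cron implementation):

```python
# Splits a five-field Unix cron expression into named fields.
# Minimal illustration; it does not validate ranges, steps, or lists.
def parse_cron(expr):
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("expected 5 fields: minute hour day-of-month month day-of-week")
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, fields))

schedule = parse_cron("45 10 * * 1-5")
# minute 45, hour 10, any day of month/month, weekdays Monday-Friday
```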
4. From the How do you want to define the monitor drop-down menu, select Define
using visual graph. This will define the Monitor visually.
5. From the Index drop-down menu, select any of the wazuh-alerts indices (for example,
wazuh-alerts-3.x-2019.06.26). These indices are time-based: a new index is created
every day, and its name is based on the day it was created. Wildcards can also be
used for alerting.
7. Under Match the following condition, click FOR THE LAST and select a time duration
to filter the data further (for example, select 10 days).
9. A complete history of all alert executions is indexed in Elasticsearch for easy tracking
and visualization. This can help you answer questions like: Are my alerts executing?
What are my active alerts? Which alerts have been acknowledged or triggered? What
actions were taken?
Create Triggers
10. Creating a trigger is the next step in creating a Monitor. Under Trigger name, specify
the name of the trigger (for example, you can name this trigger Flatline).
11. From the Severity level drop-down menu, select 1. Severity levels help to manage
alerts. A trigger with a low severity level (for example 5) might message a chat room,
whereas a trigger with a high severity level (for example 1) might page a specific
individual.
12. Under Trigger condition, specify the threshold for the aggregation and timeframe
selected (for example, select IS BELOW 5).
The line moves up and down as you increase and decrease the
threshold. Once this line is crossed, the trigger evaluates to true.
Configure Actions
The final step in creating a Monitor is to add one or more actions. Actions send notifications
when trigger conditions are met and support Slack, Amazon Chime, and Webhooks.
13. In the Action name field, specify Flatline Action as the action name.
15. In the Message subject field, specify Wazuh Flatline and click Create.
For Trigger condition, specify a Painless script that returns true or false.
Painless is the default Elasticsearch scripting language and has a syntax
similar to Groovy.
18. Test the script using the Run button. A return value of true means the trigger
condition has been met, and the trigger should execute its actions.
2. Click the Index Permissions tab as shown and click Add index permissions.
3. In the Index patterns field, type .opendistro-alerting-alerts and click Add index
pattern.
4. Under the Permissions: Action Groups section, click Action Group, and select crud
from the drop-down menu and click Add Action Group.
6. Navigate to Security > Role Mappings and click to add a new role mapping.
From the Role drop-down menu, select the alerting-alert role that you created in this
exercise. You can now map this role to the desired Users or Backend roles by clicking
Add User or Add Backend Role respectively.
1. Navigate to Security > Roles and then click to add a new Role. Specify alerting-
monitors as the name for this role.
2. Click the Index Permissions tab as shown and click Add index permissions
3. In the Index patterns field, type .opendistro-alerting-config and click Add index
pattern.
4. Under the Permissions: Action Groups section, click Action Group, and select crud
from the drop-down menu and click Add Action Group.
5. Navigate to Security > Role Mappings and click to add a new role mapping.
From the Role drop-down menu, select alerting-monitors role that you have created
in this exercise. You can now map this role to the desired Users or Backend roles by
clicking on Add user or Add Backend Role respectively.
6. Click the Index Permissions tab as shown and click Add index permissions
2. In the Index patterns field, type .opendistro-alerting-alerts and click Add index
pattern.
3. Under the Permissions: Action Groups section, click Action Group, and select read
from the drop-down menu and click Add Action Group.
4. Navigate to Security > Role Mappings and click to add a new role mapping.
From the Role drop-down menu, select alerting-read-only Role that you have created
in this exercise. You can now map this role to the desired Users or Backend roles by
clicking Add user or Add Backend Role respectively.
8.5 Wazuh
Wazuh is an open source, enterprise-ready security monitoring solution used for security
visibility, threat detection, infrastructure monitoring, compliance, and incident response.
The Wazuh dashboard is used to manage agents that read operating system and application
logs and securely forward them to a central manager for rule-based analysis and storage.
The Wazuh rules help bring to your attention application or system errors,
misconfigurations, attempted and/or successful malicious activities, policy violations, and a
variety of other security and operational issues.
The dashboard above shows that 6,937 alerts were generated and none of them were level
12 or above. More detailed information about these alerts is displayed under the Alerts
summary section.
Each event on the Wazuh Agent is assigned a certain severity level, with
1 as the default. All events from this level up will trigger an alert in the
Wazuh Manager.
To explore these alerts in more detail, click Discover on Dashboard’s menu bar and then
expand one of the events to view the list of data fields used in that event. This page displays
information in an organized way, allowing filtering by different types of alert fields, including
compliance controls.
For example, in the event below we can find where this message is located in the system
through the location field, what the CVE code for the vulnerability is using the rule.info
field, and what the IP address of the attacker is using the srcip field.
8.5.2 Wazuh: PCI DSS
The Payment Card Industry Data Security Standard (PCI DSS) is a common proprietary IT
compliance standard for organizations that process major credit cards such as Visa,
MasterCard, and American Express. It was developed to encourage and enhance cardholder
data security and to facilitate the adoption of consistent data security measures globally.
The standard was created to increase control of cardholder data and so reduce credit card
fraud.
It applies to all merchants and service providers that process, transmit, or store cardholder
data. If your organization handles card payments, it must comply or risk suffering financial
penalties or even the withdrawal of the facility to accept card payments.
On the Wazuh Dashboard, click PCI DSS. The PCI DSS dashboard opens, showing the data
related to the Agents, PCI Requirements, and Alerts summary. Wazuh supports PCI DSS by
performing log analysis, file integrity checking, policy monitoring, intrusion detection,
real-time alerting, and active response. The Dashboard can be filtered by selecting different
PCI DSS requirements.
Under the Alerts summary section on PCI DSS Dashboard, data shows that the Log file rotated
attack is impacting 10.5.2 and 10.5.5 controls. 10.5.2 protects audit trail files from
unauthorized modification, while 10.5.5 uses file integrity monitoring or change detection
software on logs to ensure that existing log data cannot be changed without generating
alerts (although new data being added should not cause an alert).
Controls like these can help you to log data like invalid login access attempts, multiple invalid
login attempts, privilege escalations, and changes to accounts. In order to achieve this, PCI
DSS tags are added to OSSEC log analysis rules, mapping them to the corresponding
requirement(s).
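As a sketch of that tagging, a custom rule might carry its PCI DSS requirement numbers in its group field. The rule ID, level, and parent SID below are illustrative and not part of the shipped ruleset:

```xml
<!-- Illustrative sketch only: rule id, level, and if_sid are assumptions -->
<rule id="100105" level="10" frequency="8" timeframe="120">
  <if_matched_sid>5716</if_matched_sid>
  <description>Multiple failed SSH logins from the same source.</description>
  <!-- PCI DSS tags map this rule to requirements 10.2.4 and 10.2.5 -->
  <group>authentication_failures,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>
```

Alerts generated by a rule tagged this way can then be filtered on the PCI DSS dashboard by requirement number.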
• Central Manager Component: Receives and monitors the incoming log data
• Agents: Collect and send information to the central manager
Wazuh takes advantage of its file integrity monitoring and access control capabilities, coupled with new tagging in the Wazuh ruleset. Rules that support a specific GDPR technical requirement carry a tag describing it.
Wazuh offers extensive support for GDPR compliance, but it can do much more. Wazuh will
help you gain greater visibility into the security of your infrastructure by monitoring hosts at
the operating system and application levels.
This solution, based on lightweight multi-platform agents, provides:
This diverse set of capabilities is provided by integrating OSSEC, OpenSCAP and Elastic
Stack into a unified solution and simplifying their configuration and management. Wazuh
provides an updated log analysis ruleset and a RESTful API that allows you to monitor the
status and configuration of all Wazuh agents. It also includes a rich web application (fully
integrated as a Kibana app) for mining log analysis alerts and for monitoring and managing
your Wazuh infrastructure.
The syntax used for rule tagging is gdpr_ followed by the chapter, article and, where
appropriate, the section and paragraph to which the requirement belongs. (e.g.
gdpr_II_5.1.f).
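In a rule definition this tagging appears in the rule's group field; a hypothetical fragment:

```xml
<!-- Hypothetical fragment: only the gdpr_ group-tag syntax is the point here -->
<group>syscheck,gdpr_II_5.1.f,</group>
```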
Click Discover on Dashboard’s menu bar and then expand one of the events to view the list
of data fields used in that event. In the event below we can find that the rule.gdpr field for
example has values of II_5.1.f and IV_35.7.d.
As we can observe, certain requirements for GDPR compliance are purely formal and leave no room for support at the technical level. However, Wazuh offers a wide range of solutions to support most of the technical requirements of GDPR.
Data protection and file sharing technologies that meet data protection requirements are also necessary, as it is vitally important to know the purpose of the data processing and whether the data processor, in the case of third parties, is authorized to perform it.
Wazuh’s File Integrity Monitoring (FIM) watches specified files and triggers alerts when these
files are modified. The component responsible for this task is called Syscheck. This
component stores the cryptographic checksum and other attributes of a known good file or
Windows registry key and regularly compares it to the current file being used by the system,
looking for changes. Multiple configurations are possible: monitoring in real time, at set intervals, only specific targets, and so on. In the same way that personal data files are monitored, Wazuh can monitor shared files to make sure they are protected.
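A minimal ossec.conf sketch of such a Syscheck configuration (the monitored paths are examples; adjust them to where personal data and shared files actually live):

```xml
<!-- Sketch of an ossec.conf fragment; the paths are illustrative assumptions -->
<syscheck>
  <!-- watch system binaries and a shared-files location in real time -->
  <directories check_all="yes" realtime="yes">/etc,/usr/bin,/usr/sbin</directories>
  <directories check_all="yes" realtime="yes">/srv/shared</directories>
  <!-- scan frequency in seconds for non-realtime checks (12 hours) -->
  <frequency>43200</frequency>
</syscheck>
```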
Concept: Document and record all data processing (audit logs and events).
Wazuh facilitates the documentation of a large amount of information about file access and security. It can store all the events that the manager receives in archived logs. In addition to storing alerts in alert logs, it can use further logs and databases for various purposes, such as audits.
unauthorized network access, active advanced persistent threats and verification of the
correct operation of all components.
Security tools are necessary to prevent the entry of unwanted data types and malicious threats, and to ensure that endpoints are not compromised when requesting access to the network, system, and data. Anti-malware and anti-ransomware are needed to ensure the integrity, availability, and resilience of data systems, and to block and prevent malware and ransomware threats from entering devices.
Behavioral analysis services that use machine intelligence to identify anomalous user activity on the network may be required to provide early visibility and to alert when employees turn malicious. Such tools can also highlight unusual activity, such as an employee logged on to devices in two different countries, which almost certainly indicates a compromised account.
Anomaly detection refers to the action of finding patterns in the system that do not match
the expected behavior. Once malware (e.g., a rootkit) is installed on a system, it modifies the
system to hide itself from the user. Although malware uses a variety of techniques to
accomplish this, Wazuh uses a broad-spectrum approach to find anomalous patterns that
indicate possible intruders. The main component responsible for this task is Rootcheck.
However, Syscheck also plays a significant role.
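A minimal ossec.conf sketch enabling Rootcheck (the database paths shown are the stock locations shipped with OSSEC/Wazuh; verify them on your installation):

```xml
<!-- Sketch of an ossec.conf rootcheck fragment; paths assume a default install -->
<rootcheck>
  <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
  <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
  <!-- how often to run the anomaly scan, in seconds -->
  <frequency>43200</frequency>
</rootcheck>
```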
It is worth highlighting the ability to detect vulnerabilities. Agents are able to natively collect a list of installed applications and send it periodically to the manager, where it is stored in local SQLite databases, one per agent. In addition, the manager builds a global vulnerability database from public OVAL CVE repositories and cross-correlates this information with each agent's application inventory data.
8.5.5 Wazuh: Ruleset
The Wazuh ruleset is used by the system to detect attacks, intrusions, software misuse, configuration problems, application errors, malware, rootkits, system anomalies, and security policy violations. OSSEC provides an out-of-the-box set of rules that we update and augment in order to increase Wazuh's detection capabilities.
On the editor pane, you can type API requests in several ways:
To execute a request, place the cursor on the desired request line and click the play (send request) button. Comments are also supported in the editor pane using the # character at the beginning of the line.
• The editor pane, where you type your REST API command and click the button that sends the query to the Elasticsearch instance or cluster.
• The result pane, which displays the responses to the command.
You can select multiple requests and submit them together. Console sends the requests to Elasticsearch one by one and shows the output in the result pane. Submitting multiple requests is helpful when you are debugging an issue or trying query combinations in multiple scenarios.
The Console maintains a list of the last 500 commands that Elasticsearch
executed successfully. Click History on the top right of the panel to view
your recent commands. If you want to view any request, select that
request and click Apply. Console will add this to the editor pane.
Click on the action icon and select Open documentation to view the documentation for
the Search APIs.
All the cat commands accept a query-string parameter, help, to see all the headers and info they provide, and the /_cat command alone lists all the available commands. The indices command provides a cross-section of each index.
2. Type the following code to add a document to an index, then click the play button.
Optional Task:
Run the GET /_cat/indices command to view the index that you have just created.
5. GET /pdfbook/_search will display the existing pdfbook documents. Copy the _id value of one pdfbook from the result pane; you can then query a specific document by the ID value you just copied.
6. Use the code below to search a pdfbook based on the name of an author.
8. The following code can be used to verify whether the document has been updated.
9. The following code can be used to delete any of the existing documents.
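Since the original request bodies are not reproduced here, the following is a sketch of the kind of Console requests the steps above describe. The pdfbook index name comes from the text; the field names (author, title) and the placeholder ID are assumptions:

```
# create a document in the pdfbook index (Elasticsearch assigns an _id)
POST /pdfbook/_doc
{
  "author": "Jane Doe",
  "title": "Sample PDF Book"
}

# list indices; ?v adds column headers
GET /_cat/indices?v

# list all documents in the index
GET /pdfbook/_search

# fetch one document by the _id copied from the result pane
GET /pdfbook/_doc/<copied-id>

# search by author name
GET /pdfbook/_search
{
  "query": { "match": { "author": "Jane Doe" } }
}

# delete a document by _id
DELETE /pdfbook/_doc/<copied-id>
```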
8.7 Management
The Management module is used to perform runtime configuration of Kibana, including the initial setup and ongoing configuration of index patterns, advanced settings that change the behavior of the Kibana application, and the various objects that you can save throughout Kibana, such as Searches, Visualizations, Index Patterns, and Dashboards.
3. Specify an Index Pattern that matches the name of one or more of your Elasticsearch
indices. Enter the Index Pattern name as wazuh* and click Next steps.
4. Select @timestamp from the Time Filter field name drop-down menu.
The Time Filter will use this field to filter your data by time. You can
choose not to have a time field, but you will not be able to narrow down
your data by a time range.
5. Click Create index pattern. Once you have created an index pattern, you will be
presented with a table of all fields and associated data types in the index.
You can start working with your Elasticsearch data in Kibana after you have created your
Index Pattern. Here are some things to try:
8.7.3 Managing Saved Objects
A Saved Object can be a Search, Dashboard, Visualization, or an Index Pattern. You can view,
edit, delete, import, or export Saved Objects from Management > Saved Objects.
Advanced Settings
Using the Advanced Settings feature, you can edit the settings that control the behavior of Kibana. You can change the default date format, set the default index for Timelion, set the precision for decimal values, or set the default query language.
You can view Advanced Settings from Management > Advanced Settings.
8.8 Security
Open Distro for Elasticsearch includes the Security plugin for authentication and access control. The plugin provides numerous features to help you secure your cluster. The security model can be very granular; for example, you can configure a user to see only certain indices or certain dashboards, or to view a Dashboard without being able to edit it.
Security and access control are managed using different concepts as discussed below:
8.8.1 Permissions
Permissions are individual actions, such as creating an index. Related permissions can be bundled into Action Groups.
8.8.3 Roles
A Security Role defines the scope of a permission or action group on a cluster, index, document, or field. Roles are the basis for access control in Open Distro for Elasticsearch Security. A role specifies which actions its users can take and which indices those users can access; roles control cluster operations, access to indices, and even the fields and documents users can access.
8.8.6 Users
A user makes requests to Elasticsearch clusters. A user typically has credentials including
Username and Password. A user can have zero or more Backend Roles, and zero or more
User attributes.
The Security plugin automatically hashes the password and stores it
in the .opendistro_security index.
Backend Roles are optional and are not the same as security Roles.
Backend roles are external Roles that come from an external
authentication system, for example LDAP or Active Directory. If you are
not using any external system, you can ignore this step.
Attributes are also optional, and they are User properties that you can
use for variable substitution in Index Permissions.
2. Likewise, a mapping of all_access (Role) to admin (Backend role) means that any User
with the Backend Role of admin (from an LDAP or Active Directory server) gains all
the Permissions of all_access after authenticating. You can map each Role to many
Users or Backend Roles.
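Under the hood, this mapping lives in the Security plugin's roles_mapping.yml; a sketch of the entry described above (the exact file layout may vary between versions):

```yml
# Sketch of a roles_mapping.yml entry: any user whose LDAP/AD backend role
# is "admin" is granted the all_access security role after authenticating.
all_access:
  reserved: true
  backend_roles:
  - "admin"
  description: "Map the admin backend role to all_access"
```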
9 Incident Response
TheHive is a scalable, open source and free Security Incident Response Platform. TheHive is
tightly integrated with Malware Information Sharing Platform (MISP) and is designed to
make life easier for SOCs, CSIRTs, CERTs and any information security practitioner dealing
with security incidents that need to be investigated and acted upon swiftly.
TheHive can be synchronized with one or multiple MISP instances to start investigations out of MISP events. Investigation results can also be exported as a MISP event to help detect and react to attacks that have already been dealt with. When integrated with Cortex, TheHive allows security analysts and researchers to analyze hundreds of observables at once using more than a hundred analyzers.
TheHive can be configured to import events from one or more MISP instances using various
filters (tag whitelist, tag blacklist, organization blacklist, max attributes per event).
Cortex integration
TheHive uses Cortex to have access to analyzers and responders.
• Analyzers can be launched against observables to get more details about a given observable
• Responders can be launched against case, tasks, observables, logs, and alerts to execute an
action
• One or multiple Cortex instances can be connected to TheHive
Case Merging
Two (or more) cases can be easily merged together if they relate to the same threat or have
a significant observable overlap.
TheHive supports several authentication methods:
• Active Directory
• LDAP
• API keys
• X.509 SSO
• OAuth 2
• Local authentication
9.1 Collaborate
Collaboration is central to TheHive: it allows multiple SOC and CERT analysts to work on the same case simultaneously. For example, one security analyst may track malware activity in proxy logs while another performs malware analysis as soon as IOCs have been added by co-workers. TheHive's live stream allows everyone to keep an eye on what is happening on the platform in real time.
TheHive's built-in live stream makes observables, tasks, and other real-time information pertaining to new or existing cases available to all team members. Special notifications allow them to handle or assign new tasks, and to preview new MISP events and alerts from multiple sources such as email reports, CTI providers, and SIEMs. They can then import and investigate them right away.
9.2 Elaborate
Every investigation in TheHive corresponds to a case. Cases and associated tasks can be
created from scratch, using a template engine, from MISP events, SIEM alerts, email reports,
or any other significant source of security events.
Metrics and custom fields can be added to the template to drive the team's activity, identify investigations that may take significant time, and seek to automate monotonous tasks through dashboards. Analysts can record their progress, attach important files, add tags, or import password-protected ZIP archives containing malware or suspicious data without opening them.
Each case can be broken down into one or more tasks, each with its own work log. TheHive's template engine can add the same tasks to every case of a given type when it is created. Case templates can be used to link metrics to specific case types in order to drive the team's activity, identify the types of investigation that take significant time, and seek to automate monotonous tasks.
An analyst can be assigned a task, or a team member can take charge of one without waiting for someone to assign it to them.
9.3 Analyze
Hundreds or thousands of observables can be added to each case that you create, or imported directly from a MISP event or any alert sent to the platform. TheHive can be linked to one or many MISP instances, and MISP events can be previewed to decide whether they warrant an investigation. If an investigation is in order, the analyst can add the event to an existing case or import it as a new case using a customizable template. Once investigations are complete, IOCs can be exported to a MISP instance.
SIEM alerts, phishing and other suspicious emails can be sent to TheHive using TheHive4py, a free, open source Python API client for TheHive. These alerts then appear in the Alerts panel along with new or updated MISP events, where they can be previewed, imported into cases, or ignored.
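As a minimal sketch, the body of such an alert can be built as plain JSON before TheHive4py submits it. The field names follow TheHive's alert model, but the title, source, reference, and observable values below are illustrative assumptions:

```python
import json

# Sketch of a TheHive-style alert body as TheHive4py would submit it.
# All concrete values (title, source, sourceRef, the IP) are illustrative.
def build_alert(title, description, source, source_ref, observables):
    """Build a TheHive alert payload ready to be POSTed to the alert API."""
    return {
        "title": title,
        "description": description,
        "type": "external",
        "source": source,
        "sourceRef": source_ref,   # must be unique per source
        "severity": 2,             # 1 = low, 2 = medium, 3 = high
        "tlp": 2,                  # TLP:AMBER
        "artifacts": [
            {"dataType": dtype, "data": data, "tags": tags}
            for dtype, data, tags in observables
        ],
    }

alert = build_alert(
    title="SSH Failed Login",
    description="Multiple failed SSH logins reported by the SIEM",
    source="SIEM",
    source_ref="siem-0001",
    observables=[("ip", "172.16.3.6", ["sshd"])],
)
print(json.dumps(alert, indent=2))
```

An alert built this way shows up in TheHive's Alerts panel once submitted through the client, where it can be previewed and imported into a case.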
TheHive has the ability to automatically identify observables that have already been seen in previous cases. Observables can also be associated with a Traffic Light Protocol (TLP) level, a Permissible Actions Protocol (PAP) level, or the source that provided or generated them, using tags. The analyst can also easily mark observables as IOCs, isolate them using a search query, and export them for searching in a SIEM or other data stores.
9.4 Respond
Analysts can call Cortex responders to contain an incident, eradicate malware, and perform other orchestration tasks. For example, they can call a responder to reply to a suspicious email notification from TheHive, block a URL at the proxy level, or gather evidence from a compromised endpoint.
9.5 Exercise: Adding a User
1. To access the user management page, open the Admin drop-down menu on the top right of the screen and select Users.
2. The User management page displays all the existing users of the system.
5. From the Roles drop-down menu, select the required role. Click Save user.
1. Click Alerts on the top navigation bar. The Alerts list page opens with the list of alerts in the
system. This is a list of unassigned alerts waiting to be picked up by any available analyst.
3. Alert Preview window opens showing you the alert details and a list of extracted
Observables.
4. The Import alert as drop-down menu on the bottom right allows you to assign a Case template that will be used for case creation. Templates contain lists of predetermined tasks that should be performed on the alert. To create an empty case, click Yes, Import to turn the alert into a case that will be assigned to you.
2. Click + New Case on the top navigation bar to create a new case. Create a new case window
opens.
3. In the Title field, type wazuh-alerts-3.x-*_SSH Failed Login. This will serve as the name of the
case.
5. In the Tags field, provide SSH Failed Login, 172.16.3.6, and sshd tags.
6. From the PAP field, select AMBER.
8. In the Description field, enter a relevant and meaningful description. Click + Create case.
TLP is the Traffic Light Protocol which uses 4 color codes to indicate boundaries of how far
outside the original group or recipient the information may be shared.
An example provided by TheHive website is: “For example, a file added as observable can be
submitted to VirusTotal if the associated TLP is WHITE or GREEN. If it’s AMBER, its hash is
computed and submitted to VT but not the file. If it’s RED, no VT lookup is done.”
PAP is the Permissible Actions Protocol which mimics the TLP but indicates to the analyst
how they may use the IOC in investigating the alert. It dictates actions that may be taken
with each IOC, such as active vs passive response.
9. Click on your case to open the main window for the case. Cases have 3 tabs in the main
window (Details, Tasks, and Observables) as well as a live stream on the right-hand side
showing task and status updates from all analysts.
The Details page shows metadata related to the case such as tags, date, severity, related
cases, a description, and TLP and PAP designations.
Any tasks designed by an Analyst, or those defined in an attached Case template are
displayed under the Tasks tab. Tasks should be used to track the actions taken to answer
investigative questions. Tasks that you accept, or which are auto-assigned to you show up
in My tasks on the top navigation bar. Tasks that are not assigned are displayed in Waiting tasks on the top navigation bar. All the extracted Observables and their types are displayed under the Observables tab.
12. In the Value field, type 1.1.1.1, and in the Tags field, add test tag. Click Create observable(s).
You only have to specify a value for either Tag or Description, not both.
13. Click the observable value 1[.]1[.]1[.]1 under Value/Filename to open the detailed page.
14. The detailed page shows Metadata, links to other cases where IOC is also present, and an
Analysis section to run the Analyzers for enrichment. Click Run all.
15. Click on the Observables tab after running the Analyzer. You should now see a list of tags; this is your enrichment, which gives you more actionable data.
16. Switch back to the detailed page and click on any date under the Last analysis column to
view a more detailed report of the scan results.
Case Closure
17. When you are ready to close the case, click Close on the main title bar. The Close Case screen opens.
You will need to provide the following details on the Case basic information page.
• Template name
• Title prefix
• Severity
• TLP
• PAP
• Tags
• Description
Along with the description, you will also need to provide a Task to outline the investigative steps for this alert. This provides a consistent approach to handling events, since the Task List becomes your investigative playbook. These should also reflect the actions defined in your SOPs.
In addition to the above, you will also need to provide values for Metrics and Custom fields. Items you select here must first be defined in the respective Case metrics and Case custom fields sections under the Admin drop-down menu on the top right of the screen.
A Case metric is just a variable defined to increment. Metrics can also be displayed in graphs
on the Dashboard.
A Case custom field allows you to add additional fields for an Analyst to provide the response
as either a string drop-down, number, Boolean, or a date.
9.10 Dashboards
TheHive allows you to create meaningful dashboards to drive your activity and support budget requests.
a. Case statistics
b. Alert statistics
c. Job statistics
d. Observable statistics
3. In the Title field, type the name of your Dashboard. In the Description field, enter a relevant
and meaningful description.
4. From the Visibility drop-down menu, select Shared and click Create.
5. The Dashboard is built through a drag-and-drop interface.
6. Drag and drop the Donut chart onto the empty space to add this chart to the Dashboard. The No title configuration window opens.
8. From the Entity drop-down menu, select Case. From the Aggregation Field drop-down
menu, select status. This will show all the cases by status. Click Apply.
9. Repeat the process to drag another Donut chart and drop it on the existing one.
10. Specify the Title as Case Tags, Entity as Case, and Aggregation Field as tags.
11. Drag and drop Row this time, and then drag and drop Bar type chart on the new row.
12. Specify the Title as Case severity history, Entity as Case, Date Field as createdAt, Interval as By week, and Category Field as severity. Click Apply.
13. Click Save once you have made all the changes in your Dashboard. Click Edit before making
any changes to your existing Dashboard.
10 Analyzers
Cortex is free, open source software that analyzes observables, at scale, by querying a single tool instead of many. It addresses a common problem frequently encountered by SOCs, CSIRTs, and security researchers in the course of threat intelligence, digital forensics, and incident response.
Observables, such as IP and email addresses, URLs, domain names, files or hashes, can be
analyzed one by one or in bulk mode using a Web interface. Analysts can
also automate these operations thanks to the Cortex REST API.
Cortex helps you to analyze different types of observables using more than 35 analyzers.
Most analyzers come in different flavors. For example, using the VirusTotal analyzer, you can
submit a file to VT or simply check the latest available report associated with a file or a hash.
Cortex4py is a Python API client for Cortex. It allows analysts to automate observable analysis and to submit observables in bulk mode through the Cortex REST API from alternative SIRP platforms, custom scripts, or MISP.
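Under the hood, such automation boils down to requests like the following sketch, based on Cortex REST API conventions; the analyzer ID and the observable are placeholders:

```
POST /api/analyzer/<analyzer-id>/run
{
  "data": "8.8.8.8",
  "dataType": "ip",
  "tlp": 2
}
```

The response contains a job ID that can be polled for the analyzer's report.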
Cortex has many analyzers and a RESTful API that makes observable analysis a breeze, particularly when called from TheHive. TheHive can also leverage Cortex responders to perform
specific actions on alerts, cases, tasks and observables collected in the course of the
investigation: send an email to the constituents, block an IP address at the proxy level, notify
team members that an alert needs to be taken care of urgently and much more.
Cortex allows you to create and manage multiple organizations, manage the associated
users and give them multiple roles.
Setting up an Organization
The default cortex organization cannot be used for any purpose other than managing global administrators (users with the superAdmin role), organizations, and their associated users. It cannot be used to enable, disable, or configure analyzers. To do so, you need to create your own organization inside Cortex by clicking the Add organization button.
Enable and Configure Analyzers
By default, and within every freshly created organization, all analyzers are disabled. If you
want to enable and configure them, use the Web UI (Organization > Configurations and
Organization > Analyzers tabs).
All analyzer configuration is done using the Web UI, including adding API keys and
configuring rate limits.
You can log in using this user account. Notice that the default cortex organization has been
created. If you open this organization then you will be able to see your user account, a Cortex
global administrator.
10.3 Cortex: Create an Organization
The default Cortex organization can only be used to manage global administrators (users with the superAdmin role), organizations, and their associated users. If you want to configure, enable, or disable an Analyzer:
• Click + Add organization
• Specify Organization’s name
• Provide suitable Description
User accounts cannot be deleted once created, but they can be locked by an orgAdmin or a superAdmin. Once locked, they cannot be used, but they can be unlocked by either an orgAdmin or a superAdmin.
read
• This role cannot be used in the default cortex organization, which can only contain super administrators
• This role cannot submit jobs
• The user can access all the jobs that have been performed by their organization, including their results
analyze
• This role cannot be used in the default cortex organization, which can only contain super administrators
• This role can submit a new job using one of the configured analyzers for their organization
orgAdmin
• This role cannot be used in the default cortex organization, which can only contain super administrators
• A user with the orgAdmin role can manage users within their organization
• They can add users and give them read, analyze and/or orgAdmin roles
• This role also permits configuring analyzers for the organization
superAdmin
• This role is incompatible with all the other roles listed above
• It can be used solely for managing organizations and their associated users
• When you install Cortex, the first user that is created will have this role
• Several users can have it as well, but only in the default cortex organization, which is automatically created during installation
The table below summarizes the capabilities of these roles.
Responders are programs that perform different actions and apply to alerts, cases, tasks,
task logs, and observables.
Analyzers and responders can be configured, enabled, or disabled only
by orgAdmin users.
Analyzers Config Tab
1. Open the Organization page and click the Analyzers Config tab. The configuration of all the available Analyzers is defined here, including settings that are common to all the flavors of a given analyzer.
2. On the Organization > Analyzers tab, orgAdmin users can configure, enable, or disable specific analyzer flavors. They can override the global configuration inherited from the Organization > Analyzers Config tab and add additional, non-global configuration that some analyzer flavors might need to work correctly.
4. On the Organization > Responders tab, orgAdmin users can configure, enable, or disable specific responder flavors. They can override the global configuration inherited from the Organization > Responders Config tab and add additional, non-global configuration that some responder flavors might need to work correctly.
Users with the superAdmin role cannot see the Jobs History.
A user that has an analyze role can submit a new job using one of the configured analyzers for their organization. Click + New Analysis to submit a new job.
11 Threat Intel
Malware Information Sharing Platform (MISP) is an open source software solution used to collect, store, distribute, and share cyber security indicators and threat information about cyber security incident analysis and malware analysis.
The objective of MISP is to promote the sharing of structured information within the security
community and abroad. MISP provides functionalities to support the exchange of
information but also the consumption of said information by Network Intrusion Detection
Systems (NIDS) and log analysis tools like Security Information and Event Management
(SIEM).
MISP is accessible from different interfaces, such as a web interface (for analysts or incident handlers) or a REST API (for systems pushing and pulling IOCs). The inherent goal of MISP is to be a robust platform that ensures smooth operation, from revealing and maturing threat information through to exploiting it.
There are many different types of users of an information sharing platform like MISP.
The objectives of MISP, the open source threat intelligence and sharing platform, are to:
• Facilitate the storage of technical and non-technical information about seen malware and
attacks
• Automatically create relations between malware and their attributes
• Store data in a structured format (allowing automated use of the database to feed detection
systems or forensic tools)
• Generate rules for Network Intrusion Detection Systems (NIDS) that can be imported into IDS systems (e.g. IP addresses, domain names, hashes of malicious files, patterns in memory)
• Share malware and threat attributes with other parties and trust-groups
• Improve malware detection and reversing to promote information exchange among organizations (e.g. avoiding duplicate work)
• Create a platform of trust - trusted information from trusted partners
• Store locally all information from other instances (ensuring confidentiality on queries)
11.1 Feeds
Feeds contain indicators that can be automatically imported into MISP at regular intervals; they can be either remote or local resources. Such indicators contain patterns that can be used to detect suspicious or malicious cyber activity.
• MISP standardized format which is the preferred format to benefit from all the MISP
functionalities
• CSV format, which allows you to select the columns that are to be imported
You can easily import any remote or local URL to store feeds in your MISP instance. Feed descriptions can also easily be shared among different MISP instances: you can export a feed description as JSON and import it into another MISP instance.
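As a sketch, an exported feed description is a JSON document along these lines. The CIRCL OSINT feed shown is MISP's well-known default; treat the exact field set as an assumption that may vary between MISP versions:

```json
[
  {
    "Feed": {
      "name": "CIRCL OSINT Feed",
      "provider": "CIRCL",
      "url": "https://www.circl.lu/doc/misp/feed-osint",
      "source_format": "misp",
      "enabled": true,
      "caching_enabled": true
    }
  }
]
```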
• Caching enabled: To enable a feed for caching, check the caching enabled field to automatically benefit from the feed in your local MISP instance
• Input Source: Drop-down from Input Source menu and select either:
• URL: URL of the feed, where it is located (for Local hosted files, point to the manifest.json e.g.
/home/user/feed-generator/output/manifest.json)
• Source Format: Drop-down from Source Format menu and select either:
o MISP Feed: The source points to a list of json formatted like MISP events
o Freetext Parsed Feed:
▪ Target Event: This is the event that gets updated with the data from the
feed. Target Event can be either New Event Each Pull (a new event will be
created each time the feed is pulled) or Fixed Event (a single event will be
updated with the new data; this event is determined by the next field)
▪ Target Event ID: The ID of the event where the data will be added (if not set,
the field will be set the first time the feed is fetched)
▪ Exclusion Regex: Add a regex pattern for detecting IoCs that should be
skipped (useful, for example, to exclude any references to the actual report
or feed)
▪ Auto Publish: If checked, events created from the feed will be automatically
published
▪ Override IDS Flag: If checked, the IDS flag will be set to false
▪ Delta Merge: If checked, only data from the last fetch is kept; older
data is deleted
o Simple CSV Parsed Feed:
▪ Target Event: This is the event that gets updated with the data from the
feed. Target Event can be either New Event Each Pull (a new event will be
created each time the feed is pulled) or Fixed Event (a single event will be
updated with the new data; this event is determined by the next field)
▪ Target Event ID: The ID of the event where the data will be added (if not set,
the field will be set the first time the feed is fetched)
▪ Exclusion Regex: Add a regex pattern for detecting IoCs that should be
skipped (useful, for example, to exclude any references to the actual report
or feed)
▪ Auto Publish: If checked, events created from the feed will be automatically
published
▪ Override IDS Flag: If checked, the IDS flag will be set to false
▪ Delta Merge: If checked, only data from the last fetch is kept; older
data is deleted
• Distribution: Defines the distribution option that will be set on the events created by the
feed
• Filter rules: Allow you to define which organizations or tags are allowed or blocked
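A sketch of what such a filter-rules structure might look like, allowing one tag and blocking one organization. The OR/NOT layout and the names used are assumptions for illustration and are not guaranteed to match MISP's exact rule schema.

```python
import json

# Illustrative filter rules: allow events tagged "tlp:white" and block one
# organization. The OR/NOT structure is an assumption used only to show
# the allow/block idea described in the text.
filter_rules = {
    "tags": {"OR": ["tlp:white"], "NOT": []},
    "orgs": {"OR": [], "NOT": ["UntrustedOrg"]},
}
print(json.dumps(filter_rules))
```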
11.2 Events
MISP events are encapsulations for contextually linked information. The MISP interface
allows the user to get an overview of, or search for, events and attributes of events that
are already stored in the system in various ways.
On the left pane, click List Events. The Events page opens, displaying a list of the last 60
events.
Published: Already published events are marked by a checkmark, and the unpublished
events are marked by a cross
Owner Org: The organization that owns the event on this instance. This field is only visible
to administrators
ID: The ID number of the event, assigned by the system
Info: A short description of the event
Actions: The controls that allow the user to either view or modify the event. The available
Actions are:
• Publish Event
• Edit
• Delete
• View
To create an event, click Add Event on the left pane; the Add Event page opens. During this
first step, you will create a basic event without any actual attributes, storing only general
information such as a description, the time, and the risk level of the incident.
Provide the following data in the Add Event page.
Distribution: Controls the visibility of the event once it is published, as well as whether the
event will be synchronized to other servers. The following options are available in the
drop-down menu:
• Your organization only: This setting will only allow members of your organization to see
this event
• This community only: Users that are part of your MISP community will be able to see this
event. This includes your own organization, organizations on the MISP server, and
organizations running MISP servers that synchronize with this server
• Connected communities: Users that are part of your MISP community will be able to see
this event. This includes all organizations on this MISP server, all organizations on MISP
servers synchronizing with this server, and the hosting organizations of servers that connect
to those aforementioned servers (so basically any server that is 2 hops away from this one).
Any other organizations connected to linked servers that are more than 2 hops away will
be restricted from seeing the event.
• All communities: This will share the event with all MISP communities, allowing the event to
be freely propagated from one server to the next.
Threat Level: This field indicates the risk level of this event. The following options are
available in the drop-down menu:
• Low: General mass malware.
• Medium: Advanced Persistent Threats (APT)
• High: Sophisticated APTs and 0day attacks
Analysis: Indicates the current stage of the analysis for this event, with the following possible
options:
• Initial: The analysis is just beginning
• Ongoing: The analysis is in progress
• Completed: The analysis is complete
Event Info: A brief description of the malware/incident, starting with the internal
reference.
11.2.2 Add Attributes to the Event
Once the event is created, the next step is to add attributes. This can be done by adding
them manually or importing the attributes from an external format (OpenIOC,
ThreatConnect). Click + on the event screen that you have created to add an attribute.
Keep in mind that the system searches for regular expressions in the value field of all
attributes when entered, replacing detected strings within it as set up by the server's
administrator (for example to enforce standardized capitalization in paths for event
correlation or to bring exact paths to a standardized format).
Category: This drop-down menu explains the category of the attribute, meaning what
aspect of the malware this attribute is describing
Type: Categories determine what aspect of an event they are describing, while the Type
explains by what means that aspect is being described. As an example, the source IP address
of an attack, a source e-mail address, or a file sent through an attachment can all describe
the payload delivery of a malware. These would be the types of attributes with the category
of payload delivery
Distribution: Distribution drop-down menu allows you to control who will be able to see
this attribute
Value: The actual value of the attribute. Enter data that is valid for the chosen attribute
type; for example, for an attribute of type ip-src (source IP address), 1.1.1.1 would be a
valid value
Contextual Comment: You can add comments to the attribute; these will not be used for
correlation and serve purely as an informational field
For Intrusion Detection System: This option allows the attribute to be used as an IDS
signature when exporting the NIDS data, unless it is being overruled by the white-list.
Batch import: If there are several attributes of the same type to enter (such as a list of IP
addresses), it is possible to enter them all into the same value field, separated by a line
break between each value. The system will then create a separate attribute for each line
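The batch-import behaviour described above can be sketched as a line-by-line split; the validation step using the standard ipaddress module is an illustrative addition, not part of the documented feature.

```python
import ipaddress

# Sketch of batch import: one value per line becomes one attribute.
batch_value = "1.1.1.1\n8.8.8.8\n203.0.113.9"

attributes = []
for line in batch_value.splitlines():
    line = line.strip()
    if not line:
        continue
    ipaddress.ip_address(line)  # raises ValueError on a malformed address
    attributes.append({"type": "ip-src", "value": line})

print(len(attributes))  # one attribute per non-empty line
```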
Provide the following data in the Add Attachment(s) page.
Distribution: This drop-down menu allows you to control who will be able to see this
attachment
Contextual Comment: You can add some comments to the attribute that will not be used
for correlation but instead serves as purely an informational field
Upload field: By hitting browse, you can browse your file system and point the uploader to
the file that you want to attach to the attribute
Malware: This check-box marks the file as malware; it will then be zipped and password-
protected, to protect the users of the system from accidentally downloading and
executing the file. Make sure to tick this box if you suspect that the file is infected, before
uploading it
Once you have added all the attributes and attachments that you want to include with the
event, click Publish Event on the left pane of the event screen.
This will alert the eligible users and push the event to the instances that your instance
connects to. There is an alternative way of publishing an event without alerting any other
users: clicking Publish (no email). This should only be used for minor edits (such as
correcting a typo).
To access the list of attributes, click List Attributes on the left pane. The Attributes page
opens, displaying a list of the last 60 attributes.
Event: This is the ID number of the event that the attribute is tied to. If an event belongs to
your organization, then this field will be colored red.
Type: The type of the value contained in the attribute (for example a source IP address)
Value: The actual value of the attribute, describing an aspect, defined by the category and
type fields of the malware (for example 1.1.1.1)
IDS: Shows whether the attribute has been flagged for NIDS signature generation or not
Actions: A set of buttons that allow you to edit or delete the attribute
• For the value, event ID and organization, you can enter several search terms by entering each
term as a new line.
• To exclude things from a result, use the NOT operator (!) in front of the term.
• For string searches (such as searching for an expression or tags), lookups are simple string
matches.
• If you want a substring match, encapsulate the lookup string between "%" characters.
Apart from being able to list all events, it is also possible to search for data contained in the
value field of an attribute, by clicking on the "Search Attributes" button.
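The lookup rules in the bullets above (exact match by default, "%" for a substring match, "!" for exclusion) can be sketched as follows. This mimics the documented behaviour for illustration; it is not MISP's actual implementation.

```python
# Sketch of the documented lookup rules: plain terms are exact matches,
# "%term%" is a substring match, and a leading "!" negates the term.

def matches(term, value):
    negate = term.startswith("!")
    if negate:
        term = term[1:]
    if term.startswith("%") and term.endswith("%") and len(term) > 1:
        hit = term.strip("%") in value
    else:
        hit = term == value
    return not hit if negate else hit

print(matches("1.1.1.1", "1.1.1.1"))    # exact match -> True
print(matches("%1.1%", "1.1.1.1"))      # substring match -> True
print(matches("!1.1.1.1", "1.1.1.1"))   # negated -> False
```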
12 Metrics
The Metrics module provides performance and health metrics for the backend systems. It
includes a health monitor used to monitor your cluster, stack health, and detailed statistics.
In the web interface, this is available on the Metrics - Grafana Elasticsearch Dashboard.
Grafana allows you to query, visualize, alert on, and understand your metrics no matter
where they are stored. With Grafana, you can create, explore, and share dashboards with
your team and foster a data-driven culture. Supported Data Sources include:
• Graphite
• InfluxDB
• OpenTSDB
• Prometheus
• Elasticsearch
• CloudWatch
• MySQL
• PostgreSQL
The query language and capabilities of each Data Source are very different. Data from
multiple Data Sources can be combined onto a single Dashboard, but each Panel in the
Dashboard is tied to a specific Data Source.
Navigate to Home > Metrics to access the Metrics module. To view the existing Data
Sources or add a new one, click on the Grafana icon at the top left and then click Create
your first data source. The Add data source page opens.
12.2 Metrics: Organization
Grafana supports multiple organizations in order to accommodate a wide variety of
deployment models, including using a single Grafana instance to provide service to multiple,
potentially untrusted, organizations. In many cases, Grafana will be deployed with a single
Organization.
Each Organization contains its own dashboards, data sources, and configuration, which
cannot be shared between organizations. While users may belong to more than one
organization, multiple organizations are most frequently used in multi-tenant deployments.
All dashboards are owned by an organization.
It is important to remember that most metric databases do not provide any sort of per-user
series authentication. Therefore, in Grafana, data sources and dashboards are available to all
users in an Organization.
Grafana supports a wide variety of internal and external ways for users to authenticate
themselves, including its own integrated database, an external SQL server, and an external
LDAP server.
2. Click new user and specify Name, Email, Username, and Password.
3. Click Create, the Users page opens that list the newly added user.
4. Click the newly created user, then enable the Grafana Admin checkbox. A Grafana
Admin user has all permissions; this is different from the Admin role in Grafana.
5. Under the Organizations section, click inside the organization name, type and select
the organization you want to assign this user to.
6. From the Role drop-down menu, select Viewer and click Add.
Navigate to Home > Metrics to open the Metrics Dashboard. You can click on the default
Dashboard to switch to another Dashboard.
The time period for the Dashboard can be controlled by the time picker at the top-right
corner of the Dashboard.
Dashboards can be tagged, and the Dashboard picker provides quick, searchable access to
all Dashboards in an Organization.
Dashboards can utilize Templating to make them more dynamic and interactive, and
Annotations to display event data across Panels. This can help correlate the time series data
in the Panel with other events.
Dashboards (or a specific Panel) can be shared easily in a variety of ways. You can send a
link to someone who has a login to your Grafana. You can use the Snapshot feature to encode
all the data currently being viewed into a static and interactive JSON document; it is much
better than emailing a screenshot.
12.4.1 Exercise: Building a New Dashboard
1. Click on the default Dashboard and then click New dashboard to open the new
Dashboard screen.
2. The New Dashboard screen displays all the available panels that can be added on the
Dashboard. Click Choose Visualization.
3. Click the Graph Panel to add it to the empty space on the Dashboard. Specify the
settings that you want to apply to this Panel.
4. Click to go back to the Dashboard.
5. Click Add panel icon to add another Panel and repeat the same process.
6. Click the Panel header and select Share to share the Panel. Panels (or an entire
Dashboard) can be Shared easily in a variety of ways. You can send a link to someone
who has a login to your Grafana. You can use the Snapshot feature to encode all the
data currently being viewed into a static and interactive JSON document.
8. Click Edit to open the Metrics tab below the Panel, this is where a query can be
specified.
9. Click Save Dashboard icon. Save As window opens. Provide a name for the
Dashboard and click Save.
Rows are always 12 “units” wide. These units are automatically scaled dependent on the
horizontal resolution of your browser. You can control the relative width of Panels within a
row by setting their own width.
Regardless of your resolution or time range, Grafana can show you an appropriate
number of data points using the MaxDataPoints functionality.
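What a max-data-points limit implies can be sketched as a minimum group-by interval for a given time range. The numbers below are illustrative only.

```python
import math

# Sketch: given a time range and a point budget, the minimum bucket size
# needed to stay under the budget. Illustrative, not Grafana's code.
def min_interval_seconds(range_seconds, max_data_points):
    return math.ceil(range_seconds / max_data_points)

# A 6-hour range limited to 800 points needs at least 27-second buckets.
print(min_interval_seconds(6 * 3600, 800))
```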
Utilize the Repeating Rows functionality to dynamically create or remove entire Rows (that
can be filled with Panels), based on the Template variables selected.
Rows can be collapsed by clicking on the Row Title. If you save a Dashboard with a Row
collapsed, it will save in that state and will not preload those graphs until the row is
expanded.
There are a wide variety of styling and formatting options that each Panel exposes to allow
you to create the perfect picture.
Panels can be dragged, dropped, and rearranged on the Dashboard. They can also be
resized. The following Panel types are available:
• Graph
• Singlestat
• Table
• Text
• Heatmap
• Alert List
• Dashboard List
• Plugin List
Panels like the Graph panel allow you to graph as many metrics and series as you want.
Other panels like Singlestat require a reduction of a single query into a single number.
Dashboard List and Text are special panels that do not connect to any Data Source.
Panels can be made more dynamic by utilizing Dashboard Templating variable strings within
the panel configuration (including queries to your Data Source configured via the Query
Editor).
Utilize the Repeating Panel functionality to dynamically create or remove Panels based on
the Templating Variables selected.
The time range on Panels is normally what is set in the Dashboard time picker, but this can
be overridden by utilizing Panel-specific time overrides.
12.7 Metrics: Query Editor
The Query Editor exposes capabilities of your Data Source and allows you to query the
metrics that it contains.
Use the Query Editor to build one or more queries (for one or more series) in your time series
database. The panel will instantly update allowing you to effectively explore your data in real
time and build a perfect query for that Panel.
You can utilize Template variables in the Query Editor within the queries themselves. This
provides a powerful way to explore data dynamically based on the Templating variables
selected on the Dashboard.
Grafana allows you to reference queries in the Query Editor by the row that they’re on. If you
add a second query to graph, you can reference the first query simply by typing in #A. This
provides an easy and convenient way to build compounded queries.
13 Alerts
Praeco is an open source alerting tool with a full Graphical User Interface (GUI) for creating
alerts.
13.1 QuickStart
Run the app using Docker compose (Compose is a tool for defining and running multi-
container Docker applications. With Compose, you use a Compose file to configure your
application's services. Then, using a single command, you create and start all the services
from your configuration). Praeco includes everything you need to get started. Just provide
it the IP address of your Elasticsearch instance.
• Don't use 127.0.0.1 for PRAECO_ELASTICSEARCH. See first item under the
Troubleshooting section.
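A minimal quickstart along these lines might look as follows; the Elasticsearch address is a placeholder, and you should check the Praeco README for the exact commands for your version:

```shell
# Placeholder address for your Elasticsearch host; do not use 127.0.0.1
export PRAECO_ELASTICSEARCH=10.0.0.5
docker-compose up -d
```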
3. In the Name field, type the name of the rule.
4. Click inside the Index field and select the value of the index you want to use (This
value depends on your data).
5. From the Time type drop-down menu, select Default. The Default value is used if the
time field in your data is stored as a Date. However, you need to choose another option if
the time field in your data is stored as a timestamp.
9. Click UNFILTERED to open the Builder page. Click NEW FILTER, select @timestamp
from the drop-down menu and click Add filter. Select is not empty from the drop-
down menu and click Done.
Multiple filters can be added until you have the results you want to alert against. The
actual filters you add depend on the type of data you want to be alerted on.
10. Click IS and select the threshold for alerting from the drop-down menu.
11. Click FOR THE LAST and specify the value depending on the time frame you want to
measure over. Counts of results are divided into time-based buckets, which are
represented by red bars in the chart.
12. Click WITH OPTIONS if you want to enable count query or terms query.
If Use count query is enabled, ElastAlert will poll Elasticsearch using the count API and
will not download all the matching documents. This is useful if you are looking for
numbers rather than the actual data, and it can be used if you expect a large volume of
query hits.
If Use terms query is enabled, ElastAlert will make an aggregation query against
Elasticsearch to get counts of documents matching each unique value of query key. Terms
size specifies the maximum number of terms returned per query, default value is 50.
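The difference between the two modes can be sketched as the request bodies they imply. These bodies are illustrative, not the exact queries ElastAlert generates, and the host.keyword field is a placeholder.

```python
import json

# "Use count query": only the number of hits is needed, so no documents
# are downloaded; the body is just the query itself.
count_body = {"query": {"match_all": {}}}

# "Use terms query": an aggregation returns counts per unique value of the
# query key, capped by the terms size (the documented default is 50).
terms_body = {
    "size": 0,
    "aggs": {"by_key": {"terms": {"field": "host.keyword", "size": 50}}},
}

print(json.dumps(terms_body))
```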
15. In the Subject and Body text fields, type the subject and body of your alert
respectively. Type % followed by some characters to insert tokens into your alerts and
select a field from the drop-down menu. When you get alerted, your tokens will be
replaced by the content of this field from the event that triggered the alert.
16. If you have selected Email as your destination, then you need to click on the Email
tab and specify the From address, Reply to, and To.
17. Click Test to run a simulation of this alert over a specified time period. This does not
send out the actual alert.
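The token behaviour from step 15 can be sketched as a simple substitution; the %field% delimiter style used here is an assumption for illustration, not Praeco's exact syntax.

```python
import re

# Sketch: "%field%" placeholders in the subject/body are replaced with
# values from the event that triggered the alert.
def render(template, event):
    return re.sub(r"%(\w+)%", lambda m: str(event.get(m.group(1), "")), template)

event = {"host": "web01", "count": 17}
print(render("Alert: %count% hits on %host%", event))
```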
Any ElastAlert option you put into rules/BaseRule.config will be applied to every rule.
// Hide these fields when editing rules, if they are already filled in template
"hidePreconfiguredFields": []
13.4 Praeco: Upgrading
To upgrade to the newest release of Praeco, run the following commands:
Some version upgrades require further configuration. Version specific upgrade instructions
are below.
At the top:
# cache github api
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=github_api_cache:60m max_size=10g inactive=60m use_temp_path=off;
Example:
The default config example file below shows where to place these snippets.
server {
listen 8080;
location /api {
rewrite ^/api/?(.*)$ /$1 break;
proxy_pass http://elastalert:3030/;
}
location /api-ws {
rewrite ^/api-ws/?(.*)$ /$1 break;
proxy_pass http://elastalert:3333/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location /api-app/releases {
proxy_cache github_api_cache;
proxy_pass https://api.github.com/repos/ServerCentral/praeco/releases;
}
location / {
root /var/www/html;
try_files $uri $uri/ /index.html;
}
}
Create file rules/BaseRule.config, paste in the following content and change as required.
slack_webhook_url: ''
smtp_host: ''
smtp_port: 25
slack_emoji_override: ':postal_horn:'
Question: How do I change the writeback index?
Answer: Edit config/elastalert.yaml and config/api.config.json and change the
writeback_index values.
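For example, the relevant line in config/elastalert.yaml might look like this (the index name is illustrative; set the matching value in config/api.config.json as well):

```yaml
# config/elastalert.yaml — illustrative index name, change to suit
writeback_index: praeco_elastalert_status
```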
14 Reporting
SIEMonster Reporting Module (SRM) is SIEMonster’s solution for generating reports from
different modules.
• Schedules
• Output formats (PDF, PNG, XLSX, CSV)
• Time filters
• Paper design (A4, A3, Portrait, Landscape)
• Notification sources (SMTP, Mailgun, Slack)
• WYSIWYG-editable subject/message with dynamic templates
The interface of SRM has an adaptive design so that it works with modules on smaller
screens, starting from 4.7 inches.
At the moment SRM only works with Kibana Module (Dashboards and
Searches). Support for other modules will be included in the future.
14.1 Reporting: Configuration
Customization, including configuring variables, is required to make SRM work. The
variables listed below have values assigned to them automatically.
The Module URL field specifies the address of the Reporting Server. The Mothership_URL
variable specifies the offline address of the main SIEMonster server on the services' local
network and is used for licensing. The other main SRM pages are specified in the other
Sublinks and are generated automatically.
14.3 Dynamic Settings
Dynamic settings of the Reporting module are used to perform some tuning and can be
edited directly in the Reporting UI. The variables have default values that are assigned at
the tenant creation/reset stage. All variables have names and comments that describe
their purpose.
Click Reporting on the Home page to open the Reporting module. Click on the Settings
icon in the left pane to open the Settings page.
14.4 Pages
Web UI of the Reporting module consists of the following pages accessible by the left pane:
• Scheduled Reports
• Reports History
• Settings
• About (License Details)
14.5 Scheduled Reports
The Scheduled Reports page displays the list of scheduled reports. Each scheduled report
has action buttons including resume, clone, preview, edit, and delete.
1. Click Reporting on the Home page to open Scheduled Reports page. Click
Schedule a Report to create a new report.
3. Use the defaults for both Class (Kibana) and Type (Dashboard).
4. From the Select a Dashboard drop-down menu, select [Logs] Web Traffic.
5. Click Quick, under the Time Window section, and specify time filter as last 15 days.
6. Design section lets you specify the settings for the generated files including the
format and orientation. Select the required values from the Format and Paper
Format drop-down menus, and then select the required Orientation.
7. Action section lets you configure the notifications process for example Mail or Slack.
Specify the required values for the required process.
8. Message section has a WYSIWYG editor with the Subject and Message Body fields.
Provide the required values using the editor.
9. Schedule section is used to manage the schedule for report generation. For example,
you can specify that a report will be created and sent every day at 21:00, or on the first
day of every month.
14.6 Reports History
Reports History displays reports with a failure or success status, filtered for the last 30 days.
Reports History also displays a table with the state of each report (failure/success), its
creation date, further information for failed reports, and a link to download successful
reports.
15 Flow Processors
Apache NiFi is an open source data ingestion platform that was built to automate the flow
of data between systems (for example, transferring a JSON document into a database,
transferring FTP files directly to Hadoop, or transferring data from Apache Kafka to
Elasticsearch).
Apache NiFi supports powerful and scalable directed graphs of data routing, transformation,
and system mediation logic.
It was developed by the National Security Agency (NSA) and is now maintained and further
developed by the Apache Software Foundation. It is based on Java and runs in a Jetty server.
NiFi supports any device that runs Java, and you can easily install NiFi on AWS. NiFi is used
in varied industries such as healthcare, insurance, telecom, manufacturing, finance, and oil
and gas, among others. As a best practice, organize your projects into three parts:
ingestion, test, and monitoring.
Apache NiFi is now used in many top organizations that want to harness the power of their
fast data by sourcing and transferring information to and from their databases and big data
lakes. It is a key tool to learn for analysts and data scientists alike.
Apache NiFi has an easy to use drag and drop user interface, and it focuses on the
configuration of the processors. It guarantees that you do not lose your data through its
guaranteed delivery feature.
Some of the high-level capabilities and objectives of Apache NiFi include:
Flow Management
• Apache NiFi provides a guaranteed delivery even at a very high scale. This is achieved through
effective use of a purpose-built persistent write-ahead log and content repository. Together
they are designed in such a way as to allow for very high transaction rates, effective load-
spreading, copy-on-write, and play to the strengths of traditional disk read/writes.
• NiFi supports buffering of all queued data as well as the ability to provide back pressure as
those queues reach specified limits or to age off data as it reaches a specified age (its value
has perished).
• NiFi allows the setting of one or more prioritization schemes for how data is retrieved from a
queue. The default is oldest first, but there are times when data should be pulled newest first,
largest first, or some other custom scheme.
• There are points of a dataflow where the data is absolutely critical, and it is loss intolerant.
There are also times when it must be processed and delivered within seconds to be of any
value. NiFi enables the fine-grained flow specific configuration of these concerns.
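The prioritization schemes above can be sketched as sorted retrieval over queued FlowFiles, modelled here as (timestamp, name) tuples for illustration; this is not NiFi's internal implementation.

```python
import heapq

# Queued FlowFiles as (timestamp, name) tuples; timestamps are arbitrary.
flowfiles = [(3, "c"), (1, "a"), (2, "b")]

# Default scheme: oldest first. Alternative scheme: newest first.
oldest_first = heapq.nsmallest(len(flowfiles), flowfiles)
newest_first = heapq.nlargest(len(flowfiles), flowfiles)

print([name for _, name in oldest_first])  # ['a', 'b', 'c']
print([name for _, name in newest_first])  # ['c', 'b', 'a']
```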
Ease of Use
• Dataflows can become quite complex. Being able to visualize those flows and express them
visually can help greatly to reduce that complexity and to identify areas that need to be
simplified. NiFi enables not only the visual establishment of dataflows but it does so in real-
time. Rather than being 'design and deploy' it is much more like molding clay. If you make a
change to the dataflow that change immediately takes effect. Changes are fine-grained and
isolated to the affected components. You don’t need to stop an entire flow or set of flows
just to make some specific modification.
• Dataflows tend to be highly pattern oriented and while there are often many different ways
to solve a problem, it helps greatly to be able to share those best practices. Templates allow
subject matter experts to build and publish their flow designs and for others to benefit and
collaborate on them.
• NiFi automatically records, indexes, and makes available provenance data as objects flow
through the system even across fan-in, fan-out, transformations, and more. This information
becomes extremely critical in supporting compliance, troubleshooting, optimization, and
other scenarios.
• NiFi’s content repository is designed to act as a rolling buffer of history. Data is removed only
as it ages off the content repository or as space is needed. This combined with the data
provenance capability makes for an incredibly useful basis to enable click-to-content,
download of content, and replay, all at a specific point in an object’s lifecycle which can even
span generations.
Flexible Scaling Model
• NiFi is designed to scale-out through the use of clustering many nodes together as described
above. If a single node is provisioned and configured to handle hundreds of MB per second,
then a modest cluster could be configured to handle GB per second. This then brings about
interesting challenges of load balancing and fail-over between NiFi and the systems from
which it gets data. Use of asynchronous queuing-based protocols like messaging services,
Kafka, etc., can help. Use of NiFi’s 'site-to-site' feature is also very effective as it is a protocol
that allows NiFi and a client (including another NiFi cluster) to talk to each other, share
information about loading, and to exchange data on specific authorized ports.
• NiFi is also designed to scale-up and down in a very flexible manner. In terms of increasing
throughput from the standpoint of the NiFi framework, it is possible to increase the number
of concurrent tasks on the processor under the Scheduling tab when configuring. This allows
more processes to execute simultaneously, providing greater throughput. On the other side
of the spectrum, you can perfectly scale NiFi down to be suitable to run on edge devices
where a small footprint is desired due to limited hardware resources.
As shown in the highlighted status bar below, a user can access information about the
following attributes:
• Active Threads
• Total queued data
• Transmitting Remote Process Groups
• Not Transmitting Remote Process Groups
• Running Components
• Stopped Components
• Invalid Components
• Disabled Components
• Up to date Versioned Process Groups
• Locally modified Versioned Process Groups
• Stale Versioned Process Groups
• Locally modified and Stale Versioned Process Groups
• Sync failure Versioned Process Groups
The Operate Palette consists of buttons that manipulate the components on the canvas.
They are used to manage the flow, as well as by administrators who manage user access
and configure system properties, such as how many system resources should be provided
to the application.
The management toolbar has buttons to manage the flow, and for a NiFi administrator to
manage user access and system properties.
Additionally, the UI has some features that allow you to easily navigate around the canvas.
You can use the Navigate Palette to pan around the canvas, and to zoom in and out.
The Birds Eye View of the dataflow provides a high-level view of the dataflow and allows you
to pan across large portions of the dataflow.
The components toolbar contains all tools for building the dataflow.
Processor
A Processor pulls data from external sources, performs actions on the attributes and
content of FlowFiles, and publishes data to external sources. The user can drag the
processor icon onto the canvas and select the desired processor for the dataflow in NiFi.
Input port
Input Ports provide a mechanism for transferring data into a Process Group. When an Input
Port is dragged onto the canvas, the user is prompted to name the Port. All Ports within a
Process Group must have unique names.
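The uniqueness rule can be sketched as a simple check when adding a Port. The classes below are illustrative only; they are not NiFi's API.

```python
# Sketch of the naming rule: all Ports within one Process Group must have
# unique names. Illustrative model, not NiFi's actual classes.

class ProcessGroup:
    def __init__(self, name):
        self.name = name
        self.ports = set()

    def add_input_port(self, port_name):
        if port_name in self.ports:
            raise ValueError(f"port name {port_name!r} already used in {self.name}")
        self.ports.add(port_name)

group = ProcessGroup("Root")
group.add_input_port("From-Site-A")
try:
    group.add_input_port("From-Site-A")  # duplicate name is rejected
except ValueError as err:
    print(err)
```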
All components exist only within a Process Group. When a user initially navigates to the NiFi
page, the user is placed in the Root Process Group. If the Input Port is dragged onto the
Root Process Group, the Input Port provides a mechanism to receive data from remote
instances of NiFi via Site-to-Site. In this case, the Input Port can be configured to restrict
access to appropriate users, if NiFi is configured to run securely.
Output port
An Output Port is used to transfer data to a processor that is not present in that Process Group. After this icon is dragged onto the canvas, NiFi prompts for the name of the Output Port and then adds it to the canvas.
Output Ports provide a mechanism for transferring data from a Process Group to
destinations outside of the Process Group. When an Output Port is dragged onto the canvas,
the user is prompted to name the Port. All Ports within a Process Group must have unique
names.
If the Output Port is dragged onto the Root Process Group, the Output Port provides a
mechanism for sending data to remote instances of NiFi via Site-to-Site. In this case, the Port
acts as a queue. As remote instances of NiFi pull data from the port, that data is removed
from the queues of the incoming Connections. If NiFi is configured to run securely, the
Output Port can be configured to restrict access to appropriate users.
Process Group
Process Groups can be used to logically group a set of components so that the dataflow is
easier to understand and maintain. When a Process Group is dragged onto the canvas, you
are prompted to name the Process Group. All Process Groups within the same parent group
must have unique names. The Process Group will then be nested within that parent group.
Once you have dragged a Process Group onto the canvas, right-click on the Process Group to select an option from the context menu. The options available to you vary, depending on the privileges assigned to you.
While the options available from the context menu vary, the following options are typically
available when you have full privileges to work with the Process Group:
Configure: This option allows you to establish or change the configuration of the Process
Group.
Variables: This option allows you to create or configure variables within the NiFi UI.
Enter group: This option allows you to enter the Process Group.
View status history: This option opens a graphical representation of the Process Group’s
statistical information over time.
View connections -> Upstream: This option allows you to see and jump to upstream
connections that are coming into the Process Group.
View connections -> Downstream: This option allows you to see and jump to downstream
connections that are going out of the Process Group.
Center in view: This option centers the view of the canvas on the given Process Group.
Group: This option allows you to create a new Process Group that contains the selected
Process Group and any other components selected on the canvas.
Create template: This option allows you to create a template from the selected Process
Group.
Copy: This option places a copy of the selected Process Group on the clipboard, so that it
may be pasted elsewhere on the canvas by right-clicking on the canvas and selecting Paste.
If the remote NiFi is a clustered instance, the URL that should be used is the URL of any NiFi
instance in that cluster. When data is transferred to a clustered instance of NiFi via an RPG,
the RPG will first connect to the remote instance whose URL is configured to determine
which nodes are in the cluster and how busy each node is. This information is then used to
load balance the data that is pushed to each node. The remote instances are then
interrogated periodically to determine information about any nodes that are dropped from
or added to the cluster and to recalculate the load balancing based on each node’s load.
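The load-balancing behaviour described above can be sketched in plain Python. This is only an illustration of the idea of weighting nodes inversely to their load; the node names and the busyness metric are assumptions, not NiFi's actual algorithm:

```python
# Hypothetical cluster nodes with a "busyness" metric (e.g. queued FlowFiles).
nodes = {"node1": 10, "node2": 40, "node3": 50}

# Weight each node inversely to its load, so less-busy nodes receive more data.
inverse = {name: 1 / load for name, load in nodes.items()}
total = sum(inverse.values())

# Normalized share of traffic each node would receive.
shares = {name: weight / total for name, weight in inverse.items()}

# The least-busy node gets the largest share.
print(max(shares, key=shares.get))  # → node1
```

The same recalculation would be repeated each time the remote instance is re-interrogated, so shares track the cluster's current load.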
• Local Network Interface: In some cases, it may be desirable to prefer one network interface
over another. For example, if a wired interface and a wireless interface both exist, the wired
interface may be preferred. This can be configured by specifying the name of the network
interface to use in this box. If the value entered is not valid, the Remote Process Group will
not be valid and will not communicate with other NiFi instances until this is resolved.
• Transport Protocol: On a Remote Process Group creation or configuration dialog, you can
choose Transport Protocol to use for Site-to-Site communication.
By default, it is set to RAW which uses raw socket communication using a dedicated port.
HTTP transport protocol is especially useful if the remote NiFi instance is in a restricted network that only allows access through the HTTP(S) protocol, or is only accessible from a specific HTTP proxy server. For access through an HTTP proxy server, BASIC and DIGEST authentication are supported.
Funnel
A Funnel is used to combine the output of multiple connections into a single connection. Users can use this icon to add a funnel to a NiFi dataflow.
Funnels are used to combine the data from many Connections into a single Connection. This
has two advantages.
• First, if many Connections are created with the same destination, the canvas can become
cluttered if those Connections have to span a large space. By funneling these Connections
into a single Connection, that single Connection can then be drawn to span that large space
instead.
• Secondly, Connections can be configured with FlowFile Prioritizers. Data from several
Connections can be funneled into a single Connection, providing the ability to Prioritize all
of the data on that one Connection, rather than prioritizing the data on each Connection
independently.
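Conceptually, a funnel just merges several incoming queues into one outgoing queue. A minimal sketch of that idea (an illustration only, not NiFi internals):

```python
from collections import deque

# Two incoming connections feeding the same funnel (illustrative data).
conn_a = deque(["a1", "a2"])
conn_b = deque(["b1"])

funnel = deque()  # the single outgoing connection

# Drain every incoming connection into the funnel.
for conn in (conn_a, conn_b):
    while conn:
        funnel.append(conn.popleft())

print(list(funnel))  # → ['a1', 'a2', 'b1']
```

Because everything now flows through one queue, a single set of prioritizers on that queue orders all of the data at once.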
Template
This icon is used to add a dataflow template to the NiFi canvas, which helps to reuse a dataflow in the same or a different NiFi instance. After dragging, a user can select from the templates already added to NiFi.
Templates can be created from the components toolbar, or they can be imported from other
dataflows. These Templates provide larger building blocks for creating a complex flow
quickly. When the Template is dragged onto the canvas, the user is provided with a window
to choose which Template to add to the canvas.
Click the drop-down menu to view all the available Templates. Any Template that was
created with a description will show a question mark icon, indicating that there is more
information. Hovering over the icon with the mouse will show the description.
Label
Labels are used to add text to the NiFi canvas about any component present in the flow. A range of colors is available so users can visually organize the canvas.
Labels are used to provide documentation to parts of a dataflow. When a Label is dropped
onto the canvas, it is created with a default size. The Label can then be resized by dragging
the handle in the bottom-right corner. The Label has no text when initially created.
To add text to the Label, right click on Label and select Configure.
15.3 Exercise: Building a Dataflow
This exercise walks through building an automated dataflow using the NiFi UI.
Processor
The Processor is the most commonly used component, as it is responsible for data ingress,
egress, routing, and manipulating. There are many different types of Processors. In fact, this
is a very common Extension Point in NiFi, meaning that many vendors may implement their
own Processors to perform whatever functions are necessary for their use case.
1. To add a Processor, drag the Processor icon and drop it into the middle of the canvas. Add
Processor window opens.
Filtering by the keyword file, for instance, will provide a few different Processors that deal with files. Filtering by the term local will narrow down the list quickly as well. If we select a Processor from the list, we will see a brief description of the Processor near the bottom of the dialog.
2. To bring files from a local disk into NiFi, you can use the GetFile Processor. This Processor pulls data from the local disk into NiFi and then removes the local file. Select the Processor and click ADD; it will be added to the canvas in the location where it was dropped.
3. Now that we have added the GetFile Processor, right-click on the Processor and select Configure from the context menu. The options available to you from the context menu vary, depending on the privileges assigned to you.
The following options are typically available when you have full privileges to work with a
Processor:
Configure: This option allows you to establish or change the configuration of the Processor.
Start or Stop: This option allows you to either start or stop a Processor, depending on the
current state of the Processor.
Enable or Disable: This option allows you to enable or disable a Processor, depending on
the current state of the Processor.
View data provenance: This option displays the NiFi Data Provenance table, with
information about data provenance events for the FlowFiles routed through that Processor.
View status history: This option opens a graphical representation of the Processor’s
statistical information over time.
View usage: This option takes the user to the Processor’s usage documentation.
View connections -> Upstream: This option allows you to see and jump to upstream
connections that are coming into the Processor. This is particularly useful when processors
connect into and out of other Process Groups.
View connections -> Downstream: This option allows you to see and jump to downstream
connections that are going out of the Processor. This is particularly useful when processors
connect into and out of other Process Groups.
Center in view: This option centers the view of the canvas on the given Processor.
Change color: This option allows you to change the color of the Processor, which can make
the visual management of large flows easier.
Create template: This option allows you to create a template from the selected Processor.
Copy: This option places a copy of the selected Processor on the clipboard, so that it may
be pasted elsewhere on the canvas by right-clicking on the canvas and selecting Paste.
Delete: This option allows you to delete a Processor from the canvas.
Once the Properties tab has been selected, we are given a list of several different properties
that we can configure for the Processor. The properties that are available depend on the
type of Processor and are generally different for each type. Properties that are in bold are
required properties. The Processor cannot be started until all required properties have been
configured. The most important property to configure for GetFile is the directory from which
to pick up files.
5. In the Input Directory field, type ./data-in. This will cause the Processor to start picking up any data in the data-in subdirectory of the NiFi home directory. For this property to be valid, create a directory named data-in in the NiFi home directory, then click OK to close the dialog.
For example, many Processors define two Relationships: success and failure. Users are then
able to configure data to be routed through the flow one way if the Processor is able to
successfully process the data and route the data through the flow in a completely different
manner if the Processor cannot process the data for some reason. Or, depending on the use case, both relationships may simply be routed along the same path through the flow.
6. Now that we have added and configured our GetFile Processor and applied the configuration, we can see an Alert icon in the top-left corner of the Processor, signaling that the Processor is not in a valid state. Hovering over this icon shows that the success relationship has not been defined. This means that we have not told NiFi what to do with the data that the Processor transfers to the success Relationship.
7. To address this, let's add another Processor that we can connect the GetFile Processor to, by following the same steps above. This time, however, we will simply log the attributes that exist for the FlowFile. To do this, we will add a LogAttribute Processor.
8. You can now send the output of the GetFile Processor to the LogAttribute Processor. Hover over the GetFile Processor with the mouse and a Connection icon will appear over the middle of the Processor. Drag this icon from the GetFile Processor and drop it on the LogAttribute Processor. The Create Connection window opens.
10. Click on the Settings tab of the Create Connection window. In the Name field, specify the
name of the connection. Otherwise, the Connection name will be based on the selected
Relationships.
11. We can also set FlowFile Expiration for the data. By default, it is set to 0 sec, which indicates that the data should not expire. If the value is changed, data in this Connection that reaches the configured age will automatically be deleted (and a corresponding EXPIRE Provenance event will be created).
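The expiration rule can be illustrated with a small sketch (an analogy only, not NiFi's implementation; the FlowFile names and timestamps are made up):

```python
import time

FLOWFILE_EXPIRATION_SEC = 60  # the configured FlowFile Expiration

now = time.time()
# Hypothetical queued FlowFiles paired with their enqueue timestamps.
queue = [("f1", now - 120), ("f2", now - 10)]

# FlowFiles older than the expiration age are dropped; NiFi would also
# record an EXPIRE provenance event for each dropped FlowFile.
live = [name for name, ts in queue if now - ts < FLOWFILE_EXPIRATION_SEC]

print(live)  # → ['f2']
```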
12. The Back Pressure Object Threshold allows you to specify how full the queue is allowed to
become before the source Processor is no longer scheduled to run. This allows you to handle
cases where one Processor is capable of producing data faster than the next Processor is
capable of consuming that data. If the back pressure is configured for each connection along
the way, the Processor that is bringing data into the system will eventually experience the
back pressure and stop bringing in new data so that your system has the ability to recover.
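The back-pressure mechanism can be sketched in a few lines of plain Python. This is an illustration of the concept only; the threshold value is arbitrary and the loop stands in for the source processor's scheduler:

```python
from collections import deque

BACK_PRESSURE_OBJECT_THRESHOLD = 3  # queue size at which the source stops

queue = deque()

def source_scheduled():
    """The source processor only runs while the queue is below the threshold."""
    return len(queue) < BACK_PRESSURE_OBJECT_THRESHOLD

produced = 0
for item in range(10):
    if not source_scheduled():
        break  # back pressure: stop bringing in new data
    queue.append(item)
    produced += 1

print(produced)  # → 3
```

Once a downstream consumer drains the queue below the threshold, the source would be scheduled again, which is what lets the system recover.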
13. The Available Prioritizers option is available on the right-hand side. This allows you to control
how the data in this queue is ordered. Drag Prioritizers from the Available prioritizers list to
the Selected prioritizers list in order to activate the prioritizer. If multiple prioritizers are
activated, they will be evaluated such that the Prioritizer listed first will be evaluated first and
if two FlowFiles are determined to be equal according to that Prioritizer, the second Prioritizer
will be used.
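Chained prioritizers behave like a multi-key sort: the first prioritizer decides the order, and ties fall through to the second. A minimal sketch (the priority attribute and sizes are hypothetical, not real NiFi prioritizer classes):

```python
# Hypothetical FlowFiles: a priority attribute, a size in bytes, and a name.
flowfiles = [
    {"priority": 2, "size": 500, "name": "b"},
    {"priority": 1, "size": 900, "name": "c"},
    {"priority": 2, "size": 100, "name": "a"},
]

# First "prioritizer": the priority attribute; tie-breaker: smallest file first.
ordered = sorted(flowfiles, key=lambda f: (f["priority"], f["size"]))

print([f["name"] for f in ordered])  # → ['c', 'a', 'b']
```

Here "c" wins on the first key despite being the largest file, and the two priority-2 FlowFiles are ordered by the second key.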
15. Note that the Alert icon has changed to a Stopped icon.
The LogAttribute Processor, however, is now invalid because its success Relationship has not
been connected to anything. Let’s address this by signaling that data that is routed to
success by LogAttribute should be Auto Terminated, meaning that NiFi should consider the
FlowFile’s processing complete and drop the data. To do this, you configure the LogAttribute
Processor.
16. Right-click on the LogAttribute Processor and click the Settings tab. Check success under Automatically Terminate Relationships to auto-terminate the data. Click APPLY and notice that both Processors are now stopped.
17. At this point, you have two Processors on your graph, but nothing is happening. In order to
start the Processors, click on each one individually, right-click and choose the Start menu
item.
• You can also select the first Processor, and then hold the Shift key while
selecting the other Processor in order to select both. Then, you can
right-click and choose the Start menu item.
• As an alternative to using the context menu, you can select the
Processors and then click the Start icon in the Operate palette.
18. Once started, the icon in the top-left corner of the Processors will change from a stopped
icon to a running icon. You can then stop the Processors by using the Stop icon in the Operate
palette or the Stop menu item.
16 Audit Discovery
PatrOwl is a scalable, free and open-source platform for orchestrating security operations such as penetration testing, vulnerability assessment, code review, compliance checks, cyber-threat intelligence/hunting, and SOC & DFIR operations. Fully developed in Python (Django for the back end and Flask for the engines), it is easy to customize all components. Asynchronous tasks and engine scalability are supported by RabbitMQ and Celery.
• Thinking and acting like hackers: PatrOwl uses the same mindset (tools, tactics and procedures), continuously monitors all asset stacks, and efficiently prioritizes the remediation of vulnerabilities and suspicious activities.
• Best-of-breed and custom tools: PatrOwl has a unique cockpit and rationalized use of best-
of-breed/custom tools to support cyber-threat monitoring strategies and remediation
workflows.
• Monitoring Internet-facing systems: Continuously scan websites, public IPs, domains and subdomains for vulnerabilities and misconfigurations
• Attacker assets monitoring: Ensure readiness of teams by identifying attackers’ assets and
tracking changes of their IP, domains, and web applications
• Phishing / APT scenario preparation: Monitor early signs of targeted attacks, new domain registrations, suspicious tweets, pastes, VirusTotal submissions, and phishing reports
• Regulation and Compliance: Evaluate compliance gaps using provided scan templates
• Penetration tests: Perform the reconnaissance steps, the full stack vulnerability assessment
and the remediation checks
16.2 PatrowlManager
PatrowlManager is the front-end application for managing assets, reviewing risks in real time, orchestrating operations (scans, searches, API calls), aggregating results, relaying alerts to third parties, and providing reports and dashboards. Operations are performed by PatrowlEngines instances.
PatrowlEngines is the engine framework and the supported list of engines that perform the operations when due. The engines are managed by one or several instances of PatrowlManager. On the Home page, click Audit Discovery to access PatrowlManager.
Click on an individual asset to open Asset Detailed view that displays the following:
• Current finding counters, grade, and trends (last week, last month)
• Findings by threat domains:
o Domain, HTTPS and Certificate, Network infrastructure, System, Web App, Malware,
E-Reputation, Data Leaks, Availability
• All findings and remediations tips
• Related scans and assets
• Investigation links
1. To add a new asset, select Add new asset from the Assets drop-down menu. Add an asset
page opens.
3. In the Name field, enter the title of the asset. For Example, Corporate Website.
4. From the Type drop-down menu, select IP. Available scan policies will be filtered on this value.
6. From the Criticity drop-down menu, select high. Global risk scoring will depend on
this value.
7. In the Categories field, select Operating systems. This field contains a list of tags to quickly describe the asset; custom values can be added. Click Create a new asset.
Assets can be added in bulk by using the Assets -> Add new assets in
bulk (CSV file) menu.
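A bulk-import CSV can be generated with a short script. Note that the column layout and delimiter below are assumptions for illustration only; check the exact header your PatrOwl version expects before importing:

```python
import csv
import io

# Hypothetical column layout; verify against your PatrOwl version.
FIELDS = ["value", "name", "type", "criticity", "description"]

assets = [
    ("203.0.113.10", "Corporate Website", "ip", "high", "Public web server"),
    ("example.com", "Main Domain", "domain", "medium", "Corporate domain"),
]

buf = io.StringIO()
writer = csv.writer(buf, delimiter=";")  # delimiter is an assumption
writer.writerow(FIELDS)
writer.writerows(assets)

csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # → value;name;type;criticity;description
```

Writing to an io.StringIO buffer keeps the example self-contained; in practice you would write to a file and upload it via the bulk-import menu.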
To view the list of existing engines, select List engines from the Engines drop-down menu.
1. To add a new scan engine, select Add scan engine instance from the Engines drop-down menu. The Add a new scan engine page opens.
2. From the Engine drop-down menu, select the type of engine you want to use.
4. In the Api url field, enter the URL address of the engine.
16.4 PatrowlManager Scan Definition
The PatrowlManager Scan Definition lets you search for and select assets and asset groups by their values or names. Policies can be filtered by engine type or threat domain.
The scans performed view can be accessed from Scans -> List scans performed, which displays a scan heatmap over days, weeks, and months. You can apply advanced filters, run or delete scans from this view, and compare selected scans.
To compare scans with each other, select the scans and click the compare icon.
2. In the Title field, enter the title of the scan. For example, List open ports on Internet-faced
assets or Search technical leaks on GitHub and Twitter.
3. In the Description field, enter a suitable description of the scan.
17 Threat Modelling
OpenCTI is an open source platform allowing organizations to manage their cyber threat
intelligence knowledge and observables. It has been created in order to structure, store,
organize and visualize technical and non-technical information about cyber threats.
The structuring of the data is performed using a knowledge schema based on the STIX2 standards. It has been designed as a modern web application including a GraphQL API and a UX-oriented frontend. OpenCTI can also be integrated with other tools and applications such as MISP, TheHive, MITRE ATT&CK, etc.
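Because OpenCTI exposes a GraphQL API, it can be queried programmatically. The sketch below only builds a request payload; the query and its field names are illustrative and should be checked against your OpenCTI version's schema (the official pycti Python client is usually the more convenient route):

```python
import json

def build_graphql_payload(query, variables=None):
    """Serialize a GraphQL request body in the shape a GraphQL endpoint expects."""
    return json.dumps({"query": query, "variables": variables or {}})

# Hypothetical query; field names must be verified against your OpenCTI schema.
QUERY = """
query ThreatActors($first: Int) {
  threatActors(first: $first) {
    edges { node { name } }
  }
}
"""

payload = build_graphql_payload(QUERY, {"first": 10})
print(json.loads(payload)["variables"])  # → {'first': 10}
```

The payload would then be POSTed to the platform's /graphql endpoint with an Authorization header carrying your API token.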
At an operational level:
Knowledge graph
The whole platform relies on a knowledge hypergraph allowing the usage of hyper-entities
and hyper-relationships including nested relationships.
Automated reasoning
The database engine performs logical inference through deductive reasoning, in order to
derive implicit facts and associations in real-time.
By-design sourcing of data origin
Every relationship between entities has time-based and space-based attributes and must be sourced by a report with a specific confidence level.
Dashboard
OpenCTI platform provides a powerful knowledge management database with an enforced
schema especially tailored for cyber threat intelligence and cyber operations. With multiple
tools and viewing capabilities, analysts are able to explore the whole dataset by pivoting on
the platform between entities and relations. Since relations can own multiple context attributes, it is easy to have several levels of context for a given entity.
Navigate to Home > Threat Modelling to open the Dashboard. The Dashboard will fill up
progressively as you import data.
Threats
The Threats service allows you to go through all the data in the platform organized by:
• Threat actors
• Intrusion sets
• Campaigns
• Incidents
• Malwares
To view the existing Threats, click the icon from the left navigation pane to visualize all
the threats related data split in different tabs.
Techniques
Click the Techniques tab to display all the Tactics, Techniques, and Procedures (TTPs) which may be used during an attack. This covers all the kill-chain phases as detailed in the MITRE ATT&CK framework, as well as tools, vulnerabilities, and identified courses of action which can be implemented to block these techniques.
Observables
Click the Observables tab to display all the technical observables which may have been seen
during an attack, such as infrastructure or file hashes.
The goal is to create a comprehensive tool allowing users to capitalize on technical information (such as TTPs and observables) and non-technical information (such as suggested attribution, victimology, etc.) while linking each piece of information to its primary source (a report, a MISP event, etc.).
All observables are linked to threats with all the information analysts need to fully understand the situation: the role played by the observable with regard to the threat, the source of the information, and the malicious-behavior scoring.
Reports
This tab contains all the reports which have been uploaded to the platform. They are the starting point for processing the data inside the reports.
Entities
This tab contains all information organized according to the identified entities, which can be
either Sectors, Regions, Cities, Organizations, or Persons, targeted by an attack or involved
in it. Lists of entities can be synchronized from the repository through the OpenCTI
connector or can be created internally.
Explore
This tab is a bit specific, as it constitutes a workspace from which the user can automatically generate graphs, timelines, charts, and tables from the data previously processed. This can help compare, for example, the victimology of different threats, or the timelines of attacks.
OpenCTI allows analysts to easily visualize any entity and its relationships. Multiple views are
available as well as an analytics system based on dynamic widgets. For instance, users are
able to compare the victimology of two different intrusion sets.
Connectors
In this tab, you can manage the different connectors which are used to upload data to the
platform.
Settings
In this tab, you can change the parameters, visualize all users, create or manage groups,
create or manage tagging (by default, the Traffic Light Protocol is implemented, but you can
add your own tagging) and manage the kill chain steps.
Appendix A: Change Management for Passwords
Please change the passwords for the required services after installation.