
SIEMonster System Administrator’s Guide

If this guide is distributed with software that includes an end user agreement, this guide, as
well as the software described in it, is furnished under license and may be used or copied
only in accordance with the terms of such license. Except as permitted by any such license,
no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any
form or by any means, electronic, mechanical, recording, or otherwise, without the prior
written permission of SIEMonster. Please note that the content in this guide is protected
under copyright law even if it is not distributed with software that includes an end user
license agreement.

The content of this guide is furnished for informational use only, is subject to change without
notice, and should not be construed as a commitment by SIEMonster. SIEMonster assumes
no responsibility or liability for any errors or inaccuracies that may appear in the
informational content contained in this guide.

Please remember that existing artwork or images that you may want to include in your
project may be protected under copyright law. The unauthorized incorporation of such
material into your new work could be a violation of the rights of the copyright owner. Please
be sure to obtain any permission required from the copyright owner.

Any references to company names in sample templates are for demonstration purposes only
and are not intended to refer to any actual organization.

1 Preface ......................................................................................................................... 2
1.1 What is SIEMonster ............................................................................................................................. 3

2 Introduction to SIEMonster Community Edition .................................................... 4


2.1 Scope........................................................................................................................................................ 5
2.2 Audience ................................................................................................................................................. 5
2.3 SIEMonster Community Edition Build Overview ...................................................................... 6
2.4 SIEMonster Community Edition Portal Front End .................................................................... 7

3 Infrastructure .............................................................................................................. 8
3.1 Operating System ................................................................................................................................ 8
3.2 Hardware/Virtual SPECS .................................................................................................................... 8
3.3 Networking ............................................................................................................................................ 8
3.4 Open Ports ............................................................................................................................................. 8

4 Application Components ......................................................................................... 9
4.1 Docker ...................................................................................................................................................... 9
4.2 SIEMonster App .................................................................................................................................... 9
4.3 Open Distro............................................................................................................................................ 9
4.4 The Hive ............................................................................................................................................... 10
4.5 Cortex .................................................................................................................................................... 11
4.6 MITRE Att&CK .................................................................................................................................... 12
4.7 MISP Framework ............................................................................................................................... 13
4.8 NiFi ......................................................................................................................................................... 13
4.9 Patrowl .................................................................................................................................................. 14
4.10 OpenCTI ............................................................................................................................................. 15
4.11 Alerting............................................................................................................................................... 16
4.12 Message Queuing - Kafka ........................................................................................................... 17
4.13 Performance ..................................................................................................................................... 18
4.14 Reporting........................................................................................................................................... 19
4.15 Wazuh ................................................................................................................................................. 20
4.16 Suricata............................................................................................................................................... 21
4.17 DNS Settings .................................................................................................................................... 22
4.18 Endpoint Setup................................................................................................................................ 23
4.19 Suggestions ...................................................................................................................................... 23

5 Installation ................................................................................................................25
5.1 Download ............................................................................................................................................ 25
5.2 Requirements ..................................................................................................................................... 25
5.3 VMware Workstation ....................................................................................................................... 25
5.4 Oracle VirtualBox .............................................................................................................................. 26
5.5 ESXi ........................................................................................................................................................ 27
5.6 SIEMonster First-Time Start-Up ..................................................................................... 29
5.6.1 DHCP IP Address ......................................................................................................... 29
5.6.2 Static IP Address .......................................................................................................... 29
5.7 DNS Settings ...................................................................................................................................... 30
5.7.1 First Time Configuration ........................................................................................... 31
5.8 Demo Data .......................................................................................................................................... 32
5.9 Open ports .......................................................................................................................................... 33
5.10 Client Setup ...................................................................................................................................... 34
5.10.1 Microsoft Windows .................................................................................................. 34
5.10.2 Linux Machines .......................................................................................................... 34
5.10.3 Apple Mac ................................................................................................................... 34

6 Installing Agents ......................................................................................................36


6.1 Endpoint Guide .................................................................................................................................. 36
6.2 Winlogbeat Agents for Microsoft Hosts................................................................................... 36
6.3 Filebeat SIEM agents for WINDOWS Linux or Apache........................................................ 39
6.4 OSSEC HIDS Agents ......................................................................................................................... 41
6.5 Server-Side Install ............................................................................................................................. 41
6.6 Client-Side Install .............................................................................................................................. 42
6.7 SYSLOG ................................................................................................................................................. 46
6.8 Inputs .................................................................................................................................................... 46
6.9 Sysmon/Windows MITRE ATT&CK™ Integration .................................................................. 49

7 Managing Users and Roles in SIEMonster .............................................................52


7.1 Roles in SIEMonster ......................................................................................................................... 52
7.1.1 Exercise: Create a User in SIEMonster .................................................................. 53
7.1.2 Exercise: Create Roles in SIEMonster ................................................................... 54
7.2 Mailgun ................................................................................................................................................ 56
7.3 LDAP Integration ............................................................................................................................... 57

7.4 My Profile............................................................................................................................................. 58
7.5 Superadmin Panel............................................................................................................................. 59

8 Dashboards ...............................................................................................................60
8.1 Discover ................................................................................................................................................ 60
8.1.1 Exercise: Discover the Data ...................................................................................... 65
8.2 Visualize................................................................................................................................................ 67
8.2.1 Aggregations ................................................................................................................ 67
8.2.2 Visualizations ................................................................................................................ 73
8.2.3 Exercise: Visualize the Data...................................................................................... 88
8.3 Dashboard ........................................................................................................................................... 89
8.3.1 Exercise: Creating a new Dashboard .................................................................... 90
8.4 Alerting ................................................................................................................................................. 92
8.4.1 Monitor ........................................................................................................................... 92
8.4.2 Exercise : Creating Monitors .................................................................................... 92
8.4.3 Alerting: Security Roles ............................................................................................. 97
8.4.4 Exercise: View and Acknowledge Alerts .............................................................. 97
8.4.5 Exercise: Create, Update, and Delete Monitors and Destinations ............. 98
8.4.6 Exercise: Read Only ...................................................................................... 99
8.5 Wazuh ................................................................................................................................................. 100
8.5.1 Wazuh: Security Events .......................................................................................... 100
8.5.2 Wazuh: PCI DSS......................................................................................................... 102
8.5.3 Wazuh: OSSEC ........................................................................................................... 103
8.5.4 Wazuh: GDPR ............................................................................................................. 103
8.5.5 Wazuh: Ruleset .......................................................................................................... 108
8.5.6 Wazuh: Dev Tools ..................................................................................................... 108
8.6 Dev Tools ........................................................................................................................................... 109
8.6.1 Exercise : Dev Tools ................................................................................................. 110
8.7 Management .................................................................................................................................... 113
8.7.1 Index Patterns ............................................................................................................ 113
8.7.2 Exercise : Creating an Index Pattern to Connect to Elasticsearch ........... 113
8.7.3 Managing Saved Objects ...................................................................................... 115
8.8 Security ............................................................................................................................................... 116
8.8.1 Permissions ................................................................................................................. 116

8.8.2 Action Groups............................................................................................................ 116
8.8.3 Roles.............................................................................................................................. 117
8.8.4 Exercise : Creating Role .......................................................................................... 117
8.8.5 Backend Roles ........................................................................................................... 117
8.8.6 Users ............................................................................................................................. 117
8.8.7 Exercise : Creating a User ...................................................................................... 117
8.8.9 Exercise : Role Mapping ......................................................................................... 118

9 Incident Response ................................................................................................. 119


9.1 Collaborate ........................................................................................................................................ 120
9.2 Elaborate ............................................................................................................................................ 120
9.3 Analyze ............................................................................................................................................... 121
9.4 Respond ............................................................................................................................................. 121
9.5 Exercise: Adding a User ................................................................................................................ 122
9.6 Exercise: Creating Cases from Alerts ........................................................................................ 122
9.7 Case Management ......................................................................................................................... 123
9.8 Exercise: Creating Cases ............................................................................................................... 123
9.9 Case Template.................................................................................................................................. 127
9.10 Dashboards..................................................................................................................................... 128
9.11 Exercise: Creating a Dashboard ............................................................................................... 128

10 Analyzers .............................................................................................................. 132


10.1 Cortex and TheHive ..................................................................................................................... 133
10.2 Cortex Super Administrator ...................................................................................................... 133
10.3 Cortex: Create an Organization ............................................................................................... 134
10.4 Cortex: Create a User .................................................................................................................. 134
10.5 User Roles ....................................................................................................................................... 135
10.6 Cortex Analyzer and Responder ............................................................................................. 136
10.7 Analyzer Management ............................................................................................................... 136
10.8 Job History ...................................................................................................................................... 138

11 Threat Intel........................................................................................................... 140


11.1 Feeds ................................................................................................................................................. 141
11.1.1 Adding Feeds .......................................................................................................... 141
11.2 Events................................................................................................................................................ 143

11.2.1 Adding an Event ..................................................................................................... 144
11.2.2 Add Attributes to the Event ............................................................................... 146
11.2.3 Add Attachment to the Event............................................................................ 147
11.3 List Attributes ................................................................................................................................. 149
11.4 Search Attributes .......................................................................................................................... 150

12 Metrics .................................................................................................................. 151


12.1 Metrics: Data Source ................................................................................................................... 151
12.2 Metrics: Organization.................................................................................................................. 153
12.3 Metrics: Users................................................................................................................................. 153
12.3.1 Exercise: Creating a New User ........................................................................... 153
12.4 Metrics: Dashboard...................................................................................................................... 154
12.4.1 Exercise: Building a New Dashboard .............................................................. 156
12.5 Metrics: Row ................................................................................................................................... 157
12.6 Metrics: Panel................................................................................................................................. 158
12.7 Metrics: Query Editor .................................................................................................................. 159

13 Alerts..................................................................................................................... 160
13.1 QuickStart........................................................................................................................................ 160
13.2 Exercise: Praeco - Creating a new rule.................................................................................. 160
13.3 Praeco: Configuration ................................................................................................................. 163
13.4 Praeco: Upgrading ....................................................................................................................... 164
13.5 Praeco: Scenarios.......................................................................................................................... 165

14 Reporting ............................................................................................................. 167


14.1 Reporting: Configuration ........................................................................................................... 168
14.2 Module Settings............................................................................................................................ 168
14.3 Dynamic Settings .......................................................................................................................... 169
14.4 Pages................................................................................................................................................. 169
14.5 Scheduled Reports ....................................................................................................................... 170
14.5.1 Exercise: Schedule a Report ............................................................................... 170
14.6 Reports History ............................................................................................................................. 171

15 Flow Processors ................................................................................................... 172


15.1 Overview of NiFi Features ......................................................................................................... 173
Flow Management .............................................................................................................. 173

Ease of Use ............................................................................................................................ 174
Security ..................................................................................................................... 174
15.2 NiFi User Interface ........................................................................................................................ 175
15.3 Exercise: Building a Dataflow ................................................................................................... 183
15.3.1 Adding a Processor ............................................................................................... 183
15.3.2 Configuring a Processor ...................................................................................... 184
15.3.3 Connecting Processors ........................................................................................ 185
15.3.4 Starting and Stopping a Processor .................................................................. 188

16 Audit Discovery ................................................................................................... 189


16.1 PatrOwl Use Cases ....................................................................................................................... 189
16.2 PatrowlManager ........................................................................................................................... 190
16.2 PatrowlManager Assets .............................................................................................................. 190
16.2.1 PatrowlManager: Add a New Asset ................................................................. 191
16.3 PatrowlManager Engine Management ................................................................................. 192
16.3.1 PatrowlManager: Add a New Scan Engine ................................................... 193
16.4 PatrowlManager Scan Definition ............................................................................................ 194
16.4.1 PatrowlManager: Add a New Scan .................................................................. 194
16.5 PatrowlManager Alerting Rules .............................................................................................. 195

17 Threat Modelling ................................................................................................. 196


17.1 Threat Modelling: Features ....................................................................................................... 196
17.2 Threat Modelling: User Interface ............................................................................................ 197

Appendix A: Change Management for Passwords ................................................. 202

SIEMonster – High Level Design

1 Preface
In 2015, one of our corporate clients told us of their frustrations with the exorbitant licensing
costs of commercial Security Information and Events Management (SIEM) products. The
customer light heartedly asked whether we could build them an open source SIEM to get
rid of these annual license fees. We thought that was a great idea and set out so to develop
a SIEM product for Managed Security Service Providers (MSSP’s) and Security
Professionals. This product is called SIEMonster.

SIEMonster Version 1 was released in late April 2016, followed by a commercial release in
November 2016. The release has been an astounding success, with over 100,000 downloads
of the product. We have helped individuals and companies integrate SIEMonster into small,
medium and extra-large organizations all around the world. With the help of the
community and a team of developers, SIEMonster has been working hard since the Version 1
release, incorporating what the community wanted to see in a SIEM as well as things we
wanted to see in the next release.

Along the way we have signed up MSSPs from around the world who have contributed to
the rollout of SIEMonster; in return, they have assisted us with rollout scripts, ideas and
things we had not even considered. We are now proud to release the latest Version 4.0 of
SIEMonster.

Community Edition: A single server ideal for 1-100 endpoints. SIEMonster Community
Edition is a free version of SIEMonster running on CoreOS; it is fully featured, with community
support.

Professional Edition: A single server that runs locally or in the Cloud and is ideal for 1-200
endpoints. SIEMonster Starter Edition is available as a 30-day trial and can be converted into
an annual subscription. This is perfect for smaller organizations that require professional
support; the product scales to multiple servers, increasing the endpoint count to 1,000.

Enterprise: A multi-server Cloud or local deployment that scales from 1 to 100,000+ endpoints
and can ingest from 1 to 500,000 events per second using managed Kubernetes and Kafka.

MSSP: A multi-tenancy edition of SIEMonster, installed in AWS or locally, for select customers
and Managed Security Service Providers.

1.1 What is SIEMonster
Powerful open source security tools are increasingly being released to help security
professionals perform automated tasks, but they are difficult to install, maintain and support,
and nearly impossible to integrate with existing SIEM solutions.

SIEMonster is a collection of the best open source security tools, combined with our own
development as professional hackers, to provide a SIEM for everyone. We showcase the
latest and greatest tools for security professionals. Not only that, we have built the
platform on Kubernetes (K8s) with managed ingestion and can reach 500K EPS in our cloud offering. We
offer white-label solutions and local installation on ESXi or bare metal at an affordable price.

One of the most important features is our adaptability with open source modules. We can
bring in new cutting-edge modules to showcase to our customers, giving the open source
authors a chance to showcase their products. We do not just bring these modules in; we
integrate them with all the existing components. TheHive, an incident response tool, is free
and open source, but on its own it is a standalone system. Using SIEMonster you can use
TheHive to report on an incident, assign the task to someone to fix within the software, and
still have everything, case management, logs and data, under one roof. This is a unique
offering and defines who we are.

SIEMonster has integrated Wazuh, NiFi, Cortex and TheHive modules, among others, into
this latest build. We have done all the hard work for you and integrated them into the SIEMonster
suite. Now you can have a SIEM with incident response, advanced correlation with threat
intelligence, and active response all working together.

2 Introduction to SIEMonster Community Edition
SIEMonster Community Edition Version 4 is built on the best supportable components and
custom development from a wish list of the SIEMonster community. This training document
will cover the architecture and the features that make up SIEMonster, so that all security
professionals can run a SIEM in their organizations with no budget.

SIEMonster Community Edition is built on CoreOS running Docker. The product is available
for VMware, ESXi, Hyper-V and Bare Metal.

Some of these features include:

• Open Distro Elasticsearch

• Real-time alerts, not delayed alerts, using Apache NiFi

• NIDS function with Suricata

• Apache Kafka message ingest and queuing

• TheHive 4-in-1 Incident Response

• Cortex Threat Analysis

• MISP Framework

• MITRE ATT&CK

• PatrOwl asset-based risk and vulnerability analysis

• OpenCTI Threat Intelligence

• Wazuh HIDS system with Kibana plugin and OpenSCAP options & simplified agent
registration process

• Built out of the box, ready to go

• All new dashboard with options for 2FA, site administration with user role-based access and
faster load times

• Elastalert, alerting on anomalies, spikes, or other patterns within Elasticsearch.

• Prometheus metric exporters with Prometheus AlertManager for system monitoring.

• Data Correlation UI, community rulesets and dashboards, and community open source free
plugins that make up the SIEM

• Incorporate your existing Vulnerability Scans into the Dashboard (OpenVAS, Nexpose,
Metasploit, Burp, Nessus, etc.)

SIEMonster welcomes you to try out our fully functional SIEM solution. If you wish to
purchase the product with support, please contact sales at https://www.siemonster.com.

2.1 Scope
This document covers all the software and hardware infrastructure components of the
Security Operations Centre SIEMonster Community Edition product, as well as the operations
guide, including how-to guides.

Training videos are available on the SIEMonster website: https://www.siemonster.com

2.2 Audience
This document is intended for technical representatives of companies and SOC owners, as well
as security analysts and professionals. The audience of this document is expected to have
a thorough knowledge of security, software and server architecture.

The relevant parts are included here for convenience and may of course be subject to
change. They will be updated when notification is received from the relevant owners.

2.3 SIEMonster Community Edition Build Overview
Below is a high-level diagram of the Infrastructure components.

2.4 SIEMonster Community Edition Portal Front End

3 Infrastructure
This section contains the operating system, storage and RAM requirements, networking, and
open ports for ingestion for the Community Edition.

3.1 Operating System


Latest stable CoreOS.

3.2 Hardware/Virtual SPECS


Hardware

CPU: 8 vCPU
RAM: 32 GB
Storage: 1 TB HDD

3.3 Networking
• DHCP enabled for initial system load; manual IP setup after install

• Options available for static network and proxy configuration

3.4 Open Ports

External services open ports for administration:

Service            Ports
SIEMonster App     TCP 443
SSH (admin)        TCP 22

External services open ports for ingestion, i.e. what clients will send their data to SIEMonster:

Service                          Ports
Syslog                           TCP 514, UDP 514
Wazuh                            TCP 1514, 1515 and 55000
Kafka Receiver (Beats family)    TCP 9094

4 Application Components
This section contains the application components and descriptions of the build.

4.1 Docker
SIEMonster Community Edition is run on Docker. The Starter, Enterprise and MSSP editions
are run on Kubernetes for infinite and auto scalability.

4.2 SIEMonster App


This is the web application providing the core functionalities of the stack. The following
features are provided:

• 2FA authentication supporting Google and Microsoft

• Changeable themes

• Role management for role-based access control

• Customizable menus and dashboards

• User time-out and WebSocket options

• SMTP and Slack notifications for password retrieval and authentication failures

• Transparent pass-through authentication to Kibana

• LDAP integration supporting Unix and Microsoft AD LDAP servers

• Customizable news feed integration

• Automated backend database backup

4.3 Open Distro


Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting
system, enabling you to monitor your data and send notifications automatically to your
stakeholders. With an intuitive Kibana interface and powerful API, it is easy to set up and
manage alerts. Build specific alert conditions using Elasticsearch's query and scripting
capabilities. Alerts help teams reduce response times for operational and security events.

Open Distro for Elasticsearch protects your cluster by providing a comprehensive set of
advanced security features, including a number of authentication options (such as Active
Directory and OpenID), encryption in-flight, fine-grained access control, detailed audit
logging, advanced compliance features, and more.


Open Distro for Elasticsearch makes it easy for users who are already comfortable with SQL
to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems.
SQL offers more than 40 functions, data types, and commands including join support and
direct export to CSV.
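
As an illustration only, the following sketch shows how an SQL statement could be submitted to the Open Distro SQL endpoint, for example from the Kibana Dev Tools console described later in this guide. The index pattern wazuh-alerts-* is an assumed example and may differ in your deployment:

POST _opendistro/_sql
{
  "query": "SELECT * FROM wazuh-alerts-* LIMIT 10"
}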

4.4 The Hive


TheHive is utilized within the SIEMonster platform as an incident response/case
management system. It is meshed with Alerting, MISP, OpenCTI, PatrOwl and Cortex to
automate the process of incident creation. To make life simpler for SOCs, CSIRTs and CERTs,
all information pertaining to a security incident is presented for review. Whilst weighing up
and excluding false positives, the SOC team is given an indication of the next steps to take.

The mesh integration details within SIEMonster can be summarized as follows:

• Pre-configured alert templates can be easily customized to fit specific use cases. Triggered
alerts are sent directly to The Hive with all relevant data being extracted

• Automated import of MISP events and case creation

• Cortex Mailer Responder which allows you to e-mail the case information and IoCs

• Submission of observables from cases and alerts to third party IOC services

• Access to MISP extended events and health checking

• Case creation via OpenCTI analysis results

Cortex integration

TheHive uses Cortex to gain access to analyzers and responders:

• Analyzers can be launched against observables to get more details about a given
observable

• Responders can be launched against cases, tasks, observables, logs, and alerts to
execute an action

• One or multiple Cortex instances can be connected to TheHive

4.5 Cortex
Cortex solves two common problems frequently encountered by SOCs, CSIRTs and security
researchers in the course of threat intelligence, digital forensics and incident response:

• How to analyze observables they have collected, at scale, by querying a single tool instead
of several?

• How to actively respond to threats and interact with the constituency and other teams?

Cortex can analyze (and triage) observables at scale using more than 100 analyzers. You can
actively respond to threats and interact with your constituency and other parties thanks to
Cortex responders. Within the SIEMonster platform, Cortex is pre-integrated with TheHive
and MISP to get you up and running.

Analyzers and Responders are autonomous applications managed by and run through the
Cortex core engine. Analyzers allow analysts and security researchers to analyze observables
and IOCs, such as domain names, IP addresses, hashes, files and URLs, at scale.

4.6 MITRE Att&CK


MITRE ATT&CK™ is a globally accessible knowledge base of adversary tactics and techniques
based on real-world observations. The ATT&CK knowledge base is used as a foundation for
the development of specific threat models and methodologies in the private sector, in
government, and in the cybersecurity product and service community.

Integration within the SIEMonster platform is multifold.

• A custom XML configuration is set up with Windows agents to translate process activity into
MITRE ATT&CK™ vectors so that specific events can be easily queried by the SOC analyst. This
also applies to alerts based on these types of events, of which there are many pre-canned
templates out of the box

• Dashboards are also provided for forensic analysis of MITRE ATT&CK™ correlations

• Integration with MISP events

• Integration with OpenCTI relationship analysis

4.7 MISP Framework
The Malware Information Sharing Platform (MISP) is a threat intelligence platform for
sharing, storing and correlating Indicators of Compromise of targeted attacks, threat
intelligence, financial fraud information, vulnerability information or even counter-terrorism
information. MISP is used today in multiple organizations not only to store, share and
collaborate on cyber security indicators and malware analysis, but also to use the IoCs and
information to detect and prevent attacks, fraud or threats against ICT infrastructures,
organizations or people.

MISP integration within the SIEMonster platform is preconfigured for Cortex, OpenCTI and
TheHive. Feeds for threat intel can be configured for many of the available free sources as well
as from subscription sources if required.

With the focus on automation and standards, MISP provides you with a powerful REST API,
extensibility (via misp-modules) or additional libraries such as PyMISP.
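
As a rough illustration of the PyMISP library mentioned above (not a SIEMonster-specific procedure), the sketch below searches a MISP instance for destination IP attributes. The URL matches the DNS entries used elsewhere in this guide, and the API key is a placeholder you would replace with your own:

# pip install pymisp
from pymisp import PyMISP

# Connect to the MISP instance (self-signed certificates are common on internal appliances)
misp = PyMISP("https://misp.siemonster.internal.com", "YOUR_API_KEY", ssl=False)

# Search for destination IP attributes that are flagged for IDS export
results = misp.search(controller="attributes", type_attribute="ip-dst", to_ids=1)
print(results)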

4.8 NiFi
NiFi was built to automate the flow of data between systems. While the term 'dataflow' is
used in a variety of contexts, we use it here to mean the automated and managed flow of
information between systems. This problem space has been around ever since enterprises
had more than one system, where some of the systems created data and some of the
systems consumed data. The problems and solution patterns that emerged have been
discussed and articulated extensively. A comprehensive and readily consumed form is found
in the Enterprise Integration Patterns.

Within the SIEMonster platform NiFi is used to ingest incoming event log data from the
Kafka message queue. Various templates have been provided for different endpoint types

including but not limited to Active Directory, common firewall and VPN devices, HIDS agents
and IDS feeds.

All data flow is visualized, allowing the analyst to view log flows and metrics in real time.
Templates are also provided to assist in adding new sources, with debug options and data
sinks, before going into production.

4.9 Patrowl
PatrOwl is an advanced platform for orchestrating Security Operations like Penetration
testing, Vulnerability Assessment, Code review, Compliance checks, Cyber-Threat
Intelligence / Hunting and SOC & DFIR Operations, including:

• Full-stack security overview (IP to Data)

• Define threat intelligence and vulnerability assessment scans policies

• Orchestrate scans using tailor-made engines

• Collect & aggregate findings

• Contextualize, track and prioritize findings

• Check remediation effectiveness

Correlate asset risk value against vulnerabilities, bringing business intelligence and SIEM into
closer alignment. Within the SIEMonster platform, Patrowl is integrated with Cortex and
TheHive. Assets for assessment can be added singly or in bulk using the asset import feature.

Results are displayed in a Dashboard and with TheHive integration, new alerts can be
configured for High or Critical vulnerabilities as well as asset risk weighting correlation.

4.10 OpenCTI
OpenCTI is an open source platform allowing organizations to manage their cyber threat
intelligence knowledge and observables. It has been created in order to structure, store,
organize and visualize technical and non-technical information about cyber threats.

The structuring of the data is performed using a knowledge schema based on the STIX2
standards. It has been designed as a modern web application including a GraphQL API and
a UX-oriented frontend. Within the SIEMonster platform, OpenCTI is integrated with MISP,
TheHive and MITRE ATT&CK, and also has a connector for CVE information. The initial
dashboard will begin immediate import of MISP observables for analysis.

4.11 Alerting
Alerting is provided by the OpenDistro Kibana interface, by Elastalert with a GUI front-end, and
via Apache NiFi, depending on the use case. 30+ pre-canned alert types are provided to get
you up and running. Typical queries include those for anomalies, aggregations and pattern
matching, along with threat intel/MITRE correlation, Indicators of Compromise (IOCs), NIDS
signature matching and asset vulnerabilities. Alerts can be configured to automatically
create tickets in the TheHive Incident Response module and to notify stakeholders via most
common webhooks or direct email.

Many pre-canned alerts are available in a disabled state to allow you to quickly get up and
running. We also provide a webhook-to-SMTP connector for Kibana alerts, not available as
standard, which permits the emailing of alerts.

Apache Nifi Alerts

Elastalert GUI
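
The rules shipped with the platform are managed through the Elastalert GUI shown above, but as a hedged illustration of the underlying Elastalert rule format, a simple frequency rule could look like the following. The index pattern, field name and destination address are assumptions to adapt to your environment:

name: Repeated failed Windows logons (example)
type: frequency
index: winlogbeat-*
num_events: 5
timeframe:
  minutes: 10
filter:
- query:
    query_string:
      query: "event_id: 4625"
alert:
- "email"
email:
- "soc@example.com"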

4.12 Message Queuing - Kafka


Apache Kafka is a publish/subscribe message queuing system that is utilized within
SIEMonster not only for its scalability but also for the following:

• Provides durable, fast and fault tolerant message streaming for handling real time data feeds.

• Compatible with Apache Nifi and the Elastic Beats family agents.

• Enables custom configuration per endpoint group by using topic declarations.

• Improving data governance and guaranteed delivery

• Options for in flight stream data extraction and new stream creation dependent on specific
triggers.

• Ability to set data retention periods per use case in case of upstream processing back
pressure.

Incoming events are stored initially in Apache Kafka before being processed in Nifi and then
sent to Elasticsearch. This provides a buffer in case of bursts in activity while also providing
an endpoint by topic management system with options for real time alert stream creation.
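
To illustrate the idea of per-endpoint-group topics, the following is a generic Apache Kafka command, not a SIEMonster-specific script, that would create a dedicated topic. The broker address, topic name and partition/replication counts are assumptions; in practice the supplied NiFi templates and Beats configuration may create topics for you:

kafka-topics.sh --create \
  --bootstrap-server kafka.siemonster.internal.com:9094 \
  --topic windows-endpoints \
  --partitions 3 \
  --replication-factor 1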

4.13 Performance
Performance and alerting metrics are visualized and actioned via Grafana, Prometheus,
Alertmanager and Cerebro, as well as Metricbeat with preloaded dashboards. Incoming log event
rates can be monitored, as well as container stats, CPU, load and disk space. Slack
endpoints can be easily set up to receive alerts for sudden spikes in activity, CPU at 90%+ or
10% disk space remaining.
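
As a sketch of how such a threshold could be expressed as a Prometheus alerting rule, assuming a host exporter exposes the standard node_filesystem_* metrics (metric names can vary between exporter versions):

groups:
- name: siemonster-host
  rules:
  - alert: LowDiskSpace
    # Fire when less than 10% of the root filesystem remains for 5 minutes
    expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Less than 10% disk space remaining on {{ $labels.instance }}"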

Grafana Elasticsearch Metrics

Metricbeat Metrics

4.14 Reporting
SIEMonster's internal reporting tool provides comprehensive automated reporting
straight to your inbox. It allows automated reports to be generated and sent to the
appropriate individual on any event, for example when McAfee Anti-Virus detected a virus but
did not clean it, and sends these follow-up items in a report. Reports are available in PDF or XLS
format, including Dashboard snapshots for visualization.

Slack endpoint sample

4.15 Wazuh
Wazuh is a free and open source platform for threat detection, security monitoring, incident
response and regulatory compliance. It can be used to monitor endpoints, cloud services
and containers, and to aggregate and analyze data from external sources.

Wazuh is used to collect, aggregate, index and analyze security data, helping organizations
detect intrusions, threats and behavioral anomalies.

As cyber threats are becoming more sophisticated, real-time monitoring and security
analysis are needed for fast threat detection and remediation. That is why our light-weight
agent provides the necessary monitoring and response capabilities, while our server
component provides the security intelligence and performs data analysis.

Wazuh agents scan the monitored systems looking for malware, rootkits and suspicious
anomalies. They can detect hidden files, cloaked processes or unregistered network listeners,
as well as inconsistencies in system call responses. In addition to agent capabilities, the
server component uses a signature-based approach to intrusion detection, using its regular
expression engine to analyze collected log data and look for indicators of compromise.

Wazuh agents pull software inventory data and send this information to the server, where it
is correlated with continuously updated CVE (Common Vulnerabilities and Exposures)
databases, in order to identify well-known vulnerable software. Automated vulnerability
assessment helps you find the weak spots in your critical assets and take corrective action
before attackers exploit them to sabotage your business or steal confidential data.

Wazuh is integrated into the Dashboards module of SIEMonster and there are also pre-
canned alerts configured.

4.16 Suricata
Suricata is an open source threat detection engine that was developed by the Open
Information Security Foundation (OISF). Suricata can act as an intrusion detection system
(IDS) or an intrusion prevention system (IPS), or be used for network security monitoring. It
was developed alongside the community to help simplify security processes. As a free and
robust tool, Suricata monitors network traffic using an extensive rule set and signature
language. Suricata also features Lua scripting support to monitor more complex threats.

The SIEMonster Community Edition provides a Suricata pipeline that performs packet
capture and analysis on the local network interface, acting as a host-based IDS. The resultant
data is then sent to Kafka before being ingested by Elasticsearch. The commercial
SIEMonster releases extend these capabilities in the form of network and cloud tabs and
multi-network interface monitoring.

Alerts can be easily configured for signature matches and there is also a dashboard provided
for further IDS analysis.

4.17 DNS Settings
Installation of SIEMonster within a local network requires some DNS settings to be set, either
in the client hosts file of the machine accessing the platform or by adding some entries into
the local DNS server. The client hosts file is typically located at:

‘C:\Windows\System32\drivers\etc\hosts’ on Windows, or ‘/etc/hosts’ on Mac/Unix systems

Using the IP address of the SIEMonster appliance (192.168.0.30 in this example), the entries
for the hosts file will be as follows:

192.168.0.30 siemonster.internal.com

192.168.0.30 webreporting.siemonster.internal.com

192.168.0.30 misp.siemonster.internal.com

192.168.0.30 cortex.siemonster.internal.com

192.168.0.30 sm-kibana.siemonster.internal.com

192.168.0.30 praeco.siemonster.internal.com

192.168.0.30 metrics.siemonster.internal.com

192.168.0.30 hive.siemonster.internal.com

192.168.0.30 nifi.siemonster.internal.com

192.168.0.30 patrowl.siemonster.internal.com

192.168.0.30 opencti.siemonster.internal.com

192.168.0.30 kafka.siemonster.internal.com

192.168.0.30 prometheus.siemonster.internal.com

192.168.0.30 alertmanager.siemonster.internal.com

192.168.0.30 cerebro.siemonster.internal.com


192.168.0.30 kafka-manager.siemonster.internal.com

Settings for a DNS server will be A record aliases. For example, if the appliance IP address is
192.168.0.30, then the settings will be:

192.168.0.30 siemonster.internal.com

192.168.0.30 *.siemonster.internal.com

4.18 Endpoint Setup
To collect logs from endpoints, we recommend the following agents are installed, using the ports listed below.

Service                                      Ports
Syslog                                       TCP 514, UDP 514
Wazuh (Microsoft Windows, Linux & Mac)       TCP 1514, 1515 and 55000
Kafka Receiver:
  Winlogbeat (Windows system logs)           TCP 9094
  Filebeat (Windows, Linux & Mac)            TCP 9094

Note: For Microsoft Windows users you will see both Winlogbeat and Filebeat listed here.
Winlogbeat is designed to collect standard Windows event logs, whereas Filebeat will collect logs
from applications such as Exchange, IIS and SQL. You will need to install both for multi-purpose
Windows servers.
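
As a hedged sketch of what a Beats-to-Kafka configuration could look like on a Windows endpoint, the hostname below follows the DNS entries used elsewhere in this guide, while the event log selection and topic name are assumptions; consult the Installing Agents chapter for the supported procedure:

# winlogbeat.yml (illustrative fragment only)
winlogbeat.event_logs:
  - name: Security
  - name: System
  - name: Application

output.kafka:
  # SIEMonster Kafka Receiver listens on TCP 9094 for the Beats family
  hosts: ["kafka.siemonster.internal.com:9094"]
  topic: "winlogbeat"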

4.19 Suggestions
Do you have any suggestions you would like to see in the next build?

Contact us at info@siemonster.com or use the SIEMonster Community Portal to add your
thoughts.

SIEMonster – Build Guide

5 Installation

5.1 Download
To download the Community Edition, please visit the SIEMonster website at
http://www.siemonster.com. Proceed to the download section, complete the form with all
required details and click Submit. An e-mail with the download link will be sent to you. The
download consists of a single file named “coreos.ova” and is supported by VMware ESXi,
VMware Workstation and Oracle VirtualBox.

Currently SIEMonster does not support Microsoft Hyper-V.

5.2 Requirements
Please note that this pre-built virtual machine has been configured to run with the following
hardware:

Hardware

CPU: 8 vCPU
RAM: 32 GB
Storage: 1 TB HDD
NIC: 1 x virtual NIC

5.3 VMware Workstation


To import the pre-built image, please perform the following action:

1. Start VMware Workstation

2. Click the “File” Menu item

3. Click “Open” and browse to the folder location where you saved the download described in
Section 5.1.

4. Select the “ova” file and Click “Open”

5. In the presented dialog box, specify the name of the virtual machine and the location where
you would like it imported to.

6. Click “Import” and wait for the process to complete

7. Power on the virtual machine

8. Once booted, a pre-determined sequence of automated tasks will be performed that requires
no input.

9. Open a console for the virtual machine and capture the IP address that is displayed on the
screen as demonstrated below.

5.4 Oracle VirtualBox


To import the pre-built image, please perform the following action:

1. Start Oracle VirtualBox

2. Click “File” and then Click “Import Appliance”

3. Browse to where the “ova” file was downloaded, Select the file and Click “Next”

4. Leave the default hardware settings as is and specify a location for the virtual machine to be
imported to and Click “Import”

5. Wait for the process to complete

6. Power on the virtual machine

7. Once booted, a pre-determined sequence of automated tasks will be performed that requires
no input.

8. Open a console for the virtual machine and capture the IP address that is displayed on the
screen as demonstrated below.

5.5 ESXi
To import the pre-built image, please perform the following action:

1. Log into your ESXi instance web console

2. Click “Create/Register VM”

3. Select “Deploy a virtual machine from an OVF or OVA file” and Click “Next”

4. Specify the name you wish to use for the virtual machine and drag the OVA into the
drag/drop box indicated by the red arrow and Click “Next”. NOTE: On some instances of ESXi,
using a name with spaces and/or non-alphanumeric characters can cause the deployment to
fail. Please ensure to use a simplified name for the installation.

5. Select the Datastore where you would like the virtual machine stored and Click “Next”.

6. Specify the deployment options “Thin” or “Thick”* disk and Select “Power on automatically”
if you wish to do so and Click “Next”. NOTE: The Community Edition is provided with 1
Terabyte of disk space allocated. Should you choose to deploy it as “Thick” provisioned please
ensure that there is sufficient disk space in the environment.

7. You will be presented with a “Ready to complete” screen with a summary of the deployment
about to be performed. As indicated in this window, do not refresh your browser as this will
interrupt the process. Proceed by clicking “Finish”

* Thin provisioning stores the disk in the smallest size possible, only consuming what has been
stored. Thick provisioning allocates all disk space upfront.

8. A progress indicator will appear in the recent tasks window at the bottom of the interface
indicating the progress of the import, please wait for this to complete.

9. If you did not Select “Power on Automatically” as part of the deployment process, please
proceed to power on the virtual machine.

10. Once booted, a pre-determined sequence of automated tasks will be performed that requires
no input.

11. Open a console for the virtual machine and capture the IP address that is displayed on the
screen as demonstrated below.

12. Please proceed to the section with the heading “SIEMonster First-Time Start-Up”.

5.6 SIEMonster First-Time Start-Up


5.6.1 DHCP IP Address
Once the initial boot-up is completed, the IP address of the SIEMonster host is displayed on
the console and can be captured from there. No further network configuration is required to
remain on DHCP-configured addressing.

5.6.2 Static IP Address


If a static IP address is required, please perform the following actions:

• Login on the console of the virtual machine

• Perform the following:

• Type ifconfig ens33 and Press [Enter], confirm that the IP address presented matches the IP
address that was displayed on the console.

• Type sudo vim /etc/systemd/network/static.network and Press [Enter]

• Type the network configuration as per the example sketched below, exchanging the Address,
Gateway and DNS details with those of your network. Note: These entries are case sensitive
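The following is a minimal sketch of what /etc/systemd/network/static.network might contain,
assuming the interface name ens33 and the example addressing used elsewhere in this guide;
substitute the Address, Gateway and DNS values for your own network:

[Match]
Name=ens33

[Network]
Address=192.168.0.30/24
Gateway=192.168.0.1
DNS=192.168.0.1

Note that the Address value must include the network prefix length (for example /24).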

• Press [Esc] then Type :wq and Press [Enter].

• Type sudo mv /etc/systemd/network/20-initial-config.network /home/deploy/ and Press
[Enter].

• Type sudo systemctl restart systemd-networkd and Press [Enter].

• Type ifconfig ens33 and Press [Enter]. Verify that the IP displayed matches the
configuration that was typed in the preceding steps.

5.7 DNS Settings
Installation of SIEMonster within a local network requires some DNS settings to be set either
in the client hosts file of the machine accessing the platform or by adding some entries into
the local DNS server. The client host file will be typically located in:

‘C:\Windows\System32\drivers\etc\hosts’ on Windows

‘/etc/hosts’ on Mac/Unix systems

Using the IP address of the SIEMonster appliance, the entries for the hosts file will be as
follows (using 192.168.0.30 as an example):

192.168.0.30 siemonster.internal.com

192.168.0.30 webreporting.siemonster.internal.com

192.168.0.30 misp.siemonster.internal.com

192.168.0.30 cortex.siemonster.internal.com

192.168.0.30 sm-kibana.siemonster.internal.com

192.168.0.30 praeco.siemonster.internal.com

192.168.0.30 metrics.siemonster.internal.com

192.168.0.30 hive.siemonster.internal.com

192.168.0.30 nifi.siemonster.internal.com

192.168.0.30 patrowl.siemonster.internal.com

192.168.0.30 opencti.siemonster.internal.com

192.168.0.30 kafka.siemonster.internal.com

192.168.0.30 prometheus.siemonster.internal.com

192.168.0.30 alertmanager.siemonster.internal.com

192.168.0.30 cerebro.siemonster.internal.com

192.168.0.30 kafka-manager.siemonster.internal.com

192.168.0.30 comrade.siemonster.internal.com

Settings for a DNS server will be A record aliases; for example, if the appliance IP address is
192.168.0.30 then the settings will be:

192.168.0.30 siemonster.internal.com

192.168.0.30 *.siemonster.internal.com

5.7.1 First Time Configuration
Once the preceding steps are complete (identifying or configuring the IP address and updating
the hosts entries on the workstation/server that will be used to configure and maintain the
environment), please proceed with the following steps:

• Open a Chrome/Firefox/Brave browser tab (Internet Explorer and Edge are not recommended)

• Specify “https://siemonster.internal.com/” in the address bar and Press [Enter]

• Specify the administrator e-mail address and password for the platform and Click Sign in

Optional configuration:

Should a proxy server be required please toggle the switch and specify the proxy details as
indicated.

In case of an offline activation requirement, such as an air-gapped solution or non-internet-
capable devices, please perform the following:

• Toggle the “Offline Activation” Switch

• Click “Create an Activation Request”

• Click “Download Request File”

• Add the downloaded file to a ZIP archive and e-mail it to support@siemonster.com with the
subject Offline Activation. A response file will be e-mailed back to the originating e-mail
address.

• Click Upload an Activation Response

• Click in the white box labeled Upload Response File

• Select the file that was received from SIEMonster and Click Open

• Your product will now be activated.

This will conclude the installation and setup portion of the solution. The setup page will
automatically redirect to the login page where the credentials specified in the preceding
actions can be used to login.

5.8 Demo Data


SIEMonster runs its own Honeypot environment with a range of Firewalls, Web Servers and
internal Active Directory servers accessible to the public. This environment is built to provide
rich data for your demo SIEM environment. We have captured 24 hours of data and included
this in your SIEMonster Trial Application. After the SIEMonster platform is built, it can take
30 minutes for this data to be displayed in your dashboards. If you receive any errors on the
links to the Dashboards, the data is still being loaded into the system.

5.9 Open ports
External services open ports for administration:

Service | Ports
SIEMonster App | TCP 443
SSH (admin) | TCP 22

External services open ports for ingestion, i.e. what clients will send their data to SIEMonster:

Service | Ports
Syslog | TCP 514, UDP 514
Wazuh | TCP 1514, 1515 and 55000
Kafka Receiver (Beats family) | TCP 9094

5.10 Client Setup
To collect logs from endpoints we recommend the following. Configuration of these files
and settings can be found in the Operation section of this guide.

5.10.1 Microsoft Windows


• Winlogbeat for Windows events

https://artifacts.elastic.co/downloads/beats/winlogbeat/winlogbeat-oss-7.4.0-windows-
x86_64.zip

• Filebeat for Windows IIS, SQL, Exchange logs

https://www.elastic.co/downloads/past-releases/filebeat-oss-7-4-0

• Wazuh Agent

https://documentation.wazuh.com/3.9/installation-guide/packages-list/index.html

5.10.2 Linux Machines


• Filebeat

https://www.elastic.co/downloads/past-releases/filebeat-oss-7-4-0

• Wazuh Agent

https://documentation.wazuh.com/3.9/installation-guide/packages-list/index.html

5.10.3 Apple Mac


• Filebeat

https://www.elastic.co/downloads/past-releases/filebeat-oss-7-4-0

• Wazuh Agent

https://documentation.wazuh.com/3.9/installation-guide/packages-list/index.html

SIEMonster – Client Setup

6 Installing Agents
Now that SIEMonster is up and running, it is time to install some agents to get data into the
SIEM. You will need to install an agent on hosts that support agents, such as Windows and
Linux. For devices that do not support agents, you will need to forward syslogs to the SIEM.
To collect logs from endpoints we recommend the following. Configuration of these files and
settings can be found in this chapter of the guide.

6.1 Endpoint Guide

Operating System | Agent | Description | Port
Microsoft Windows | Winlogbeat | Windows events | TCP 9094
Microsoft Windows | Filebeat | IIS, SQL, Exchange etc. | TCP 9094
Microsoft Windows | Wazuh | Intrusion detection | TCP 1514, 1515, 55000
Linux | Filebeat | Linux & apps | TCP 9094
Linux | Wazuh | Intrusion detection | TCP 1514, 1515, 55000
Apple Mac | Filebeat | Mac & apps | TCP 9094
Apple Mac | Wazuh | Intrusion detection | TCP 1514, 1515, 55000
Any (agentless) | Syslog | Syslog events | TCP 514, UDP 514

6.2 Winlogbeat Agents for Microsoft Hosts


Winlogbeat is a log collector and forwarder designed for the Microsoft Windows Operating
System.

1. Download the software Winlogbeat directly from the vendor link below and install it

https://artifacts.elastic.co/downloads/beats/winlogbeat/winlogbeat-oss-7.4.0-
windows-x86_64.zip

2. Download the SIEMonster agent-pack, which contains additional modules. The zip
file contains the files you will need for your endpoint.

https://s3-us-west-2.amazonaws.com/agents.siemonster.com/agent-pack-v4-
fullyloaded.zip

SHA256
4f9e9a913afc0fb23692ac1fdf39494a57fdce4f74b97b910b4e6adbe9a031e6

3. Extract the contents of the zip file into C:\Program Files\Winlogbeat

4. Create a subfolder ‘pipelines’ within the Winlogbeat folder

5. From the agent-pack, extract the files with extension .js into the pipelines folder.

6. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and
select Run As Administrator). If you are running Windows XP, you may need to
download and install PowerShell.

7. Run the following commands to install Winlogbeat as a Windows service:

o PS C:\Users\Administrator> cd 'C:\Program Files\Winlogbeat'

o PS C:\Program Files\Winlogbeat> .\install-service-winlogbeat.ps1

o NOTE: On later versions of Windows, the PS shorthand no longer works. Please use
the full name powershell.

If script execution is disabled, then first use the following command from a standard
command prompt:

powershell -exec bypass

8. Connect to the SIEMonster platform with an SSH client using the credentials supplied
at the end of this document

9. Run the command “cat /volumes/kafka/kafka-ssl/ca/root-ca.pem”, this will output the
certificate data.

10. Select and copy the text displayed by the command, taking care to include the leading
and trailing hyphen (-----) lines and not to include any extra spaces.

11. Open a text editor on your platform, paste the text that was copied and save it to
c:\certs\rootCA.pem, creating the folder c:\certs if needed.

12. Edit the winlogbeat.yml file and ensure that it matches the supplied configuration
(a sketch is shown below). Ping the FQDN displayed in the hosts line to ensure it can
be resolved from the client. NOTE: the forward slashes in the certificate path are
intentional and should be kept.
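For orientation, a minimal sketch of the relevant winlogbeat.yml settings follows. The template
supplied in the agent-pack is authoritative; the Kafka FQDN, topic name and certificate path
shown here are illustrative assumptions and should be replaced with the values from the
supplied template and your own environment.

# event logs to collect (the agent-pack template defines the full list and pipeline scripts)
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

# ship events to the SIEMonster Kafka receiver over TLS
output.kafka:
  hosts: ["kafka.siemonster.internal.com:9094"]
  topic: "winlogbeat"    # placeholder topic; use the topic set in the supplied template
  ssl.certificate_authorities: ["c:/certs/rootCA.pem"]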

13. Check the syntax is correct in this file by running the following command:

C:\Program Files\Winlogbeat> winlogbeat.exe test config -c winlogbeat.yml -e

Test that Winlogbeat can connect to the SIEM

C:\Program Files\Winlogbeat> winlogbeat.exe -e -c winlogbeat.yml

A successful Kafka connection will be highlighted in the command output.

Use CTRL C to stop the test connection

14. Start the Winlogbeat service

C:\Program Files\Winlogbeat> Start-Service winlogbeat

The provided configuration will also log Sysmon events. See the Section Sysmon/
MITRE ATT&CK™ Integration. The agent pack includes the required ION-Storm
Sysmon dictionary.

6.3 Filebeat SIEM Agents for Windows, Linux or Apache


Filebeat is a lightweight, open source shipper for log file data. As the next-generation log
forwarder, Filebeat tails logs and quickly sends this information to Kafka for further parsing
in Nifi and then on to Elasticsearch for centralized storage and analysis. SIEMonster uses this
agent for collecting logs from Unix hosts, typically Apache web logs. The Filebeat agent for
your specific Linux flavor may be downloaded using the below vendor link.

https://www.elastic.co/downloads/past-releases/filebeat-oss-7-4-0

1. Download the agent-pack for additional configuration files

https://s3-us-west-2.amazonaws.com/agents.siemonster.com/agent-pack-v4-
fullyloaded.zip

SHA256
4f9e9a913afc0fb23692ac1fdf39494a57fdce4f74b97b910b4e6adbe9a031e6

2. In this example we have a Debian Operating System and have chosen the deb agent.

Transfer this zip file and the deb installer file via SCP to the target server and install
using the following command:

sudo dpkg -i filebeat-oss-7.4.0-amd64.deb

Once installed, the filebeat service will be inactive and the configuration file can be found at
/etc/filebeat/filebeat.yml. This configuration file must be modified to suit the logs being
monitored and the FQDN of the SIEMonster server. A sample is included in the agent-pack,
which should be uncompressed. To obtain the certificate needed please do the following:

1. Connect to the SIEMonster platform with an SSH client using the credentials supplied
at the end of this document

2. Run the command “cat /volumes/kafka/kafka-ssl/ca/root-ca.pem”, this will output the
certificate data.

3. Select and copy the text displayed by the command, taking care to include the leading
and trailing hyphen (-----) lines and not to include any extra spaces.

4. Open a text editor on your platform, paste the text that was copied and save it to
/etc/filebeat/root-ca.pem.

Secure the certificate as follows:

o sudo chown root:root /etc/filebeat/root-ca.pem

o sudo chmod 644 /etc/filebeat/root-ca.pem

3. Edit the Filebeat configuration file /etc/filebeat/filebeat.yml as follows. The first
element to change will be the ‘paths’ directive in the inputs (prospectors) section, as
sketched below.

For example, to modify this for Apache logs the path may be altered to:

/var/log/apache2/access.log

Filebeat path modification on remote Apache server
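For reference, a minimal sketch of /etc/filebeat/filebeat.yml for shipping Apache access logs
is shown below. The sample configuration included in the agent-pack is authoritative; the
Kafka FQDN and topic name here are illustrative assumptions only.

# read the Apache access log
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/access.log

# ship events to the SIEMonster Kafka receiver over TLS
output.kafka:
  hosts: ["kafka.siemonster.internal.com:9094"]
  topic: "filebeat"    # placeholder topic; use the topic set in the supplied sample
  ssl.certificate_authorities: ["/etc/filebeat/root-ca.pem"]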

4. Next ensure the client can resolve the FQDN of the SIEMonster server.

SIEMonster CE FQDN

5. Test the connection by navigating to the /etc/filebeat folder and running the
command:

filebeat -e -c filebeat.yml

Working connection

Use CTRL C to exit the test session.

6. Restart Filebeat with the command:

sudo service filebeat restart

6.4 OSSEC HIDS Agents


OSSEC-Wazuh HIDS agents may be installed as follows to report to the OSSEC/Wazuh
manager on the SIEMonster appliance. This is a great addition to the SIEM. For detailed
information on OSSEC-Wazuh have a look at the Wazuh reference manual:

https://documentation-dev.wazuh.com/current/index.html

OSSEC agents for Windows, Mac & Linux are installed via the OSSEC binary:

https://documentation.wazuh.com/3.9/installation-guide/packages-list/index.html

6.5 Server-Side Install


Manual installation process:

1. On the CE appliance, execute shell access on the Wazuh Container:

docker ps |grep wazuh

docker exec -it <containerID> bash

2. Run the following command:

/var/ossec/bin/manage_agents

Note: Using a PuTTY session from Windows to SIEMonster will allow easier copying and
pasting of the generated keys than using VMware tools and copy/pasting.

The following options will be available:

OSSEC HIDS Menu

• Choose ‘A’

Add a name for the agent and an IP address that should match the one the agent will
be connecting from, i.e. CALIFORNIADC01 192.168.0.100 (Note: if the agent hosts have
dynamic IP addresses then ‘any’ can be used instead of an IP address).

• Press ‘Y’

Setting up the OSSEC agent IP and name

Retrieve the agent key information by entering ‘E’ for extract and the ID for the agent. Copy
this key as it will be required for the remote agent install.

Example:

Agent key information for '002' is:

MDAxIFRlc3RBZ2V0biAxMTEuMTExLjExMS4xMTEgY2MxZjA1Y2UxNWQyNzEyNjdlMmE3MT
RlODI0MTA1YTgxNTM5ZDliN2U2ZDQ5MWYxYzBkOTU4MjRmNjU3ZmI2Zg==

6.6 Client-Side Install


To install the remote agent on a Windows machine, first download the agent install file:

https://packages.wazuh.com/3.x/windows/wazuh-agent-3.9.5-1.msi

SHA512Checksum:

3da92c3a0e8c5fde77810aa71a0b4a5c61fbea7d3a6fc39586e01c156f1fd1114830c09f68a51
8e2062c1933d28bd14ed8011139fa0f27a23ffad235b4482269

Note: The agent must be installed with administrator privileges.

Edit the ossec.conf file in the ossec-agent install location, adding the IP or FQDN of the
SIEMonster appliance and changing the protocol to TCP (a sketch is shown below):
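As a rough guide, the relevant block of ossec.conf (the same block applies to Windows and
Linux agents) will look similar to the following sketch, with the address replaced by the IP or
FQDN of your SIEMonster appliance; keep the rest of the file shipped with the agent unchanged:

<ossec_config>
  <client>
    <server>
      <!-- IP or FQDN of the SIEMonster appliance -->
      <address>siemonster.internal.com</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
  </client>
</ossec_config>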

Launch the agent and enter the IP address or FQDN of the CE appliance along with the key
previously presented.

Restart the agent as shown below.

SIEMonster CE FQDN and Key

Back on the SIEMonster appliance check that the agent has connected correctly, by checking
in the Wazuh Kibana application:

To install the remote agent on a Linux Debian based machine, follow the steps outlined here:

https://documentation.wazuh.com/3.9/installation-guide/installing-wazuh-
agent/wazuh_agent_deb.html

Next, import the key previously created server-side (see the sketch below):
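One way to do this, assuming the standard Wazuh agent tooling, is to run manage_agents on
the agent and import the key that was extracted on the manager (key abbreviated below):

# interactive: choose (I)mport and paste the key when prompted
sudo /var/ossec/bin/manage_agents

# or pass the key directly
sudo /var/ossec/bin/manage_agents -i MDAxIFRlc3RBZ2V0biAxMTEuMTEx...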

Edit /var/ossec/etc/ossec.conf on the agent, add the IP of the SIEMonster appliance &
change the protocol to TCP.

Restart Wazuh:

/var/ossec/bin/ossec-control restart

Check the UI for successful addition.

Linux & Windows agents may also be automatically registered from the command line
without any setup on the Wazuh Manager.

For Linux:

/var/ossec/bin/agent-auth -m <IP or FQDN of SIEMonster Appliance>

For Windows:

Using an administrative command prompt, navigate to the ossec-agent installation folder:

agent-auth -m <IP or FQDN of SIEMonster Appliance>

In production this method would be extended to use SSL certificates and/or authentication.

6.7 SYSLOG
All syslogs can be sent to the SIEMonster appliance. Network devices with remote syslog
settings should be set to the SIEMonster appliance IP address. Syslogs are accepted on UDP
port 514. For troubleshooting purposes, incoming syslogs can be found within the
SIEMonster appliance at:

/volumes/wazuh-manager/data/logs/archives/archives.json

Parsing is handled by Wazuh & Apache Nifi before forwarding to the ES cluster.
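For a Linux host that cannot run an agent, a typical forwarding setup (a sketch, assuming
rsyslog and the example appliance IP used earlier in this guide) is to add a rule such as the
following, restart rsyslog, then confirm on the appliance that events are arriving:

# /etc/rsyslog.d/90-siemonster.conf -- forward all facilities to the SIEMonster appliance over UDP 514
*.* @192.168.0.30:514

# restart rsyslog on the client
sudo systemctl restart rsyslog

# on the SIEMonster appliance, watch for incoming events
tail -f /volumes/wazuh-manager/data/logs/archives/archives.json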

6.8 Inputs
The next step is to check for incoming events in Kibana. Assuming the index is named logs-
endpoint-winevent-DATE, as preset in the Apache Nifi configuration, the events should be
visible in the Discover panel.

Access the Dashboards top menu or tile from the web application:

If the index has been renamed, it should first be registered in the Management – Index
Patterns panel:

Kibana Index Patterns

Windows Events Index

Visit the Discovery menu and select the configured index

Visualization of the data

From here review some saved searches, visualizations and dashboards.

Checking Log flow in Nifi

6.9 Sysmon/Windows MITRE ATT&CK™ Integration
System Monitor (Sysmon) is a Windows system service and device driver that, once installed
on a system, remains resident across system reboots to monitor and log system activity to
the Windows event log. It provides detailed information about process creations, network
connections, and changes to file creation time.

Sysmon is integrated with SIEMonster using the following steps.

1. Extract the Sysmon dictionary file sysmonconfig-export.xml from the agent pack
(see the Winlogbeat Agents for Microsoft Hosts section).

2. Download and extract Sysmon from https://technet.microsoft.com/en-us/sysinternals/sysmon,
then install it with the command: sysmon64 -accepteula -i sysmonconfig-export.xml

3. Ensure that the following lines exist in the Winlogbeat configuration by using the
supplied winlogbeat template in the agent pack.

<Select Path="Windows PowerShell">*</Select>\

<Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>\</Query>\

4. Check Elasticsearch/Kibana on the index logs-endpoint-winevent-sysmon-* to confirm
that the new logs are being input into SIEMonster.

5. MITRE ATT&CK™ vectors run on end hosts can now be searched for in Elasticsearch.

SIEMonster – Operations Guide

7 Managing Users and Roles in SIEMonster
It is critical for businesses to protect data and prevent unauthorized usage. SIEMonster
provides advanced user access management capabilities that enable you to set up users,
define permissions, and optimize security. Permissions in SIEMonster are based on Roles.

Objectives

After completing this section, you will be able to:

• Describe the types of Users and Roles available in SIEMonster

• Create Roles

• Create a new Module

• Update Admin settings

7.1 Roles in SIEMonster


As a SIEMonster administrator you can add and manage users and provide relevant
permissions to them. This function is performed in the Admin Panel.

In SIEMonster, these different tasks are split between the following two user roles by default.
However, more user roles can be added to configure the user types.

• Admin

• User

Administrators

The main modules administrators can access are:

• Alerts

• Dashboards

• Analyzers

• Threat Modelling

• Incident Response

• Metrics

• Audit Discovery

• Flow Processors

• Threat Intel

• Reporting

7.1.1 Exercise: Create a User in SIEMonster

1. In the browser, type the URL for SIEMonster to open the login page. The login page
opens.

2. Enter the credentials (username and password) and click Sign In.

3. On the Home page, click on the dotted vertical line on the top right and click on the
Admin Panel.

4. Admin Panel will display the existing users. Click Create User.

5. Enter the email address of the user you want to add along with the password in the
available fields.

6. Click the drop-down arrow under the user field and select super.

7. Click the drop-down arrow under the Roles and select admin.

8. Click Ok to create the new user. This user will automatically be assigned to the Role
that you selected.

A standard user with a User profile will not have access to any module.

7.1.2 Exercise: Create Roles in SIEMonster

1. On the Home page, click on the dotted vertical line on the top right, and click Admin
Panel. Alternatively, you can also click on the icon on the top left and click Admin
Panel.

2. Click the Roles tab, and then click admin role. Note that all the modules are enabled.

3. Click the user role and note that all the modules are disabled. These modules can be
enabled for a specific user group.

4. Click the Roles tab and then click Create Role.

5. You are going to create a role for a security operations team. Enter the Role Name
as secOps and click OK.

6. Click to open the newly created role and enable Dashboards and Incident Response
modules.

A new module can be added by clicking Add Module; it requires the Module Name and the
Module URL. This is usually required for a third-party integration.

7. Under the Add Module section, enter Training in the Module Name field and
https://freshdesk.com in the Module URL field. Click Add Module.

8. Click Settings for the newly added module Training. The Module Image can be changed
from this screen, if required. This image gets displayed under the Home page.

9. Expand Sublinks and click Add. Enter Training Videos in the Name field and
https://siemonster.com in the Url field. Click Save to complete this task.

10. Click Modules on the main navigation bar and note that the Module (Training) and
Sublink (Training Videos) have been added.

11. Login and Password can be provided under Credentials to access this module.

7.2 Mailgun
Mailgun is an email automation service provided by Rackspace. It offers a complete cloud-
based email service for sending, receiving, and tracking emails sent through websites and
web applications. Mailgun features are available through an intuitive RESTful API or using
traditional email protocols like SMTP.

Mailgun in SIEMonster is used for the web application; the user needs to sign up for a free
Mailgun account. It will allow users to receive email notices from the web application
whenever a login is attempted with their email addresses.

Click the Admin Panel -> Notifiers tab to access the Mailgun settings. Mailgun can be
setup by providing Mailgun API-key, Domain Name, and Sending Email Address.

7.3 LDAP Integration
User authentication can be setup by integrating SIEMonster with LDAP services. Users not
already in the SIEMonster platform can be added when logging in with their LDAP email
address and password. When a user from Active Directory logs in, that user will be logged
in as a new user with no modules enabled. The administrators can then assign a role to the
users.

To set up your LDAP server:

1. Click the Admin Panel > LDAP tab to access the LDAP settings. Enter the Host name
or IP address of your LDAP server

2. Enter the Port Number used for LDAP communication (389 by default)

3. Enable TLS (Transport Layer Security), which offers a secure method of sending data;
it requires a certificate that can be uploaded.

4. Provide the Service Account DN and Service Account Password.

5. Click Perform Connection Test to check your connection, and then click Save LDAP
Settings.

7.4 My Profile
SIEMonster provides a comprehensive profile view of the user. To access this, click the dotted
vertical line on the top right, and click My Profile.

A Profile includes the data of a user such as the Display name, Email, Password, Two
Factor Authentication, and Past Login Attempts.

Two Factor Authentication


A user can enable Two Factor Authentication to add an additional layer of security. Two Factor
Authentication in SIEMonster is compatible with Google Authenticator, 1Password, and
other two factor authentication applications.

To enable two factor authentication, click Configure 2FA.

It will display a QR code on the screen that can be scanned using Google Authenticator,
Authy, or Symantec's VIP Access to generate authentication codes. Click Enable to enable
two factor authentication.

7.5 Superadmin Panel
On the Home page, click the dotted vertical line on the top right, and click Superadmin
Panel.

Superadmin Panel can be used to setup the Inactivity timeout. If the inactivity timeout is
setup to 1h, it means the system will automatically log a user out after one hour’s inactivity.
Click the drop-down arrow under Value to change the Inactivity timeout value.

Sometimes a user may have endpoint protections on the network that prevent WebSocket
methods, in which case the other server method can be invoked. This can be set up by
specifying the relevant value.

8 Dashboards
The SIEMonster Kibana Dashboard is a visualization application for Elasticsearch that allows
users to visualize the incoming data and create dashboards based on it.

The SIEMonster Kibana Dashboard gives you full flexibility and functionality in how you
want your dashboards to appear for different users. This section will provide you with a good
guide on how to use the dashboards and customize them for your own organization.

8.1 Discover
Discover allows you to explore your data with Kibana’s data discover functions. You have
access to every document in every index that matches the selected index pattern. You can
view document data, filter the search results, and submit search queries.

Changing the index


On the Home page, click Dashboards to access the Dashboards. Click Discover from the
menu on the left and click the drop-down arrow to select the index you want to see the data
from.

Time Filter
A specific time period for the search results can be defined by using the Time Picker. By
default, the time range is set to This month. However, the time picker can be used to
change the default time period.

Histogram
After you have selected a time range that contains data, you will see a histogram at the top
of the page that shows the distribution of events over time.

The time you select will automatically be applied to any page you visit including the
Dashboard and Visualize. However, this behavior can be changed at individual page level.

Auto refresh rate can be setup to select a refresh interval from the list. This can periodically
resubmit your searches to retrieve the latest results.

If auto refresh is disabled, click the Refresh button to manually refresh


the visualizations.

Searches can be Saved and then used later by clicking the Open button.

Fields
All the Available fields with their data types are available on the left side of the page. If you
hover-over any field, you can click add to add that field as a column to the table on the right
and it will then show the contents of this field.

Once added, hover-over the field that you want to remove and click Remove column.

Adding Filter for Documents


Documents can be filtered in the search results to display specific fields. Negative filters can
be added to exclude documents that contain the specific field value. The applied filters are
shown below the Query bar. Negative filters are shown in red.

To add a filter from the list of fields available:

1. Click on the name of the field you want to filter on under Available fields. This will
display the top five values for that field.

2. Click on the icon to add a positive filter and click on the icon to add a negative filter.
This will then either include or exclude only those documents that contain the value
in the field.

To add a filter from the documents table:

1. Expand the document by clicking the expand button.

2. Click on the icon to add a positive filter and click on the icon to add a negative filter
on the right of the field name. This will then either include or exclude only those
documents that contain the value in the field.

3. Click on the icon on the right of the field name to filter on whether documents contain
the field.

To add a filter manually:

1. Click Add filter. From the Fields drop-down menu, select the field _type.

2. Set the Operators filter to is and, from the Values drop-down menu, select wazuh.

3. In the Label field, type Wazuh and click Save.

4. If you hover over the filter that you just created, you will have an option to Remove filter.

Search for Documents
To search and filter the documents shown in the list, you can use the large search box at the
top of the page. The search box accepts query strings in a special syntax.

If you want to search content in any field, just type in the content that you want to search.
Entering anomaly in the search box and pressing enter will show you only events that contain
the term anomaly.

The query language allows some fine-grained search queries, like:

Search Term | Description

lang:en | To search inside a field named “lang”
lang:e? | Wildcard expressions
lang:(en OR es) | OR queries on fields
user.listed_count:[0 TO 10] | Range search on numeric fields

8.1.1 Exercise: Discover the Data
Discover allows you to explore the incoming data with Kibana’s data discovery functions.
You can submit search queries, filter the search results, and view document data. You can
also see the number of documents that match the search query and get field value statistics.

1. On the Home page, click Dashboards to access the Dashboard.

2. To display the incoming raw data, click Discover. The report’s data is selected by
clicking on the time range option and selecting the desired date range; for the
purpose of this exercise, select This month.

In the time range tool, there are different ways to select the date range,
you can select the date from the Commonly used menu with pre-set
relative periods of time (for example Year to date, This month), or
Recently used data ranges.

3. The histogram at the top of the page shows the distribution of documents over the
time range selected.

4. Expand one of the events to view the list of data fields used in that event. Queries are
based on these fields.

5. By default, the table shows the localized version of the time field that’s configured for
the selected index pattern. You can toggle on or off different event fields if you hover
over the field and click add.

6. In some business scenarios, it is helpful to view all the documents related to a specific
event. To show the context related to the document, expand one of the events and
click View surrounding documents.

7. Search results can also be filtered to view those documents that contain a value in a
filter. Click + Add filter to add a filter manually.

8. Click Save to save these results. In the Title field, type Rule Overview. Click Confirm Save.

8.2 Visualize
Visualizations are used to aggregate and visualize your data in your Elasticsearch indices in
different ways. Kibana visualizations are based on Elasticsearch queries.

By using a series of Elasticsearch aggregations to extract and process your data, you can
create a Dashboard with charts that shows the trends, spikes, and dips. Visualizations can be
based on the searches saved from Discover, or you can start with a new search query.

The next section introduces the concept of Elasticsearch Aggregations as they are the basis
of visualization.

8.2.1 Aggregations
The aggregation of the data in SIEMonster is not done by Kibana, but by the underlying
Elasticsearch. The aggregation framework provides data based on a search query and it can
build analytic information over a set of documents.

There are different types of aggregations, each with its own purpose. Aggregations can be
categorized into four types:

• Bucket Aggregation
• Metric Aggregation
• Matrix Aggregation
• Pipeline Aggregation

Bucket Aggregation
A bucket aggregation, groups all documents into several buckets, each containing a subset
of the indexed documents and associated with a key. The decision which bucket to sort a
specific document into can be based on the value of a specific field, a custom filter, or other
parameters.

Currently, Kibana 5 supports the following 7 bucket aggregations:

1. Date Histogram
The Date Histogram aggregation requires a field of type date and an interval. It can only be
used with the date values. It will then put all the documents into one bucket, whose value of
the specified date field lies within the same interval.

Example:
You can construct a Date Histogram on @timestamp field of all messages with the interval
minute. In this case, there will be a bucket for each minute and each bucket will hold all
messages that have been written in that minute.

Besides common interval values like minutes, hourly, daily, etc. there is the special value
auto. When you select auto interval, the actual time interval will be determined by Kibana
depending on how large you want to draw this graph, so that a respectable number of
buckets will be created (not too many to pollute the graph, nor too few so the graph would
become irrelevant).

2. Histogram
A Histogram is like Date Histogram, but unlike Date Histogram, Histogram can be applied
on number fields extracted from the documents. It dynamically builds fixed sized buckets
over the values.

3. Range
The range aggregation is like a manual Histogram aggregation. You need to specify a field
of type number, but you must also specify each interval manually. This is useful if you either
want differently sized intervals or intervals that overlap.

Whenever you enter Range in Kibana, you can leave the upper or lower bound empty to
create an open range (like the above 1000-*).

This aggregation includes the from value and excludes the to value for
each range.

4. Terms
Terms aggregation creates buckets by the values of a field. It is very similar to a classical SQL
GROUP BY. You need to specify a field (which can be of any type), it will create a bucket for
each of the values that exist in that field and add all documents in that field with a value.

Example:
You can run a Terms aggregation on the field geoip.country_name that holds the country
name. It will then have a bucket for each country and in each bucket the documents of all
events from that country.

The aggregation doesn’t always need to match the whole field value. If you let Elasticsearch
analyze a string field, it will by default split its value up by spaces, punctuation marks and
the like, and each part will be its own term, and as such will get its own bucket.

If you use a Term aggregation on a rule, you might assume that you would get nearly one
bucket per event, because two messages rarely are the same. But this field is analyzed in our
sample data, so you would get buckets for ssh, syslog, failure and so on and in each of these
buckets all documents, that had that Term in the text field (even though it doesn’t need to
match the text field exactly).

Elasticsearch can be configured not to analyze fields or you can configure the analyzer that
is used to match the behavior of a Terms aggregation to your actual needs. For example,
you could let the text field be analyzed so that colons (:) and slashes (/) won’t be split
separators. That way, an URL would be a single term and not split up into http, the domain,
the ending and so on.

5. Filters
Filters is a completely flexible (and at times slower than the others) aggregation. You need
to specify Filters for each bucket that will collect all documents that match its associated
filter.

Example:
Create a Filter aggregation with one query being geoip.country_name:(Ukraine or China) and
the second filter being rule.firedtimes:[100 TO *].

Aggregation will create two buckets, one containing all the events from Ukraine or China,
and one bucket with all the events with 100 or more rule fired times. It is up to you, to decide
what kind of analysis you would do with these two buckets.

6. Significant Terms
The Significant Terms aggregation can be used to find uncommonly common terms in a set
of documents. Given a subset of documents, this aggregation finds all the terms which
appear in this subset more often than could be expected from term occurrences in the whole
document set.

It then builds a bucket for each of the Significant Terms that contains all documents of the
subset in which this term appears. The size parameter controls how many buckets are
constructed, for example how many Significant Terms are calculated.

The subset on which to operate the Significant Terms aggregation can be constructed by a
filter or you can use another bucket aggregation first on all documents and then choose
Significant Terms as a sub-aggregation which is computed for the documents in each
bucket.

Example:
You can use the search field at the top to filter our documents for those with
geoip.country_name:China and then select significant terms as a bucket aggregation.

In order to deliver relevant results that really give insight into trends and
anomalies in your data, the Significant Terms aggregation needs
sufficiently sized subsets of documents to work on.

7. GeoHash Grid
Elasticsearch can store coordinates in a special type geo_point field and group points into
buckets that represent cells in a grid. Geohash aggregation can create buckets for values
close to each other. You must specify a field of type geo_point and a precision. The smaller
the precision, the larger area the buckets will cover.

Example:
You can create a Geohash aggregation on the coordinates field in the event data. This will
create a bucket containing events close to each other. Precision can specify how close events
can be and how many buckets are needed for the data.

Geohash aggregation works better with a Tile Map visualization


(covered later).

Metric Aggregations
After you have run a bucket aggregation on your data, you will have several buckets with
documents in them. You can now specify one Metric Aggregation to calculate a single value
for each bucket. The metric aggregation will be run on every bucket and result in one value
per bucket.

The aggregations in this family compute metrics based on values extracted in one way or
another from the documents that are being aggregated, they can also be generated using
scripts.

In the visualizations the bucket aggregation usually will be used to determine the "first
dimension" of the chart (e.g. for a pie chart, each bucket is one pie slice; for a bar chart each
bucket will get its own bar). The value calculated by the metric aggregation will then be
displayed as the "second dimension" (e.g. for a pie chart, the percentage it has in the whole
pie; for a bar chart the actual high of the bar on the y-axis).

Since Metric Aggregations mostly make sense when they run on buckets, the examples of
Metric Aggregations will always contain a bucket aggregation as a sample too. But of course,
you could also use the Metric Aggregation on any other bucket aggregation; a bucket stays
a bucket.

1. Count
This is not really an aggregation. It returns the number of documents that are in each bucket
as a value for that bucket.

Example:
To calculate the number of events from a specific country, you can use a term aggregation
on the field geoip.country_name (which will create one bucket per country code) and
afterwards run a count metric aggregation. Every country bucket will have the number of
events as a result.

2. Average/Sum
For the Average and Sum aggregations you need to specify a numeric field. The result for
each bucket will be the sum of all values in that field or the average of all values in that field
respectively.

Example:
You can have the same country buckets as above again and use an Average aggregation on
the rule fired times count field to get a result of how many rules fired times events in that
country have in average.

3. Max/Min
Like the Average and Sum aggregation, this aggregation needs a numeric field to run on. It
will return the Minimum value or Maximum value that can be found in any document in the
bucket for that field.

Example: If we use the country buckets and run a Maximum aggregation on the rule fired
times, we would get for each country the highest amount of rule triggers an event had in
the selected time period.

4. Unique Count
The Unique count will require a field and counts how many unique values exist in documents
for that bucket.

Example:
This time we will use range buckets on the rule.firedtimes field, meaning we will have buckets
for users with 1-50, 50-100 and 100+ rule fired times.

If we now run a Unique Count aggregation on the geoip.country_name field, we will get for
each rule fired times range the number of different countries that users with that many rule
fired times come from.

In the sample data that would show us that there are attackers from 8 different countries
with 1 to 50 rule fired times, from 30 different countries with 50 to 100 rule fired times, and
from 4 different countries with 100 or more rule fired times.

5. Percentiles
A Percentiles aggregation is a bit different, since it does not result in one value for each
bucket, but in multiple values per bucket. These can be shown as different colored lines in a
line graph.

When specifying a Percentile aggregation, you must specify a numeric value field and
multiple percentage values. The result of the aggregation will be, for each specified
percentage, the value at or below which that percentage of documents lies.

Example:
You specify a Percentiles aggregation on the field user.rule fired times_count and specify the
percentile values 1, 50 and 99. This will result in three aggregated values for each bucket.

Let’s assume that we have just one bucket with events in it:

• The 1 percentile result (and e.g. the line in a line graph) will have the value 7. This
means that 1% of all the events in this bucket have a rule fired times count with 7 or
below.

• The 50-percentile result is 276, meaning that 50% of all the events in this bucket have
a rule fired times count of 276 or below.

• The 99 percentile has a value of 17000, meaning that 99% of the events in the bucket
have a rule fired times count of 17000 or below.

8.2.2 Visualizations
The SIEMonster Kibana creates visualizations of data from Elasticsearch queries, which can
then be used to build Dashboards that display related visualizations.

There are different types of visualizations available:

Chart Type | Description

Area, Line, Bar Charts | Compare different series in X/Y charts
Data Table | Displays a table of aggregated data
Markdown Widget | A simple widget that can display some markdown text. Can be used to add help boxes or links to dashboards
Metric | Displays the result of a metric aggregation without buckets as a single large number
Pie Chart | Displays data as a pie with different slices for each bucket, or as a donut
Tile Map | Displays a map for results of a geohash aggregation
Vertical Bar Chart | A chart with vertical bars for each bucket

Saving and Loading


Saving visualizations allow you to refresh them in Visualize and use them in Dashboards.
While editing a visualization you will see the same Save, Share and Refresh icons beside
the search bar.

Kibana always takes you back to the same visualization that you are working on while
navigating between different tabs and the Visualize tab.

As a good practice, you should always save your visualization so that you do not lose it.

You can import, export, edit, or delete saved visualizations


from Management/Saved Objects.

Creating a Visualization

When you click on Visualize in the side navigation, it will present you with a list of all the
saved visualizations that can be edited and an option for you to create a new visualization.

When you click on the icon button, you will need to select the visualization type, and
then specify either of the following search query to retrieve the data for your visualization.

• Click the name of the Saved Search you want to use to build a visualization from the
saved search.

• To define a new search criterion, select under From a New Search the Index that
contains the data you want to visualize. This will open the visualization builder.

o Choose the Metric Aggregation (for example Count, Sum, Top Hit, Unique
Count) for the visualization’s Y-axis, and select the Bucket Aggregation (for
example Histogram, Filters, Range, Significant Terms) for the X-axis.

Visualization Types
In the following section, all the visualizations are described in detail with some examples.
The order is not alphabetical, but one that should make it more intuitive to understand the
visualizations. All are based on the Wazuh/OSSEC alerts index. A lot of the logic that applies
to all charts is explained in the Pie Chart section, so you should read that one before the
others.

Pie chart

The Pie chart is one of the visualization types; it displays data as a pie with different slices
for each bucket. The Metric aggregation determines the slice size of a Pie Chart.

Once you select the Pie Chart, wazuh-alerts is used as a source in this example. This example
has a preview of your visualization on the right, and the edit options in the side navigation
on the left.

Visualization Editor for a Pie Chart

There are two icons on the top right of the panel of the Data tab: Apply changes is a play
icon and Discard changes a cancel cross beside it.

If you make changes in the editor, you must click Apply changes to view the changes in the
preview on the right side, or press Discard changes to cancel the changes and reset the panel.

The Donut checkbox under the Options tab can be used to specify whether the diagram
should be displayed as a donut instead of a pie.

Show Tooltip exists on most of the visualizations and allows you to enable or disable tooltips
on the chart. When it is enabled, you can hover over a slice of the pie chart (or bar in a Bar
Chart etc.) and a tooltip will display detailed data about that slice, for example what field
and value the chart belongs to and the value that the metrics aggregation calculated.

Aggregations
The slice size of a pie chart is determined by the Metrics aggregation. Click on the icon from
the Data tab and select Count from the Aggregation drop-down menu.

The Count aggregation returns a raw count of the elements in the wazuh-alerts index pattern.
Enter a label in the Custom Label field to change the display label.

The bucket aggregations determine what information is being retrieved from your data set.
Before you select a bucket aggregation, you can specify if you are splitting slices within a
single chart or splitting into multiple charts.

Split Chart is a commonly used visualization option. In a Split Chart, each bucket created
by the bucket aggregation gets its own chart. All the charts will be placed beside and below
each other and make up the whole visualization. Split Slices is another visualization option
that can generate a slice for each bucket.

Example:
Kibana requires an aggregation and its parameters to add a Split Slice type.
• Expand Split Slices by clicking on the icon
• Select Terms from the Aggregation drop-down menu
• Select rule.level from the Field drop-down menu
• Click on the Apply changes icon

The result above shows that there is one pie slice per bucket (for example per rule level).

Question: How is the size of the slice in the pie determined?

Answer: This will be done by the Metric aggregation, which by default is set to Count of
documents. So, the pie now shows one slice per rule.level bucket and its percentage depends
on the number of events, that came from this event.

Question: With a Sum metric aggregation across the rule fired times count, why are only two
slices shown in the pie chart?

Answer: This is determined by the Order and Size option in the Bucket aggregation. You
can specify how many buckets you want to see in the chart, and if you would like to see the
ones with the least (bottom) or the highest (top) values.

This order and size are linked to the Metric aggregation on the top. To demonstrate this,
switch the Metric aggregation on the top. When you expand it, you can switch the type to
Sum and the field to rule.firedtimes. You will now get a slice for each level and its size will be
determined, by the sum of the triggers per rule, that fired in our time range.

By using the Size option, you can restrict results to only show the top results.

With the Order by drop-down menu, you can also specify another Metrics aggregation, that
you want to use for ordering. Some graph types support multiple Metric aggregations. If
you add multiple Metrics aggregations, you will also be able to select in the order by box,
which of these you want to use for ordering.

The Order settings depend on the metric aggregation, that you have selected at the top of
the editor.

Nested aggregations on a Pie Chart
A Pie Chart can use nested bucketing. You can click the Add sub-buckets button to add
another level of bucketing. You cannot use a different visualization type in a sub bucket. For
example, you cannot add Split Chart in a Splice Slice type of visualization because it splits
charts first and then use the sub aggregation on each chart.

Adding a sub aggregation of type Split Slices will create a second ring of slices around the
first ring.

Kibana in this scenario first aggregates via a Terms aggregation on the country code field,
so you have one bucket for each country code with all the events from that country in it.
These buckets are shown as the inner pie and their size is determined by the selected metric
aggregation (Count of documents in each bucket).

Inside each bucket Kibana now use the nested aggregation to group by the rule.firedtimes
count in a thousand interval. The result will be a bucket for each country code and inside
each of these buckets, are buckets for each rule fired interval.

The size of the inside buckets is again determined by the selected Metric aggregation,
meaning also the size of documents will be counted. In the Pie chart you will see this nested
aggregation as there are more slices in the second ring.

If you want to change the bucketing order, meaning in this case, you first want to bucket the
events by their rule.firedtimes and then you want to have buckets inside these follower
buckets for each country, you can just use the arrows beside the aggregation to move it to
an outer or inner level.

There are some options for the Histogram aggregation. You can set whether empty buckets
(buckets in whose interval no documents lie) should be shown. This does not make much
sense for Pie charts, since they will just appear in the legend, but due to the nature of the
Pie chart their slice will be 0% large, so you cannot see it. You can also set a limit for the
minimum and maximum field value that you want to use.

Click on the Save button on the top right and give your visualization a name.

Coordinate Map
A Coordinate map is most likely the only useful way to display a Geohash aggregation. When
you create a new coordinate map, you can use the Split Chart to create one map per bucket
and use type as Geo Coordinates.

That way you must select a field that contains geo coordinates and a precision. The
visualization will show a circle on the map for each bucket. The circle (and bucket) size
depends on the precision you choose. The color of the circle will indicate the actual value
calculated by the Metric aggregation.

Area and Line Charts
Both Area and Line charts are very similar; they are used to display data over time and allow
you to plot your data on the X and Y axes. An Area chart paints the area below the line, and
it supports different methods of overlapping and stacking for the different areas.

The chart that we want to create should compare HIDS rule levels greater than 9 with attack
signatures. We want to split up the graph for the two options.

Area Chart with Split Chart Aggregation


Add a new bucket aggregation of type Split Chart with a Filters aggregation and one filter
for rule.level: >9 and another for rule.groups: attack. The X-axis is often used to display a
time value (so data will be displayed over time), but it is not limited to that. You can use any
other aggregation; however, the line (or area) will be interpolated between two points on
the X-axis. This will not make much sense if the values you choose are not consecutive.

Add another sub aggregation of type Split Area that can create multiple colored areas in the
chart. To add geo positions, you need to add Terms aggregation on the field
geoip.country_name.raw. Now you have charts showing the events by country.

In the Metric and Axes options you can change the Chart Mode that is currently set to
stacked. This option only applies to the Area chart, since in a Line chart there is no need for
stacking or overlapping areas.

There are five different types of modes for the area charts:

• Stacked: Stacks the aggregations on top of each other


• Overlap: The aggregations overlap, with translucency indicating areas of overlap
• Wiggle: Displays the aggregations as a streamgraph
• Percentage: Displays each aggregation as a proportion of the total
• Silhouette: Displays each aggregation as variance from a central line

The following behaviors can be enabled or disabled:

• Smooth Lines: Tick this box to curve the top boundary of the area from point to point
• Set Y-Axis Extents: Tick this box and enter values in the y-max and y-min fields to
set the Y-axis to specific values.
• Scale Y-Axis to Data Bounds: The default Y-axis bounds are zero and the maximum
value returned in the data. Tick this box to change both upper and lower bounds to
match the values returned in the data
• Order buckets by descending sum: Tick this box to enforce sorting of buckets by
descending sum in the visualization
• Show Tooltip: Tick this box to enable the display of tooltips

1. Stacked
The area for each bucket will be stacked upon the area below. The total documents across
all buckets can be directly seen from the height of all stacked elements.

Stacked mode of Area chart

2. Overlap
In the Overlap view, areas are not stacked upon each other. Every area will begin at the X-
axis and will be displayed semi-transparent, so all areas overlap each other. You can easily
compare the values of the different buckets against each other that way, but it is harder to
get the total value of all buckets in that mode.

Overlap mode of Area chart

3. Percentage
The height of the chart will always be 100% for the whole X-axis and only the percentage
between the different buckets will be shown.

Percentage mode of Area chart

4. Silhouette
In this chart mode, a line somewhere in the middle of the diagram is chosen and all charts
evolve from that line to both directions.

Silhouette mode of Area chart

5. Wiggle
Wiggle is like the Silhouette mode, but it does not keep a static baseline from which the
areas evolve in both directions. Instead it tries to calculate the baseline for each value again,
so that change in slope is minimized. It makes seeing relations between area sizes and
reading the total value more difficult than the other modes.

Wiggle mode in Area chart

Multiple Y-axis
Besides changing the view mode, you can also add another Metric aggregation to either Line
or Area charts. That Metric aggregation will be shown with its own color in the same chart.
Unfortunately, all Metric aggregations you add will share the same scale on the Y-axis. That
is why it makes most sense if your Metric aggregations return values in the same dimension
(for example, one metric that results in values up to 100 and another that results in values
from 1 million to 10 million will not be displayed very well, since the first metric will barely
be visible in the graph).

Vertical Bar

A Bar Chart Example


The vertical bar visualization is much like the area visualization, but more suited if the data
on your X-axis is not consecutive, because each X-axis value will get its own bar(s) and there
won’t be any interpolation done between the values of these bars.

Changing Bar Colors


To the right of the visualization the colors for each filter/query can be changed by expanding
the filter and picking the required color.

You only have three bar modes available:

Stacked: Behaves the same like in area chart, it just stacks the bars onto each other

Percentage: Uses 100% height bars, and only shows the distribution between the different
buckets

Grouped: It is the only different mode compared to Area charts. It will place the bars for
each X-axis value beside each other

Metric
A Metric visualization simply displays the result of a Metrics aggregation. There is no bucketing done. It always applies to the whole data set that is currently considered (you can change the data set by typing queries in the top box). The only view option that exists is the font size of the displayed number.

Markdown Widget
This is a very simple widget, which does not do anything with your data. You only have the
view options where you can specify some markdown. The markdown will be rendered in the
visualization. This can be very useful to add help texts or links to other pages to your
dashboards. The markdown you can enter is GitHub flavored markdown.

Data Table
A Data Table is a tabular output of aggregation results. It is basically the raw data, that in
other visualizations would be rendered into some graphs.

Data Table Example


Let's create a table that uses the Split Rows type to aggregate the top 5 countries and a sub aggregation with some ranges on the rule level field. In the screenshot above, you can see what the aggregations should look like and the resulting table.

We will get all the country buckets on the top level. They will be presented in the first column of the table. Since each of these country buckets contains multiple buckets from the nested rule level aggregation, there are two rows for each country, i.e. one row with the country in front for every bucket in the nested aggregation. The first two rows are both for United States, each one for a sub bucket of the nested aggregation. The result of the metrics aggregation will be shown in the last column. If you add another nested aggregation, you will see that these tables quickly get large and confusing.

If you would now like to see the result of the Metrics aggregation for the Terms aggregation on the countries, you would have to sum up all the values in the last column that belong to rows beginning with United States. This is some work and would not work well for average metrics aggregations, for example. In the view options you can enable Show metrics for every bucket/level, which will show the result of the metrics aggregation after every column, for that level of aggregation. If you switch it on, another column should appear after the United States column that says 3989, meaning there are 3989 documents in the United States bucket. This value will be shown in every row, though it will always be the same for all rows with United States in them. You can also set, in the view options, how many rows should be shown on one page of the table.

Queries in Visualizations
Queries can be entered in a specific query language in a search box at the top of the page. This also works for visualizations: you can enter any query and it will be used as a filter on the data before the aggregation runs.

Enter "anomaly" in the search box and press Enter. You will see only those panes that relate to anomaly.

This filtering is very useful because it is stored with the visualization when you save it, meaning that if you place the visualization on a dashboard, the query that you stored with the visualization will still be applied.

Debugging Visualizations
Kibana offers some debugging output for your visualizations. If you are on the Visualize page, you can see a small upward-pointing arrow below the visualization preview (you will also see this on dashboards below the visualizations). Hitting this will reveal the debug panel with several tabs on the top.
Table
The Table tab shows the results of the aggregation as a data table visualization. It is the raw data the way Kibana views it.
Request
The Request tab shows the raw JSON of the request that has been sent to Elasticsearch for this aggregation.
Response
Shows the raw JSON response body that Elasticsearch returned for the request.
Statistics
Shows statistics about the call, such as the duration of the request and the query, the number of documents that were hit, and the index that was queried.

8.2.3 Exercise: Visualize the Data


SIEMonster Kibana creates visualizations of the data returned by Elasticsearch queries; these can then be used to build Dashboards that display related visualizations.

1. Click on Visualize from the side navigation and then click the + button.

2. The next screen will show you the different types of visualization. Click Pie chart.

3. Select the search query that you saved earlier (Rule Overview) to retrieve the data for your visualization.

4. Under the Select bucket type section, click on Split Slices.

5. From the Aggregation drop-down menu, select Terms.

6. From the Field drop-down menu, select @timestamp. Click on the Apply changes button.

7. Try increasing the number of fields by entering 10 in the Size field and click on the Apply changes button.

8. In the Custom Label field, enter Timestamp and click on the Apply changes button.

8.3 Dashboard
A Dashboard displays different visualizations and maps. Dashboards allow you to use a visualization on multiple dashboards without having to copy the code around. Editing a visualization automatically changes every Dashboard using it. Dashboard content can also be shared.

8.3.1 Exercise: Creating a new Dashboard

1. Click Dashboard from the side navigation and then click Add new dashboard.

2. Click the Add button in the menu bar to add a visualization to the Dashboard. This will open Add Panels, which displays the list of available Visualizations and Saved Searches. The list of visualizations can be filtered.

3. Search for and click mitre_attack_techniques_matrices and mitre_attack_mitigation_technique to add these visualizations to the dashboard.

4. You can click on the resize control option on the lower right of a panel and drag to the new dimensions to resize the panel.

5. You can move these panels around by dragging them from the panel header.

6. If you want to delete a panel from the dashboard, you can click on the gear icon on the upper right and select Delete from dashboard.

7. Click on the time picker icon in the menu bar to define the value for Refresh every. This is especially useful when you are viewing live data coming into the system.

8. Click on Share in the menu bar, select Embed code, and click on Copy iFrame code. You can now embed the Dashboard in a web application.

If you copy the link written in the src="…" attribute and share this, your users will not have the option to modify the dashboard. This is not a security feature, since a user can just remove the embed from the URL. However, it can be helpful if you want to share links with people who should not modify the dashboards by mistake.
The Short URL option makes links easier to share.

9. Once you have finished adding all the visualizations, click the Save button in the menu bar to save your Dashboard.

10. In the Save dashboard dialog box, enter the Title for the Dashboard. You can add a Description to provide some details for the Dashboard.

11. Enable the Store time with dashboard option. This will change the time filter of the dashboard to the currently selected time each time this dashboard is loaded. Click Confirm Save.

These saved dashboards can be viewed, edited, imported, exported, and deleted from Management > Saved Objects in the left navigation menu. A saved object can be a Search, Visualization, Dashboard, or an Index Pattern.

8.4 Alerting
Open Distro for Elasticsearch allows you to monitor your data and send alerts automatically to your stakeholders. It is easy to set up and manage, and it uses the Kibana interface together with a powerful API.

The Alerting feature allows you to set up rules so that you are notified when something of interest changes in your data. Anything you can query on, you can build an alert on. The Alerting feature notifies you when data from one or more Elasticsearch indices meets certain conditions. For example, you might want to notify a Slack channel if your application logs more than five HTTP 503 errors in a 30-minute period, or you might want to page a developer if no new documents have been indexed in the past 20 minutes.

The alerting process is driven by an API.

8.4.1 Monitor
A job that runs on a defined schedule and queries Elasticsearch. The results of these queries
are then used as input for one or more triggers.

With Open Distro for Elasticsearch, you can easily create monitors using the Kibana UI with
a simple visual editor or with an Elasticsearch query. This gives you the flexibility to query
the data most interesting to you and receive alerts on it. For instance, if you are ingesting
access logs, you can choose to be notified when the same user logs in from multiple
locations within an hour, enabling you to proactively address possible intrusion attempts.
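
Because the alerting process is driven by an API, a monitor can also be created outside the UI. The request below is a hedged sketch only: the _opendistro/_alerting/monitors endpoint belongs to the Open Distro alerting plugin, while the monitor name, index pattern, schedule, and trigger threshold are placeholder assumptions rather than values taken from this guide.

# Sketch only: name, index pattern, interval, and threshold are hypothetical
POST _opendistro/_alerting/monitors
{
  "type": "monitor",
  "name": "example-http-error-monitor",
  "enabled": true,
  "schedule": {
    "period": { "interval": 30, "unit": "MINUTES" }
  },
  "inputs": [{
    "search": {
      "indices": ["wazuh-alerts-*"],
      "query": { "size": 0, "query": { "match_all": {} } }
    }
  }],
  "triggers": [{
    "name": "example-trigger",
    "severity": "1",
    "condition": {
      "script": {
        "source": "ctx.results[0].hits.total.value > 5",
        "lang": "painless"
      }
    },
    "actions": []
  }]
}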

8.4.2 Exercise : Creating Monitors


1. Navigate to the Alerting > Monitors tab. This screen will display all the existing
monitors.

2. Click Create monitor. Specify a name for the monitor.

3. Define the frequency in the Schedule
section of the Configure Monitor
screen.

a. From the Frequency drop-down menu, select Daily.

b. In the Around field, select the time for the monitor. For this exercise, select 8:00 AM.

c. From the Select the timezone drop-down menu, select your preferred time zone.

Monitors can run at a variety of fixed intervals (e.g. hourly, daily); however, the schedule can also be customized with a cron expression that defines when they should run. Monitors use the Unix cron syntax and support five fields:

Field           Valid values
Minute          0-59
Hour            0-23
Day of month    1-31
Month           1-12
Day of week     0-7 (0 and 7 are both Sunday)

Example:
The following expression translates to “every Monday through Friday at 10:45 AM”:
• 45 10 * * 1-5

4. From the How do you want to define the monitor drop-down menu, select Define
using visual graph. This will define a Monitor visually.

5. From the Index drop-down menu, select any of the wazuh-alerts indices (for example, select wazuh-alerts-3.x-2019.06.26). These indices are time based: a new index is created every day, and the name of the index is based on the day it was created. Wildcards can also be used for Alerting.

6. From the Time field drop-down menu, select @timestamp.

7. Under Match the following condition, click on FOR THE LAST and select the time
duration to filter the data further (for example select 10 days).

8. Click on Create to create your Monitor.

9. A complete history of all alert executions is indexed in Elasticsearch for easy tracking and visualization. This can help you to answer questions such as: Are my alerts executing? What are my active alerts? What alerts have been acknowledged or triggered? What actions were taken?

Create Triggers
10. Creating a trigger is the next step in creating a Monitor. Under Trigger name, specify the name of the trigger (for example, you can name this trigger Flatline).

Trigger names must be unique. Names can only contain letters, numbers, and special characters.

11. From the Severity level drop-down menu, select 1. Severity levels help to manage
alerts. A trigger with a low severity level (for example 5) might message a chat room,
whereas a trigger with a high severity level (for example 1) might page a specific
individual.

12. Under Trigger condition, specify the threshold for the aggregation and timeframe
selected (for example, select IS BELOW 5).

The alerting threshold and severity can be defined using trigger conditions. Multiple trigger conditions can be applied to each monitor, allowing you to query the data source and generate the appropriate action. Triggers can be highly customized.

The line moves up and down as you increase and decrease the
threshold. Once this line is crossed, the trigger evaluates to true.

Configure Actions

The final step in creating a Monitor is to add one or more actions. Actions send notifications
when trigger conditions are met and support Slack, Amazon Chime, and Webhooks.
13. In the Action name field, specify Flatline Action as the action name.

14. From the Destination name drop-down menu, select a Destination.

15. In the Message subject field, specify Wazuh Flatline and click Create.

Slack and webhook have built-in integrations in Open Distro for Elasticsearch and they provide multiple alerting options. Alerts can be Acknowledged, Disabled, or Edited by clicking on the related buttons.

Alerts using Extraction Query

16. Click Edit on your monitor (for example Demo-Monitor) to switch to another type of monitor definition, the extraction query.

17. From the How do you want to define the monitor drop-down menu, select Define using extraction query.

For Trigger condition, specify a Painless script that returns true or false. Painless is the default Elasticsearch scripting language and has a syntax similar to Groovy.

18. Test the script using the Run button. A return value of true means the trigger
condition has been met, and the trigger should execute its actions.
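
As a rough illustration only (assuming the standard alerting context variable ctx exposed to trigger scripts), a condition matching the Flatline example above could return true when fewer than five documents were found:

# Sketch only: this fragment is the condition portion of a trigger; the threshold mirrors the IS BELOW 5 example
{
  "script": {
    "source": "ctx.results[0].hits.total.value < 5",
    "lang": "painless"
  }
}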

8.4.3 Alerting: Security Roles


If a security plugin is used alongside alerting, certain users can be limited to certain permissions. Typically, some users may only be able to view and acknowledge alerts, while others can create monitors and destinations. Users can be assigned more than one role to give them the permissions they need to use the alerting feature.

8.4.4 Exercise: View and Acknowledge Alerts


1. Navigate to Security > Roles and then click to add a new Role. Specify alerting-alerts as the name for this Role.

2. Click the Index Permissions tab as shown and click Add index permissions.

3. In the Index patterns field, type .opendistro-alerting-alerts and click Add index
pattern.

4. Under the Permissions: Action Groups section, click Action Group, and select crud
from the drop-down menu and click Add Action Group.

5. Click Save Role Definition.

6. Navigate to Security > Role Mappings and click to add a new role mapping. From the Role drop-down menu, select the alerting-alerts role that you created in this exercise. You can now map this role to the desired Users or Backend roles by clicking on Add User or Add Backend Role respectively.
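
The same role can also be defined without the UI through the Security plugin REST API. This is a hedged sketch only, reusing the index pattern and action group from this exercise with the _opendistro/_security endpoint:

# Sketch only: same index pattern and action group as the exercise above
PUT _opendistro/_security/api/roles/alerting-alerts
{
  "index_permissions": [
    {
      "index_patterns": [".opendistro-alerting-alerts"],
      "allowed_actions": ["crud"]
    }
  ]
}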

8.4.5 Exercise: Create, Update, and Delete Monitors and Destinations

1. Navigate to Security > Roles and then click to add a new Role. Specify alerting-monitors as the name for this role.

2. Click the Index Permissions tab as shown and click Add index permissions

3. In the Index patterns field, type .opendistro-alerting-config and click Add index
pattern.

4. Under the Permissions: Action Groups section, click Action Group, and select crud
from the drop-down menu and click Add Action Group.

5. Navigate to Security > Role Mappings and click to add a new role mapping.
From the Role drop-down menu, select the alerting-monitors role that you have created
in this exercise. You can now map this role to the desired Users or Backend roles by
clicking on Add user or Add Backend Role respectively.

8.4.6 Exercise: Read Only


1. Navigate to Security > Roles and then click to add a new Role. Specify alerting-read-only as the name for this role.

2. Click the Index Permissions tab as shown and click Add index permissions.

3. In the Index patterns field, type .opendistro-alerting-alerts and click Add index pattern.

4. Under the Permissions: Action Groups section, click Action Group, select read from the drop-down menu, and click Add Action Group.

5. Navigate to Security > Role Mappings and click to add a new role mapping. From the Role drop-down menu, select the alerting-read-only Role that you created in this exercise. You can now map this role to the desired Users or Backend roles by clicking Add user or Add Backend Role respectively.

8.5 Wazuh
Wazuh is an open source, enterprise-ready security monitoring solution used for security visibility, threat detection, infrastructure monitoring, compliance, and incident response.

Wazuh dashboard is used to manage agents that read operating system and application
logs, and securely forward them to a central manager for rule-based analysis and storage.
The Wazuh rules help bring to your attention application or system errors, misconfigurations,
attempted and/or successful malicious activities, policy violations, and a variety of other
security and operational issues.

8.5.1 Wazuh: Security Events


Security events Dashboard shows security alerts, identifying issues and threats in your
environment.

The dashboard above shows that 6,937 Alerts were generated and none of them were level 12 or above. More detailed information about these alerts is displayed under the Alerts summary section.

Each event on the Wazuh Agent is set to a certain severity level with 1
as the default. All events from this level up will trigger an alert in the
Wazuh Manager.

To explore these alerts in more detail, click Discover on Dashboard’s menu bar and then
expand one of the events to view the list of data fields used in that event. This page displays
information in an organized way, allowing filtering by different types of alert fields, including
compliance controls.

For example, in the event below we can find where this message is located in the system through the location field, the CVE code for the vulnerability using the rule.info field, and the IP address of the attacker using the srcip field.

8.5.2 Wazuh: PCI DSS
The Payment Card Industry Data Security Standard (PCI-DSS) is a common proprietary IT compliance standard for organizations that process major credit cards such as Visa, MasterCard, and American Express. It was developed to encourage and enhance cardholder data security and to facilitate the adoption of consistent data security measures globally. The standard was created to increase control of cardholder data in order to reduce credit card fraud.

It applies to all merchants and service providers that process, transmit or store cardholder data. If your organization handles card payments, it must comply, or it risks suffering financial penalties or even the withdrawal of the facility to accept card payments.

On the Wazuh Dashboard, click PCI DSS. The PCI DSS dashboard opens. The PCI DSS
dashboard shows the data related to the Agents, PCI Requirements, and Alerts summary.

Wazuh can implement PCI DSS by performing log analysis, file integrity checking, policy
monitoring, intrusion detection, real-time alerting, and active response. The Dashboard can
be filtered by selecting different PCI DSS requirements.

Under the Alerts summary section of the PCI DSS Dashboard, the data shows that the Log file rotated alert impacts the 10.5.2 and 10.5.5 controls. 10.5.2 protects audit trail files from
unauthorized modification, while 10.5.5 uses file integrity monitoring or change detection
software on logs to ensure that existing log data cannot be changed without generating
alerts (although new data being added should not cause an alert).

Controls like these can help you to log data like invalid login access attempts, multiple invalid
login attempts, privilege escalations, and changes to accounts. In order to achieve this, PCI
DSS tags are added to OSSEC log analysis rules, mapping them to the corresponding
requirement(s).

8.5.3 Wazuh: OSSEC


OSSEC is a scalable, multi-platform, open source Host-based Intrusion Detection System
(HIDS). OSSEC runs on virtually every operating system and is widely used in both on-premise and cloud environments. OSSEC helps to implement PCI-DSS by performing log
analysis, checking file integrity, monitoring policy, detecting intrusions, and alerting and
responding in real time. It is also commonly used as a log analysis tool that supports the
monitoring and analyzing of network activities, web servers, and user authentications.

OSSEC is comprised of two components:

• Central Manager Component: Receives and monitors the incoming log data
• Agents: Collect and send information to the central manager

8.5.4 Wazuh: GDPR


Europe is now covered by the world's strongest data protection rules. The mutually agreed
General Data Protection Regulation (GDPR) came into force on May 25, 2018 and was
designed to modernize laws that protect the personal information of individuals.

Wazuh takes advantage of its file integrity monitoring and access control capabilities, coupled with new tagging in the Wazuh ruleset. Rules that support a specific GDPR technical requirement carry a tag describing it.

Wazuh offers extensive support for GDPR compliance, but it can do much more. Wazuh will
help you gain greater visibility into the security of your infrastructure by monitoring hosts at
the operating system and application levels.

This solution, based on lightweight multi-platform agents, provides:

• File integrity monitoring


• Intrusion and anomaly detection
• Automated log analysis
• Policy monitoring and compliance

This diverse set of capabilities is provided by integrating OSSEC, OpenSCAP and Elastic
Stack into a unified solution and simplifying their configuration and management. Wazuh
provides an updated log analysis ruleset and a RESTful API that allows you to monitor the
status and configuration of all Wazuh agents. It also includes a rich web application (fully
integrated as a Kibana app) for mining log analysis alerts and for monitoring and managing
your Wazuh infrastructure.

The syntax used for rule tagging is gdpr_ followed by the chapter, article and, where
appropriate, the section and paragraph to which the requirement belongs. (e.g.
gdpr_II_5.1.f).

Click Discover on Dashboard’s menu bar and then expand one of the events to view the list
of data fields used in that event. In the event below we can find that the rule.gdpr field for
example has values of II_5.1.f and IV_35.7.d.

As we can observe, certain requirements for GDPR compliance are strictly formal with no
place for support at the technical level. However, Wazuh offers a wide range of solutions to
support most of the technical needs of GDPR.

GDPR Requirement: II_5.1.f


It is necessary to ensure the confidentiality, integrity, availability, and resilience of the processing systems and services, as well as of the stored data, by tracking their modifications, accesses and locations and guaranteeing their security. Access to the data must be controlled at all times, including when access takes place and by whom, and how the data is processed must also be controlled.

Data protection and file sharing technologies that meet data protection requirements are also necessary, as it is vitally important to know the purpose of the data processing and whether the data processor, in the case of third parties, is authorized to do it.

Concept. File integrity monitoring


One of the solutions that Wazuh offers is File Integrity Monitoring. Wazuh monitors the file
system, identifying changes in content, permissions, ownership, and attributes of files that
you need to keep an eye on.

Wazuh’s File Integrity Monitoring (FIM) watches specified files and triggers alerts when these
files are modified. The component responsible for this task is called Syscheck. This
component stores the cryptographic checksum and other attributes of a known good file or
Windows registry key and regularly compares it to the current file being used by the system,
looking for changes. Monitoring can be configured in multiple ways: in real time, at set intervals, for specific targets only, and so on. In the same way that personal data files are monitored, Wazuh can monitor shared files to make sure they are protected.

GDPR Requirement: IV_30.1.g


It is necessary to document all processes and activities to carry out an inventory of data from
beginning to end and to audit, in order to know all the places where personal and sensitive
data is located, processed, stored or transmitted.

Concept. Document and record all data processing. Audit logs and events
Wazuh facilitates the documentation of a large amount of information about file access and security. It offers the possibility to store all the events that the manager receives in archived logs. In addition to storing alerts in alert logs, it can use further logs and databases for various purposes, such as audits.

GDPR Requirement: IV_32.2


To control access to data, you will need account management tools that closely monitor
actions taken by standard administrators and users using standard or privileged account
credentials. In this way, the data protection officer will be able to check who is accessing and
processing the data, whether they are authorized to do so and whether they are who they
say they are.

Concept. Account management with/without privileges


Wazuh offers functionalities to monitor access and use of standard or privileged accounts
through its multiple monitoring tools.

GDPR Requirement: IV_35.7.d


Necessary security measures include data breach identification, blocking and forensic
investigation capabilities for rapid understanding of access attempts through active
breaches by malicious actors. This could occur through compromised credentials, unauthorized network access, active advanced persistent threats and verification of the correct operation of all components.

Security tools are necessary to prevent the entry of unwanted data types and malicious
threats and to ensure that endpoints are not compromised when requesting access to the
network, system, and data. Anti-malware and anti-ransomware are needed to ensure the
integrity, availability, and resilience of data systems, to block and to prevent malware and
rescue threats from entering devices.

Behavioral analysis services that use machine intelligence to identify people who do anomalous things on the network may be required to provide early visibility and to alert when employees turn rogue. Such tools can also highlight unusual activities, such as employees logged on to devices in two different countries, which almost certainly means their accounts are at risk.

Concept. Security Monitoring


To meet these security requirements, Wazuh provides solutions such as Intrusion and
Anomaly Detection. Agents scan the system looking for malware, rootkits or suspicious
anomalies. They can detect hidden files, cloaked processes or unregistered network listeners,
as well as inconsistencies in system call responses. In addition, an integration of Wazuh with
NIDS is viable.

Anomaly detection refers to the action of finding patterns in the system that do not match
the expected behavior. Once malware (e.g., a rootkit) is installed on a system, it modifies the
system to hide itself from the user. Although malware uses a variety of techniques to
accomplish this, Wazuh uses a broad-spectrum approach to find anomalous patterns that
indicate possible intruders. The main component responsible for this task is Rootcheck.
However, Syscheck also plays a significant role.

We may become aware of application or system errors, misconfigurations, attempted and/or successful malicious activity, policy violations, and a variety of other operational and security issues through Wazuh rules. Using automated log analysis, Wazuh agents read operating system and application logs and securely forward them to a central manager for rule-based analysis and storage.

It is worth highlighting the ability to detect vulnerabilities. Now agents are able to natively
collect a list of installed applications and to send it periodically to the manager (where it is
stored in local SQLite databases, one per agent). In addition, the manager builds a global
vulnerability database, using public OVAL CVE repositories and later cross correlating this
information with the agent’s application inventory data.

8.5.5 Wazuh: Ruleset
These rules are used by the system to detect attacks, intrusions, software misuse,
configuration problems, application errors, malware, rootkits, system anomalies or security
policy violations. OSSEC provides an out-of-the-box set of rules that we update and
augment, in order to increase Wazuh detection capabilities.

8.5.6 Wazuh: Dev Tools


The Dev Tools tab allows you to run commands through the API and provides a user interface to interact with the Wazuh API. You can use it to send requests and get responses.

On the editor pane, you can type API requests in several ways:

• Using in-line parameters, just like in a browser


• Using JSON-formatted parameters
• Combining both in-line and JSON-formatted parameters. Keep in mind that if you
place the same parameters with different values, the in-line parameter has
precedence over the JSON-formatted one

To execute a request, place the cursor on the desired request line and click on the button. Comments can also be used in the editor pane by placing the # character at the beginning of the line.

8.6 Dev Tools


Dev Tools Console enables you to interact with the REST API of Elasticsearch and allows for
auto completion and formatting of queries.

You cannot interact with Kibana API endpoints via Console.

Console has two main areas:

• The editor pane, where you type your REST API command and click on the button that sends the query to the Elasticsearch instance or cluster.
• The result pane, which displays the responses to the command.

You can select multiple requests and submit them together. Console sends the requests to
Elasticsearch one by one and shows the output in the result pane. Submitting multiple requests is helpful when you are debugging an issue or trying query combinations in multiple scenarios.

The Console maintains a list of the last 500 commands that Elasticsearch
executed successfully. Click History on the top right of the panel to view
your recent commands. If you want to view any request, select that
request and click Apply. Console will add this to the editor pane.

Click on the action icon and select Open documentation to view the documentation for
the Search APIs.

8.6.1 Exercise : Dev Tools


1. Type the code GET /_cat/indices in the editor pane and click on the button. This will display all the indices in the result pane.

All the cat commands accept a help query string parameter that shows all the headers and info they provide, and the /_cat command alone lists all the available commands. The indices command provides a cross-section of each index.

2. Type the following code to add a doc into an index and click on the button
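
A minimal request of this kind could look as follows (the pdfbook index name comes from this exercise; the name and author fields and their values are assumptions for illustration):

# field names and values are hypothetical
POST /pdfbook/_doc
{
  "name": "Example PDF Book",
  "author": "Jane Doe"
}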

Optional Task:
Run the GET /_cat/indices code to view the index that you have just created.

3. Type GET /pdfbook/_search to get the details of all the pdfbook documents.

4. Add another book using the following code:
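
For example (again with hypothetical field values):

POST /pdfbook/_doc
{
  "name": "Another PDF Book",
  "author": "John Smith"
}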

5. GET /pdfbook/_search will display the existing pdfbook documents; copy the _id value of one of them from the result pane. You can use the code shown below to query a specific document by the ID value you just copied.
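
For example, replacing the placeholder with the _id value you copied:

GET /pdfbook/_doc/<copied _id>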

6. Use the code below to search a pdfbook based on the name of an author.
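
A possible search request, assuming the author field from the earlier sketches:

GET /pdfbook/_search
{
  "query": {
    "match": { "author": "Jane Doe" }
  }
}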

7. Use the code below to update any existing document.
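
An update of this kind could use the _update endpoint with a partial document (field and value are assumptions):

POST /pdfbook/_update/<copied _id>
{
  "doc": { "author": "Jane D. Doe" }
}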

8. The following code can be used to verify whether the document has been updated.
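
For example, fetching the document again by its ID shows the updated fields:

GET /pdfbook/_doc/<copied _id>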

9. The following code can be used to delete any of the existing documents.
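
For example:

DELETE /pdfbook/_doc/<copied _id>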

8.7 Management
The Management module is used to perform the runtime configuration of Kibana, including the initial setup and ongoing configuration of index patterns, advanced settings that change the behavior of the Kibana application, and the various objects that you can save throughout Kibana, such as Searches, Visualizations, Index Patterns, and Dashboards.

8.7.1 Index Patterns


Index Patterns are created in Kibana to visualize and explore data. An Index Pattern can match a single index, multiple indices, or a rollup index. An Index Pattern tells Kibana which
Elasticsearch indices contain the data that you want to work with.

8.7.2 Exercise : Creating an Index Pattern to Connect to Elasticsearch


1. Navigate to Management > Index Patterns from the menu on the left of your page.
This will display the index patterns tab.

2. Click Create index pattern.

3. Specify an Index Pattern that matches the name of one or more of your Elasticsearch
indices. Enter the Index Pattern name as wazuh* and click Next steps.

Your Index Pattern can match multiple Elasticsearch indices. Use a comma to separate the names, with no space after the comma. The notation for wildcards (*) and the ability to "exclude" (-) also apply (for example, test*,-test3).

4. Select @timestamp from the Time Filter field name drop-down menu.

The Time Filter will use this field to filter your data by time. You can
choose not to have a time field, but you will not be able to narrow down
your data by a time range.

5. Click Create index pattern. Once you have created an index pattern, you will be
presented with a table of all fields and associated data types in the index.

You can start working with your Elasticsearch data in Kibana after you have created your
Index Pattern. Here are some things to try:

• Interactively explore your data in Discover


• Present your data in charts, tables, and more in Visualize

8.7.3 Managing Saved Objects
A Saved Object can be a Search, Dashboard, Visualization, or an Index Pattern. You can view,
edit, delete, import, or export Saved Objects from Management > Saved Objects.

Advanced Settings
Using the Advanced Settings feature, you can edit the settings that control the behavior of Kibana. You can change the default date format, the default index for Timelion, the precision for decimal values, or the default query language.

You can view Advanced Settings from Management > Advanced Settings.

Advanced Settings should be used by very advanced users only.
Changes you make here can break large portions of Kibana. Some of these settings may be undocumented, unsupported or experimental. If a field has a default value, blanking the field will reset it to its default, which may be unacceptable given other configuration directives. Deleting a custom setting will permanently remove it from Kibana's config.

8.8 Security
Open Distro for Elasticsearch includes the Security plugin for authentication and access
control. The plugin provides numerous features to help you secure your cluster. The security you define can be very granular; for example, you can configure a user to see only certain indices or certain dashboards, or to view a Dashboard without being able to edit it.

Security and access control are managed using different concepts as discussed below:

8.8.1 Permissions
Permissions are individual actions, such as creating an index, that can be assigned to Action Groups, as shown below.

8.8.2 Action Groups


Action groups are sets of permissions. For example, the predefined SEARCH Action Group shown below has permissions to use the _search and _msearch APIs.

8.8.3 Roles
A Security Role defines the scope of a permission or action group on a cluster, index,
document, or field. Roles are the basis for access control in Open Distro for Elasticsearch
Security. Roles allow you to specify which actions its Users can take, and which Indices those
Users can access. Roles control cluster operations, access to indices, and even the fields and
documents Users can access.

8.8.4 Exercise : Creating Role


1. Navigate to Security > Roles and then click to add a new role. Provide a name for the role.
2. For this role, you can specify:
a. No Cluster Permission
b. READ permissions on any two Indices, WRITE permission to a third Index, under
the Index Permissions tab
c. Read Only permission to the Analysts Tenant under the Tenants tab

8.8.5 Backend Roles


A Backend Role is an optional role that comes from an authorized backend, for example LDAP or Active Directory.

8.8.6 Users
A user makes requests to Elasticsearch clusters. A user typically has credentials including
Username and Password. A user can have zero or more Backend Roles, and zero or more
User attributes.

8.8.7 Exercise : Creating a User


1. Navigate to Security > Internal User Database and then click to add a New Internal User. Provide a Username and Password.

2. Optionally, provide the Backend Roles or User attributes. Click Submit.

The Security plugin automatically hashes the password and stores it
in the .opendistro_security index.

Backend Roles are optional and are not the same as security Roles.
Backend roles are external Roles that come from an external
authentication system, for example LDAP or Active Directory. If you are
not using any external system, you can ignore this step.
Attributes are also optional, and they are User properties that you can
use for variable substitution in Index Permissions.
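
Internal users can also be created through the Security plugin REST API instead of the UI. A hedged sketch, in which the username jdoe and the password are placeholders:

# Sketch only: username and password are placeholders
PUT _opendistro/_security/api/internalusers/jdoe
{
  "password": "a-strong-password",
  "backend_roles": []
}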

8.8.8 Role Mapping


Users are mapped to roles after you have created the roles. For role mapping, you select a
role and then map one or more users to it.

8.8.9 Exercise : Role Mapping


1. Navigate to Security > Role Mappings and select kibana_user Role. A mapping of
kibana_user (Role) to jdoe (User) means that John Doe gains all the Permissions of
kibana_user after authenticating.

2. Likewise, a mapping of all_access (Role) to admin (Backend role) means that any User
with the Backend Role of admin (from an LDAP or Active Directory server) gains all
the Permissions of all_access after authenticating. You can map each Role to many
Users or Backend Roles.
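
Role mappings can likewise be managed through the Security plugin REST API. A hedged sketch that mirrors the kibana_user example above:

# Sketch only: maps the internal user jdoe to the kibana_user role
PUT _opendistro/_security/api/rolesmapping/kibana_user
{
  "users": ["jdoe"]
}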

9 Incident Response
TheHive is a scalable, open source and free Security Incident Response Platform. TheHive is
tightly integrated with Malware Information Sharing Platform (MISP) and is designed to
make life easier for SOCs, CSIRTs, CERTs and any information security practitioner dealing
with security incidents that need to be investigated and acted upon swiftly.

TheHive can be synchronized with one or multiple MISP instances to start investigations out of MISP events. Investigation results can also be exported as MISP events to help detect and react to attacks that have already been dealt with. When integrated with Cortex, TheHive allows security analysts and researchers to analyze hundreds of observables at once using more than a hundred analyzers.

TheHive can be configured to import events from one or more MISP instances using various
filters (tag whitelist, tag blacklist, organization blacklist, max attributes per event).

Cortex integration
TheHive uses Cortex to have access to analyzers and responders.

• Analyzers can be launched against observables to get more details about a given observable
• Responders can be launched against case, tasks, observables, logs, and alerts to execute an
action
• One or multiple Cortex instances can be connected to TheHive

Case Merging
Two (or more) cases can be easily merged together if they relate to the same threat or have
a significant observable overlap.

Case and Observable Filtering


Cases and Observables can be filtered to display the relevant data.

TheHive supports several authentication methods:

• Active Directory
• LDAP
• API keys
• X.509 SSO
• OAuth 2
• Local authentication

9.1 Collaborate
Collaboration is really important in TheHive and it allows multiple SOC and CERT analysts to
work on the same case simultaneously. For example, a security analyst may work on tracking
a malware activity on proxy logs, while another may deal with malware analysis as soon as
IOCs have been added by their co-workers. TheHive's live stream allows everyone to keep
an eye on what is happening on the platform, in real time.

TheHive makes Observables and real-time information pertaining to new or existing cases and tasks available to all team members through its built-in live stream capability. Special notifications allow them to handle or assign new tasks and preview new MISP events and alerts from multiple sources such as email reports, CTI providers and SIEMs. They can then import and investigate them right away.

9.2 Elaborate
Every investigation in TheHive corresponds to a case. Cases and associated tasks can be
created from scratch, using a template engine, from MISP events, SIEM alerts, email reports,
or any other significant source of security events.

Metrics and custom fields can be added to the template to drive the team's activity, identify investigations that may take significant time, and seek to automate monotonous tasks through dashboards. Analysts can record their progress, attach important files, add tags, or import password-protected ZIP archives containing malware or suspicious data without opening them.

Each case can be broken down into one or more tasks. Each task can contain its own work log. TheHive's template engine is used to add the same tasks to a specific type of case every time such a case is created. Case templates can be used to link metrics to specific case types in order to drive the team's activity, identify the types of investigations that take significant time, and seek to automate monotonous tasks.

An analyst can be assigned a task, or a team member can take charge of a task without waiting for someone to assign it to them.

9.3 Analyze
Hundreds or thousands of Observables can be added to each case that you create, or imported directly from a MISP event or from any alert sent to the platform. TheHive can be linked to one or more MISP instances, and MISP events can be previewed to decide whether they warrant an investigation. If an investigation is in order, the analyst can add the event to an existing case or import it as a new case using a customizable template. Once investigations are complete, you can export IOCs to a MISP instance.

SIEM alerts, phishing and other suspicious emails can be sent to TheHive using TheHive4py, a Python API client for TheHive, the open source and free security incident response platform designed to make life easier for SOCs, CSIRTs, CERTs and any information security practitioner dealing with security incidents that need to be investigated and acted upon swiftly. These alerts then appear in the Alerts panel along with new or updated MISP events, where they can be previewed, imported into cases, or ignored.

TheHive has the ability to automatically identify Observables that have been already seen in
previous cases. Observables can also be associated with a Traffic Light Protocol (TLP),
Permissible Actions Protocol (PAP), or the source that provided or generated them using
tags. The analyst can also easily mark observables as IOCs and isolate those using a search
query then export them for searching in a SIEM or other data stores.

9.4 Respond
Analysts can call Cortex responders to contain an incident, eradicate malware and perform other orchestration tasks. For example, they can call a responder to reply to a suspicious email notification from TheHive, block a URL at the proxy level or gather evidence from a compromised endpoint.

9.5 Exercise: Adding a User
1. To access the user management page, open the Admin drop-down menu on the top right of the screen and select Users.

2. The User management page displays all the existing users of the system.

3. Click Add user to open the Add user screen.

4. Enter User’s login and User’s name in the relevant fields.

5. From the Roles drop-down menu, select the required role. Click Save user.

9.6 Exercise: Creating Cases from Alerts


Once we have an alert, we can begin the process to create a case, assign tasks, enrich IOC,
and close the case.

1. Click Alerts on the top navigation bar. The Alerts list page opens with the list of alerts in the
system. This is a list of unassigned alerts waiting to be picked up by any available analyst.

2. Select an Alert by clicking on the Preview and Import icon.

3. Alert Preview window opens showing you the alert details and a list of extracted
Observables.

A Similar Cases section will appear at the bottom if any Observable in this case has been seen before. If the two alerts share a link, then you can opt to add this alert to an existing case instead of generating a new one.

4. The Import alert as drop-down menu on the bottom right allows you to assign a Case template that will be used for case creation. Templates contain lists of predetermined tasks that should be performed on the alert. To create an empty case, click Yes, Import to turn the alert into a case that will be assigned to you.

9.7 Case Management


TheHive allows Analysts to work together to complete tasks and close cases. Tasks and cases
both support assignment to clearly differentiate who is responsible for what, while allowing
everyone to follow along through the live stream.

9.8 Exercise: Creating Cases


1. Click on TheHive logo on top left of the page to access all the cases assigned to you.

2. Click + New Case on the top navigation bar to create a new case. Create a new case window
opens.

3. In the Title field, type wazuh-alerts-3.x-*_SSH Failed Login. This will serve as the name of the
case.

4. Select M as the Severity level of this case.

5. In the Tags field, provide SSH Failed Login, 172.16.3.6, and sshd tags.

6. From the PAP field, select AMBER.

7. From the TLP field, select RED.

8. In the Description field, enter a relevant and meaningful description. Click + Create case.

TLP is the Traffic Light Protocol which uses 4 color codes to indicate boundaries of how far
outside the original group or recipient the information may be shared.

An example provided by TheHive website is: “For example, a file added as observable can be
submitted to VirusTotal if the associated TLP is WHITE or GREEN. If it’s AMBER, its hash is
computed and submitted to VT but not the file. If it’s RED, no VT lookup is done.”

PAP is the Permissible Actions Protocol which mimics the TLP but indicates to the analyst
how they may use the IOC in investigating the alert. It dictates actions that may be taken
with each IOC, such as active vs passive response.

9. Click on your case to open the main window for the case. Cases have 3 tabs in the main
window (Details, Tasks, and Observables) as well as a live stream on the right-hand side
showing task and status updates from all analysts.

The Details page shows metadata related to the case such as tags, date, severity, related
cases, a description, and TLP and PAP designations.

Any tasks designed by an Analyst, or those defined in an attached Case template are
displayed under the Tasks tab. Tasks should be used to track the actions taken to answer
investigative questions. Tasks that you accept, or which are auto-assigned to you show up
in My tasks on the top navigation bar. Tasks that are not assigned are displayed in the
Waiting tasks on the top navigation bar. All the extracted Observables and their types are
displayed under the Observables tab

10. Click on the Observables tab, click + Add observable.

11. From the Type drop-down menu, select ip.

12. In the Value field, type 1.1.1.1, and in the Tags field, add test tag. Click Create observable(s).

You only have to specify a value for either Tag or Description, not both.

13. Click the observable value 1[.]1[.]1[.]1 under Value/Filename to open the detailed page.

14. The detailed page shows Metadata, links to other cases where IOC is also present, and an
Analysis section to run the Analyzers for enrichment. Click Run all.

15. Click on the Observables tab after running the Analyzer. You should now see a list of tags; this is your enrichment and it gives you more actionable data.

16. Switch back to the detailed page and click on any date under the Last analysis column to
view a more detailed report of the scan results.

Case Closure
17. When you are ready to close the case, click Close on the main title bar. The Close Case screen opens.

18. Provide the required details and click Close case.

9.9 Case Template


To access the Case templates page, open the Admin drop-down menu on the top right of the screen and select Case templates. Click + New template to create a new template.

You will need to provide the following details on the Case basic information page.

• Template name
• Title prefix
• Severity
• TLP
• PAP
• Tags
• Description

Along with description, you will also need to provide a Task to outline the investigative steps
for this alert. This provides a consistent approach to handling events since the Task List
becomes your investigative playbook. These should also reflect the actions defined in your
SOPs.

In addition to the above, you will also need to provide the values for Metrics and Custom
fields. Items you select here must first be defined in the respective Case metrics and Case
custom fields sections under the Admin drop-down menu on the top right of the screen.

A Case metric is just a variable defined to increment. Metrics can also be displayed in graphs
on the Dashboard.

A Case custom field allows you to add additional fields for an Analyst to provide the response
as either a string drop-down, number, Boolean, or a date.

9.10 Dashboards
TheHive allows you to create meaningful dashboards to drive any activity and support a
budget request.

9.11 Exercise: Creating a Dashboard


1. Click Dashboards on the top navigation bar to open the Dashboards page. The Dashboards
page displays the following out-of-the-box dashboards:

a. Case statistics
b. Alert statistics
c. Job statistics
d. Observable statistics

2. Click Create new Dashboard. New Dashboard window opens.

3. In the Title field, type the name of your Dashboard. In the Description field, enter a relevant
and meaningful description.

4. From the Visibility drop-down menu, select Shared and click Create.

5. Dashboard is built through a drag-and-drop interface.

6. Drag and drop the Donut chart onto the empty space to add this chart to the Dashboard. A No title window opens.

7. In the Title field, type Cases by status.

8. From the Entity drop-down menu, select Case. From the Aggregation Field drop-down
menu, select status. This will show all the cases by status. Click Apply.

9. Repeat the process to drag another Donut chart and drop it on the existing one.

10. Specify the Title as Case Tags, Entity as Case, and Aggregation Field as tags.

11. Drag and drop Row this time, and then drag and drop Bar type chart on the new row.

12. Specify the Title as Case severity history, Entity as Case, Date Field as createdAt, Interval as By week, and Category Field as severity. Click Apply.

13. Click Save once you have made all the changes in your Dashboard. Click Edit before making
any changes to your existing Dashboard.

14. Auto-Refresh by default is Off.

10 Analyzers
Cortex is open source and free software. Cortex analyzes observables, at scale, by querying a single tool instead of many. It addresses a common problem frequently encountered by SOCs, CSIRTs, and security researchers in the course of threat intelligence, digital forensics and incident response.

Observables, such as IP and email addresses, URLs, domain names, files or hashes, can be
analyzed one by one or in bulk mode using a Web interface. Analysts can
also automate these operations thanks to the Cortex REST API.

Cortex helps you to analyze different types of observables using more than 35 analyzers.
Most analyzers come in different flavors. For example, using the VirusTotal analyzer, you can
submit a file to VT or simply check the latest available report associated with a file or a hash.

Cortex4py is a Python API client for Cortex. It allows analysts to automate these operations and submit observables in bulk mode through the Cortex REST API from alternative SIRP platforms, custom scripts or MISP.

Cortex has many analyzers and a RESTful API, that makes observable analysis a breeze,
particularly if called from TheHive. TheHive can also leverage Cortex responders to perform
specific actions on alerts, cases, tasks and observables collected in the course of the
investigation: send an email to the constituents, block an IP address at the proxy level, notify
team members that an alert needs to be taken care of urgently and much more.

Cortex allows you to create and manage multiple organizations, manage the associated
users and give them multiple roles.

Setting up an Organization
The default cortex Organization cannot be used for any other purpose than managing global
administrators (users with the superAdmin role), Organizations and their associated users. It
cannot be used to enable/disable or configure Analyzers. To do so, you need to create your
own Organization inside Cortex by clicking on the Add organization button.

Setting up an Organization Administrator


Create the Organization administrator account (user with an orgAdmin role). Then, specify a
password for this user. After doing so, log out and log in with that new user account.

Enable and Configure Analyzers
By default, and within every freshly created organization, all analyzers are disabled. If you
want to enable and configure them, use the Web UI (Organization > Configurations and
Organization > Analyzers tabs).

All analyzer configuration is done using the Web UI, including adding API keys and
configuring rate limits.

10.1 Cortex and TheHive


Cortex works very well in combination with TheHive. TheHive can analyze hundreds of observables in a few clicks by linking with one or several Cortex instances (depending on your OPSEC needs and performance requirements). TheHive also has a report template engine that allows you to customize the output of Cortex analyzers to suit your requirements.

Like TheHive, Cortex supports several authentication methods:


• Active Directory
• LDAP
• API keys
• X.509 SSO
• OAuth 2
• Local authentication

10.2 Cortex Super Administrator


Once Cortex has been set up and the database has been created, you can then create a Cortex Super Administrator with the superAdmin role. This user account will be able to create Cortex organizations and users.

You can log in using this user account. Notice that the default cortex organization has been
created. If you open this organization then you will be able to see your user account, a Cortex
global administrator.

10.3 Cortex: Create an Organization
The default Cortex organization can only be used to manage global administrators (users
with the superAdmin role), organizations and their associated users. If you want to configure,
or enable/disable an Analyzer:
• Click + Add organization
• Specify Organization’s name
• Provide suitable Description

An organization cannot be deleted once created but it can be disabled by a superAdmin.

10.4 Cortex: Create a User


superAdmin can manage user accounts in any organization that exist in the Cortex instance.
Users can also be managed for a specific organization by those who possess the orgAdmin
role in that organization.
To create a user with the superAdmin role:

• Open the Cortex organization (Default)


• Click + Add user
• Provide User’s login
• Provide User’s name
• From the Roles drop-down menu, select superAdmin

To create a user without the superAdmin role:

• Open the relevant organization


• Click + Add user
• Provide User’s login
• Provide User’s name
• From the Roles drop-down menu, select the relevant role combination (read; read and analyze; or read, analyze and orgAdmin)

User accounts cannot be deleted once created, but they can be locked by an orgAdmin or a superAdmin. Once locked, they cannot be used, but they can be unlocked by either an orgAdmin or a superAdmin.

10.5 User Roles


Cortex has role-based access control and defines the following four roles.

read
• This role cannot be used in the default cortex organization
• This role cannot submit jobs
• The user can access all the jobs that have been performed by the Cortex instance, including
their results
• This organization can only contain super administrators

analyze
• This role cannot be used in the default cortex organization
• This role can submit a new job using one of the configured analyzers for their organization
• This organization can only contain super administrators

orgAdmin
• This role cannot be used in the default cortex organization, which can only contain super administrators
• A user with an orgAdmin role can manage users within their organization
• They can add users and give them read, analyze and/or orgAdmin roles
• This role also permits configuring analyzers for the organization

superAdmin
• This role is incompatible with all the other roles listed above
• It can be used solely for managing organizations and their associated users
• When you install Cortex, the first user that is created will have this role
• Several users can have it as well but only in the default cortex organization, which is
automatically created during installation

The table below summarizes the capabilities of these roles.

Actions                     read    analyze    orgAdmin    superAdmin
Read reports                 X         X           X
Run jobs                               X           X
Enable/Disable analyzer                             X
Configure analyzer                                  X
Create org analyst                                  X            X
Delete org analyst                                  X            X
Create org admin                                    X            X
Delete org admin                                    X            X
Create Org                                                       X
Delete Org                                                       X
Create Cortex admin user                                         X

10.6 Cortex Analyzer and Responder


Analyzers and Responders are autonomous applications managed by and run through the
Cortex core engine. Analyzers allow analysts and security researchers to analyze observables
and IOCs such as domain names, IP addresses, hashes, files, and URLs at scale. While many
analyzers are free to use, some require special access while others necessitate a valid service
subscription or product license, even though the analyzers themselves are released under
the AGPL (Affero General Public License).

Responders are programs that perform different actions and apply to alerts, cases, tasks,
task logs, and observables.
Analyzers and responders can be configured, enabled, or disabled only
by orgAdmin users.

10.7 Analyzer Management

Analyzers can be managed in the following ways.

Analyzers Config Tab
1. Open the Organization page and click the Analyzers Config tab. The configuration of all the
available analyzers is defined here, including settings that are common to all the flavors of
a given analyzer.

2. Click the Organization > Analyzers tab. orgAdmin users can configure, enable, or disable
specific analyzer flavors. They can override the global configuration inherited from the
Organization > Analyzers Config tab and add additional, non-global configuration that
some analyzer flavors might need to work correctly.

3. Click the Organization > Responders Config tab. orgAdmin users can define the
configuration for all the available responders, including settings which are common to all
the flavors of a given responder.

4. Click the Organization > Responders tab. orgAdmin users can enable, disable, and
configure specific responder flavors. They can override the global configuration inherited
from the Organization > Responders Config tab and add additional, non-global configuration
that some responder flavors might need to work correctly.

The configuration can only be seen by orgAdmin users of a given organization. superAdmin
users cannot view analyzer configuration.

10.8 Job History


Click Jobs History from the main navigation bar. This page displays all the jobs that have
been performed by the Cortex instance, including their results.

Users with the superAdmin role cannot see the Jobs History.

A user that has an analyze role can submit a new job using one of the configured analyzers
for their organization. Click +New Analysis to submit a new job.

• From the TLP and PAP drop-down menus, select the required values
• From the Data Type drop-down menu, select ip
• In the Data field, enter 1.1.1.1
• From the list of available Analyzers, select the required analyzer
• Click Start

Click View to see the job report.
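
The same analysis can be submitted programmatically through the Cortex REST API using an
analyze-capable user's API key. The sketch below is illustrative only: the Cortex URL, API key,
and analyzer name are placeholders, and the endpoint path may differ slightly between Cortex
versions.

curl -XPOST 'http://CORTEX_HOST:9001/api/analyzer/<analyzer_id>/run' \
  -H 'Authorization: Bearer <api_key>' \
  -H 'Content-Type: application/json' \
  -d '{
        "data": "1.1.1.1",
        "dataType": "ip",
        "tlp": 2,
        "pap": 2
      }'

The job identifier returned in the response can then be used to retrieve the report once the
analyzer has finished.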

11 Threat Intel
Malware Information Sharing Platform (MISP) is an open source software solution used to
collect, store, distribute, and share cyber security indicators and threat information about
cyber security incident analysis and malware analysis.

The objective of MISP is to promote the sharing of structured information within the security
community and abroad. MISP provides functionalities to support the exchange of
information but also the consumption of said information by Network Intrusion Detection
Systems (NIDS) and log analysis tools like Security Information and Event Management
(SIEM).

MISP is accessible from different interfaces, such as a web interface (for analysts or incident
handlers) or a REST API (for systems pushing and pulling IOCs). The inherent goal of MISP
is to be a robust platform that ensures smooth operation, from revealing and maturing
threat information through to exploiting it.
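
As a hedged illustration of that REST API, the request below pulls the events published during
the last day. The MISP URL and automation key are placeholders, and the filter shown is only
one of the many parameters the restSearch endpoint accepts.

curl -XPOST 'https://MISP_HOST/events/restSearch' \
  -H 'Authorization: <automation_key>' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{ "returnFormat": "json", "last": "1d" }'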

There are many different types of users of an information sharing platform like MISP:

• Malware reversers willing to share indicators of analysis with respective colleagues


• Security analysts searching, validating, and using indicators in operational security
• Intelligence analysts gathering information about specific adversary groups
• Law-enforcement relying on indicators to support or bootstrap their DFIR cases
• Risk analysis teams willing to know about the new threats, likelihood, and occurrences
• Fraud analysts willing to share financial indicators to detect financial frauds

The objectives of MISP, the open source threat intelligence and sharing platform, are to:

• Facilitate the storage of technical and non-technical information about seen malware and
attacks
• Automatically create relations between malware and their attributes
• Store data in a structured format (allowing automated use of the database to feed detection
systems or forensic tools)
• Generate rules for Network Intrusion Detection System (NIDS) that can be imported on IDS
systems (e.g. IP addresses, domain names, hashes of malicious files, pattern in memory)
• Share malware and threat attributes with other parties and trust-groups
• Improve malware detection and reversing to promote information exchange among
organizations (e.g. avoiding duplicate work)
• Create a platform of trust - trusted information from trusted partners
• Store locally all information from other instances (ensuring confidentiality on queries)

11.1 Feeds
Feeds contain indicators that can be automatically imported into MISP at regular intervals;
they can be either remote or local resources. Such indicators contain a pattern that can be
used to detect suspicious or malicious cyber activity.

Feeds can be structured in three different formats:

• MISP standardized format, which is the preferred format to benefit from all the MISP
functionalities

• CSV format, which allows you to select the columns that are to be imported

• Free-text format, which allows automatic ingestion and detection of indicators/attributes by
parsing any unstructured text

You can easily import any remote or local URL and store it in your MISP instance. Feed
descriptions can also easily be shared among different MISP instances, as you can export a
feed description as JSON and import it back into another MISP instance.

11.1.1 Adding Feeds


Hover the cursor over Sync Actions from the main navigation bar and select List Feeds. The
default feeds and the current version of MISP are displayed on this page.

On the left pane, click Add Feed to open the Add MISP Feed page. You will need to provide
the following details:

• Enabled: Whether the feed is active or not

• Lookup visible: If this is not checked, the correlation will only show up for you; if checked,
correlations are visible for other users as well

• Caching enabled: To enable a feed for caching, check the Caching enabled field to benefit
automatically from the feed in your local MISP instance

• Name: A name to identify the feed

• Provider: The name of the content provider

• Input Source: From the Input Source drop-down menu, select either:

o Network: Hosted somewhere outside the platform

o Local: Hosted on the local server. Once this option is selected, another checkbox,
Remove input after ingestion, will appear. Tick this checkbox if you want the input to be
deleted after it has been ingested.

• URL: URL of the feed, where it is located (for Local hosted files, point to the manifest.json e.g.
/home/user/feed-generator/output/manifest.json)

• Source Format: From the Source Format drop-down menu, select either:

o MISP Feed: The source points to a list of JSON files formatted like MISP events
o Freetext Parsed Feed:
▪ Target Event: This is the event that gets updated with the data from the
feed. Target Event can be either New Event Each Pull (a new event will be
created each time the feed is pulled) or Fixed Event (a unique event will be
updated with the new data; this event is determined by the next field)
▪ Target Event ID: The ID of the event where the data will be added (if not set,
the field will be set the first time the feed is fetched)
▪ Exclusion Regex: Add a regex pattern for detecting IOCs that should be
skipped (this can be useful to exclude any references to the actual report /
feed, for example)
▪ Auto Publish: If checked, events created from the feed will be automatically
published
▪ Override IDS Flag: If checked, the IDS flag will be set to false
▪ Delta Merge: If checked, only data coming from the last fetch is kept; the
old data is deleted
o Simple CSV Parsed Feed:
▪ Target Event: This is the event that gets updated with the data from the
feed. Target Event can be either New Event Each Pull (a new event will be
created each time the feed is pulled) or Fixed Event (a unique event will be
updated with the new data; this event is determined by the next field)
▪ Target Event ID: The ID of the event where the data will be added (if not set,
the field will be set the first time the feed is fetched)
▪ Exclusion Regex: Add a regex pattern for detecting IOCs that should be
skipped (this can be useful to exclude any references to the actual report /
feed, for example)
▪ Auto Publish: If checked, events created from the feed will be automatically
published
▪ Override IDS Flag: If checked, the IDS flag will be set to false
▪ Delta Merge: If checked, only data coming from the last fetch is kept; the
old data is deleted

• Distribution: Defines the distribution option that will be set on the events created by the
feed

• Default Tag: A default tag can be added to the created events

• Filter rules: Allow you to define which organizations or tags are allowed or blocked

11.2 Events
MISP events are encapsulations for contextually linked information. The MISP interface
allows the user to get an overview of, or to search for, events and attributes of events that
are already stored in the system in various ways.

On the left pane, click List Events. The Events page opens, displaying a list of the last 60
events.

Published: Already published events are marked by a checkmark, and the unpublished
events are marked by a cross

Org: The organization that created the event

Owner Org: The organization that owns the event on this instance. This field is only visible
to administrators

ID: It displays the ID number of the event that was assigned by the system

Tags: Tags that are assigned to this event.

#Attr.: The total number of attributes that the event includes


Email: The e-mail address of the event's reporter

Date: The date of the attack

Info: A short description of the event

Distribution: This field describes who has access to the event

Actions: The controls that allow the user to either view or modify the event. The available
Actions are:
• Publish Event
• Edit
• Delete
• View

11.2.1 Adding an Event


The process of adding an event can be split into 3 phases:

1. The creation of the event itself


2. Populating it with attributes and attachment
3. Publishing it

To create the event, click Add Event on the left pane; the Add Event page opens. During this
first step, you will create a basic event without any actual attributes, storing only general
information such as a description, time and risk level of the incident.

Provide the following data in the Add Event page.

Date: This is the date of the incident

Distribution: It controls the visibility of the event once it is published. Distribution also
controls whether the event will be synchronized to other servers or not. The following
options are available in the drop-down menu:

• Your organization only: This setting will only allow members of your organization to see
this event
• This Community-only: Users that are part of your MISP community will be able to see this
event. This includes your own organization, organizations on the MISP server, and
organizations running MISP servers that synchronize with this server
• Connected communities: Users that are part of your MISP community will be able to see
this event. This includes all organizations on this MISP server, all organizations on MISP
servers synchronizing with this server, and the hosting organizations of servers that connect
to those aforementioned servers (so basically any server that is 2 hops away from this one).
Any other organizations connected to linked servers that are 2 hops away from this one will
be restricted from seeing the event.
• All communities: This will share the event with all MISP communities, allowing the event to
be freely propagated from one server to the next.

Threat Level: This field indicates the risk level of this event. The following options are
available in the drop-down menu:
• Low: General mass malware.
• Medium: Advanced Persistent Threats (APT)
• High: Sophisticated APTs and 0day attacks

Analysis: Indicates the current stage of the analysis for this event, with the following possible
options:
• Initial: The analysis is just beginning
• Ongoing: The analysis is in progress
• Completed: The analysis is complete

Event Info: This is where the malware/incident can get a brief description starting with the
internal reference.
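
For automated workflows, the same event can be created through the MISP API. The snippet
below is only a sketch: the MISP URL and automation key are placeholders, and the numeric
codes follow the conventions described above (distribution 0 = your organization only,
threat_level_id 3 = low, analysis 0 = initial).

curl -XPOST 'https://MISP_HOST/events/add' \
  -H 'Authorization: <automation_key>' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
        "Event": {
          "date": "2021-01-01",
          "distribution": 0,
          "threat_level_id": 3,
          "analysis": 0,
          "info": "OSINT - Example phishing campaign"
        }
      }'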

11.2.2 Add Attributes to the Event
Once the event is created, the next step is to add attributes. This can be done by adding
them manually or importing the attributes from an external format (OpenIOC,
ThreatConnect). Click + on the event screen that you have created to add an attribute.

Keep in mind that the system searches for regular expressions in the value field of all
attributes when entered, replacing detected strings within it as set up by the server's
administrator (for example to enforce standardized capitalization in paths for event
correlation or to bring exact paths to a standardized format).

Provide the following data in the Add Attribute page.

Category: This drop-down menu explains the category of the attribute, meaning what
aspect of the malware this attribute is describing

Type: Categories determine what aspect of an event they are describing. The Type explains
by what means that aspect is being described. As an example, the source IP address of an
attack, a source e-mail address, or a file sent through an attachment can all describe the
payload delivery of a malware. These would be the types of attributes with the category of
payload delivery.

Distribution: Distribution drop-down menu allows you to control who will be able to see
this attribute

Value: The actual value of the attribute, enter data about the value based on what is valid
for the chosen attribute type. For example, for an attribute of type ip-src (source IP address),
1.1.1.1 would be a valid value

Contextual Comment: You can add some comments to the attribute that will not be used
for correlation but instead serves as purely an informational field

For Intrusion Detection System: This option allows the attribute to be used as an IDS
signature when exporting the NIDS data, unless it is being overruled by the white-list.

Batch import: If there are several attributes of the same type to enter (such as a list of IP
addresses), it is possible to enter them all into the same value field, separated by a line break
between each line. This will allow the system to create a separate attribute for each line

11.2.3 Add Attachment to the Event


Documents including malware itself, reports files from external analysis, or the artifacts
dropped by the malware can be attached to an event. Click Add Attachment on the left
pane of the event screen to upload any attachment.

Provide the following data in the Add Attachment(s) page.

Category: This field describes the file that is going to be attached

Distribution: This drop-down menu allows you to control who will be able to see this
attachment

Contextual Comment: You can add some comments to the attribute that will not be used
for correlation but instead serves as purely an informational field

Upload field: By hitting browse, you can browse your file system and point the uploader to
the file that you want to attach to the attribute

Malware: This check-box marks the file as malware; it will be zipped and password-protected
to protect the users of the system from accidentally downloading and executing the file.
Make sure to tick this if you suspect that the file is infected, before uploading it

Once all the attributes and attachments that you want to include with the event are included,
click Publish Event on the left pane of the event screen.

This will alert the eligible users and push the event to the instances that your instance
connects to. There is an alternative way of publishing an event without alerting any other
users: clicking Publish (no email). This should only be used for minor edits (such as
correcting a typo).

11.3 List Attributes


Attributes in MISP can be network indicators (e.g. IP address), system indicators (e.g. a string
in memory), or even bank account details. MISP attributes are purely based on usage (what
people and organizations use daily).

• A type (e.g. MD5, url) is how an attribute is described

• An attribute is always in a category (e.g. Payload delivery), which puts it in a context
o A category is what describes an attribute
• An IDS flag on an attribute allows determining whether the attribute can be automatically used
for detection

To access the list of attributes, click List Attributes on the left pane. The Attributes page
opens, displaying a list of the last 60 attributes.

The Attributes page displays the following information:

Event: This is the ID number of the event that the attribute is tied to. If an event belongs to
your organization, then this field will be colored red.

Org: The organization that has created the event


Category: The category of the attribute, showing what the attribute describes (for example
the malware's payload)

Type: The type of the value contained in the attribute (for example a source IP address)

Value: The actual value of the attribute, describing an aspect of the malware defined by the
category and type fields (for example 1.1.1.1)

Comment: An optional comment attached to the attribute

IDS: Shows whether the attribute has been flagged for NIDS signature generation or not

Distribution: Describes who will have access to the event

Actions: A set of buttons that allow you to edit or delete the attribute

11.4 Search Attributes


Data contained in the value field of an attribute can be searched. You can search for
attributes based on an expression contained within the value, event ID, submitting organization,
category, and type. To search for attributes, click Search Attributes on the left pane. The
Search Attributes page opens.

• For the value, event ID and organization, you can enter several search terms by entering each
term as a new line.
• To exclude things from a result, use the NOT operator (!) in front of the term.
• For string searches (such as searching for an expression or tags) - lookups are simple string
matches.
• If you want a substring match encapsulate the lookup string between "%" characters.

Apart from being able to list all events, it is also possible to search for data contained in the
value field of an attribute, by clicking on the "Search Attributes" button.
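
Searches like this can also be automated through the restSearch endpoint of the MISP API.
The example below is a sketch with placeholder host and key values; it looks up all attributes
whose value is 1.1.1.1.

curl -XPOST 'https://MISP_HOST/attributes/restSearch' \
  -H 'Authorization: <automation_key>' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{ "returnFormat": "json", "value": "1.1.1.1", "type": "ip-src" }'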

12 Metrics
Metrics provides statistics on the backend systems so that you can monitor and tune
performance. Metrics includes a health monitor used to track your cluster and stack health
and to display detailed statistics. Using the web interface, this is available on the Metrics -
Grafana Elasticsearch Dashboard.

Grafana allows you to query, visualize, alert on and understand your metrics no matter where
they are stored. With Grafana, you can create, explore, and share dashboards with your team
and foster a data driven culture.

12.1 Metrics: Data Source


Grafana supports many different storage backends for your Data Source. Each Data Source
has a specific query editor that is customized for the features and capabilities that the Data
Source exposes.

Grafana supports the following data sources:

• Graphite
• InfluxDB
• OpenTSDB
• Prometheus
• Elasticsearch
• CloudWatch
• MySQL
• PostgreSQL

The query language and capabilities of each Data Source are very different. Data from
multiple Data Sources can be combined onto a single Dashboard, but each Panel in the
Dashboard is tied to a specific Data Source.

Navigate to Home > Metrics to access the Metrics module. To view the existing Data
Sources or add a new one, click on the Grafana icon at the top left and then click Create
your first data source. The Add data source page opens.

Select your preferred database and specify the requested details.
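
Data sources can also be provisioned through Grafana's HTTP API rather than the UI. The
sketch below adds an Elasticsearch data source; the credentials, host, index pattern, and
jsonData values are placeholders that you would adapt to your own deployment.

curl -XPOST 'http://admin:admin@GRAFANA_HOST:3000/api/datasources' \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "SIEM Elasticsearch",
        "type": "elasticsearch",
        "access": "proxy",
        "url": "http://elasticsearch:9200",
        "database": "logstash-*",
        "jsonData": { "timeField": "@timestamp", "esVersion": 70 }
      }'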

12.2 Metrics: Organization
Grafana supports multiple organizations to support a wide variety of deployment models,
including using a single Grafana instance to provide service to multiple potentially untrusted
organizations. In many cases, Grafana will be deployed with a single Organization.

Each Organization contains its own dashboards, data sources, and configuration, which
cannot be shared between organizations. While users may belong to more than one
organization, multiple organizations are most frequently used in multi-tenant deployments.
All dashboards are owned by an organization.

To view the existing Organizations, hover the cursor over the icon in the left navigation pane
and click Orgs. The Orgs page opens, listing the existing organizations. Open any
organization if you want to edit it.

It is important to remember that most metric databases do not provide any sort of per-user
series authentication. Therefore, in Grafana, data sources and dashboards are available to
all users in an Organization.

12.3 Metrics: Users


A User is a named account in Grafana. A User can belong to one or more Organizations and
can be assigned different levels of privileges through roles.

Grafana supports a wide variety of internal and external ways for users to authenticate
themselves. These include its own integrated database, an external SQL server, and an
external LDAP server.

12.3.1 Exercise: Creating a New User


1. To view the existing Users, hover the cursor over the icon in the left navigation pane
and select Users. The Users page opens, listing the existing users.

2. Click new user and specify Name, Email, Username, and Password.

3. Click Create; the Users page opens, listing the newly added user.

4. Click the newly created user, then enable the Grafana Admin checkbox. The Grafana
admin user has all permissions, and it is different from the Admin role in Grafana.

5. Under the Organizations section, click inside the organization name, type and select
the organization you want to assign this user to.

6. From the Role drop-down menu, select Viewer and click Add.
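
The same user can be created without the UI through Grafana's admin HTTP API. This is only
a sketch: the basic-auth admin credentials, host, organization ID, and user details are
placeholders for your own deployment.

# Create the user account
curl -XPOST 'http://admin:admin@GRAFANA_HOST:3000/api/admin/users' \
  -H 'Content-Type: application/json' \
  -d '{ "name": "SOC Analyst", "email": "analyst@example.com", "login": "analyst", "password": "ChangeMe123" }'

# Add the user to an organization with the Viewer role (organization id 2 is a placeholder)
curl -XPOST 'http://admin:admin@GRAFANA_HOST:3000/api/orgs/2/users' \
  -H 'Content-Type: application/json' \
  -d '{ "loginOrEmail": "analyst", "role": "Viewer" }'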

12.4 Metrics: Dashboard


The Dashboard is where it all comes together. Dashboards can be thought of as a set of
one or more Panels organized and arranged into one or more Rows.

Navigate to Home > Metrics to open the Metrics Dashboard. You can click on the default
Dashboard to switch to another Dashboard.

The time period for the Dashboard can be controlled by the time picker at the top-right
corner of the Dashboard.

Dashboards can be tagged, and the Dashboard picker provides quick, searchable access to
all Dashboards in an Organization.

Dashboards can utilize Templating to make them more dynamic and interactive, and
Annotations to display event data across Panels. This can help correlate the time series data
in the Panel with other events.

Dashboards (or a specific Panel) can be shared easily in a variety of ways. You can send a
link to someone who has a login to your Grafana. You can use the Snapshot feature to encode
all the data currently being viewed into a static and interactive JSON document; it's so much
better than emailing a screenshot.

12.4.1 Exercise: Building a New Dashboard

1. Click on the default Dashboard and then click New dashboard to open the new
Dashboard screen.

2. The New Dashboard screen displays all the available panels that can be added on the
Dashboard. Click Choose Visualization.

3. Click the Graph Panel to add it to the empty space in the Dashboard. Specify the
settings that you want to apply for this Panel.

4. Click to go back to the Dashboard.

5. Click Add panel icon to add another Panel and repeat the same process.

6. Click the Panel header and select Share to share the Panel. Panels (or an entire
Dashboard) can be Shared easily in a variety of ways. You can send a link to someone
who has a login to your Grafana. You can use the Snapshot feature to encode all the
data currently being viewed into a static and interactive JSON document.

7. Click Duplicate to create a copy of the Panel.

8. Click Edit to open the Metrics tab below the Panel; this is where a query can be
specified.

9. Click the Save Dashboard icon. The Save As window opens. Provide a name for the
Dashboard and click Save.
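
Dashboards built in the UI are stored as JSON, so the same result can be achieved through
the HTTP API. The sketch below posts a minimal, empty dashboard; the credentials and title
are placeholders, and a real dashboard would carry a populated panels array.

curl -XPOST 'http://admin:admin@GRAFANA_HOST:3000/api/dashboards/db' \
  -H 'Content-Type: application/json' \
  -d '{
        "dashboard": { "id": null, "title": "SIEM Overview", "panels": [], "schemaVersion": 16 },
        "overwrite": false
      }'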

12.5 Metrics: Row


A Row is a logical divider within a Dashboard and is used to group Panels together.

Rows are always 12 “units” wide. These units are automatically scaled dependent on the
horizontal resolution of your browser. You can control the relative width of Panels within a
row by setting their own width.

Regardless of your resolution, or time range, Grafana can show you the
perfect amount of datapoints using the MaxDataPoint functionality.

Utilize the Repeating Rows functionality to dynamically create or remove entire Rows (that
can be filled with Panels), based on the Template variables selected.

Rows can be collapsed by clicking on the Row Title. If you save a Dashboard with a Row
collapsed, it will save in that state and will not preload those graphs until the row is
expanded.

12.6 Metrics: Panel


The Panel is the basic visualization building block in Grafana. Each Panel provides a Query
Editor (dependent on the Data Source selected in the panel) that allows you to build the
perfect visualization to display on the Panel.

There are a wide variety of styling and formatting options that each Panel exposes to allow
you to create the perfect picture.

Panels can be dragged and dropped and rearranged on the Dashboard. They can also be
resized.

There are different Panel types:

• Graph
• Singlestat
• Table
• Text
• Heatmap
• Alert List
• Dashboard List
• Plugin List

Panels like the Graph panel allow you to graph as many metrics and series as you want.
Other panels like Singlestat require a reduction of a single query into a single number.
Dashboard List and Text are special panels that do not connect to any Data Source.

Panels can be made more dynamic by utilizing Dashboard Templating variable strings within
the panel configuration (including queries to your Data Source configured via the Query
Editor).

Utilize the Repeating Panel functionality to dynamically create or remove Panels based on
the Templating Variables selected.

The time range on Panels is normally what is set in the Dashboard time picker, but this can
be overridden by utilizing Panel-specific time overrides.

12.7 Metrics: Query Editor
The Query Editor exposes capabilities of your Data Source and allows you to query the
metrics that it contains.

Use the Query Editor to build one or more queries (for one or more series) in your time series
database. The panel will instantly update allowing you to effectively explore your data in real
time and build a perfect query for that Panel.

You can utilize Template variables in the Query Editor within the queries themselves. This
provides a powerful way to explore data dynamically based on the Templating variables
selected on the Dashboard.

Grafana allows you to reference queries in the Query Editor by the row that they're on. If you
add a second query to a graph, you can reference the first query simply by typing in #A. This
provides an easy and convenient way to build compounded queries.

13 Alerts
Praeco is an open source alerting tool with a full Graphical User Interface (GUI) for creating
alerts.

• Silence alerts for a configurable time period in the GUI


• Interactively build alerts for your Elasticsearch data using a query builder
• Preview results in an interactive chart
• Test your alerts against historical data and over a configurable time period
• Send notifications to Slack, Email, Telegram or an HTTP POST endpoint
• Supports the Any, Blacklist, Whitelist, Change, Frequency, Flatline, Spike and Metric
Aggregation rule types
• View logs of when your alerts check, fire and fail
• Use templates to pre-fill commonly used rule options
• View a preview of your query and a graph of results over the last 24h
• See a preview of your alert subject/body as you are editing

13.1 QuickStart
Run the app using Docker compose (Compose is a tool for defining and running multi-
container Docker applications. With Compose, you use a Compose file to configure your
application's services. Then, using a single command, you create and start all the services
from your configuration). Praeco includes everything you need to get started. Just provide
it the IP address of your Elasticsearch instance.

export PRAECO_ELASTICSEARCH=<your elasticsearch ip>


docker-compose up

• Don't use 127.0.0.1 for PRAECO_ELASTICSEARCH. See first item under the
Troubleshooting section.

• To set up Slack, Email or Telegram notifications, edit rules/BaseRule.config.

Praeco should now be available on http://127.0.0.1:8080

13.2 Exercise: Praeco - Creating a new rule

1. On the Home page, click


Praeco.

2. Click create a new rule to open


the Add rule page.

3. In the Name field, type the name of the rule.

The name of the rule must be unique.

4. Click inside the Index field and select the value of the index you want to use (This
value depends on your data).

strftime format (%Y-%m-%d) can be used in the Index field.

5. From the Time type drop-down menu, select Default. The Default value is used if the time
field in your data is stored as a Date. However, you need to choose another option if
the time field in your data is stored as a timestamp.

6. From the Time field drop-down menu, select @timestamp.

7. Click WHEN and select count from the drop-down menu.

8. Click OVER and select All documents to group the rule.

9. Click UNFILTERED to open the Builder page. Click NEW FILTER, select @timestamp
from the drop-down menu and click Add filter. Select is not empty from the drop-
down menu and click Done.

Multiple filters can be added until you have the results you want to alert
against. The actual filters you add depend on the type of data you want
to be alerted on.

10. Click IS and select the threshold for alerting from the drop-down menu.

11. Click FOR THE LAST and specify the value depending on the time frame you want to
measure over. Counts of results are divided into time-based buckets, that are
represented by a red bar in the chart.

12. Click WITH OPTIONS if you want to enable count query or terms query.

If Use count query is enabled, ElastAlert will poll Elasticsearch using the
count API and will not download all the matching documents. This is useful
if you are looking for numbers and not the actual data. It can be used if
you expect a large volume of query hits

If Use terms query is enabled, ElastAlert will make an aggregation query against
Elasticsearch to get counts of documents matching each unique value of the query key. Terms
size specifies the maximum number of terms returned per query; the default value is 50.

13. Enable Aggregation to send reports of alerts according to a schedule instead of
sending alerts immediately.

14. Under Destinations, select where to send the alert.

15. In the Subject and Body text fields, type the subject and body of your alert
respectively. Type % followed by some characters to insert tokens into your alerts and
select a field from the drop-down menu. When you get alerted, your tokens will be
replaced by the content of this field from the event that triggered the alert.

16. If you have selected Email as your destination, then you need to click on the Email
tab and specify the From address, Reply to, and To.

17. Click Test to run a simulation of this alert over a specified time period. This does not
send out the actual alert.

18. Click Save to save all the changes.
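
Under the hood, Praeco stores each rule as an ElastAlert YAML file under the rules/ directory.
The sketch below shows roughly what a simple frequency-style rule could look like; the rule
name, index pattern, threshold, and recipient address are hypothetical values, and the exact
fields Praeco writes depend on the options selected in the UI.

name: Web traffic threshold
type: frequency
index: logstash-*
num_events: 10
timeframe:
  minutes: 15
filter:
- query:
    query_string:
      query: "_exists_:@timestamp"
alert:
- email
email:
- "soc@example.com"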

13.3 Praeco: Configuration


Edit rules/BaseRule.config, config/api.config.json, config/elastalert.yaml, and/or
public/praeco.config.json for advanced configuration options. Refer to the API docs and the
example elastalert config for more information.

Any ElastAlert option you put into rules/BaseRule.config will be applied to every rule.

The following config settings are available in praeco.config.json:


// Link back to your praeco instance, used in Slack alerts
"appUrl": "http://praeco-app-url:8080",

// A recordatus (https://github.com/ServerCentral/recordatus) instance for javascript error reporting
"errorLoggerUrl": "",

// Hide these fields when editing rules, if they are already filled in template
"hidePreconfiguredFields": []

13.4 Praeco: Upgrading
To upgrade to the newest release of Praeco, run the following commands:

docker pull servercentral/praeco && docker pull servercentral/elastalert


docker-compose up --force-recreate --build && docker image prune -f

Some version upgrades require further configuration. Version specific upgrade instructions
are below.

v0.3.9 -> v0.4.0


New options for telegram added to BaseRule.config. Add these lines and customize as
needed:
telegram_bot_token: ''
telegram_api_url: ''
telegram_proxy: ''

v0.3.0 -> v0.3.1


New options es_username and es_password added to config/api.config.json. Add these to
your config if you need this capability.

v0.2.1 -> v0.2.2


Add the following lines to your nginx/default.conf file:

At the top:
# cache github api
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=github_api_cache:60m
max_size=10g
inactive=60m use_temp_path=off;

And within the server {} section:


location /api-app/releases {
proxy_cache github_api_cache;
proxy_pass https://api.github.com/repos/ServerCentral/praeco/releases;
}

Example:
The default config example file below shows where to place these snippets.

# cache github api


proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=github_api_cache:60m max_size=10g
inactive=60m use_temp_path=off;

server {
listen 8080;

location /api {
rewrite ^/api/?(.*)$ /$1 break;
proxy_pass http://elastalert:3030/;
}

location /api-ws {
rewrite ^/api-ws/?(.*)$ /$1 break;
proxy_pass http://elastalert:3333/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}

location /api-app/releases {
proxy_cache github_api_cache;
proxy_pass https://api.github.com/repos/ServerCentral/praeco/releases;
}

location / {
root /var/www/html;
try_files $uri $uri/ /index.html;
}
}

v0.1 -> v0.2

Create file rules/BaseRule.config, paste in the following content and change as required.

slack_webhook_url: ''
smtp_host: ''
smtp_port: 25
slack_emoji_override: ':postal_horn:'

13.5 Praeco: Scenarios


Question: How do I connect to elasticsearch using SSL?
Answer: Edit config/api.config.json and set/add "es_ssl": true.

Question: How do I connect to elasticsearch with a username and password?


Answer: Edit es_username and es_password in config/api.config.json and
config/elastalert.yaml.

Question: How do I serve the Praeco UI over https?


Answer: The Praeco UI is served by an included nginx server. Configure it as you would any
nginx project by editing the files in nginx_config. Then update your docker-compose.yml and
add your certificate files (under webapp volumes). Another option is using a reverse proxy.

Question: How do I change the writeback index?
Answer: Edit config/elastalert.yaml and config/api.config.json and change the
writeback_index values.

Question: How do I change ElastAlert options, like SSL, user/pass, etc.?


Answer: Edit config/elastalert.yaml and uncomment the appropriate lines.

Question: How do I run this on Windows?


Answer: First, install docker and docker-compose. Then, using powershell, run these
commands:
$Env:PRAECO_ELASTICSEARCH="1.2.3.4"
docker-compose.exe up

Replace 1.2.3.4 with your Elasticsearch IP.

Question: Can I import my current elastalert rules into Praeco?


Answer: Unfortunately, this is not a possibility for two reasons. First, Praeco only supports a
subset of ElastAlert features, so only certain rules would work. Second, Praeco cannot
automatically create the query builder UI from an arbitrary ElastAlert filter entry, due to the
potential complexity and combinations of filters someone can put in their rule file.

Question: Can I export my Praeco rules into another ElastAlert instance?


Answer: Yes, the Praeco rule files are 100% compatible with other ElastAlert servers.

14 Reporting
SIEMonster Reporting Module (SRM) is SIEMonster’s solution for generating reports from
different modules.

SRM is a highly customizable system that generates reports according to different:

• Schedules
• Output formats (PDF, PNG, XLSX, CSV)
• Time filters
• Paper design (A4, A3, Portrait, Landscape)
• Notification sources (SMTP, Mailgun, Slack)
• WYSIWYG-editable subject/message with dynamic templates

The interface of SRM has an adaptive design to work with modules on smaller screens,
starting from 4.7 inches.

At the moment SRM only works with Kibana Module (Dashboards and
Searches). Support for other modules will be included in the future.

14.1 Reporting: Configuration
Customization including configuring variables is required to make SRM work. The variables
listed below have values automatically assigned to them.
To get

ENV-variables

PORT                          Listening port for the server
MONGO_DB_URL                  Database connection string
HASH_SALT                     Random string used to work with user sessions
CRYPTO_VERSION_PRODUCT_ID     Product ID
JWT_SECRET                    Secret used for JWT in SIEMonster Services; should be replaced with a custom value
KIBANA_SECRET                 Secret used for JWT and Kibana
ELASTIC_HOST                  Host to connect to ElasticSearch
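
As an illustration only, these variables are typically supplied to the reporting container as
environment variables. The values below are placeholders for a hypothetical deployment, not
defaults shipped with SIEMonster.

export PORT=3000
export MONGO_DB_URL="mongodb://mongo:27017/reporting"
export HASH_SALT="<random string>"
export CRYPTO_VERSION_PRODUCT_ID="<product id>"
export JWT_SECRET="<custom secret>"
export KIBANA_SECRET="<kibana secret>"
export ELASTIC_HOST="http://elasticsearch:9200"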

14.2 Module Settings


Settings for the reporting module can be accessed from Admin Panel > Roles > admin >
Settings.

The Module URL field specifies the address of the Reporting Server. The Mothership_URL
variable specifies the offline address of the main SIEMonster server on the services' local
network and is used for licensing. The other main SRM pages are specified in the other
Sublinks and are generated automatically.

14.3 Dynamic Settings
Dynamic settings of the Reporting module are used to perform tuning and can be
edited directly in the Reporting UI. The variables have default values that are assigned at
the tenant creation/reset stage. All variables have names and comments that describe their
purpose.

Click Reporting on the Home page to open the Reporting module. Click on the Settings
icon in the left pane to open the Settings page.

14.4 Pages
Web UI of the Reporting module consists of the following pages accessible by the left pane:

• Scheduled Reports
• Reports History
• Settings
• About (License Details)

14.5 Scheduled Reports
The Scheduled Reports page displays the list of scheduled reports. Each scheduled report has
action buttons including resume, clone, preview, edit, and delete.

14.5.1 Exercise: Schedule a Report

1. Click Reporting on the Home page to open Scheduled Reports page. Click
Schedule a Report to create a new report.

2. In the Name field, type Web Traffic to name your report.

3. Use the defaults for both Class (Kibana) and Type (Dashboard).

4. From the Select a Dashboard drop-down menu, select [Logs] Web Traffic.

5. Click Quick, under the Time Window section, and specify time filter as last 15 days.

6. Design section lets you specify the settings for the generated files including the
format and orientation. Select the required values from the Format and Paper
Format drop-down menus, and then select the required Orientation.

7. Action section lets you configure the notifications process for example Mail or Slack.
Specify the required values for the required process.

8. Message section has a WYSIWYG editor with the Subject and Message Body fields.
Provide the required values using the editor.

9. The Schedule section is used to manage the schedule for report generation. For example,
you can specify that a report will be created and sent every day at 21:00, or on the first day of
every month.

10. Click Save to create your scheduled report.

14.6 Reports History
Report History displays reports with a failure or success status filtered for the last 30 days.

Reports History also displays a table with the state of the report (failure/success), creation
date, further information for the failed reports, and a link to download the successful reports.

15 Flow Processors
Apache NiFi is an open source data ingestion platform that was built to automate the flow
of data between systems (for example, transferring a JSON document into a database, moving
FTP files directly into Hadoop, or transferring data from Apache Kafka to Elasticsearch).
Apache NiFi supports powerful and scalable directed graphs of data routing, transformation,
and system mediation logic.

It was developed by the National Security Agency (NSA) and is now maintained, with further
development supported, by the Apache Software Foundation. It is based on Java and runs in
a Jetty server. NiFi supports any device that runs Java, and you can easily install NiFi on AWS.
NiFi is used in varied industries such as healthcare, insurance, telecom, manufacturing, finance,
and oil and gas, among others. As a best practice, organize your projects into three parts:
ingestion, test, and monitoring.

Apache NiFi is now used in many top organizations that want to harness the power of their
fast data by sourcing and transferring information from and to their databases and big data
lakes. It is a key tool to learn for analysts and data scientists alike.

Apache NiFi has an easy-to-use drag-and-drop user interface, and it focuses on the
configuration of processors. Its guaranteed delivery feature ensures that you do not lose
your data.

Some of the high-level capabilities and objectives of Apache NiFi include:

• Web-based user interface


o Seamless experience between design, control, feedback, and monitoring
• Highly configurable
o Loss tolerant vs guaranteed delivery
o Low latency vs high throughput
o Dynamic prioritization
o Flow can be modified at runtime
o Back pressure
• Data Provenance
o Track dataflow from beginning to end
• Designed for extension
o Build your own processors and more
o Enables rapid development and effective testing
• Secure
o SSL, SSH, HTTPS, encrypted content, etc...
o Multi-tenant authorization and internal authorization/policy management

Apache NiFi is good at:


• Reliable and secure transfer of data between systems
• Delivery of data from sources to analytic platforms
• Enrichment and preparation of data:
o Conversion between formats
o Extraction/parsing
o Routing decisions

What Apache NiFi should not be used for:


• Distributed computation
• Complex event processing
• Joins, rolling windows, aggregate operations

15.1 Overview of NiFi Features


The key features categories include flow management, ease of use, security, extensible
architecture, and flexible scaling model.

Flow Management
• Apache NiFi provides a guaranteed delivery even at a very high scale. This is achieved through
effective use of a purpose-built persistent write-ahead log and content repository. Together
they are designed in such a way as to allow for very high transaction rates, effective load-
spreading, copy-on-write, and play to the strengths of traditional disk read/writes.

• NiFi supports buffering of all queued data as well as the ability to provide back pressure as
those queues reach specified limits or to age off data as it reaches a specified age (its value
has perished).

• NiFi allows the setting of one or more prioritization schemes for how data is retrieved from a
queue. The default is oldest first, but there are times when data should be pulled newest first,
largest first, or some other custom scheme.

• There are points of a dataflow where the data is absolutely critical, and it is loss intolerant.
There are also times when it must be processed and delivered within seconds to be of any
value. NiFi enables the fine-grained flow specific configuration of these concerns.

Ease of Use
• Dataflows can become quite complex. Being able to visualize those flows and express them
visually can help greatly to reduce that complexity and to identify areas that need to be
simplified. NiFi enables not only the visual establishment of dataflows but it does so in real-
time. Rather than being 'design and deploy' it is much more like molding clay. If you make a
change to the dataflow that change immediately takes effect. Changes are fine-grained and
isolated to the affected components. You don’t need to stop an entire flow or set of flows
just to make some specific modification.

• Dataflows tend to be highly pattern oriented and while there are often many different ways
to solve a problem, it helps greatly to be able to share those best practices. Templates allow
subject matter experts to build and publish their flow designs and for others to benefit and
collaborate on them.

• NiFi automatically records, indexes, and makes available provenance data as objects flow
through the system even across fan-in, fan-out, transformations, and more. This information
becomes extremely critical in supporting compliance, troubleshooting, optimization, and
other scenarios.

• NiFi’s content repository is designed to act as a rolling buffer of history. Data is removed only
as it ages off the content repository or as space is needed. This combined with the data
provenance capability makes for an incredibly useful basis to enable click-to-content,
download of content, and replay, all at a specific point in an object’s lifecycle which can even
span generations.

Flexible Scaling Model
• NiFi is designed to scale-out through the use of clustering many nodes together as described
above. If a single node is provisioned and configured to handle hundreds of MB per second,
then a modest cluster could be configured to handle GB per second. This then brings about
interesting challenges of load balancing and fail-over between NiFi and the systems from
which it gets data. Use of asynchronous queuing-based protocols like messaging services,
Kafka, etc., can help. Use of NiFi’s 'site-to-site' feature is also very effective as it is a protocol

that allows NiFi and a client (including another NiFi cluster) to talk to each other, share
information about loading, and to exchange data on specific authorized ports.

• NiFi is also designed to scale-up and down in a very flexible manner. In terms of increasing
throughput from the standpoint of the NiFi framework, it is possible to increase the number
of concurrent tasks on the processor under the Scheduling tab when configuring. This allows
more processes to execute simultaneously, providing greater throughput. On the other side
of the spectrum, you can perfectly scale NiFi down to be suitable to run on edge devices
where a small footprint is desired due to limited hardware resources.

15.2 NiFi User Interface


The NiFi UI provides mechanisms for creating automated dataflows, as well as visualizing,
editing, monitoring, and administering those dataflows. The NiFi UI is very interactive and
provides a wide variety of information about NiFi. The UI can be broken down into several
segments, each responsible for different functionality of the application.

As shown in the highlighted status bar below, a user can access information about the
following attributes:

• Active Threads
• Total queued data
• Transmitting Remote Process Groups
• Not Transmitting Remote Process Groups
• Running Components
• Stopped Components
• Invalid Components
• Disabled Components
• Up to date Versioned Process Groups
• Locally modified Versioned Process Groups
• Stale Versioned Process Groups
• Locally modified and Stale Versioned Process Groups
• Sync failure Versioned Process Groups

The Operate Palette consists of buttons that manipulate the components on the canvas.
They are used to manage the flow, as well as by administrators who manage user access
and configure system properties, such as how many system resources should be provided
to the application.

The management toolbar has buttons to manage the flow, and for a NiFi administrator to
manage user access and system properties.

Additionally, the UI has some features that allow you to easily navigate around the canvas.
You can use the Navigate Palette to pan around the canvas, and to zoom in and out.

The Birds Eye View of the dataflow provides a high-level view of the dataflow and allows you
to pan across large portions of the dataflow.

The components toolbar contains all tools for building the dataflow.

Processor
A Processor pulls data from external sources, performs actions on the attributes and content of
FlowFiles, and publishes data to an external source. Users can drag the Processor icon onto the
canvas and select the desired processor for the dataflow in NiFi.

Input port
Input Ports provide a mechanism for transferring data into a Process Group. When an Input
Port is dragged onto the canvas, the user is prompted to name the Port. All Ports within a
Process Group must have unique names.

All components exist only within a Process Group. When a user initially navigates to the NiFi
page, the user is placed in the Root Process Group. If the Input Port is dragged onto the
Root Process Group, the Input Port provides a mechanism to receive data from remote
instances of NiFi via Site-to-Site. In this case, the Input Port can be configured to restrict
access to appropriate users, if NiFi is configured to run securely.

Output port
The Output Port is used to transfer data to a processor that is not present in that process
group. After dragging this icon onto the canvas, NiFi asks you to enter the name of the Output
Port, and it is then added to the NiFi canvas.

Output Ports provide a mechanism for transferring data from a Process Group to
destinations outside of the Process Group. When an Output Port is dragged onto the canvas,
the user is prompted to name the Port. All Ports within a Process Group must have unique
names.

If the Output Port is dragged onto the Root Process Group, the Output Port provides a
mechanism for sending data to remote instances of NiFi via Site-to-Site. In this case, the Port
acts as a queue. As remote instances of NiFi pull data from the port, that data is removed
from the queues of the incoming Connections. If NiFi is configured to run securely, the
Output Port can be configured to restrict access to appropriate users.

Process Group
Process Groups can be used to logically group a set of components so that the dataflow is
easier to understand and maintain. When a Process Group is dragged onto the canvas, you
are prompted to name the Process Group. All Process Groups within the same parent group
must have unique names. The Process Group will then be nested within that parent group.

Once you have dragged a Process Group onto the canvas, right-click on the Process Group
to select an option from context menu. The options available to you from the context menu
vary, depending on the privileges assigned to you.

While the options available from the context menu vary, the following options are typically
available when you have full privileges to work with the Process Group:

Configure: This option allows you to establish or change the configuration of the Process
Group.

Variables: This option allows you to create or configure variables within the NiFi UI.

Enter group: This option allows you to enter the Process Group.

Start: This option allows you to start a Process Group.

Stop: This option allows you to stop a Process Group.

View status history: This option opens a graphical representation of the Process Group’s
statistical information over time.

View connections -> Upstream: This option allows you to see and jump to upstream
connections that are coming into the Process Group.

View connections -> Downstream: This option allows you to see and jump to downstream
connections that are going out of the Process Group.

Center in view: This option centers the view of the canvas on the given Process Group.

Group: This option allows you to create a new Process Group that contains the selected
Process Group and any other components selected on the canvas.

Create template: This option allows you to create a template from the selected Process
Group.

Copy: This option places a copy of the selected Process Group on the clipboard, so that it
may be pasted elsewhere on the canvas by right-clicking on the canvas and selecting Paste.

Delete: This option allows you to delete a Process Group.

Remote Process Group


Remote Process Groups appear and behave similar to Process Groups. However, the Remote
Process Group (RPG) references a remote instance of NiFi. When an RPG is dragged onto
the canvas, rather than being prompted for a name, the user is prompted for the URL of the
remote NiFi instance.

If the remote NiFi is a clustered instance, the URL that should be used is the URL of any NiFi
instance in that cluster. When data is transferred to a clustered instance of NiFi via an RPG,
the RPG will first connect to the remote instance whose URL is configured to determine
which nodes are in the cluster and how busy each node is. This information is then used to
load balance the data that is pushed to each node. The remote instances are then
interrogated periodically to determine information about any nodes that are dropped from
or added to the cluster and to recalculate the load balancing based on each node’s load.

• Local Network Interface: In some cases, it may be desirable to prefer one network interface
over another. For example, if a wired interface and a wireless interface both exist, the wired
interface may be preferred. This can be configured by specifying the name of the network
interface to use in this box. If the value entered is not valid, the Remote Process Group will
not be valid and will not communicate with other NiFi instances until this is resolved.

• Transport Protocol: On a Remote Process Group creation or configuration dialog, you can
choose Transport Protocol to use for Site-to-Site communication.

By default, it is set to RAW which uses raw socket communication using a dedicated port.
HTTP transport protocol is especially useful if the remote NiFi instance is in a restricted

network that only allow access through HTTP(S) protocol or only accessible from a specific
HTTP Proxy server. For accessing through a HTTP Proxy Server, BASIC and DIGEST
authentication are supported.

Funnel
A Funnel is used to transfer the output of a processor to multiple processors. Users can use
the Funnel icon to add a funnel to a NiFi dataflow.

Funnels are used to combine the data from many Connections into a single Connection. This
has two advantages.
• First, if many Connections are created with the same destination, the canvas can become
cluttered if those Connections have to span a large space. By funneling these Connections
into a single Connection, that single Connection can then be drawn to span that large space
instead.
• Secondly, Connections can be configured with FlowFile Prioritizers. Data from several
Connections can be funneled into a single Connection, providing the ability to Prioritize all
of the data on that one Connection, rather than prioritizing the data on each Connection
independently.

Template
This icon is used to add a dataflow template to the NiFi canvas, which helps you reuse a
dataflow in the same or a different NiFi instance. After dragging the icon onto the canvas, a
user can select from the templates already added to NiFi.

Templates can be created from the components toolbar, or they can be imported from other
dataflows. These Templates provide larger building blocks for creating a complex flow
quickly. When the Template is dragged onto the canvas, the user is provided with a window
to choose which Template to add to the canvas.

Click the drop-down menu to view all the available Templates. Any Template that was
created with a description will show a question mark icon, indicating that there is more
information. Hovering over the icon with the mouse will show the description.
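If you script flow deployment, a template XML exported from another NiFi instance can also be imported over the REST API rather than through the UI. The sketch below is illustrative only: it assumes an unsecured NiFi instance at http://localhost:8080 and a local file named my-flow-template.xml; adjust the host, authentication, and process group for your own deployment.

import requests

NIFI = "http://localhost:8080/nifi-api"  # assumed local, unsecured NiFi instance

# Find the ID of the root process group (the top-level canvas).
root_id = requests.get(f"{NIFI}/process-groups/root").json()["id"]

# Upload a previously exported template XML into the root process group.
with open("my-flow-template.xml", "rb") as f:
    resp = requests.post(
        f"{NIFI}/process-groups/{root_id}/templates/upload",
        files={"template": ("my-flow-template.xml", f, "application/xml")},
    )
resp.raise_for_status()
print("Template uploaded")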

Label
Labels are used to add text to the NiFi canvas describing any component in the flow. They
offer a range of colors that a user can apply for visual distinction.

Labels are used to provide documentation to parts of a dataflow. When a Label is dropped
onto the canvas, it is created with a default size. The Label can then be resized by dragging
the handle in the bottom-right corner. The Label has no text when initially created.

To add text to the Label, right-click the Label and select Configure.

15.3 Exercise: Building a Dataflow
You can build an automated dataflow using the NiFi UI by:

• Dragging components from the toolbar to the canvas
• Configuring the components to meet specific needs
• Connecting the components together

Processor
The Processor is the most commonly used component, as it is responsible for data ingress,
egress, routing, and manipulating. There are many different types of Processors. In fact, this
is a very common Extension Point in NiFi, meaning that many vendors may implement their
own Processors to perform whatever functions are necessary for their use case.

15.3.1 Adding a Processor


1. To add a Processor, drag the Processor icon and drop it into the middle of the canvas. The
Add Processor window opens.

In the Add Processor window, you have different options to choose from. When a developer
creates a Processor, the developer can assign tags to that Processor; these can be thought of
as keywords. You can filter by these tags or by Processor name by typing into the Filter box
in the top-right corner. Type in the keywords that you would think of when wanting to ingest
files from a local disk. Typing in the keyword file, for instance, will show a few different
Processors that deal with files. Filtering by the term local will narrow down the list quickly as
well. If you select a Processor from the list, you will see a brief description of the Processor
near the bottom of the dialog.

2. To bring files from a local disk into NiFi, you can use the GetFile Processor. This Processor
pulls data from the local disk into NiFi and then removes the local file. Select the Processor
and click ADD; it will be added to the canvas in the location where it was dropped.
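The same step can be scripted against the NiFi REST API instead of the drag-and-drop UI. The following is only a sketch: it assumes an unsecured NiFi instance at http://localhost:8080 and creates a GetFile processor on the root canvas; a secured instance would additionally need a token or client certificate.

import requests

NIFI = "http://localhost:8080/nifi-api"  # assumed local, unsecured NiFi instance

# Look up the root process group (the canvas you see when NiFi opens).
root_id = requests.get(f"{NIFI}/process-groups/root").json()["id"]

# Create a GetFile processor at an arbitrary position on the canvas.
payload = {
    "revision": {"version": 0},  # new components start at revision 0
    "component": {
        "type": "org.apache.nifi.processors.standard.GetFile",
        "position": {"x": 400.0, "y": 200.0},
    },
}
resp = requests.post(f"{NIFI}/process-groups/{root_id}/processors", json=payload)
resp.raise_for_status()
print("Created processor", resp.json()["id"])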

15.3.2 Configuring a Processor

3. Now that we have added the GetFile Processor, right-click on the Processor and select
Configure from the context menu. The options available to you from the context menu vary,
depending on the privileges assigned to you.

The following options are typically available when you have full privileges to work with a
Processor:

Configure: This option allows you to establish or change the configuration of the Processor.

For Processors, Ports, Remote Process Groups, Connections and Labels, it is possible to
open the configuration dialog by double-clicking on the desired component.

Start or Stop: This option allows you to either start or stop a Processor, depending on the
current state of the Processor.

Enable or Disable: This option allows you to enable or disable a Processor, depending on
the current state of the Processor.

View data provenance: This option displays the NiFi Data Provenance table, with
information about data provenance events for the FlowFiles routed through that Processor.

View status history: This option opens a graphical representation of the Processor’s
statistical information over time.

View usage: This option takes the user to the Processor’s usage documentation.

View connections -> Upstream: This option allows you to see and jump to upstream
connections that are coming into the Processor. This is particularly useful when processors
connect into and out of other Process Groups.

View connections -> Downstream: This option allows you to see and jump to downstream
connections that are going out of the Processor. This is particularly useful when processors
connect into and out of other Process Groups.

Center in view: This option centers the view of the canvas on the given Processor.

Change color: This option allows you to change the color of the Processor, which can make
the visual management of large flows easier.

Create template: This option allows you to create a template from the selected Processor.

Copy: This option places a copy of the selected Processor on the clipboard, so that it may
be pasted elsewhere on the canvas by right-clicking on the canvas and selecting Paste.

Delete: This option allows you to delete a Processor from the canvas.

4. Select the Properties tab from the Configure Processor window.

Once the Properties tab has been selected, we are given a list of several different properties
that we can configure for the Processor. The properties that are available depend on the
type of Processor and are generally different for each type. Properties that are in bold are
required properties. The Processor cannot be started until all required properties have been
configured. The most important property to configure for GetFile is the directory from which
to pick up files.

5. In the Input Directory field, type ./data-in. This will cause the Processor to start picking up
any data in the data-in subdirectory of the NiFi home directory. In order for this property to
be valid, create a directory named data-in in the NiFi home directory, and then click the OK
button to close the dialog.
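For reference, the same property can be set over the REST API. This is only a sketch under the same assumptions as the earlier example (unsecured NiFi at http://localhost:8080, and a placeholder processor ID obtained when the processor was created); NiFi requires the component's current revision on every update, so the processor is fetched first.

import requests

NIFI = "http://localhost:8080/nifi-api"   # assumed local, unsecured NiFi instance
getfile_id = "REPLACE-WITH-PROCESSOR-ID"  # ID returned when the processor was created

# Fetch the processor to obtain its current revision (required for updates).
proc = requests.get(f"{NIFI}/processors/{getfile_id}").json()

update = {
    "revision": proc["revision"],
    "component": {
        "id": getfile_id,
        "config": {"properties": {"Input Directory": "./data-in"}},
    },
}
resp = requests.put(f"{NIFI}/processors/{getfile_id}", json=update)
resp.raise_for_status()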

15.3.3 Connecting Processors


Each Processor has a set of defined Relationships that it is able to send data to. When a
Processor finishes handling a FlowFile, it transfers it to one of these Relationships. This allows
a user to configure how to handle FlowFiles based on the result of Processing.

For example, many Processors define two Relationships: success and failure. Users are then
able to configure data to be routed through the flow one way if the Processor is able to
successfully process the data and route the data through the flow in a completely different
manner if the Processor cannot process the data for some reason. Or, depending on the use
case, it may simply route both relationships to the same route through the flow.

6. Now that we have added and configured our GetFile Processor and applied the configuration,
we can see an Alert icon in the top-left corner of the Processor, signaling that the Processor
is not in a valid state. Hovering over this icon, you can see that the success relationship has
not been defined. This means that we have not told NiFi what to do with the data that the
Processor transfers to the success Relationship.

7. In order to address this, let's add another Processor that we can connect the GetFile Processor
to, following the same steps as above. This time, however, the new Processor will simply log
the attributes that exist for each FlowFile. To do this, we will add a LogAttribute Processor.

8. You can now send the output of the GetFile Processor to the LogAttribute Processor. Hover
over the GetFile Processor with the mouse and a Connection icon will appear over the middle
of the Processor. Drag this icon from the GetFile Processor and drop it onto the LogAttribute
Processor. The Create Connection window opens.

9. Because GetFile has only a single Relationship, success, it is automatically selected for you.

10. Click on the Settings tab of the Create Connection window. In the Name field, you can
optionally specify a name for the connection; otherwise, the Connection name will be based
on the selected Relationships.

11. We can also set FlowFile Expiration for the data. By default, it is set to 0 sec, which indicates
that the data should not expire. Change the value so that when data in this Connection
reaches a certain age, it will automatically be deleted (and a corresponding EXPIRE
Provenance event will be created).

12. The Back Pressure Object Threshold allows you to specify how full the queue is allowed to
become before the source Processor is no longer scheduled to run. This allows you to handle
cases where one Processor is capable of producing data faster than the next Processor is
capable of consuming that data. If the back pressure is configured for each connection along
the way, the Processor that is bringing data into the system will eventually experience the
back pressure and stop bringing in new data so that your system has the ability to recover.

13. The Available Prioritizers option is on the right-hand side. This allows you to control
how the data in this queue is ordered. Drag Prioritizers from the Available prioritizers list to
the Selected prioritizers list in order to activate the prioritizer. If multiple prioritizers are
activated, they will be evaluated such that the Prioritizer listed first will be evaluated first and
if two FlowFiles are determined to be equal according to that Prioritizer, the second Prioritizer
will be used.

14. Click ADD to add the connection to your graph.

15. Note that the Alert icon has changed to a Stopped icon ( Stopped ).
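The connection itself can also be created programmatically. The sketch below follows the same assumptions as the previous examples (unsecured NiFi at http://localhost:8080, placeholder processor IDs) and routes the GetFile success relationship to LogAttribute.

import requests

NIFI = "http://localhost:8080/nifi-api"           # assumed local, unsecured NiFi instance
root_id = requests.get(f"{NIFI}/process-groups/root").json()["id"]
getfile_id = "REPLACE-WITH-GETFILE-ID"            # source processor ID
logattribute_id = "REPLACE-WITH-LOGATTRIBUTE-ID"  # destination processor ID

connection = {
    "revision": {"version": 0},
    "component": {
        "source": {"id": getfile_id, "groupId": root_id, "type": "PROCESSOR"},
        "destination": {"id": logattribute_id, "groupId": root_id, "type": "PROCESSOR"},
        "selectedRelationships": ["success"],  # route the success relationship
    },
}
resp = requests.post(f"{NIFI}/process-groups/{root_id}/connections", json=connection)
resp.raise_for_status()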

The LogAttribute Processor, however, is now invalid because its success Relationship has not
been connected to anything. Let’s address this by signaling that data that is routed to
success by LogAttribute should be Auto Terminated, meaning that NiFi should consider the
FlowFile’s processing complete and drop the data. To do this, you configure the LogAttribute
Processor.

16. Right-click on the LogAttribute Processor, select Configure, and click the Settings tab. Check
success under Automatically Terminate Relationships to auto terminate the data. Click
APPLY and notice that both Processors are now stopped.

15.3.4 Starting and Stopping a Processor

17. At this point, you have two Processors on your graph, but nothing is happening. In order to
start the Processors, click on each one individually, right-click and choose the Start menu
item.

• You can also select the first Processor, and then hold the Shift key while
selecting the other Processor in order to select both. Then, you can
right-click and choose the Start menu item.
• As an alternative to using the context menu, you can select the
Processors and then click the Start icon in the Operate palette.

18. Once started, the icon in the top-left corner of the Processors will change from a stopped
icon to a running icon. You can then stop the Processors by using the Stop icon in the Operate
palette or the Stop menu item.
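Starting and stopping can likewise be automated. The sketch below, under the same assumptions as the earlier examples (unsecured NiFi at http://localhost:8080, placeholder processor ID), toggles a single processor's run status through the REST API; the current revision must again be sent with the request.

import requests

NIFI = "http://localhost:8080/nifi-api"     # assumed local, unsecured NiFi instance
processor_id = "REPLACE-WITH-PROCESSOR-ID"

def set_run_status(proc_id, state):
    """Set a processor's state to RUNNING or STOPPED."""
    revision = requests.get(f"{NIFI}/processors/{proc_id}").json()["revision"]
    resp = requests.put(
        f"{NIFI}/processors/{proc_id}/run-status",
        json={"revision": revision, "state": state},
    )
    resp.raise_for_status()

set_run_status(processor_id, "RUNNING")    # start the processor
# set_run_status(processor_id, "STOPPED")  # stop it again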

16 Audit Discovery
PatrOwl is a scalable, free and open source solution for orchestrating Security Operations
such as penetration testing, vulnerability assessment, code review, compliance checks,
cyber-threat intelligence/hunting, and SOC & DFIR operations. It is fully developed in Python
(Django for the backend and Flask for the engines), which makes all components easy to
customize. Asynchronous tasks and engine scalability are handled by RabbitMQ and Celery.

The purpose of PatrOwl is to efficiently move from a proactive to a predictive security
posture.

• Thinking and acting like hackers: PatrOwl uses the same mindset (tools, tactics and
procedures), continuously monitors all asset stacks, and efficiently prioritizes the remediation
of vulnerabilities and suspicious activities.

• Security automation and orchestration: PatrOwl enables you to continuously scan an
organization’s environment for any changes that might indicate a potential threat.

• Best-of-breed and custom tools: PatrOwl provides a single cockpit and a rationalized set of
best-of-breed and custom tools to support cyber-threat monitoring strategies and remediation
workflows.

16.1 PatrOwl Use Cases

• Monitoring Internet-facing systems: Continuously scan websites, public IPs, domains and
subdomains for vulnerabilities and misconfigurations

• Vulnerability and remediation tracking: Identify vulnerabilities, send a full report to a
ticketing system, and rescan to check for remediation

• Vulnerability assessment of internal systems: Orchestrate regular scans on a fixed
perimeter and check for changes (asset, vulnerability, criticality)

• Attacker assets monitoring: Ensure readiness of teams by identifying attackers’ assets and
tracking changes of their IP, domains, and web applications

• Phishing / APT scenario preparation: Monitor early signs of targeted attacks, new domain
registrations, suspicious tweets, pastes, VirusTotal submissions, and phishing reports

• Regulation and Compliance: Evaluate compliance gaps using provided scan templates

• Penetration tests: Perform the reconnaissance steps, the full stack vulnerability assessment
and the remediation checks

• Continuous Integration / Continuous Delivery: Automation of static code analysis, external
resources assessment and web application vulnerability scans

16.2 PatrowlManager
PatrowlManager is the front-end application for managing assets, reviewing risks in real
time, orchestrating the operations (scans, searches, API calls), aggregating the results,
relaying alerts to third parties, and providing reports and dashboards. Operations are
performed by the PatrowlEngines instances.

PatrowlEngines is the engine framework and the supported list of engines that perform the
operations in due time. The engines are managed by one or several instances of
PatrowlManager. On the Home page, click Audit Discovery to access PatrowlManager.

16.3 PatrowlManager Assets


PatrowlManager lets you manage existing assets and create new ones. To view the list of
existing assets, select List assets from the Assets drop-down menu.

Click on an individual asset to open Asset Detailed view that displays the following:

• Current finding counters, grade, and trends (last week, last month)
• Findings by threat domains:
o Domain, HTTPS and Certificate, Network infrastructure, System, Web App, Malware,
E-Reputation, Data Leaks, Availability
• All findings and remediations tips
• Related scans and assets
• Investigation links

16.3.1 PatrowlManager: Add a New Asset

1. To add a new asset, select Add new asset from the Assets drop-down menu. The Add an
asset page opens.

2. In the Value field, enter the IP of the asset.

3. In the Name field, enter the title of the asset. For Example, Corporate Website.

4. From the Type drop-down menu, select IP. Available scan policies will be filtered on
this value.

5. In the Description field, enter a suitable description to describe the asset.

6. From the Criticity drop-down menu, select High. The global risk scoring will depend on
this value.

7. In the Categories field, select Operating systems. The Categories field contains a list of
tags that quickly describe the asset; custom values can also be added. Click Create a new
asset.

Assets can be added in bulk by using the Assets -> Add new assets in
bulk (CSV file) menu.
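Assets can also be created from scripts through the PatrowlManager REST API. The snippet below is only a sketch: the endpoint path (/assets/api/v1/add), the Token authentication header, and the payload field names mirror the "Add an asset" form but are assumptions; check the API documentation shipped with your PatrowlManager version before relying on them.

import requests

PATROWL = "https://patrowl.example.com"  # assumed PatrowlManager URL
API_TOKEN = "REPLACE-WITH-API-TOKEN"     # assumed token-based authentication

asset = {
    # Field names mirror the "Add an asset" form and are assumptions.
    "value": "203.0.113.10",
    "name": "Corporate Website",
    "type": "ip",
    "criticity": "high",
    "description": "Public web server",
    "categories": ["Operating systems"],
}

resp = requests.post(
    f"{PATROWL}/assets/api/v1/add",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json=asset,
)
resp.raise_for_status()
print(resp.json())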

16.4 PatrowlManager Engine Management


PatrowlManager lets you manage existing engines and add new ones. The Engine
Management view lets a user:

• Create, modify or delete engines


• Change functional state
• View engine info, including current scans performed
• Refresh engines states
• Enable/Disable the auto-refresh

To view the list of existing engines, select List engines from the Engines drop-down menu.

16.4.1 PatrowlManager: Add a New Scan Engine

1. To add a new scan engine, select Add scan engine instance from the Engines drop-down
menu. The Add a new scan engine page opens.

2. From the Engine drop-down menu, select the type of engine you want to use.

3. In the Name field, enter the name of the engine.

4. In the Api url field, enter the URL of the engine.

5. Tick the Enabled checkbox if you want to enable the engine once created.

6. From the Authentication method drop-down menu, select the authentication method used
to access the engine from the PatrowlManager host. Click Create a new engine.

16.5 PatrowlManager Scan Definition
The PatrowlManager scan definition lets you search for and select assets and asset groups
by their values or names. Policies can be filtered by engine type or threat domain.

The scans performed view can be accessed from Scans -> List scans performed; it displays
a heatmap of scans over days, weeks, and months. From this view you can apply advanced
filters, run or delete scans, and compare selected scans.

To compare scans with each other, select the scans and click the compare icon.

16.5.1 PatrowlManager: Add a New Scan


1. To add a new scan, select Add new scan from the Scans drop-down menu. The Add a new
scan definition page opens.

2. In the Title field, enter the title of the scan. For example, List open ports on Internet-faced
assets or Search technical leaks on GitHub and Twitter.

3. In the Description field, enter a suitable description of the scan.

4. From the Scan Type field, select whether the scan should be started once or periodically.

5. From the Start scan field, select the time to start the scan.

6. From the Search asset(s) field, search for and select the asset(s) targeted by the scan.

7. From the Filter by Engine and Filter by Category fields, search for the scan policy using
the engine or the category filter.

8. From the Select Policy field, select the scan policy.

9. From the Select Engine drop-down menu, select the scan engine that will perform the
scan each time. Click Create a new scan.

16.6 PatrowlManager Alerting Rules


The PatrowlManager alerting rules management view can be accessed from Rules -> List
rules. This view allows you to create, copy, update, delete, and change the functional status
of alerting rules.

17 Threat Modelling
OpenCTI is an open source platform allowing organizations to manage their cyber threat
intelligence knowledge and observables. It has been created in order to structure, store,
organize and visualize technical and non-technical information about cyber threats.

The structuring of the data is performed using a knowledge schema based on the STIX2
standards. It has been designed as a modern web application, including a GraphQL API and
a UX-oriented frontend. OpenCTI can also be integrated with other tools and applications
such as MISP, TheHive, MITRE ATT&CK, etc.
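Because the platform exposes a GraphQL API, data can also be queried or pushed from scripts. The sketch below assumes the pycti Python client is installed (pip install pycti) and that you have an API token from your OpenCTI user profile; the URL and token are placeholders, and method names may vary slightly between pycti versions.

from pycti import OpenCTIApiClient

# Assumed OpenCTI URL and API token; replace with your own values.
client = OpenCTIApiClient("https://opencti.example.com", "REPLACE-WITH-API-TOKEN")

# List the ten most recent reports imported into the platform.
for report in client.report.list(first=10):
    print(report["name"], report.get("published"))

# List known intrusion sets to pivot on victimology.
for intrusion_set in client.intrusion_set.list(first=10):
    print(intrusion_set["name"])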

OpenCTI solves the following challenges:

At a strategic level:

• Victimology of an intrusion set or threat actor over time
• Tactics and procedures of a campaign targeting a specific sector
• Reuse of legitimate tools in malicious code families
• Campaigns targeting an organization or sector over time

At an operational level:

• Observables linked to a specific threat and their evolution over time
• Clusters of malicious artefacts and enrichment

17.1 Threat Modelling: Features

Knowledge graph
The whole platform relies on a knowledge hypergraph allowing the usage of hyper-entities
and hyper-relationships including nested relationships.

Exploration and correlation


The whole dataset can be explored with analytics and correlation engines, including many
visualization plugins and MapReduce and Pregel computations.

Unified and consistent data model


From the operational to the strategic level, all information is linked through a unified and
consistent data model based on the STIX2 standards.

Automated reasoning
The database engine performs logical inference through deductive reasoning, in order to
derive implicit facts and associations in real-time.

By-design sourcing of data origin
Every relationship between entities has time-based and space-based attributes and must be
sourced from a report with a specific confidence level.

Data access management


Full control of data access management using groups with permissions based on granular
markings on both entities and relationships.

17.2 Threat Modelling: User Interface

Dashboard
The OpenCTI platform provides a powerful knowledge management database with an
enforced schema especially tailored for cyber threat intelligence and cyber operations. With
multiple tools and viewing capabilities, analysts are able to explore the whole dataset by
pivoting on the platform between entities and relations. Because relations can carry multiple
context attributes, it is easy to have several levels of context for a given entity.

Navigate to Home > Threat Modelling to open the Dashboard. The Dashboard will fill up
progressively as you import data.

Threats
The Threats service allows you to go through all the data in the platform organized by:

• Threat actors
• Intrusion sets
• Campaigns
• Incidents
• Malwares

To view the existing threats, click the Threats icon in the left navigation pane; all the
threat-related data is split into different tabs.

Techniques

Click the Techniques tab to display all the Tactics, Techniques and Procedures (TTPs) which
may be used during an attack. This covers all the kill chain phases as detailed in the MITRE
ATT&CK framework, but also tools, vulnerabilities and identified courses of action which can
be implemented to block these techniques.

Observables

Click the Observables tab to display all the technical observables which may have been seen
during an attack, such as infrastructure or file hashes.

The goal is to create a comprehensive tool allowing users to capitalize on technical
information (such as TTPs and observables) and non-technical information (such as suggested
attribution, victimology, etc.) while linking each piece of information to its primary source (a
report, a MISP event, etc.).
All observables are linked to threats with all the information analysts need to fully understand
the situation: the role played by the observable regarding the threat, the source of the
information, and the malicious behavior scoring.

Reports

This tab contains all the reports that have been uploaded to the platform; they are the
starting point for processing the data they contain.

Entities
This tab contains all information organized according to the identified entities, which can be
either Sectors, Regions, Cities, Organizations, or Persons, targeted by an attack or involved
in it. Lists of entities can be synchronized from the repository through the OpenCTI
connector or can be created internally.

Explore
This tab is a bit specific, as it constitutes a workspace from which the user can automatically
generate graphs, timelines, charts and tables from the data previously processed. This can
help compare the victimology of different threats, or the timelines of attacks.

OpenCTI allows analysts to easily visualize any entity and its relationships. Multiple views are
available as well as an analytics system based on dynamic widgets. For instance, users are
able to compare the victimology of two different intrusion sets.

Connectors
In this tab, you can manage the different connectors which are used to upload data to the
platform.

Settings
In this tab, you can change the parameters, visualize all users, create or manage groups,
create or manage tagging (by default, the Traffic Light Protocol is implemented, but you can
add your own tagging) and manage the kill chain steps.

Appendix A: Password Change Management
Please change the passwords for the required services after installation.

Use only alphanumeric passwords, e.g. Ys3CretpAss624
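If you need to generate replacement passwords, Python's standard secrets module can produce suitable alphanumeric values; the sketch below is just an example, and the 16-character length is an arbitrary choice, not a SIEMonster requirement.

import secrets
import string

# Generate a random 16-character alphanumeric password (letters and digits only).
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)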

Application | Username | Password
Grafana (Health) | admin | admin
Web App Mongo | siemuser02 | s13M0nSterZ
Mongo Hash Salt | N/A | 6B44D8EDB86B4CA8BB8F3AAA35DDAF7D
Wazuh API | siemonster | S13M0nSterZ
CA | N/A | s13M0nSterZ
Truststore | N/A | s13M0nSterZ
Keystore | N/A | s13M0nSterZ
Elastic | elastic | s13M0nSterZ
Beats | beats | s13M0nSterZ
MySQL | dbuser | dbpass
MySQL Root | root | HmKCUMrTBuc7MyxLw36U8wJAakyX3xtFo9gMxvArQPthpTAojNN
Cortex Admin | demo_admin | demo
Cortex Integration | demo_integration | PburE4tBpTkdQREGTnUaMKbo6Y4*yHQWdwnAZTGARCg
Hive Admin | admin | admin
Hive Integration | demo_integration | PburE4tBpTkdQREGTnUaMKbo6Y4*yHQWdwnAZTGARCg
MISP | admin@siemonster.internal.com | 4kJFW9vkwqUiARniv3BpJZmFre
Patrowl | admin | s13MonSterZ
OpenCTI | admin@siemonster.internal.com | s13M0nSterZ
OpenCTI AdminToken | N/A | 3c972036-5962-43c1-a32d-0ef9e63f64c7
ssh | deploy | s13M0nSterZ

