This documentation explores the role of security content for Exabeam products that use the
legacy data structure. These on-premises and legacy SaaS products do not require compliance
with the Exabeam common information model. To find specific security content topics, see the list
of topics about Security Content in the Legacy Structure.
If you are transitioning to any Exabeam cloud-native applications and services, switch to
Exabeam Security Content in the Common Information Model. The structure and role of security
content is shifting in the cloud-native Exabeam ecosystem. A new common information model
brings cross-product standardization, granular flexibility, and easier extensibility.
Exabeam provides out-of-the-box security content that supports integrations with multiple third-
party vendors. As the threat landscape changes, Exabeam security content is supplemented with
content packages that are released on a regular cadence. Content packages are delivered as
zipped files and can be installed without the need to upgrade your Advanced Analytics or Data
Lake applications. Depending on which version of each product you are using, content packages
can be deployed in one of two ways:
• Content Installer – Using the content installer script requires manipulating content package
files in a command line environment. It also requires a manual restart of the relevant internal
engines. For more information, see Content Installer.
• Content over Cloud – Using the content over cloud process allows content packages to be
deployed directly from the cloud. This process is available for both cloud and on-prem versions
of Exabeam software, beginning with Advanced Analytics i54 and Data Lake i36. For more
information, see Manage Security Content in Advanced Analytics or Manage Security Content
in Data Lake.
While each product has its own processors and engines for handling data ingestion tasks, the use
of security content follows a similar pipeline in both products. The sections below outline the path
of security content through each product. This pipeline, as shown in the image below, will serve as
a framework for discussing security content functionality throughout this documentation.
• Logs – Both Advanced Analytics and Data Lake collect and ingest data in the form of log files
from a variety of sources. Logs can be ingested directly from servers, applications, databases,
and other devices across an infrastructure, whether the source is local, remote, or cloud-based.
Logs can be fetched from a SIEM or other third-party repository. Log information can also be
collected from sources that provide contextual information, such as threat intelligence services
and identity providers. For more information, see Understanding the Log.
• Parsers – Both Advanced Analytics and Data Lake use parsing to extract meaningful values
from verbose log data. Log files are associated with specific parsers based on each parser's
set of conditions. When a log meets parser conditions, a set of regular expressions (regexes)
are used to extract values of interest from the log and map them to Exabeam fields. Not all log
data lends itself to parsing and there are some differences between what should be parsed for
Advanced Analytics and for Data Lake. For more information, see Do You Need a Parser? or
Exabeam Parsers.
• Events – In Advanced Analytics, each parser is matched to an event builder definition. When
a parsed message is output from a parser, it runs through the event building engine to be
categorized as a specific event. The event becomes the key to further analysis. It determines
the minimum required fields necessary for models and rules to process different types of
events. For more information, see Exabeam Event Building.
• Enrichers – In both Advanced Analytics and Data Lake, enrichers provide contextual
information for an event. While most log information describes what users and entities are
doing, context information describes who the users and entities are. Enrichers can add new
values, modify values, or create new fields based on existing fields or on context tables.
Enrichers can provide system-based or user-based context. For more information, and several
enrichment use cases, see Exabeam Enrichment.
• Sessions – In Advanced Analytics, once events have been built and enriched, they must be
added to a session in order for any analysis to take place. A session is the visual representation
of all the events attributed to a user in the course of a workday. Before an event can be used to
train a model, visualize a timeline, or trigger a rule, it must be part of a session.
• Models – In Advanced Analytics, machine learning functionality models behavior at the user,
peer group, entity, and organization levels. Once an event is part of a session, it's checked
against the available user-based and asset-based models to determine if the event should
be used for training. When training begins, historical values are tracked so that anomalous
behavior can be detected. For more information about the different types of models available,
see Exabeam Models.
• Rules – In Advanced Analytics, rules contain the logical expressions that define malicious
or unwanted behavior. When an event matches the conditions defined in the rule, the rule is
triggered and assigns points to a user's session. Two types of rules are available in Advanced
Analytics. Model-based rules use historical values stored in a model and usually trigger when
an event is evaluated as anomalous. Fact-based rules use field values from an event and
work more like simple correlation rules when determining whether or not to trigger. For more
information and some sample rules, see Exabeam Rules.
The library contains documentation about parsers, event types, models, rules, and MITRE
techniques, and shows how these tools map to one another. The library is constructed so
that it can be browsed via multiple navigation paths. Depending on how you want to drill into the
information, you can:
• Search by data source – Select a vendor and a product that are the source of the data. View
the event types, parsers, and the number of rules and models Exabeam employs to cover this
data source. Drill down further to view the parser syntax or the names of the rules and models.
• Search by use case – Select a specific use case. Exabeam supports use cases in the following
categories: compromised insiders, malicious insiders, and external threats. View tables for each
vendor and product that the use case supports. Each table shows the relevant event types and
the number of supported rules and models. Drill down further to view the names of the rules and
models.
• View by MITRE ATT&CK® framework – View the Exabeam coverage map that shows which
attack techniques Exabeam covers with its rules and models.
Exabeam can ingest logs directly from a source, fetch logs from SIEM log repositories, or ingest
logs via syslog, including from Exabeam Data Lake. These logs provide insight into the activity
of both users and entities (like servers and workstations) and they help surface security issues
across your enterprise. Context sources outside of the logs contribute additional information that
helps make sense of the log data.
Once logs have been collected, they can be parsed in Data Lake and Advanced Analytics. The
following topics discuss how to determine which logs are worth parsing and how to identify fields
of interest within the logs.
A parser extracts values from a log and maps those values to the appropriate Exabeam fields. See
Exabeam Parsers for more information.
Since parsing is a complex and resource-intensive piece of the Data Lake pipeline, it may have
some performance implications. Therefore, you should only use parsers where necessary. It is
important to note that you can always go back in Data Lake and reparse old logs.
NOTE
You are limited in the types of rules, reports, and dashboards you can create without a parser since
you do not get the benefit of field values.
In logs that support Advanced Analytics event types, only the specific fields that are used for
display or for processing need to be parsed. For example, Entity Analytics logs, like network
connection events, do not require a user. Only IP, host, port, and similar information should be
parsed.
If logs are required for Advanced Analytics (UBA), they must have a user or a way to identify the
user, such as a badge that maps to a user list. Without a user, Advanced Analytics cannot process
the log and it does not make sense to create a parser.
NOTE
In Entity Analytics, events without a user can be processed, such as network connection status and net
flow connection. Any event type can be part of an asset timeline as long as it has a host name. It is still
necessary to make sure the log falls into one of the Advanced Analytics security-related event types.
1. Determine whether the log has security significance. In this case, the log is from a web proxy
product which shows access to websites, and therefore has security significance.
2. Answer the following questions to perform an initial analysis of the log:
a. Can the log be tied back to a user or device?
Yes, the log can be tied back to a user (user=jdoe@company.com).
b. Does the log have a complete time field?
No, the log time is missing the year. We will use exabeam_time (Aug 14 22:13:03).
c. What product or vendor produced these logs?
(vendor=Forcepoint) It would also help to know the product, but it is not crucial in this
case.
3. Perform the secondary analysis, asking whether the log can be mapped to an Exabeam Event
Type.
The event type for this log is "web-activity-allowed". We expect "web-activity-allowed" logs
to have "web_domain" and user information, which this log includes.
4. Based on these questions, we can conclude that Exabeam will provide value by ingesting this
log.
Exabeam Parsers
Parser definitions are contained in a set of configuration files, mostly named for the vendors
whose products they apply to (<vendor_name>.conf). Each parser definition describes the
conditions that match a log and the fields to extract from it.
following:
When a log is ingested, the values of interest must be extracted from it and mapped to Exabeam
fields. These activities are performed by parsers. Parsing log files effectively is key to downstream
functionality for both Advanced Analytics and Data Lake.
Exabeam products are delivered with a large set of default parsers which are stored in the
following path: /opt/exabeam/config/default. Note that while all of the default parsers can
be used in Data Lake, only a subset of them can be used by Advanced Analytics.
If you create custom parsers, they should be stored in the path: /opt/exabeam/config/custom.
Parser conditions are evaluated according to their order in the parser list. A log entering the
ingestion engine is checked against the conditions of the first parser in the list. If the conditions
don't match, then the log moves on to the next parser in the list, and so on. When a log matches a
parser, no further parser conditions are evaluated. The parser with the matched condition is used
to parse the event.
NOTE
If two parsers have similar conditions, list the parser with the broader condition below the parser
with the more specific condition. Otherwise, the parser with the broader condition will also parse
the more specific logs.
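To illustrate, here is a minimal sketch of two parser definitions that follows the definition format shown later in this section (the vendor, conditions, and fields are hypothetical placeholders, not shipped content). The more specific parser is listed first so that the broader one cannot shadow it:
{
  Name = myvendor-logon-failed
  Vendor = MyVendor
  Product = MyProduct
  Lms = Direct
  DataType = "authentication-failed"
  TimeFormat = "yyyy-MM-dd'T'HH:mm:ss"
  ### More specific: requires both the logon marker and the failure marker.
  Conditions = ["""MYVENDOR_LOGON""", """status=FAILED"""]
  Fields = ["""user=({user}[\w.]+)"""]
}
{
  Name = myvendor-logon
  Vendor = MyVendor
  Product = MyProduct
  Lms = Direct
  DataType = "authentication-successful"
  TimeFormat = "yyyy-MM-dd'T'HH:mm:ss"
  ### Broader: matches any MYVENDOR_LOGON log, so it must be listed after the specific parser.
  Conditions = ["""MYVENDOR_LOGON"""]
  Fields = ["""user=({user}[\w.]+)"""]
}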
Within a parser's Fields list, each extraction expression will be surrounded by parentheses. The
first value in the parentheses will be a set of curly brackets containing the name of the field for
the extracted value. The curly brackets are followed by the regular expression identifying the
value.
For example, the expression ABC({my_field}...) will parse the three characters immediately
after the string "ABC" in the log and map them to a field called my_field. For example, if the
received log is "ABC123XYZ", the field my_field will contain the value "123".
If the string "ABC" does not exist in the log, the field my_field will not be created.
All regular expression statements are evaluated in consecutive order against the entire log. If a
value is mapped to a certain field in one expression and then a different value is mapped to the
same field, the second mapping will overwrite the first.
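As a minimal sketch of this overwrite behavior (a hypothetical log format, using the field syntax described above), consider two Fields expressions that map the same field:
### Given the log "ABC123 XYZ789":
"""ABC({my_field}\d+)""", ### maps my_field = "123"
"""XYZ({my_field}\d+)""" ### evaluated later, so my_field ends up as "789"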
{
Name = o365-inbox-rules-2
Vendor = Microsoft
Product = Office 365
Lms = Direct
DataType = "app-activity"
TimeFormat = "yyyy-MM-dd'T'HH:mm:ss"
Conditions = ["""Operation":"Set-Mailbox""" ]
Fields = [
""""CreationTime":"({time}\d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\d)"""",
"""Forward.+?Value":"(smtp:)?({target}[^"]+@({target_domain}[^"]+))""""
""""ResultStatus":"({outcome}[^"]+)"""",
""""ClientIP":"\[?({src_ip}[^"]+?)\]?:({src_port}\d+)"""",
"""({activity}Set-Mailbox)""",
"""cs1=(\[\{"additional-properties"\:)?\{"({activity}[^"]+)""",
"""msg=({additional_info}.+?)\s\w+=""",
""""Value":"(?:smtp:)?.+?@({target_domain}[^"]+)"""",
"""UserId":"({user_email}[^"\\]+@({user_domain}[^"]+))""",
"""destinationServiceName=({app}.+?)\s*filePath"""
"""({app}Office 365)"""
]
DupFields = ["app->resource"]
}
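Two notes on the example above: the Conditions entry means this parser applies only to logs that contain the literal string "Operation":"Set-Mailbox", and only then are the Fields regexes evaluated. The DupFields entry ["app->resource"] duplicates the parsed app value into a resource field; that reading of the arrow notation is an assumption based on the pattern shown, not a definitive statement of the syntax.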
The command runs the ingestion engine over the supplied log fetch type, in this case the logs that
were sent by syslog, and parses only the specified dates. The directory on which the command
runs the ingestion engine is specified in the configuration file passed with the --config-file
parameter. This is typically the log storage directory. The output is written to the same directory.
Troubleshooting Regexes
Regexes extract data to ingest into the Exabeam platform. Creating the correct regex is crucial to
getting all the value that Advanced Analytics offers, such as rule scoring and modeling.
You can use multiple regexes for a single field name. Typically, this is used when the format of a
field differs within a log. In that situation, you can use multiple regexes to be sure that one of
them will parse the field correctly. If both regexes match the log, the regex that appears later
(further down in the fields array) has higher precedence, and thus its value for the field will be
used.
For example, in the Parser Parameter Definition example, two regexes can parse the app field.
If the first regex matches and parses an app value, and the second regex also matches, the value
parsed by the second regex will overwrite what was initially parsed for app by the first.
NOTE
You can use regex101.com to help you create and test regex syntax.
Regexes Misparsing
Design your regexes to capture all possible variations of your data. By carefully creating
and testing your regexes, you can make sure that Advanced Analytics doesn't miss data it
needs to model a specific field.
Design your regexes to capture edge cases for how a value might appear in the log. In many
cases, regexes are initially built to stop at a space or at a structural character such as ':', '[',
or '}'. You will need to balance two requirements: the regex must be broadly tuned enough to
capture everything it should, yet bounded so it does not capture too far. Sometimes a regex
change is required because a filename is allowed to contain a space, or because the log
management system happens to be appending forward or back slashes.
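For example, here is a minimal sketch of that tradeoff (the fileName key and the log format are hypothetical). A regex that stops at the first space truncates filenames that contain spaces, while a character class bounded by the actual delimiter captures the full value:
### Too narrow: on fileName=Quarterly Report.docx, captures only "Quarterly"
"""fileName=({file_name}\S+)""",
### Broader but still bounded: allows spaces and stops at the comma that delimits the next key
"""fileName=({file_name}[^,]+),"""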
Performance Tuning
The speed of a regex is crucial for the stability of the ingestion engine. A single high-volume
log source that hits a single parser that takes 70 ms to parse a single log will severely degrade
performance. Starting with Advanced Analytics i48, parsers that impact the ingestion process as
a whole will be automatically disabled.
In many cases, this occurs because the regex was designed to be as broadly tuned as possible
and does several look-aheads in the log. If a log line is large, a single regex in a parser that
tries to scan through most, if not all, of the line will slow the ingestion engine until the parser is
eventually disabled.
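As an illustrative sketch (hypothetical fields and log format), compare one expression that chains lazy look-aheads across the whole line against expressions anchored to their own literal keys:
### Slow: chained .*? look-aheads can rescan most of a large log line
""".*?user=({user}\w+).*?src=({src_ip}[\d.]+)""",
### Faster: each extraction is anchored to its own literal key
"""user=({user}\w+)""",
"""src=({src_ip}[\d.]+)"""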
• If time is not available in the raw log, use the syslog field headers.
• Without parsing a user, src_ip, dest_ip, dest_host, or src_host, Advanced Analytics cannot
process the event and the log will be of no value to you.
• Parsers are organized by major vendor. For example, parsers for logs generated by Carbon
Black products can be found in config/default/parsers_carbonblack.conf.
The event builder stage is the part of the analytics engine pipeline that categorizes a parsed
message into an Exabeam event type. Event types are the basic units that are used by the rest of
the processing engine (enrichment, models, rules, and UI components).
Every parser is matched to an event builder definition. If there is no event builder for a parser,
nothing is done with the parsed output of that log event. Parsed messages that do become events
are written into evt.gz files.
• Reduces the number of parsers – Different event types can be created based on a single
parser. For example, a Windows login event indicates the successful or failed outcome of a
login in a field, which has the value 0x0 for a successful login and another value for a failed
login. This value can be parsed and conditioned on in the event builder to create a local-logon
or a failed-logon event, eliminating the need for two parsers to capture these two event types.
• Combines information in two logs – Some log sources provide all the information needed in
an Exabeam event in two separate log events. For example, one VPN log could indicate the
user's session and the source IP, and another log could provide the user's session ID, user name,
and assigned IP. In order to create a meaningful vpn-login event, the information from both logs
have to be combined into a single event. This can be achieved in the event builder based on the
session ID field that would be identical in both logs.
• Complex combination of multiple logs – Some email sources can generate hundreds of logs
for a single email. In order to combine information in all these messages into a single event, a
complex logic is needed which can be defined in the event builder.
There is also no issue with having a single event builder match the output of several parsers. This
is done for example when there are several formats of an event that require several parsers (for
example, a Windows event is collected with Snare and Beats from different systems). In fact, this
will reduce the number of event builders and make them easier to manage.
However, unlike in the parsing stage, where a log is parsed only by the first parser whose
conditions it matches, multiple event builders can apply to a single message. This results in the
creation of multiple events for a single log. Therefore, it is important to make sure that no more
than one event builder will match a parsed message. This is usually done by creating a condition
on the name of the desired parser as well as, if necessary, some of the data in the message.
There may be special cases in which more than one event should be created for a single log.
netskope-file-write = {
  input-message = [{
    expression = "InList(type, 'netskope-activity','s-netskope-activity','cef-netskope-file-operation-1') and InList(toLower(activity),'edit','move','create')"
  }]
  name = netskope-file-write
  output-type = file-write
  source = Netskope Active Platform
  vendor = Netskope Active Platform
}
Field – Description

Event Builder ID – netskope-file-write. This can be any value but it must be identical to the name parameter.

Input-message – Contains the expression(s) that should be considered by this event builder. Similar parsers that should create the same type of event based on the same conditions seen across logs can be grouped into a single event builder, as shown in the Event Builder Definition example above.
"InList(type, 'parser_name_1','parser_name_2')"
This expression matches a parser to an event builder. The type field contains the name of the parser that created the message.
NOTE
This type refers to the parser message type, which happens to be the parser name.
Everything parsed by the parsers defined in this expression is evaluated only against this event builder definition (unless the parser also exists in another event builder) and its set of conditions.
"InList(toLower(activity),'edit','move','create')"
The rest of the conditions are logical expressions that check parsed fields to decide whether a message becomes an event.

Name – The name of this specific event builder definition. It must be the same as the key given to the entire config/hocon block. If not, the analytics engine will not start. If another event builder further down the file contains the same name, the first one will be overwritten.

Output-Type – The Exabeam event type assigned to this event. It has to match one of the Exabeam event types and determines how this event will be handled by downstream analytics.
NOTE
This type refers to the event type.

Source – The specific software/OS/product name that generates the log. This field will typically be visible in the UI.

Vendor – The name of the vendor of the system that generated the log.
Event Stitching
Sometimes multiple logs are needed to build an event, meaning all the data needed for a single
event are spread out over multiple logs. Event stitching allows the event builder to extract all
the relevant pieces of information from different parsed logs and create a single event. Different
parsers extract information from the relevant logs detailing different pieces of the same logical
activity, for example a vpn login, to create a single event.
Two types of event builders can be used to combine information from multiple messages into a
single event:
postfix-email-in = {
  input-message = [
    ### For 'ContivityMultiEventTracker' event builders there should only be two
    ### objects in this field array, as 'ContivityMultiEventTracker' uses exactly two messages.
    {
      expression = "type = 's-postfix-dlp-email'" ### the expression that contains which parser to catch, plus any other logical conditions
      output-fields = "msg_id,src_ip,src_host,sender,recipients,recipient,host,subject" ### fields to keep from the parsed message that will be a part of the event
      type = s-postfix-dlp-email ### a name given to this specific message extraction
    },
    {
      expression = "type = 's-postfix-dlp-email-1'"
      output-fields = "dest_ip,dest_host"
      type = s-postfix-dlp-email-1
    }
  ]
  key-fields = "msg_id" ### the field that ties the two messages together; it must be present in both parsed messages
  name = postfix-email-in ### the name of the event builder, same as the hocon block name
  output-type = dlp-email-alert-in ### event type
  source = Postfix ### product name
}
An event builder always creates an event of a specific type. Each event type has unique fields that
correlate to certain information in a log. There are three types of fields: required, extended, and
informational.
For an event builder to create an event, a log must contain information that matches an event
type's required fields. Information that maps to extended and informational fields is optional to
create an event, but is still useful to process and display the event.
For a table of event types and their required, extended, and informational fields, see Event Types
and Required Fields.
Components, like rules, machine learning algorithms, and Smart Timelines™, need a few basic data
points to properly process an event. To ensure that an event contains these data points, Event
Builder creates an event from a log only if that log contains the required data for a specific event
type.
For example, the process-created event type has a process_name required field. To create
a process-created event, a log must contain information about the process name.
When Event Builder creates an event from a log, it matches certain information in the log to an
event type's extended fields, if the information exists. Risk Engine uses the information contained in
the extended fields to train models and evaluate the event against rules.
For example, the vpn-login event type has an os extended field. The VPN29 - VPN
Operating Systems model trains on this os information. If the model considers the os
anomalous, it may trigger the VPN32 - First VPN from OS rule.
When an event builder creates an event from a log, it matches certain information in the log to an
event type's informational fields, if the information exists. In both Advanced Analytics and Data
Lake, these fields are used to correlate important data, like host and IP addresses, and enrich
events with contextual information so you can easily search for logs, events, users, or assets.
For example, user_sid is an informational field for the event type kerberos-login. Data Lake
maps user_sid to account_id so you can search for either user_sid or account_id and
find the same log.
In Advanced Analytics, Smart Timeline™ events also display certain information based on these
informational fields. If the event builder can't find the information for informational fields in the log,
the informational fields appear blank in the Smart Timeline.
NOTE
Events can be created even if the required fields are not present. However, the event will not be
applicable for rule scoring and modeling.
Subcategory: Create
NOTE
This is tied to Windows events 4720 or 624.
Required Fields:
• dest
• host
• account_name
• time
• user
Extended Fields:
• account_domain
• src
Informational Fields:
• domain
• event_name
• account
• logon_id
• event_code
Subcategory: Changes
NOTE
This is tied to Windows events 4726 or 630.
Required Fields:
• time
• host
• user
• target_user
Extended Fields:
• dest_host
• account_name
Informational Fields:
• domain
• target_domain
• event_name
• outcome
• dest_ip
• src
• user_sid
• account
• logon_id
• event_code
Extended Fields:
• —
Informational Fields:
• dest
• domain
• target_domain
• event_name
• outcome
• src
• user_sid
• account
• logon_id
• event_code
account-enabled: An account was enabled by a user.
Category: Account
Subcategory: Management
Required Fields:
• target_user
• dest
• host
• time
• user
Extended Fields:
• src_host
Informational Fields:
• domain
• target_domain
• event_name
• outcome
• src_ip
• account
• logon_id
• event_code
Subcategory: Activity
Required Fields:
• dest
• domain
• host
• time
• user
Extended Fields:
• src_host
Informational Fields:
• caller_domain
• event_name
• src_ip
• caller_user
• auth_method
• logon_id
• event_code
account-password-change: A user changed their account password.
Category: Account
Subcategory: Activity
NOTE
This is tied to Windows events 4723 or 627.
Required Fields:
• time
• host
• user
• target_user
Extended Fields:
• target_domain
• dest_host
Informational Fields:
• domain
• event_name
• outcome
• dest_ip
• src
• user_sid
• account
• logon_id
• event_code
Extended Fields:
• —
Informational Fields:
• failure_reason
• domain
• event_name
• outcome
• dest
• src
• user_sid
• logon_id
• event_code
• target_domain
Extended Fields:
• target_domain
• dest_host
Informational Fields:
• target_user_sid
• domain
• event_name
• outcome
• dest_ip
• src
• user_sid
• account
• logon_id
• event_code
Extended Fields:
• process
• src_host
• safe_value
Informational Fields:
• process_name
• command_line
• account_logon_guid
• account_domain
• domain
• event_name
• src_ip
• process_directory
• user_sid
• user_uid
• logon_id
• user_logon_guid
• event_code
• dest_service
Extended Fields:
• —
Informational Fields:
• dest
• domain
• target_domain
• event_name
• outcome
• src
• user_sid
• account
• logon_id
• event_code
Extended Fields:
• browser
• os
• mime
• user_agent
• src
Informational Fields:
• dest
• result
• additional_info
• event_name
• resource
• target
• event_code
Extended Fields:
• src_ip
Informational Fields:
• dest
• failure_reason
• src_host
• result
• additional_info
• event_name
• user_agent
• object
• resource
• event_code
app-login: A user logged into an application.
Category: Application
Subcategory: Login
Required Fields:
• time
• host
• user
• app
Extended Fields:
• browser
• os
• user_agent
• src
Informational Fields:
• dest
• event_name
• user_email
• event_code
• protocol
Subcategory: Change
NOTE
This is tied to Windows events indicating audit log clearance, such as Windows 1102 and 517.
Required Fields:
• dest
• host
• time
• user
Extended Fields:
• src_host
Informational Fields:
• domain
• event_name
• src_ip
• logon_id
• audit_category
• policy
• event_code
audit-policy-change: An audit policy was changed.
Category: —
Subcategory: —
NOTE
This is tied to Windows events 4719 and 612.
Required Fields:
• dest
• domain
• host
• time
• policy
• user
Extended Fields:
• src_host
Informational Fields:
• event_name
• src_ip
• subcategory
• event_name
• logon_id
• audit_category
• event_code
Extended Fields:
• —
Informational Fields:
• browser
• src
• domain
• os
• event_name
• outcome
• user_agent
• auth_method
• app
• realm
• event_code
Extended Fields:
• src_ip
Informational Fields:
• browser
• src_host
• domain
• additional_info
• os
• event_name
• outcome
• user_agent
• auth_method
• app
• realm
• event_code
Extended Fields:
• src_ip
Informational Fields:
• browser
• src_host
• domain
• os
• event_name
• outcome
• auth_method
• app
• realm
• event_code
batch-logon: A non-interactive batch logon occurred.
Category: Account
Subcategory: Activity
NOTE
This is tied to Windows events 4624 and 528 with logon type 4.
Required Fields:
• dest
• host
• time
• user
Extended Fields:
• process
• src_host
• account
• event_code
Informational Fields:
• process_name
• domain
• logon_type
• event_name
• src_ip
• auth_package
• logon_id
Extended Fields:
• —
Informational Fields:
• src_ip
• user_agent
• role
• policy
• account
cloud-admin-activity-failed: Failed administrative activity against cloud services.
Category: Cloud
Subcategory: Admin
Required Fields:
• user
• service
• activity
• failure_reason
Extended Fields:
• —
Informational Fields:
• src_ip
• user_agent
• role
• policy
• account
Extended Fields:
• —
Informational Fields:
• process_name
• domain
• event_name
• src
• account
• event_code
config-change: A user made a configuration change.
Category: —
Subcategory: —
Required Fields:
• dest
• host
• object
• time
• activity
• user
Extended Fields:
• —
Informational Fields:
• event_name
• outcome
• src
• event_code
Subcategory: Activity
Required Fields:
• host
• db_operation
• db_user
• database_name
• time
• user
Extended Fields:
• —
Informational Fields:
• process_name
• dest
• table_name
• session_id
• server_group
• process
• domain
• additional_info
• database_schema
• event_name
• reason
• service_name
• src
• app
• account
• database_object
• protocol
• sql_count
Extended Fields:
• —
Informational Fields:
• process_name
• dest
• table_name
• session_id
• server_group
• process
• domain
• additional_info
• database_schema
• event_name
• failure_reason
• service_name
• src
• app
• account
• database_object
• event_code
• protocol
Extended Fields:
• dest
• response_size
• process
• src
Informational Fields:
• process_name
• alert_severity
• table_name
• malware_url
• alert_id
• server_group
• domain
• additional_info
• alert_type
• event_name
• db_operation
• service_name
• app
• account
• database_object
• event_code
Extended Fields:
• —
Informational Fields:
• domain
• db_user
• app
• db_operation
• db_query
• service_name
• server_group
• src
• dest
• response_size
• database_object
• database_schema
Extended Fields:
• —
Informational Fields:
• process_name
• dest
• server_group
• process
• domain
• additional_info
• event_name
• outcome
• service_name
• auth_package
• src
• app
• account
• event_code
• protocol
Subcategory: Activity
Required Fields:
• host
• db_user
• database_name
• time
• user
Extended Fields:
• src
Informational Fields:
• process_name
• dest
• server_group
• process
• domain
• event_name
• service_name
• auth_package
• app
• account
• event_code
• protocol
Subcategory: Activity
Required Fields:
• host
• db_user
• database_name
• time
• user
• db_query
Extended Fields:
• response_size
• db_operation
• src
Informational Fields:
• process_name
• dest
• table_name
• server_group
• process
• domain
• database_schema
• event_name
• service_name
• app
• database_object
• event_code
Extended Fields:
• —
Informational Fields:
• domain
• db_user
• app
• db_query
• service_name
• server_group
• src
• dest
• response_size
• database_object
• database_schema
Extended Fields:
• process_name
• top_domain
• alert_severity
• dest
• process
• alert_type
• user
• protocol
Informational Fields:
• device_id
• alert_id
• file_name
• domain
• additional_info
• event_name
• outcome
• target
• event_code
Extended Fields:
• subject
Informational Fields:
• dest
• return_path
• recipients
• alert_name
• event_name
• outcome
• direction
• user_email
• bytes
• message_id
• external_domain
• src
• num_recipients
• external_address
• attachments
• user
• event_code
Extended Fields:
• —
Informational Fields:
• dest
• subject
• return_path
• recipients
• alert_name
• event_name
• outcome
• direction
• user_email
• bytes
• message_id
• external_domain
• src
• num_recipients
• external_address
• attachments
• user
• event_code
Extended Fields:
• file_ext
• alert_id
• subject
• file_name
• dest_ip
• bytes
• external_domain
• num_recipients
• external_address
• user
Informational Fields:
• return_path
• recipients
• alert_name
• event_name
• outcome
• direction
• user_email
• dest_host
• message_id
• src
• attachments
• event_code
Extended Fields:
• file_name
• bytes
• external_domain
• external_address
• user
Informational Fields:
• dest
• subject
• return_path
• recipients
• alert_name
• event_name
• outcome
• direction
• user_email
• message_id
• src
• num_recipients
• attachments
• event_code
Extended Fields:
• —
Informational Fields:
• dest
• dns_response_code
• result
• query_type
• src_port
• event_name
• outcome
• bytes
• response
• query_id
• dest_port
• src_mac
• category
• activity
• user
• event_code
• protocol
• query_flags
Extended Fields:
• category
Informational Fields:
• response_flags
• result
• query_type
• src_port
• event_name
• outcome
• bytes
• response
• query_id
• dest_port
• src
• activity
• user
• event_code
• protocol
Subcategory: —
Required Fields:
• domain
• host
• object
• time
• user
Extended Fields:
• activity_type
• object_class
• dest_host
• src
• attribute
Informational Fields:
• object_ou
• event_name
• dest_ip
• new_attribute
• object_dn
• account
• logon_id
• old_attribute
• event_code
Subcategory: Activity
Required Fields:
• failure_reason
• host
• app
• time
• user
Extended Fields:
• src_ip
Informational Fields:
• dest
• browser
• src_host
• os
• event_name
• outcome
• user_agent
• event_code
Extended Fields:
• —
Informational Fields:
• object_ou
• dest
• failure_reason
• event_name
• outcome
• activity_type
• object_class
• new_attribute
• src
• object_dn
• attribute
• old_attribute
• event_code
Extended Fields:
• failure_reason
• result_code
• logon_type
• auth_package
• src
• event_code
Informational Fields:
• process_name
• process
• domain
• event_name
• user_sid
Extended Fields:
• location_building
• location_city
• user
Informational Fields:
• first_name
• employee_id
• dest
• event_name
• direction
• last_name
• src
• event_code
Extended Fields:
• —
Informational Fields:
• process_name
• process
• activity_details
• domain
• os
• event_name
• device_type
• bytes
• src
• file_path
• activity
• event_code
failed-vpn-login: A remote access VPN login attempt performed either from a public IP address or from an internal network address failed.
Category: VPN
Subcategory: Login
Required Fields:
• host
• src
• time
• user
Extended Fields:
• —
Informational Fields:
• dest
• failure_reason
• domain
• realm
• event_code
Extended Fields:
• process
• user
Informational Fields:
• process_name
• alert_severity
• file_ext
• alert_id
• domain
• os
• alert_type
• event_name
• accesses
• src
• file_path
• file_parent
• action
• event_code
Subcategory: Activity
Required Fields:
• dest
• file_name
• host
• time
• user
Extended Fields:
• file_ext
• accesses
• src
• file_parent
Informational Fields:
• src_file_dir
• process_name
• process
• domain
• event_name
• object
• bytes
• src_file_name
• file_path
• app
• file_type
• activity
• event_code
• protocol
Subcategory: Download
Required Fields:
• dest
• file_name
• host
• time
• user
Extended Fields:
• —
Informational Fields:
• src_file_dir
• process_name
• file_ext
• process
• domain
• event_name
• accesses
• object
• bytes
• src
• src_file_name
• file_path
• file_parent
• activity
• event_code
• protocol
Extended Fields:
• file_ext
• src
• file_parent
Informational Fields:
• src_file_dir
• process_name
• process
• domain
• event_name
• object
• bytes
• src_file_name
• file_path
• app
• file_type
• activity
• event_code
• protocol
Extended Fields:
• file_ext
• process
• accesses
• src
• file_parent
Informational Fields:
• src_file_dir
• process_name
• domain
• event_name
• object
• bytes
• src_file_name
• file_path
• app
• file_type
• activity
• event_code
• protocol
Subcategory: Activity
Required Fields:
• dest
• file_name
• host
• time
• user
Extended Fields:
• —
Informational Fields:
• src_file_dir
• process_name
• file_ext
• process
• domain
• event_name
• accesses
• object
• bytes
• src
• src_file_name
• file_path
• app
• file_type
• file_parent
• activity
• event_code
• protocol
Extended Fields:
• src_file_dir
• file_ext
• process
• accesses
• src
• src_file_name
• file_path
• file_parent
Informational Fields:
• process_name
• domain
• event_name
• object
• bytes
• app
• file_type
• activity
• event_code
• protocol
Informational Fields:
• event_name
• src_ip
• service_name
• user_sid
Subcategory: Activity
NOTE
This is tied to Windows events 4624 or 528 with logon type 2 or 7. Also tied to Windows events with logon type 11 and a process name indicating a local interactive logon. And tied to Linux local logon events.
Required Fields:
• dest
• host
• time
• user
Extended Fields:
• process
• src_host
• domain
• logon_type
• badge_id
• account
• event_code
Informational Fields:
• process_name
• event_name
• src_ip
• auth_package
• logon_id
Extended Fields:
• account_ou
• dest_host
• src
• account_name
Informational Fields:
• account_domain
• group_type
• domain
• event_name
• group_domain
• dest_ip
• account
• logon_id
• event_code
• account_dn
Extended Fields:
• src
Informational Fields:
• dest
• account_domain
• group_type
• domain
• account_ou
• event_name
• group_domain
• account_name
• logon_id
• event_code
• account_dn
nac-failed-logon: A logon attempt to a NAC failed.
Category: Network
Subcategory: Security
Required Fields:
• dest
• domain
• host
• time
• user
Extended Fields:
• —
Informational Fields:
• network
• auth_server
• event_name
• src_mac
• src
• account
• event_code
Subcategory: Access
Required Fields:
• dest
• host
• time
• user
Extended Fields:
• auth_type
• src_mac
Informational Fields:
• network
• domain
• auth_server
• event_name
• src
• account
• event_code
Extended Fields:
• protocol
Informational Fields:
• bytes_in
• time_end
• event_name
• outcome
• direction
• dest_interface
• end_reason
• bytes
• src_interface
• time_start
• bytes_out
• packets
• user
• event_code
Extended Fields:
• process
• dest_port
• user
Informational Fields:
• process_name
• alert_severity
• alert_id
• domain
• additional_info
• src_port
• alert_type
• event_name
• outcome
• bytes
• policy
• event_code
• protocol
Extended Fields:
• —
Informational Fields:
• bytes_in
• failure_reason
• dest_mac
• dest_translated_ip
• event_name
• outcome
• rule
• src_translated_ip
• direction
• dest_interface
• bytes
• src_mac
• dest_translated_port
• src_interface
• src_translated_port
• activity
• user
• event_code
• protocol
Extended Fields:
• —
Informational Fields:
• bytes_in
• failure_reason
• dest_mac
• dest_translated_ip
• event_name
• outcome
• rule
• src_translated_ip
• direction
• dest_interface
• bytes
• src_mac
• dest_translated_port
• src_interface
• bytes_out
• src_translated_port
• activity
• action
• user
• event_code
• protocol
Informational Fields:
• event_name
• src_ip
physical-access: A user successfully opened a door, gate, or other entrance using their badge.
Category: Physical Security
Subcategory: Activity
Required Fields:
• time
• host
• badge_id
• location_door
Extended Fields:
• location_city
• location_building
• user
Informational Fields:
• first_name
• employee_id
• dest
• event_name
• outcome
• direction
• last_name
• src
• event_code
Extended Fields:
• —
Informational Fields:
• domain
• event_name
• outcome
• bytes
• num_pages
• src
• account
• activity
• event_code
Informational Fields:
• object_server
• domain
• event_name
• debug_privilege
• environment_privilege
• process_directory
• object
• tcb_privilege
• ownership_privilege
• logon_id
Extended Fields:
• process_name
• process
• src_host
Informational Fields:
• object_server
• domain
• object_type
• event_name
• debug_privilege
• src_ip
• environment_privilege
• process_directory
• tcb_privilege
• ownership_privilege
• logon_id
• event_code
Extended Fields:
• command_line
• process
• user
Informational Fields:
• alert_severity
• alert_id
• domain
• additional_info
• src_port
• event_name
• parent_process
• process_directory
• md5
• dest_port
• src
• event_code
Extended Fields:
• command_line
• process
• src_host
• process_directory
• user
• event_code
Informational Fields:
• path
• domain
• event_name
• outcome
• src_ip
• parent_process
• md5
• pid
• logon_id
Extended Fields:
• user
• process
Informational Fields:
• command_line
• path
• domain
• event_name
• outcome
• parent_process
• md5
• pid
• src
• logon_id
• event_code
Extended Fields:
• process
• process_directory
• web_domain
• dest_port
• user
Informational Fields:
• domain
• src_port
• event_name
• direction
• bytes
• md5
• pid
• event_code
Extended Fields:
• process
• user
Informational Fields:
• domain
• src_port
• event_name
• direction
• process_directory
• bytes
• web_domain
• md5
• pid
• dest_port
• event_code
Extended Fields:
• dest
• process
• ticket_options
• domain
• ticket_encryption_type
• auth_package
• account
• event_code
Informational Fields:
• process_name
• logon_type
• event_name
• logon_id
Extended Fields:
• process
• ticket_options
• domain
• ticket_encryption_type
• logon_type
• auth_package
• badge_id
• account
• event_code
Informational Fields:
• process_name
• event_name
• service_name
• logon_id
Extended Fields:
• process_name
• alert_severity
• dest
• process
• alert_type
• user
Informational Fields:
• malware_url
• alert_id
• file_name
• domain
• additional_info
• src_port
• event_name
• outcome
• md5
• dest_port
• category
• event_code
Subcategory: Creation
NOTE
This is tied to service creation events, such as Windows 4697.
Required Fields:
• host
• service_name
• dest_host
• time
• user
Extended Fields:
• process_name
• process
Informational Fields:
• account_domain
• domain
• event_name
• process_directory
• dest_ip
• src
• account_name
• service_type
• user_sid
• logon_id
• event_code
Subcategory: —
NOTE
This is tied to Windows events 4624 and 528 with logon type 5.
Required Fields:
• dest
• host
• time
• user
Extended Fields:
• process
• src_host
• account
• event_code
Informational Fields:
• process_name
• domain
• logon_type
• event_name
• src_ip
• auth_package
• logon_id
Extended Fields:
• src_port
• file_path
• event_code
Informational Fields:
• file_name
• domain
• event_name
• outcome
• accesses
• src
• share_path
• file_type
• logon_id
Extended Fields:
• —
Informational Fields:
• src_port
• file_path
• event_code
• file_name
• domain
• event_name
• outcome
• accesses
• src
• share_path
• file_type
• logon_id
storage-access: Object access activity from a cloud storage bucket.
Category: Cloud
Subcategory: Storage
Required Fields:
• user
• activity
• bucket
• file_name
Extended Fields:
• —
Informational Fields:
• src_ip
• user_agent
Subcategory: Storage
Required Fields:
• user
• service
• activity
Extended Fields:
• bucket
Informational Fields:
• src_ip
• user_agent
• role
• policy
• account
storage-activity-failed: Failed activity against cloud storage services.
Category: Cloud
Subcategory: Storage
Required Fields:
• user
• service
• activity
• failure_reason
Extended Fields:
• bucket
Informational Fields:
• src_ip
• user_agent
• role
• policy
• account
Subcategory: Activity
NOTE
Tied to Windows event 4698.
Required Fields:
• dest
• host
• task_name
• time
• user
Extended Fields:
• process_name
• process
Informational Fields:
• account_domain
• description
• domain
• run_level
• event_name
• process_directory
• s
• account_name
• event_code
Extended Fields:
• —
Informational Fields:
• process_name
• process
• activity_details
• domain
• os
• event_name
• device_type
• bytes
• src
• file_path
• event_code
Extended Fields:
• —
Informational Fields:
• process_name
• process
• activity_details
• domain
• os
• event_name
• device_type
• bytes
• src
• file_path
• activity
• event_code
Extended Fields:
• process
Informational Fields:
• process_name
• file_ext
• activity_details
• domain
• os
• event_name
• process_directory
• device_type
• bytes
• src
• file_path
• activity
• event_code
Extended Fields:
• device_id
• process
Informational Fields:
• src_file_dir
• process_name
• file_ext
• activity_details
• domain
• os
• event_name
• process_directory
• device_type
• bytes
• src
• src_file_name
• file_path
• activity
• src_file_ext
• event_code
Extended Fields:
• —
Informational Fields:
• user
• src_port
• src_translated_ip
• dest_port
• dest_translated_ip
• src_translated_port
• dest_translated_port
• bytes_in
• bytes_out
• session_duration
• action
• session_id
vpn-login: Remote access VPN login attempt either from a public IP address or from an internal network address was successful.
Category: VPN
Subcategory: Login
Required Fields:
• host
• src
• time
• user
Extended Fields:
• dest
• os
• src_translated_ip
• badge_id
• realm
• account
Informational Fields:
• session_id
• domain
• event_name
• event_code
Extended Fields:
• dest
• file_name
• alert_name
• service_name
• object
• bytes
• external_domain
• session_duration
• num_pages
• bytes_out
• safe_value
Informational Fields:
• bytes_in
• session_id
• domain
• os
• event_name
• src_translated_ip
• reason
• src
• realm
• account
• event_code
Extended Fields:
• top_domain
• bytes_in
• browser
• os
• uri_path
• full_url
• dest_ip
• src
• categories
• category
• uri_query
Informational Fields:
• result_code
• src_port
• event_name
• mime
• user_agent
• dest_host
• dest_port
• proxy_action
• bytes_out
• referrer
• event_code
• protocol
Extended Fields:
• top_domain
• bytes_in
• uri_path
• full_url
• dest_ip
• src
• categories
• category
• uri_query
Informational Fields:
• failure_reason
• browser
• result_code
• src_port
• os
• event_name
• mime
• user_agent
• dest_host
• dest_port
• proxy_action
• bytes_out
• referrer
• event_code
• protocol
Extended Fields:
• —
Informational Fields:
• client_type
• app_version
webconference-operations-activity: Web conference application operations.
Category: Application
Subcategory: Activity
Required Fields:
• user_email
• app
• activity
• object_type
Extended Fields:
• —
Informational Fields:
• additional_info
• object
web-meeting-created: Web conference meeting created.
Category: Application
Subcategory: Activity
Required Fields:
• user
• has_password
• enforce_login
• join_before_host
• waiting_room
Extended Fields:
• —
Informational Fields:
• meeting_number
• meeting_topic
• meeting_type
• meeting_duration
• meeting_timezone
Extended Fields:
• —
Informational Fields:
• meeting_number
• meeting_topic
• meeting_type
• meeting_duration
• meeting_timezone
web-meeting-participant-joined: Web conference meeting participant joined.
Category: Application
Subcategory: Activity
Required Fields:
• meeting_host_id
• participant_name
• participant_id
Extended Fields:
• —
Informational Fields:
• meeting_number
• meeting_topic
• meeting_type
• meeting_timezone
web-meeting-started: Web conference meeting started.
Category: Application
Subcategory: Activity
Required Fields:
• meeting_host_id
Extended Fields:
• —
Informational Fields:
• meeting_number
• meeting_topic
• meeting_type
• meeting_timezone
Extended Fields:
• —
Informational Fields:
• meeting_number
winsession-disconnect: A user disconnected from an existing Terminal Services session.
Category: Account
Subcategory: Activity
NOTE
This is tied to Windows event 4779.
Required Fields:
• dest
• domain
• host
• time
• user
Extended Fields:
• src_host
Informational Fields:
• process_name
• event_name
• src_ip
• account
• logon_id
• event_code
Extended Fields:
• —
Informational Fields:
• process_name
• domain
• event_name
• src
• account
• event_code
workstation-unlocked: A user unlocked their workstation.
Category: Endpoint
Subcategory: Activity
NOTE
This is tied to Windows event 4801.
Required Fields:
• dest
• host
• time
• user
Extended Fields:
• —
Informational Fields:
• process_name
• domain
• event_name
• src
• account
• event_code
Exabeam Enrichment
Enrichment refers to adding contextual information to an event that is otherwise not already
parsed or not available in the log.
Most enrichment adds new values, modifies them, or creates new fields based on existing fields or
context table lookups.
Types of Enrichment
There are multiple types of enrichment available.
System-Defined Enrichment
This type of enrichment is done automatically by Advanced Analytics in the backend, and can be
slightly tuned in the custom_exabeam_config.conf file.
• Host-Ip Mapping – If a host or an IP address is detected without the other, this enrichment
feature populates the missing field based on previously seen data.
• Security/Dlp-Alerts-to-User Mapping – When security or DLP alerts do not have the user
information, this enrichment feature populates the user field based on previously seen data.
User-Defined Enrichment
This type of enrichment can be manually controlled. It includes the following types of enrichment
activities:
• Context Retrieval
• New Field Based on Information Within the Event
• Field Modification
• Required Model/Rule Field
count-win-command {
  EventTypes = ['process-created','privileged-object-access']
  Condition = "exists(process_name) && ((InList(toLower(process_name),'net.exe') && InList(toLower(arg),'start','user','time','view','use','localgroup','group','config','share')) || (InList(toLower(process_name),'netsh.exe') && InList(toLower(arg),'advfirewall')) || (InList(toLower(process_name),'tasklist.exe','ver.exe','ipconfig.exe','systeminfo.exe','netstat.exe','whoami','qprocess.exe','query.exe','type.exe','at.exe','reg.exe','wmic.exe','wusa.exe','sc.exe','rundll32.exe','psexesvc.exe','icacls.exe','arp.exe','route.exe')))"
  Map = [
    {
      Field = "win_command_count"
      Value = """'1'"""
    },
    {
      Field = "win_critical_command"
      Value = """process_name"""
    }
  ]
}
In the analytics engine, count expressions allow the occurrences of a field to be counted. In the
above example, the win_command_count field is set to 1 whenever the condition is satisfied.
However, the enricher alone does not specify which events should be counted. The expression
below ensures that the count only includes process-created events.
Sum(win_command_count, 'process-created')
NOTE
To count the number of times a set of conditions is satisfied in a session, create a field and assign
it a value of 1 whenever the condition is satisfied for the event. This allows implementation of
use cases based on abnormal counts.
A rule can then use the following expression to determine the unique number of processes that
satisfied the condition.
DistinctCount(win_critical_command, 'process-created')
Context Retrieval
Enrichers that provide contextual enrichment use a parsed field value as a key to extract a value
for a new field from a context table.
In the example below, the user field is created from a context table called email_user. When the
user_email field is parsed from the log, the AD user value is fetched from the context table and
mapped to the user field, which stitches the event to the user timeline.
user-email {...
Map = [
{
Field = "user"
Value = """GetValue('email_user',toLower(user_email))"""
}...
Example 1
In this example a user_type field is created, which holds an attribute about the user, such as
whether the user is a local user.
local-user {
  EventTypes = ['batch-logon','file-delete','file-read','file-write','privileged-access','privileged-object-access','process-created','service-logon','workstation-locked','workstation-unlocked','local-logon','remote-access','remote-logon','account-password-change','account-password-reset','account-lockout','account-unlocked','account-enabled','account-disabled','account-deleted','account-creation','member-added','member-removed']
  Condition = "exists(domain) && ((exists(dest_host) && dest_host = domain) || InList(toLower(domain),'workgroup','window manager','font driver host')) && vendor='Microsoft Windows' && !InList(event_code,'4648','4769','673','676','552') and not EndsWith(user, '$') and !InList(toLower(user),'system','local service','network service','anonymous logon')"
  Map = [
    {
      Field = "user_type"
      Value = """'local'"""
    },
    {
      Field = "user"
      Value = "concat(user, ' (', dest_host, ')')"
    }
  ]
}
Example 2
In this example, the alert-based event types are reviewed and a determination is made about
which field is the local asset, based on priority and the isSiteLocal() function.
security-alert-local_asset {
  EventTypes = ['security-alert','dlp-alert','process-alert','network-alert','database-alert']
  Condition = "exists(src_host) || exists(src_ip) || exists(dest_host) || exists(dest_ip)"
  Map = [
    {
      Field = "local_asset"
      Value = """if(isSiteLocal(src_ip),first(src_host,src_ip),if(isSiteLocal(dest_ip),first(dest_host,dest_ip),first(src_host,src_ip,dest_host,dest_ip)))"""
    }
  ]
}
Field Modification
In the following example, an existing field is modified to create a field that can be used for
detection by existing Advanced Analytics content.
bytes-domain {
  EventTypes = ['dlp-email-alert-out','dlp-email-alert-out-failed','dlp-alert','usb-insert','usb-write','usb-read','dlp-email-alert-in','share-access','print-activity','file-write','file-delete']
  Condition = "exists(bytes_unit) && !exists(bytes)"
  Map = [
    {
      Field = "bytes_num"
      Value = """replaceAll(bytes_num, ",","")"""
    },
    {
      Field = "bytes"
      Value = """Multiply(bytes_num,ReturnIf(ToLower(bytes_unit)='kb',1024,ReturnIf(ToLower(bytes_unit)='mb',1048576,ReturnIf(ToLower(bytes_unit)='gb',1073741824,0))))"""
    }
  ]
}
NOTE
Advanced Analytics content, related to data transfer sizes, operates using bytes (not kilobytes,
megabytes, or gigabytes). So, when a value is parsed from a log that is not represented in bytes, the
parsed value is modified accordingly. The parsed bytes value (bytes_num) is multiplied by 1024 when
the bytes_unit value is in kilobytes, or by 1024*1024=1048576, if the value is in megabytes, and so on.
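For example, working through the arithmetic in the NOTE above: a log that yields bytes_num = "2,048" and bytes_unit = "KB" is first normalized to 2048 by the replaceAll expression, then multiplied by 1024 to produce bytes = 2097152.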
Enrichers often include a concat logical expression to put two field values together.
In the example below, a target_user field is created that either begins with root – if
target_user_id = '0' – or begins with the actual target_user_id field if the value is
not equal to 0.
unix-target-id {
EventTypes =
['account-deleted','account-password-change','account-password-reset']
Condition = """!exists(target_user) && exists(target_user_id) &&
exists(dest_host) && vendor='Unix'"""
Map = [
{
Field = "target_user"
Value = "ReturnIf(target_user_id = '0', concat('root (', dest_host, ')'),
concat(target_user_id, ' (', dest_host, ')'))"
}
]
}
netflow-scanhost {
EventTypes = ['netflow-connection']
Condition = "exists(src_host)"
Map = [
{
Field = "src_host_time"
Value = """concat(src_host, '-', take(time,9))"""
}
]
}
A rule can use the src_host_time field to track whether a host (src_host) reached out to another
host (dest_host) 20 times within a second. This works because the time value, truncated by
take(time,9), is concatenated to the src_host value: events from the same source host within the
same time window therefore share the same src_host_time value.
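The bullets below describe the general structure of an enricher definition and refer to a generic example. A minimal sketch of such an enricher (the enricher name, event type, and the some_field, certain_field, and new_field names are placeholders taken from the surrounding text, not shipped content) might look like:
generic-enricher {
  EventTypes = ['some-event-type'] ### restrict enrichment to the event types of interest
  Condition = "EndsWith(some_field, '.bat') || exists(certain_field)"
  Map = [
    {
      Field = "new_field"
      Value = """'some_value'"""
    }
  ]
}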
• First, restrict which events are enriched, and then define the fields and values to be created.
• Restriction is done using the 'EventTypes' and 'Condition' fields.
• Make sure you only look at events that are of the type(s) specified in the 'EventTypes' field,
expressed as an array of event types.
• Further restrict what gets enriched by adding analytics engine expressions in the 'Condition'
parameter. In the above example, the event is enriched only if the field 'some_field' ends with
'.bat' or if the field 'certain_field' already exists.
The following sections describe how persistence and event templates work.
NOTE
Be sure to define this configuration in the custom configuration file (/opt/exabeam/config/
custom/custom_exabeam_config.conf). Do not change the default configuration file.
For example, to persist a new field, vpn_source_location, that's associated with a parsed and
enriched vpn-login event, add it to PersistedEventFields as shown in the sample below.
Don't forget to enclose the entry within PersistedEventFields { } as shown below.
PersistedEventFields {
----------------------
----------------------
vpn-login = [_id,
vendor,
src_ip,
src_host,
"GetValue('country_code',src_ip)",
"GetValue('isp',src_ip)",
"GetValue('zone_info',dest)",
src_translated_ip,
dest_host,
dest_ip,
src_network_type,
realm,
os,
vpn_source_location]
----------------------
----------------------
}
For example, to display a new field, vpn_source_location, that's associated with a vpn-login event in the UI, the field must be added to an event template associated with the vpn-login event.
Event Template
Event templates are used to display the fields associated with an event in the UI. The sample below shows the structure of VpnLoginTemplate, the template used to display the fields associated with the vpn-login event in the UI. It includes fields such as time, user, account, src_ip, src_host, source, and more.
VpnLoginTemplate {
rows = [
{
columns = [
{
label = "TIME"
value = "time|event.time"
},
{
label = "USER"
value = "user|event.user"
},
{
label = "ACCOUNT"
value = "user|event.account"
icon = "AccountSwitch"
}
]
},
{
columns = [
{
label = "SOURCE IP"
value = "asset|event.src_ip"
},
{
label = "SOURCE HOST"
value = "asset|event.src_host"
},
{
label = "SOURCE"
value = "default|event.source"
}
]
},
{
columns = [
{
label = "COUNTRY"
value = "location.country|event.getvalue('country_code', src_ip)"
},
{
label = "ISP"
value = "location.isp|event.getvalue('isp', src_ip)"
},
{
label = "VPN ASSIGNED IP"
value = "default|event.src_translated_ip"
}
]
},
{
columns = [
{
label = "VPN SERVER"
value = "default|event.dest_host"
},
{
label = "VPN SERVER IP"
value = "default|event.dest_ip"
}
]
},
{
columns = [
{
label = "VPN VENDOR"
value = "default|event.vendor"
},
{
label = "VPN REALM"
value = "default|event.realm"
},
{
label = "OS"
value = "default|event.os"
}
]
}
]
}
For example, to find the template associated with the vpn-login event, search for vpn-login
in the EventFormats section of the default configuration file. The sample entry below shows that
the associated template, VpnLoginTemplate, is listed in the DetailsTemplate parameter for
the vpn-login event.
EventFormats {
-------------------------
-------------------------
vpn-login {
DisplayName = "VPN login"
Description = "Remote access VPN login attempt either from a public IP address
or from an internal network address was successful."
HeaderTemplate = "VPN login from {location.country|
event.getvalue('country_code', src_ip)}"
DetailsTemplate = "VpnLoginTemplate"
}
-------------------------
-------------------------
}
To find the corresponding template configuration, search for the template name, VpnLoginTemplate, in the Templates section of the default configuration file. The corresponding event template is shown in the sample below:
Templates {
-----------------------------
-----------------------------
VpnLoginTemplate {
rows = [
{
columns = [
{
label = "TIME"
value = "time|event.time"
},
{
label = "USER"
value = "user|event.user"
},
{
label = "ACCOUNT"
value = "user|event.account"
icon = "AccountSwitch"
}
]
},
{
columns = [
{
label = -------
value = ------
}
]
},
--------------------
--------------------
}
----------------------
----------------------
}
1. In the EventFormats section of the default configuration file, search for vpn-login.
2. To find the name of the template associated with the vpn-login event, look for
the DetailsTemplate parameter in the vpn-login entry. The associated template is
VpnLoginTemplate.
3. To find the template configuration for VpnLoginTemplate, search for VpnLoginTemplate
in the Templates section of the default configuration file.
4. Copy the VpnLoginTemplate entry and paste it as a new entry in a custom configuration file.
NOTE
This step is necessary so that the default configuration file remains unchanged.
Note that fields are added as columns in a template. Each template row can contain a maximum of three columns. When adding a new field, if there is no open column in an existing row, add a new row and then add the new field as the first column in the new row.
In the example below, the new vpn_source_location field has been added as the third column of an existing row. Be sure to enclose the entry within Templates { } as shown below.
Templates {
-----------------------------
-----------------------------
VpnLoginTemplate {
rows = [
{
columns = [
{
label = "TIME"
value = "time|event.time"
},
{
label = "USER"
value = "user|event.user"
},
{
label = "ACCOUNT"
value = "user|event.account"
icon = "AccountSwitch"
}
]
},
--------------------
--------------------
{
columns = [
{
label = "VPN SERVER"
value = "default|event.dest_host"
},
{
label = "VPN SERVER IP"
value = "default|event.dest_ip"
},
{
label = "VPN SRC LOCATION"
value = "default|event.vpn_source_location"
}
]
},
--------------------
--------------------
}
----------------------
----------------------
}
The label parameter defines the name of the field as displayed in the UI, and the value parameter contains the field whose value is displayed. Most importantly, the field you want to display must be persisted as described in the earlier section; otherwise, its value cannot be displayed. In this case, vpn_source_location must be persisted in Mongo and then added to the template in order to display it with the vpn-login event.
NOTE
In order for a field to be displayed in the UI by using an event template, the field must be persisted.
Exabeam Models
Exabeam Advanced Analytics performs anomaly detection using models. Without models, rules can only score on 'fact'-based logic: the kind that looks for specific things in the logs or counts specific values over an entire session. Models track historical values (features) for a given item (scope); for example, the hosts (feature values) a user (scope) has logged into. If the current value is deemed abnormal compared to the historical values in the model, a rule can associate a score with this anomaly. Anomaly detection is performed by calculating a number of statistics about the features in a given model to check whether the feature value seen in an event being evaluated is unusual or not.
Advanced Analytics statistical profiling is not only about user-level data; Exabeam also profiles other entities, including hosts and peer groups. RAM and performance permitting, just about anything can be modeled: any parsed or enriched field can be used as either the scope or the feature in a model. Factors to consider when choosing a scope or feature include how large the model might grow, what values may populate it in the future, and how those values will affect anomaly detection.
For information about the different type of models available, see Types of Models. For information
about the attributes contained in models, see the table in Model Attributes.
Types of Models
Exabeam includes three types of models. Follow the links below for information and samples of
each type of model.
• Categorical – This type of model is used to train on values that are strings such as host or user
names.
• Numerical_Clustered – This type of model is used to train on numerical values such as the
number of hosts that a user logs into during a session.
• Numerical_Time_of_Week – This type of model is used to train on the time when events occur.
Categorical Models
Categorical models are used to train on data represented by string values, such as host or user
names. These models can be based either on user behavior or on asset activity. In the sections
below, each example models specific endpoint process activity, but one is a user-based model
while the other is an asset-based model.
User-based Model
This example models the process-created and process-alert endpoint activity of a specific user. The user-based nature of the model is clear from the value of the Scope attribute. For more information about the model attributes, see the notes below the example.
EPA-HP {
ModelTemplate = "Processes for the user"
Description = "Models processes for this user"
Category = "End Point Activity"
IconName = ""
ScopeType = "USER"
Scope = """user"""
Feature = """process_name"""
FeatureName = "process"
FeatureType = "process_name"
TrainIf = """sequenceCount(process_name,'process-created','process-alert')=1"""
ModelType = "CATEGORICAL"
AgingWindow = "32"
CutOff = "10"
Alpha = "2"
MaxNumberOfBins = "10000000"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = [ "process-created", "process-alert" ]
Disabled = "FALSE"
}
• TrainIf – The TrainIf attribute in user-based models can use one of the following counting expressions: Count, sequenceCount, DistinctCount, or sequenceDistinctCount. In this example, the following expression ensures that the model trains on all process_name values for the specified user during process-created or process-alert events:
sequenceCount(process_name,'process-created','process-alert')=1
• ModelType – This model trains on non-numerical data, so the ModelType value is CATEGORICAL.
• HistogramEventTypes – A histogram for this model displays the process-created and process-alert activity of a user in a specific range of time.
For definitions and examples of other Exabeam model attributes, see Model Attributes.
Asset-based Model
This example models the process-created and process-alert endpoint activity of a specific asset. The A- prefix in the model name indicates that it is an asset-based model. The asset-based nature of the model is also clear from the value of the SequenceTypes attribute. For more information about the model attributes, see the notes below the example.
A-EPA-HP {
ModelTemplate = "Processes on this asset"
Description = "Models processes on this asset"
Category = "End Point Activity"
IconName = ""
ScopeType = "DEVICE"
Scope = """dest_host"""
Feature = """process_name"""
FeatureName = "process"
FeatureType = "process_name"
TrainIf = """CountBy(process_name,dest_host,'process-created','process-
alert','process-network')=1"""
ModelType = "CATEGORICAL"
AgingWindow = ""
CutOff = "10"
Alpha = "3"
MaxNumberOfBins = "5000000"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = [ "process-created", "process-alert", "process-
network" ]
SequenceTypes = [asset]
Disabled = "FALSE"
}
• TrainIf – The TrainIf attribute in asset-based models can use one of the following grouped counting expressions: CountBy, CountByIf, DistinctCountBy, or DistinctCountByIf. In this example, the following expression ensures that the model trains on all process_name values for the destination host during process-created, process-alert, or process-network events:
CountBy(process_name,dest_host,'process-created','process-alert','process-network')=1
• ModelType – This model trains on non-numerical data, so the ModelType value is CATEGORICAL.
• HistogramEventTypes – A histogram for this model displays the process-created, process-alert, and process-network activities for an asset in a specific range of time.
• SequenceTypes – This example is an asset-based model, so the value is asset.
For definitions and examples of other Exabeam model attributes, see Model Attributes.
Numerical Clustered Models
As shown in the image below, the algorithm fits the historical points to a gamma distribution in O(n) time and then calculates the normal area boundary, b, in O(1) time. The resulting abnormal area is identified as a in the graph example.
Numerical clustered models can be based either on user behavior or on asset activity. In the sections below, each example models the amount of data, per day, uploaded to the web, but one is a user-based (UBA) model while the other is an asset-based (EA) model.
User-based Model
This example models the amount of data per day (in bytes) that a specific user uploads to the web. The user-based nature of the model is clear from the value of the Scope attribute. For more information about the model attributes, see the notes below the example.
WEB-UBytesSum-Out {
ModelTemplate = "Sum of bytes written/uploaded to the web in a day by the user"
Description = "Models the amount of data (in bytes) that were uploaded to the
web in a day by the user"
Category = "Web Activity"
IconName = ""
ScopeType = "USER"
Scope = "user"
Feature = "sequenceSum(bytes_in_post,'web-activity-allowed')"
FeatureName = "bytes"
FeatureType = "quantity"
TrainIf = """sequenceSum(bytes_in_post,'web-activity-allowed')>0"""
ModelType = "NUMERICAL_CLUSTERED"
BinWidth = "5"
AgingWindow = ""
CutOff = "10"
Alpha = "1"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = [ "sequence-end" ]
Disabled = "FALSE"
}
• TrainIf – The TrainIf attribute in user-based numerical models can use one of the following expressions: sum, sequenceSum, DistinctCount, or sequenceDistinctCount. In this example, the following expression ensures that the model trains when the sum of bytes uploaded to the web by a user is greater than 0 for web-activity-allowed events:
sequenceSum(bytes_in_post,'web-activity-allowed')>0
• ModelType – This example models a quantity of data, so the ModelType value is NUMERICAL_CLUSTERED.
• HistogramEventTypes – A histogram for this model displays the amount of data per day (in bytes) uploaded to the web by a user in a specific range of time. The sequence-end value indicates that the histogram for the model is generated at the end of the sequence.
For definitions and examples of other Exabeam model attributes, see Model Attributes.
Asset-based Model
This example models the amount of data per day (in bytes) that a specific asset uploads to the web. The A- prefix in the model name indicates that it is an asset-based model. The asset-based nature of the model is also clear from the value of the SequenceTypes attribute. For more information about the model attributes, see the notes below the example.
A-WEB-BytesSum-Out {
ModelTemplate = "Sum of bytes written/uploaded to the web in a day by the asset"
Description = "Models the amount of data (in bytes) that were uploaded to the
web in a day by the asset"
Category = "Web Activity"
IconName = ""
ScopeType = "DEVICE"
Scope = "src_host"
Feature = "sumBy(bytes_in_post,src_host,'web-activity-allowed')"
FeatureName = "bytes"
FeatureType = "quantity"
TrainIf = """sumBy(bytes_in_post,'web-activity-allowed')>0"""
ModelType = "NUMERICAL_CLUSTERED"
BinWidth = "5"
AgingWindow = ""
CutOff = "10"
Alpha = "1"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = [ "sequence-end" ]
SequenceTypes = [asset]
Disabled = "FALSE"
}
• TrainIf – The TrainIf attribute in asset-based numerical models can use one of the following grouped expressions: sumBy, sumByIf, DistinctCountBy, or DistinctCountByIf. In this example, the following expression ensures that the model trains when the sum of bytes uploaded to the web by an asset is greater than 0 for web-activity-allowed events:
sumBy(bytes_in_post,src_host,'web-activity-allowed')>0
• ModelType – This example models a quantity of data, so the ModelType value is NUMERICAL_CLUSTERED.
• HistogramEventTypes – A histogram for this model displays the amount of data per day (in bytes) uploaded to the web by an asset in a specific range of time. The sequence-end value indicates that the histogram for the model is generated at the end of the sequence.
• SequenceTypes – This example is an asset-based model, so the value is asset.
For definitions and examples of other Exabeam model attributes, see Model Attributes.
Numerical Time of Week Models
This example models the times of day and days of the week when a specific user performs print activity, using the TimeOfWeek() feature. For more information about the model attributes, see Model Attributes.
PR-UT-TOW {
ModelTemplate = "Print activity time for user"
Description = "Models the times of day that this user performs print activity"
Category = "Print Activity"
IconName = "user"
ScopeType = "USER"
Scope = "user"
Feature = "TimeOfWeek()"
FeatureName = "Time"
FeatureType = "Time"
TrainIf = """TRUE"""
ModelType = "NUMERICAL_TIME_OF_WEEK"
AgingWindow = ""
CutOff = "10"
Alpha = "1"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = [ "print-activity" ]
Disabled = "FALSE"}
// End of PR-UT-TOW
For definitions and examples of other Exabeam model attributes, see Model Attributes.
Model Categories
In both user-based and asset-based models, the Category attribute helps define the scope of a
model. The list below includes some of the default Exabeam model categories. For definitions and
examples of Exabeam model attributes, see Model Attributes.
• Alerts
• Applications
• Asset Activity Monitoring
• Assets
• Critical Server Login
• DLP
• Database Activity
• Devices
• Directory Service
• Domain Controller Login
• Email
• End Point Activity
• File Access
• Groups
• Identities
• Locations
• Network
• Network Alert
• Other
• Physical Access
• Print Activity
• Privilege Access
• Time
• TopGroups
• TopUsers
• Users
• VPN
• Web Activity
• Windows Audit Change
• Workstations
• Zones
Model Attributes
Attributes are the expressions that comprise a model. They define everything from the scope of a
model to the type of data a model should train on and under what conditions the model training
should occur. The table below provides definitions and examples of possible model attributes.
NOTE
Other expressions cannot be used within the count() expression. For example, count(concat(f1,f2)...) is not supported. Instead, create and use an enriched field with the count expression you want to include.
The ConvergenceFilter attribute is typically a condition on the confidence factor, which is calculated as:
cf = [(N - C) / N]^α
N: number of observed events.
C: number of unique observed events.
α: a factor determining how quickly the confidence grows. The higher the number, the slower confidence grows.
• HistogramEventTypes – Array of events to be considered by the model. Example: [ "print-activity" ]
• Disabled – Indicates whether or not the model is disabled. If TRUE, no data is collected; the model can be enabled later. Example: "FALSE"
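As a worked example of the confidence factor formula above (values illustrative): with N = 100 observed events, C = 20 unique observed events, and α = 1, cf = ((100 - 20) / 100)^1 = 0.8, which satisfies a ConvergenceFilter of "confidence_factor>=0.8".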
Exabeam Rules
Once a log has passed the event building and enrichment phases, it is ready to be processed by the risk engine, where it is evaluated against the set of rules that ship with Exabeam.
Rules contain the logical expressions that define unwanted and malicious behavior (or any behavior you want to be alerted on). They provide the scoring behind the timeline and the values shown in the UI. Every definition that recognizes a specific malicious behavior and adds points to a timeline is defined in a rule.
Rules are mainly defined by a logical expression, set in the RuleExpression field; when this expression evaluates to true, the rule "triggers" and points are added to the relevant session or sequence. A minimal sketch is shown below.
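This sketch is hypothetical; the rule ID, event type, expression, and score are illustrative only, following the attribute names used throughout this section:
Example-Bat-Write {
RuleType = "file"
RuleEventTypes = [ "file-write" ]
ClassifyIf = """TRUE"""
// Hypothetical: trigger whenever a written file name ends with .bat
RuleExpression = """endsWith(toLower(src_file_name), '.bat')"""
Score = "10.0"
}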
For overview information about working with rules, see Types of Rules and Create Rules. For
information about the attributes contained in rules, see the table in Rule Attributes.
Types of Rules
There are two types of rules: fact-based rules, which score on deterministic conditions in an event, and model-based rules, which score on anomalies relative to a trained model. For details and samples, see Fact-based Rules and Model-based Rules.
Create Rules
Rules define some logical activity (for example, a VPN login) that you want to be alerted on. Any event that successfully goes into a timeline can trigger a rule, add points to the timeline, and elevate a user to notable status. Rules can flag bad, suspicious, or even benign behavior.
• RuleID – The string that is the 'name' of the HOCON block in the rule configuration file. If another rule later (below) in the rules.conf file has the same key, the first rule configuration is thrown out.
• ClassifyIf – Supports the same analytics engine expressions that are usable in RuleExpression, making this an additional place for rule logic. Generally, the logic placed in this attribute identifies restrictions and scoping for when the rule should trigger. Many rules use this field to hold a session/sequence "count" expression that limits the rule from triggering more than once for a specific value.
NOTE
For fact-based rules, set ClassifyIf to "TRUE".
• RuleEventTypes – Only events of the event types expressed in this field are evaluated against the rule. A rule with event type web-activity-denied will never be evaluated against a network-connection-denied event.
• RuleExpression – The rule logic: when should this rule 'fire'. For example, RuleExpression = """ process_name="malicious.exe" """.
• RuleType – session, asset, endpoint, web, file, database, external, or account-lockout. All events in Advanced Analytics fall under one of these buckets. For example, process-created events are only put into the 'endpoint' sequence, so even if RuleEventTypes contains process-created, a rule with RuleType = 'session' will never fire for process-created events.
A minimal sketch combining these five attributes is shown below.
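The following sketch is hypothetical; all values are illustrative only:
// The block name (Example-Malicious-Process) is the RuleID
Example-Malicious-Process {
RuleType = "session"
RuleEventTypes = [ "process-created" ]
ClassifyIf = """TRUE"""
RuleExpression = """process_name='malicious.exe'"""
Score = "15.0"
}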
When a rule does not trigger when it should, it is normally one of these five fields that must be edited to fix the rule.
When testing new rules, take the events you want the rule to trigger against, and ensure that the event type is included in RuleEventTypes and that the event falls into the sequence indicated by RuleType (session, asset, endpoint, etc.).
When DependencyExpression is used, rule A can be conditioned to trigger only if rule B has triggered for the same event. Complex sets of rules can be created by using and, or, and not operators to define combinations of rule dependencies.
• WasRuleFired('Rule_Z') – The rule will trigger only if Rule_Z has previously triggered.
• WasRuleFired('Rule_Z', dest_host) – The rule will trigger only if Rule_Z has previously triggered and the value of dest_host in the event it triggered on is the same as the value in the current event.
The WasRuleFired expression can also be used to negate a rule. For example, !WasRuleFired('rule_X') indicates that a rule should only trigger if rule_X has not triggered. This expression is often used to ensure that a rule triggers only once per session or sequence, as in the sketch below.
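A hypothetical sketch combining these dependency expressions (rule names, event type, and score are illustrative):
Example-Dependent-Rule {
RuleType = "session"
RuleEventTypes = [ "remote-logon" ]
ClassifyIf = """TRUE"""
// Trigger only if Rule_Z already fired for the same dest_host,
// and at most once per session for this rule itself
RuleExpression = """WasRuleFired('Rule_Z', dest_host) && !WasRuleFired('Example-Dependent-Rule')"""
Score = "10.0"
}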
Internal Rules
Internal rules are used by other rules to create dependencies. These rules do not add scoring
points and are not displayed in the UI. They are used to identify events that have no security value
(and therefore no score), but are useful to identify a situation that could be significant for another
rule.
For example, the internal NEW_USER rule is used to identify a new user in the environment. It is
used as a dependency for different rules that identify access to privileged machines, executive
machines, etc. Together they identify a situation in which a new user is accessing a privileged
machine. In order to change how a new user is identified, simply change the NEW_USER rule and
the change is propagated to all the rules that depend on it.
Another use for internal rules is to set conditions that require information from more than one
model. Since a single rule can be triggered by conditions in only one model, multiple internal rules
can be used to set conditions in different models. They can be linked to a single score-generating
rule that uses the internal rules as dependencies to provide the score if all conditions are met.
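One plausible shape for this pattern is sketched below; the rule names, expressions, and the convention of giving internal rules a zero score are assumptions for illustration only:
// Internal rule: no score contribution, used only as a dependency
INTERNAL-COND-A {
RuleType = "session"
RuleEventTypes = [ "remote-logon" ]
ClassifyIf = """TRUE"""
RuleExpression = """num_observations=0"""
Model = "MODEL-A"
Score = "0"
}
// Score-generating rule: fires only when both internal conditions have fired
Example-Combined-Rule {
RuleType = "session"
RuleEventTypes = [ "remote-logon" ]
ClassifyIf = """TRUE"""
RuleExpression = """TRUE"""
Score = "20.0"
DependencyExpression = "INTERNAL-COND-A and INTERNAL-COND-B"
}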
• User-based rules use Count, SequenceCount, and DistinctCount for gathering session/sequence data.
• Asset-based rules use CountBy for all purposes of gathering sequence data. All asset events are 'sequence' events, so CountBy can be used for gathering sequence data for any event type. The contrast is sketched below.
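A hypothetical side-by-side of the two styles (event type and thresholds are illustrative):
// User-based rule: count dest_host values within the user's session
RuleExpression = """count(dest_host,'remote-logon')=1"""
// Asset-based rule: count dest_host values grouped by the asset (src_host)
RuleExpression = """CountBy(dest_host,src_host,'remote-logon')=1"""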
Fact-based Rules
Fact-based rules can be focused either on user behavior or on asset activity. In the sections
below, each example includes a fact-based rule, one based on user actions and the other on
asset activity.
The user-based example describes a simple rule commonly seen in traditional SIEMs: for a parsed field x in an event, trigger the rule if x equals some string.
The asset-based example describes a more powerful rule that uses the DistinctCountBy expression to check for occurrences of three distinct types of alerts on a specific asset. The DistinctCountBy expression, grouped by the asset, returns the number of distinct alert names observed in a sequence.
FA-Outlook-pst {
RuleName = "A file ends with either pst or ost"
RuleDescription = "A file copied ends with either pst or ost"
ReasonTemplate = "PST/OST file copied"
AggregateReasonTemplate = "PST/OST file copied"
RuleType = "file"
RuleCategory = "File Activity"
ClassifyIf = """TRUE"""
RuleEventTypes = [ "file-write" ]
Disabled = "FALSE"
Model = "FACT"
FactFeatureName = "src_file_name"
Score = "20.0"
RuleLabels {
mitre = ["T1114"]
}
PercentileThreshold = "0.1"
RuleExpression = """sequenceCount(src_file_name,'file-write')=1 &&
(endsWith(toLower(src_file_name), '.pst') || endsWith(toLower(src_file_name),
'.ost'))"""
DependencyExpression = "FA-Outlook" }
A-ALERT-DISTINCT-NAMES {
RuleName = "Various security alerts on asset"
RuleDescription = "At least three distinct security alerts were reported for
the asset. This raises the probability that the asset is compromised."
ReasonTemplate = "Third distinct security alert on asset"
AggregateReasonTemplate = ""
RuleType = "asset"
RuleCategory = "Security Alert"
ClassifyIf = """TRUE"""
RuleEventTypes = [ "security-alert" ]
Disabled = "FALSE"
Model = "FACT"
FactFeatureName = """src_host"""
Score = "25.0"
RuleLabels {
mitre = ["T1066"]
}
PercentileThreshold = "0.1"
RuleExpression = """DistinctCountBy(alert_name, asset, 'security-alert')=3 && !
WasRuleFired('A-ALERT-DISTINCT-NAMES')"""
DependencyExpression = "NA"
Aggregation {
DataExpr = """DistinctCountBy(alert_name, asset, 'security-alert')=3 && !
WasRuleFired('A-ALERT-DISTINCT-NAMES')"""
EventExpr = "TRUE"
}
}
Model-based Rules
Every event processed by the analytics engine goes through a classification phase, during which the event is evaluated against all existing models. In this phase, the event is matched with its corresponding models, and the corresponding model data is stored with the event. This data is later used in the RuleExpression attribute. In model-based rules, the model calculations used in the RuleExpression attribute are created when the event is classified, so when model logic such as percentile_threshold_value is included in a rule expression, its value has already been determined.
Model-based rules can be focused either on user behavior or on asset activity. In the sections
below, each example includes a model-based rule, one based on user actions and the other on
asset activity. For information purposes, the corresponding model for each rule is also shown.
The user-based example describes a rule that detects first-time security alerts for a user. The rule
is based on a model that trains on security alerts for the user and triggers whenever a new alert is
seen for that user.
The asset-based example describes a rule that determines whether a log-on failure event is
anomalous for an asset. The rule is based on a model that trains on the number of failed logon
events per day experienced by a specific asset. The rule triggers when the current failure event is
considered anomalous.
For a look at the model attributes on which the rule is based, see The Corresponding Model below
the rule.
The Rule
SA-UA-F {
RuleName = "First security alert name for user"
RuleDescription = "This is the first occurrence of this security alert name for
the user"
ReasonTemplate = "First security alert with name {default|featureValue|
histogram} for user"
AggregateReasonTemplate = "First security alert name for user: {default|
featureValue|histogram}"
RuleType = "session"
RuleCategory = "Security Alert"
ClassifyIf = """count(alert_name,'security-alert')=1"""
RuleEventTypes = [ "security-alert" ]
Disabled = "FALSE"
Model = "SA-UA"
FactFeatureName = "alert_name"
Score = "10.0"
RuleLabels {
mitre = ["T1078"]
}
PercentileThreshold = "0.1"
RuleExpression = """num_observations=0"""
DependencyExpression = "NA"
}
• ClassifyIf – count(alert_name,'security-alert')=1 classifies the event only for the first occurrence of the alert name in the session.
• RuleEventTypes – An array that indicates which events can trigger the rule. In this example, the rule is triggered when a security-alert event occurs.
• RuleExpression – num_observations=0 means the rule triggers when the model has never observed this alert name for the user.
• DependencyExpression – The value NA indicates that the rule is independent of other rules.
The Corresponding Model
SA-UA {
ModelTemplate = "Security alert names for user"
Description = "Models security alert names for the user"
Category = "Other"
IconName = ""
ScopeType = "USER"
Scope = """user"""
Feature = """alert_name"""
FeatureName = "alert_name"
FeatureType = "alert_name"
TrainIf = """count(alert_name,'security-alert')=1"""
ModelType = "CATEGORICAL"
AgingWindow = "32"
CutOff = "5"
Alpha = "0.8"
MaxNumberOfBins = "1000000"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = [ "security-alert" ]
Disabled = "FALSE"
}
For a look at the model attributes on which the rule is based, see The Corresponding Model below
the rule.
The Rule
A-FLSh-Count-Ac {
RuleName = "Abnormal number of failed logons from asset (L)"
RuleDescription = "Extremely abnormal number of failed logons from asset"
ReasonTemplate = "({quantity|featureValue}) failed logons from asset, expected
around {quantity|percentileThresholdValue|histogram}"
AggregateReasonTemplate = ""
RuleType = "asset"
RuleCategory = "Failed Logon and Account Lockout"
ClassifyIf = """TRUE"""
RuleEventTypes = ["failed-logon"]
Disabled = "FALSE"
Model = "A-FLSh-Count"
FactFeatureName = "NA"
Score = "40.0"
ScoreTarget = src_host
RuleLabels {
mitre = ["T1110","T1078"]
}
PercentileThreshold = "0.1"
RuleExpression = """num_observations<percentile_threshold_count &&
ConfidenceFactorAboveOrEqual() && percentile_count_distance>5 && !
WasRuleFired('A-FLSh-Count-Ac')"""
DependencyExpression = "NA"
Aggregation {
DataExpr = """!WasRuleFired('A-FLSh-Count-Ac')"""
EventExpr = "TRUE"
ModelExpr = """num_observations<percentile_threshold_count &&
ConfidenceFactorAboveOrEqual() && percentile_count_distance>5"""
}
}
The Corresponding Model
A-FLSh-Count {
ModelTemplate = "Count of failed logons from host"
Description = "Models the number of failed logons from this asset"
Category = "Assets"
IconName = ""
ScopeType = "DEVICE"
Scope = """src_host"""
Feature = """DistinctCountBy(event_id,src_host,'failed-logon')"""
FeatureName = "activity"
FeatureType = "quantity"
TrainIf = """DistinctCountBy(event_id,src_host,'failed-logon')>0"""
ModelType = "NUMERICAL_CLUSTERED"
AgingWindow = ""
CutOff = "5"
Alpha = "1"
MaxNumberOfBins = "1000000"
ConvergenceFilter = "confidence_factor>=0.8"
HistogramEventTypes = ["sequence-end"]
SequenceTypes = [asset]
Disabled = "FALSE"
}
Rule Attributes
Attributes are the expressions that comprise a rule. They contain the logical expressions that
define unwanted or malicious behavior, or behavior you want to be alerted on. The table below
provides definitions and examples of possible rule attributes.
• ReasonTemplate – Template for the reason text shown in the UI; model values can be embedded with tokens such as {default|featureValue|histogram}.
• AggregateReasonTemplate – Template for the aggregated reason text; it uses the same token syntax, for example {default|featureValue|histogram}.
• Score – The number of points added when the rule triggers; an expression such as multiply(field1,field2) can be used to compute the value.
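As an illustration of how these tokens render at display time (the alert name is hypothetical): a ReasonTemplate of "First security alert with name {default|featureValue|histogram} for user" would display as "First security alert with name Suspicious-Macro for user", with the token replaced by the feature value observed in the event.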