Burst handling
IBM® QRadar® uses burst handling to ensure that no data is lost when the system exceeds the allocated events
per second (EPS) or flows per minute (FPM) license limits.
When QRadar receives a data spike that causes it to exceed the allocated EPS and FPM limits, the extra events
and flows are moved to a temporary queue to be processed when the incoming data rate slows. When burst
handling is triggered, a system notification alerts you that the appliance exceeded the EPS or FPM license limit.
The backlog in the temporary queue is processed in the order that the events or flows were received. The older
data at the start of the queue is processed before the most recent data at the end of the queue. The rate at which
the queue empties or fills is impacted by several factors, including the volume and duration of the data spike, the
capacity of the appliance, and the payload size.
Hardware appliances normally can handle burst rates at least 50% greater than the appliance's stated EPS and
FPM capability, and can store up to 5GB in the temporary queue. The actual burst rate capability depends upon
the system load. VM appliances can achieve similar results if the VM is adequately sized and meets the
performance requirements.
The burst recovery rate is the difference between the allocated rate and the incoming rate. When the volume of
incoming data slows, the system processes the backlog of events or flows in the queue as fast as the recovery rate
allows. The smaller the recovery rate, the longer it takes to empty the queue.
For example, every morning between 8 AM and 9 AM, a company's network experiences a data spike as employees log in and begin to use network resources.
Internal events
IBM® QRadar® appliances generate a small number of internal events when they communicate with each other
as they process data.
To ensure that the internal events are not counted against the allocated capacity, the system automatically
returns all internal events to the license pool immediately after they are generated.
By adjusting the allocation of the shared license pool, you ensure that the event and flow capacity is distributed
according to the network workload, and that each QRadar® host has enough EPS and FPM to effectively manage
periods of peak traffic.
In deployments that have separate event collector and event processor appliances, the event collector inherits
the EPS rate from the event processor that it is attached to. To increase the capacity of the event collector,
allocate more EPS from the shared license pool to the parent event processor.
A license that includes both event and flow capacity might not contribute both the EPS and FPM to the shared
license pool. The license pool contributions are dependent on the type of appliance that the license is allocated
to. For example, when you apply a license to a 16xx Event Processor, only the EPS is added to the license pool.
The same license, when applied to a 17xx Flow Processor, contributes only the FPM to the license pool. Applying
the license to an 18xx Event/Flow Processor contributes both EPS and FPM to the pool. With the exception of software licenses for event or flow collectors, all software licenses contribute both the EPS and FPM to the shared license pool, regardless of which type of appliance the license is allocated to.
As of QRadar V7.3.2, you can acquire stackable EPS/Flow increments instead of replacing the existing Console or managed host licenses when you need to increase the overall event or flow thresholds of your deployment. After the licenses are uploaded and deployed, the event and flow capacity can be reallocated through License Pool Management.
The license pool becomes over-allocated when the combined EPS and FPM that is allocated to the managed hosts
exceeds the EPS and FPM that is in the shared license pool. When the license pool is overallocated, the License
Pool Management window shows a negative value for the EPS and FPM, and the allocation chart turns
red. QRadar blocks functionality on the Network Activity and Log Activity tabs, including the ability to view
events and flows from the Messages list on the main QRadar toolbar.
To enable the blocked functionality, reduce the EPS and FPM that you allocated to the managed hosts in your
deployment. If the existing licenses do not have enough event and flow capacity to handle the volume of network
data, upload a new license that includes enough EPS or FPM to resolve the deficit in the shared license pool.
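As a minimal sketch of the arithmetic behind over-allocation (illustrative only; QRadar performs this accounting internally, and the function name and numbers below are assumptions):

```python
# Hypothetical sketch of shared license pool accounting (not QRadar code).
# A negative result means the pool is over-allocated, which is the condition
# that blocks the Network Activity and Log Activity tabs.

def pool_remaining(pool_eps: int, allocations_eps: list[int]) -> int:
    """EPS left in the shared pool after host allocations; negative = deficit."""
    return pool_eps - sum(allocations_eps)

# A 30,000 EPS pool split across three managed hosts:
remaining = pool_remaining(30_000, [15_000, 10_000, 8_000])  # -3000: over-allocated
```

To resolve a deficit like this, either reduce the per-host allocations by 3,000 EPS in total or upload a license that adds at least 3,000 EPS to the pool.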
Expired licenses
When a license expires, QRadar continues to process events and flows at the allocated rate.
If the EPS and FPM capacity of the expired license was allocated to a host, the shared resources in the license pool
might go into a deficit, and cause QRadar to block functionality on the Network Activity and Log Activity tabs.
License management
License keys entitle you to specific IBM® QRadar® products, and control the event and flow capacity for
your QRadar deployment. You can add licenses to your deployment to activate other QRadar products, such
as QRadar Vulnerability Manager.
When you install QRadar, the default license key is temporary and gives you access to the system for 35 days
from the installation date. The email that you received from IBM when you purchased QRadar contains your
permanent license keys. These license keys extend the capabilities of your appliance, and you must apply them
before the default license expires.
License keys determine your entitlement to IBM® QRadar® products and features, and the system
capacity for handling events and flows.
About this task
You must upload a license key when you are doing these tasks:
Deploying changes
Changes that are made to the IBM® QRadar® deployment must be pushed from the staging area to the
production area.
Procedure
1. On the navigation menu ( ), click Admin.
2. Check the deployment banner to determine whether changes must be deployed.
3. Click View Details to view information about the undeployed configuration changes.
4. Choose the deployment method:
a. To deploy changes and restart only the affected services, click Deploy Changes on the
deployment banner.
b. To rebuild the configuration files and restart all services on each managed host,
click Advanced > Deploy Full Configuration.
Important: QRadar continues to collect events when you deploy the full configuration.
When the event collection service must restart, QRadar does not restart it automatically. A message is displayed that gives you the option to cancel the deployment and restart the service at a more convenient time.
After you apply the license keys to QRadar, redistribute the EPS and FPM rates to ensure that each of the
managed hosts is allocated enough capacity to handle the average volume of network traffic, and still have
enough EPS and FPM available to efficiently handle a data spike. You do not need to deploy the changes after you
redistribute the EPS and FPM capacity.
License expiry
The processing capacity of the system is measured by the volume of events and flows that QRadar can process in
real time. The capacity can be limited by either the appliance hardware or the license keys. The temporary license
key allows for 5,000 events per second (EPS) on the QRadar Console, and 10,000 EPS on each managed host. The
FPM rate for the temporary license is 200,000 on both the QRadar Console and the managed hosts.
When a license expires, QRadar continues to process events and flows up to the licensed capacity limits. If the
EPS and FPM capacity of the expired license was allocated to a host, the shared license pool might go into a
deficit, and cause QRadar to block capabilities on the Network Activity and Log Activity tabs.
When QRadar is not licensed to handle the volume of incoming network data, you can add a license that has
more event or flow capacity.
Each host in your QRadar deployment must have enough event and flow capacity to ensure that QRadar can
handle incoming data spikes. Most incoming data spikes are temporary, but if you continually receive system
notifications that indicate that the system exceeded the license capacity, you can replace an existing license with
a license that has more EPS or FPM capacity.
Capacity sizing
The best way to deal with spikes in data is to ensure that your deployment has enough events per second
(EPS) and flows per minute (FPM) to balance peak periods of incoming data. The goal is to allocate EPS
and FPM so that the host has enough capacity to process data spikes efficiently, but does not have large
amounts of idle EPS and FPM.
When the EPS or FPM that is allocated from the license pool is very close to the average EPS or FPM for
the appliance, the system is likely to accumulate data in a temporary queue to be processed later. The
more data that accumulates in the temporary queue, also known as the burst-handling queue, the longer it
takes QRadar® to process the backlog. For example, a QRadar host with an allocated rate of 10,000 EPS
takes longer to empty the burst handling queue when the average EPS rate for the host is 9,500, compared
to a system where the average EPS rate is 7,000.
Offenses are not generated until the data is processed by the appliance, so it is important to minimize how
frequently QRadar adds data to the burst handling queue. By ensuring that each managed host has enough
capacity to process short bursts of data, you minimize the time that it takes for QRadar to process the
queue, ensuring that offenses are created when an event occurs.
When the system continuously exceeds the allocated processing capacity, you cannot resolve the problem
by increasing the queue size. The excess data is added to the end of the burst handling queue where it
must wait to be processed. The larger the queue, the longer it takes for the queued events to be processed
by the appliance.
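The point that a larger queue cannot fix a sustained overload can be sketched as follows (a hypothetical simulation, not QRadar code; the function name and rates are assumptions):

```python
# Hypothetical illustration (not QRadar code): when the incoming EPS continuously
# exceeds the allocated EPS, the burst queue grows without bound, so enlarging
# the queue only lengthens the wait before queued events are processed.

def queue_depth_after(seconds: int, incoming_eps: int, allocated_eps: int) -> int:
    """Events waiting in the burst queue after a sustained overload."""
    overflow = max(0, incoming_eps - allocated_eps)  # events queued each second
    return overflow * seconds

# One hour at 11,000 EPS against a 10,000 EPS allocation:
backlog = queue_depth_after(3_600, 11_000, 10_000)  # 3,600,000 events queued
```

No queue size absorbs that growth indefinitely; only more allocated EPS, or less incoming data, stops the backlog from increasing.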
Allocating a license key to a host
Allocate a license key to an IBM QRadar host when you want to replace an existing license, add
new QRadar products, or increase the event or flow capacity in the shared license pool.
Use the License Pool Management window to ensure that the events per second (EPS) and flows per minute (FPM) that you are entitled to are fully used. Also, ensure that IBM QRadar is configured to handle periodic bursts of data without dropping events or flows, or having excessive unused EPS and FPM.
Deleting licenses
Delete a license if you mistakenly allocated it to the wrong QRadar host. Also, delete an expired license to
stop IBM QRadar from generating daily system notifications about the expired license.
For auditing, export information about the license keys that are installed on your system to an external .xml file.
Incremental licensing
Incremental licensing streamlines the license distribution process and saves you time and effort because you
don't need separate licenses for each appliance. Purchase monthly capacity increases that can be applied to your
deployment, without running the risk that these temporary keys might shut down the entire system when they
expire. Now, you can add more flows and events to the Console license and redistribute to your pool of
appliances as you see fit. Use your operational budget to add capacity to perpetual licenses on a temporary basis
for short-term projects such as network onboarding, reorganizing, and testing use cases.
With incremental licensing, you can acquire stackable EPS/Flow increments instead of replacing the existing Console or managed host licenses when you need to increase the overall event or flow thresholds of your deployment. After the licenses are uploaded and deployed, the event and flow capacity can be reallocated through License Pool Management. For example, suppose that you have a 3105 All-in-One with 1000 EPS on a perpetual key. You're working on a 6-month project where several new log sources need to be on-boarded for a temporary period. Previously, this project would cause the EPS volumes to exceed the 1000 EPS limit. With the new incremental licensing feature, you can purchase an extra 2500 EPS bundle for just 6 months. IBM provides a license key that incrementally raises the EPS from 1000 to 3500 for the 6-month period. At the end of the 6-month period, the additional 2500 EPS expires, but the original 1000 EPS remains operational, without any additional intervention from support or product.
You can review the status of the primary and secondary host in your high-availability (HA) cluster.
The following table describes the status of each host that is displayed in the System and License Management window:

Active
Specifies that the host is the active system and that all services are running normally. The primary or secondary HA host can display the active status.
Note: If the secondary HA host displays the active status, the primary HA host failed.

Standby
Specifies that the host is acting as the standby system. In the standby state, no services are running, but data is synchronized if disk replication is enabled. If the primary or secondary HA host fails, the standby system automatically becomes the active system.

Failed
Specifies that the primary or secondary host failed.
If the primary HA host displays Failed, the secondary HA host assumes the responsibilities of the primary HA host and displays the Active status.
If the secondary HA host displays Failed, the primary HA host remains active, but is not protected by HA.
A system in a failed state must be manually repaired or replaced, and then restored. If the network fails, you might need access to the physical appliance.

Synchronizing
Specifies that data is synchronizing between hosts.
Note: This status is displayed only when disk replication is enabled.

Offline
Services that process events, flows, offenses, and heartbeat ping tests are stopped for the offline HA host. Failover cannot occur until the administrator sets the HA host online.

Restoring
Specifies that the host is restoring. For more information, see Verifying the status of primary and secondary hosts.

Needs License
Specifies that a license key is required for the HA cluster. In this state, no processes are running.

Setting Offline
Specifies that an administrator is changing the status of an HA host to offline.

Setting Online
Specifies that an administrator is changing the status of an HA host to online.

Upgrading
Specifies that the host is upgrading. If the secondary HA host displays the Upgrading status, the primary HA host remains active, but is not protected by HA. Heartbeat monitoring and disk replication, if enabled, continue to function.
After DSMs or protocols are installed and deployed on a Console, the Console replicates the DSM and protocol updates to its managed hosts. When primary and secondary HA hosts are synchronized, the DSM and protocol updates are installed on the secondary HA host.
Procedure
1. On the navigation menu ( ), click Admin.
2. On the navigation menu, click System Configuration.
3. Click the System and License Management icon.
4. Identify the QRadar® primary console.
5. Hover your mouse over the host name field.
Creating an HA cluster
Pairing a primary host, secondary high-availability (HA) host, and a virtual IP address creates an HA cluster.
If the primary HA host fails and the secondary HA host becomes active, the Cluster Virtual IP address is
assigned to the secondary HA host.
In an HA deployment, the interfaces on both the primary and secondary HA hosts can become saturated. If
performance is impacted, you can use a second pair of interfaces on the primary and secondary HA hosts to
manage HA and data replication. Use a crossover cable to connect the interfaces.
Important: You can enable a crossover connection during or after the creation of an HA cluster without causing any event collection downtime.
Procedure
1. On the navigation menu ( ), click Admin.
2. Click System and License Management.
3. Select the host for which you want to configure HA.
4. From the Actions menu, select Add HA Host and click OK.
5. Read the introductory text. Click Next.
6. Type values for the parameters:
Primary Host IP address
A new primary HA host IP address. The new IP address replaces the previous IP address. The current IP address of the primary HA host becomes the Cluster Virtual IP address.
The new primary HA host IP address must be on the same subnet as the virtual host IP address.
Important: For a NAT deployment, use the private IP address of the primary HA host.

Secondary HA host IP address
The IP address of the secondary HA host. The secondary HA host must be on the same subnet as the primary HA host.
Important: For a NAT deployment, use the private IP address of the secondary HA host.
7. To configure advanced parameters, click the arrow beside Show Advanced Options and type
values for the parameters.
Heartbeat Interval (seconds)
The time, in seconds, that you want to elapse between heartbeat pings. The default is 10 seconds.
For more information about heartbeat pings, see Heartbeat ping tests.

Heartbeat Timeout (seconds)
The time, in seconds, that you want to elapse before the primary HA host is considered unavailable if no heartbeat is detected. The default is 30 seconds.

Network Connectivity Test List peer IP addresses (comma delimited)
The IP addresses of the hosts that you want the secondary HA host to ping. The default is to ping all other managed hosts in the QRadar deployment.
For more information about network connectivity testing, see Network connectivity tests.
Note: Do not exceed your system's capacity. The limit for Distributed Replicated Block Devices is 4096 MB/s.

Disable Disk Replication
This option is displayed only when you are configuring an HA cluster by using a managed host.

Configure Crossover Cable
Crossover cables allow QRadar to isolate the replication traffic from all other QRadar traffic, such as events, flows, and queries.
You can use crossover cables for connections between 10 Gbps ports, but not the management interface.

Crossover Interface
Select the interfaces that you want to connect to the primary HA host.
Important: All interfaces with an established link, or an undetermined link, appear in the list. Select interfaces with established links only.

Crossover Advanced Options
Select Show Crossover Advanced Options to enter, edit, or view the property values.
Disconnecting an HA cluster
If you disconnect an HA cluster, the data on your primary HA console or managed host is no longer protected against network or hardware failure.
Procedure
1. On the navigation menu ( ), click Admin.
2. On the navigation menu, click System Configuration.
3. Click the System and License Management icon.
4. Select the HA host that you want to remove.
5. From the toolbar, select High Availability > Remove HA Host.
6. Click OK.
Note: When you remove an HA host from a cluster, the host restarts.
Updating the /etc/fstab file
Before you disconnect a Fibre Channel HA cluster, you must modify
the /store and /storetmp mount information in the /etc/fstab file.
About this task
You must update the /etc/fstab file on the primary HA host and the secondary HA host.
Procedure
1. Use SSH to log in to your QRadar® host as the root user:
2. Modify the /etc/fstab file.
a. Locate the existing mount information for the /store and /storetmp file systems.
b. Remove the noauto option for the /store and /storetmp file systems.
3. Save and close the file.
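As a hedged illustration of the edit in step 2, the relevant /etc/fstab entries might look like the following before and after the change. The device names and mount options shown here are hypothetical; only the mount points and the removal of the noauto option come from the procedure above.

Before (with the noauto option):

```
/dev/disk/by-id/fc-example-part1  /store     xfs  noauto,inode64,logbsize=256k  0 0
/dev/disk/by-id/fc-example-part2  /storetmp  xfs  noauto,inode64,logbsize=256k  0 0
```

After (noauto removed, so the file systems mount automatically at boot):

```
/dev/disk/by-id/fc-example-part1  /store     xfs  inode64,logbsize=256k  0 0
/dev/disk/by-id/fc-example-part2  /storetmp  xfs  inode64,logbsize=256k  0 0
```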
Editing an HA cluster
You can edit the advanced options for your HA cluster.
Procedure
1. On the navigation menu ( ), click Admin.
2. On the navigation menu, click System Configuration.
3. Click the System and License Management icon.
4. Select the row for the HA cluster that you want to edit.
5. From the toolbar, select High Availability > Edit HA Host.
6. Edit the parameters in the table in the advanced options section.
7. Click Next.
8. Review the information.
9. Click Finish.
Procedure
1. On the navigation menu ( ), click Admin.
2. On the navigation menu, click System Configuration.
3. Click the System and License Management icon.
4. Select the offline HA host that you want to set to Online.
5. From the toolbar, select High Availability > Set System Online.
What to do next
On the System and License Management window, verify the status of the HA host. Choose from one of the
following options:
Procedure
1. On the navigation menu ( ), click Admin.
2. On the navigation menu, click System Configuration.
3. Click the System and License Management icon.
4. In the System and License Management window, select the secondary HA host.
5. From the toolbar, select High Availability > Set System Offline.
Your primary HA host is automatically switched to be the Active HA host.
Note: Your IBM® QRadar® SIEM user interface might be inaccessible during this time. To return the secondary HA host to standby and the primary HA host to active, run either of the following commands:
/opt/qradar/ha/bin/ha giveback
The host that was active is now the standby host and the standby host is now the active
host.
/opt/qradar/ha/bin/ha takeover
The host that was on standby is now the active host and the host that was the active host is
now the standby host.
6. In the System and License Management window, select the secondary HA host.
7. From the toolbar, select High Availability > Set System Online.
Your secondary HA host is now the standby system.
What to do next
When you can access the System and License Management window, check the status column. Ensure
that the primary HA host is the active system and the secondary HA host is the standby system.
Determine the minimum QRadar version that is required for the version of QRadar to which you want to
update.
Software versions for all IBM QRadar appliances in a deployment must be the same version and
fix level. Deployments that use different QRadar versions of software are not supported.
Upgrade your QRadar Console first, and then upgrade each managed host. In high-availability (HA)
deployments, when you upgrade the HA primary host, the HA secondary host is automatically upgraded.
The following QRadar systems can be upgraded concurrently:
Event processors
Event collectors
Flow processors
QFlow collectors
Data nodes
App hosts
With QRadar 7.5.0 Update Package 2, you can enable Secure Boot. If Secure Boot is to be enabled on the system, the public key must be imported after the patch completes.
Procedure
1. Download the .sfs file from Fix Central (www.ibm.com/support/fixcentral).
o If you are upgrading QRadar SIEM, download the <QRadar>.sfs file.
o If your deployment includes an IBM QRadar Incident Forensics (6000) appliance,
download the <identifier>_Forensics_patchupdate-<build_number>.sfs file. The .sfs file
upgrades the entire QRadar deployment, including QRadar Incident Forensics and QRadar
Network Insights.
2. Use SSH to log in to your system as the root user.
3. Copy the SFS file to the /storetmp or /var/log directory or to another location that has sufficient
disk space.
Important: If the SFS file is in the /storetmp directory and you do not upgrade, when the
overnight diskmaintd.pl utility runs, the SFS file is deleted.
To verify you have enough space (5 GB) in the QRadar Console, type the following command:
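The documented command is not reproduced in the source above. As an assumption for illustration only (not the documented command), one common way to check free space on the target directory is:

```shell
# Hypothetical free-space check; /storetmp is the path the document copies the SFS file to.
target=/storetmp
[ -d "$target" ] || target=/    # fall back to / on systems without /storetmp
df -h "$target"
```

The Avail column in the output must show at least 5 GB of free space before you copy the SFS file.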
What to do next
1. Unmount /media/updates by typing the following command:
umount /media/updates
3. Perform an automatic update to ensure that your configuration files contain the latest network
security information. For more information, see Checking for new updates.
5. Clear your web browser cache. After you upgrade QRadar, the Vulnerabilities tab might not be
displayed. To use QRadar Vulnerability Manager after you upgrade, you must upload and allocate
a valid license key. For more information, see the Administration Guide for your product.
6. Determine whether there are changes that must be deployed. For more information, see Deploy
Changes.
Migrating event collectors from GlusterFS to Distributed Replicated Block Device
If the QRadar upgrade detects stand-alone or clustered event collectors with GlusterFS in your
deployment, the upgrade fails. You must run a migration script separately on QRadar 7.3.2 Fix
Pack 3 or later before you upgrade to QRadar 7.4.2 or later. If your event collectors are deployed
on QRadar 7.1 or earlier and then upgraded to a later version, you must upgrade the file systems
table (fstab) before you migrate GlusterFS to Distributed Replicated Block Device.
Insider threats
Uses in-depth packet inspection to identify advanced threats and malicious content.
Extends the capabilities of QRadar to detect phishing attacks, malware intrusions, lateral
movement, and data exfiltration.
Records application activities, captures key artifacts, and identifies assets, applications, and users
that participate in network communications.
Data nodes
A data node is an appliance that you can add to your event and flow processors to increase storage
capacity and improve search performance. You can add an unlimited number of data nodes to your IBM®
QRadar® deployment, and they can be added at any time. Each data node can be connected to only one
processor, but a processor can support multiple data nodes.
/var No No
/var/log No No
/var/log/audit No No
/tmp No No
/home No No
IMPORTANT: For the partitions listed in the table as critical for system functionality, system services are stopped to prevent the partition from becoming full and causing further issues. A maximum threshold notification is sent to the UI and can also be seen in /var/log/qradar.log:
[hostcontext.hostcontext] com.q1labs.hostcontext.ds.DiskSpaceSentinel: [ERROR] [-/- -]Disk usage on at least one disk has exceeded the maximum threshold level of 0.95. The following disks have exceeded the maximum threshold level: /transient. Processes are being shut down to prevent data corruption. To minimize the disruption in service, reduce disk usage on this system.
For the other partitions, which are denoted as noncritical, the disk sentry check gives a warning when the threshold is met, but system processes are not stopped and no outage occurs. When the system recovers back under the threshold, a notification is sent to the UI, and the following message is seen in /var/log/qradar.log:
[hostcontext.hostcontext] com.q1labs.hostcontext.ds.DiskSpaceSentinel: [INFO] [-/- -]System disk resources back to normal levels
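A simplified sketch of the kind of check the disk sentry performs is shown below. This is an assumption for illustration; the function names and monitored partition list are not from QRadar source, and only the 0.95 threshold is taken from the log message quoted earlier.

```python
# Simplified, hypothetical sketch of a disk-usage threshold check (not QRadar code).
import shutil

MAX_THRESHOLD = 0.95  # the maximum threshold level quoted in the qradar.log message

def usage_ratio(path: str) -> float:
    """Fraction of the filesystem that contains `path` currently in use."""
    total, used, _free = shutil.disk_usage(path)
    return used / total

def disks_over_threshold(paths: list[str], threshold: float = MAX_THRESHOLD) -> list[str]:
    """Return the monitored partitions that exceed the threshold."""
    return [p for p in paths if usage_ratio(p) > threshold]

# Example: check the root filesystem the way the sentry checks /transient.
alerts = disks_over_threshold(["/"])
```

A non-empty result corresponds to the ERROR notification above; an empty result after a previous alert corresponds to the "back to normal levels" INFO message.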
In the preceding example notification, the affected host has the IP 10.11.12.13 and the affected partition is "/".
3. SSH to the Console, then to the affected managed host if not the Console.
4. Use the df -Th command to get the output of the partitions:
df -Th
Example output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootrhel-root 13G 2.9G 9.7G 23% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 20K 16G 1% /dev/shm
tmpfs 16G 1.7G 15G 11% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/rootrhel-var 5.0G 208M 4.8G 5% /var
/dev/sda3 32G 4.1G 28G 13% /recovery
/dev/mapper/rootrhel-home 1014M 33M 982M 4% /home
/dev/sda2 1014M 224M 791M 23% /boot
/dev/mapper/rootrhel-tmp 3.0G 53M 3.0G 2% /tmp
/dev/mapper/rootrhel-opt 13G 5.1G 7.5G 41% /opt
/dev/mapper/rootrhel-storetmp 15G 34M 15G 1% /storetmp
/dev/mapper/rootrhel-varlog 15G 3.6G 12G 24% /var/log
/dev/mapper/storerhel-transient 40G 40G 236M 100% /transient
/dev/mapper/rootrhel-varlogaudit 3.0G 205M 2.8G 7% /var/log/audit
tmpfs 3.2G 0 3.2G 0% /run/user/0
/dev/drbd0 158G 78G 80G 50% /store
Notice that /dev/mapper/storerhel-transient has 100% in the Use% column. This
means /transient is the partition causing the alert.
Use the du and find commands to list the largest directories and files.
1. The following du command returns a recursive directory listing for the /partition/directory, sorted from smallest to largest.
du -chaxd1 /<partition> | sort -h | tail
Output Example for "/":
61M /usr/sbin
122M /usr/local
444M /usr/lib64
589M /usr/bin
941M /usr/lib
958M /usr/share
3.2G /usr
7.9G /root
12G /
12G total
NOTE: Sometimes the SSH session can time out before the du command completes. In this case, it is best to run the du command inside a screen session, which does not terminate upon SSH timeout and remains accessible until the session is terminated. From the command prompt, run screen to start the session.
You can also use the "exclude" option with du to identify other large directories
when analyzing a partition with a known large directory (such as ariel on
the /store partition).
du -chaxd1 --exclude=ariel /store
Once finished, the output of the du command is presented, omitting the excluded
directory.
2. The following find command returns the largest files found in a partition, sorted from smallest to largest.
find /<partition> -xdev -type f -size +100M | xargs ls -lhSr
Output Example for "/":
-rw------- 1 root root 596M Jun 13 13:29 /core.26490
-rw-r--r-- 1 root root 7.9G Jan 10 16:02 /root/scripts/test_file.zip
In the previous output, test_file.zip is a forgotten file, and core.26490 is a system core file that resulted from the abnormal exit of a process.
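When the find/xargs pipeline is awkward (for example, with file names that contain spaces), a small script can do the same scan. This is a hypothetical alternative for illustration, not QRadar tooling, and unlike find -xdev it does not restrict itself to a single filesystem:

```python
# Hypothetical alternative to `find /<partition> -xdev -type f -size +100M | xargs ls -lhSr`:
# list files at or above a size threshold under a directory, smallest first.
import os

def large_files(root: str, min_bytes: int) -> list[tuple[int, str]]:
    """Return (size, path) pairs for files >= min_bytes, sorted ascending by size."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip files that vanish or cannot be stat'ed mid-scan
            if size >= min_bytes:
                hits.append((size, path))
    return sorted(hits)
```

For example, large_files("/", 100 * 1024**2) approximates the find command from the procedure above.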
2. Search in the Disk Space 101 portal for specific information about each partition.
Alternatively, use the direct links in the following table.
QRadar Partition Articles
/var/log/audit
QRadar: About /var/log/audit partition
QRadar: /var/log and /var/log/audit fills to capacity due to logrotate issue
Result
The files and directories contributing to the lack of disk space are evident. Administrators can
proceed with the troubleshooting steps to remove the files or directories.
If the cause is still not evident, administrators can contact QRadar Support for assistance.
Changing the network settings of a QRadar Console in
a multi-system deployment
To change the network settings in a multi-system IBM® QRadar® deployment, remove all managed
hosts, change the network settings, add the managed hosts again, and then reassign the component.
Before you begin
You must have a local connection to your QRadar Console.
If you are adding a network adapter to either a physical appliance or a virtual machine, you must shut down the appliance before you add the network adapter. Power on the appliance before you follow this procedure.
Important: You cannot change the IP address of any host to the IP address of a previously deleted Managed Host.
Procedure
1. To remove managed hosts, log in to QRadar:
https://IP_Address_QRadar
The Username is admin.
a. On the navigation menu ( ), click Admin.
b. Click the System and License Management icon.
c. Select the managed host that you want to remove.
d. Select Deployment Actions > Remove Host.
e. In the Admin settings, click Deploy Changes.
2. Type the following command: qchange_netsetup.
Important:
a. If you attempt to run qchange_netsetup over a serial connection, the connection can be
misidentified as a network connection. To run over a serial connection, use qchange_netsetup -y.
This command allows you to bypass the validation check that detects a network
connection.
b. Verify that no external storage other than /store or /store/ariel is mounted.
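One hedged way to confirm this on a Linux host is to filter the mount table for anything under /store, for example:

```shell
# Sketch: list every mount point under /store by reading the kernel's
# mount table. Before running qchange_netsetup, this should show at
# most /store and /store/ariel; unmount anything else first.
# /proc/mounts fields: device, mount point, fs type, options, ...
awk '$2 ~ /^\/store/ {print $2}' /proc/mounts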
The following table contains descriptions and notes to help you configure the network settings.
After you configure the installation parameters, a series of messages is displayed. The
installation process might take several minutes.
Managed hosts
For greater flexibility over data collection and event and flow processing, build a distributed IBM®
QRadar® deployment by adding non-console managed hosts, such as collectors, processors, and data
nodes.
Software compatibility requirements
Software versions for all QRadar appliances in your deployment must be at the same version and update
package level. Deployments that use different versions of software are not supported because mixed
software environments can prevent rules from firing, prevent offenses from being created or updated, or
cause errors in search results.
When a managed host uses a software version that is different than the QRadar Console, you might be
able to view components that were already assigned to the host, but you cannot configure the component
or add or assign new components.
Internet Protocol (IP) requirements
The following table describes the various combinations of IP protocols that are supported when you add
non-console managed hosts.
Restriction: *By default, you cannot add an IPv4-only managed host to a dual-stack single console. You must run
a script to enable an IPv4-only managed host.
A dual-stack console supports both IPv4 and IPv6. The following list outlines the conditions you must
follow in dual-stack environments:
You can add IPv6-only managed hosts to a dual-stack single console, or to an IPv6-only console.
You can add IPv4-only managed hosts to a dual-stack single console.
Do not add a managed host to a dual-stack console that is configured for HA.
Do not add an IPv4 managed host that is not in an HA pair to an IPv6-only console, or to a dual-
stack console that is in an HA pair.
An Event Collector that is configured to store and forward data to an Event Processor forwards the data
according to the schedule that you set. Ensure that you have sufficient bandwidth to cover the amount of
data that is collected, otherwise the forwarding appliance cannot maintain the scheduled pace.
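To gauge whether a link can sustain a forwarding schedule, a back-of-the-envelope calculation like the following sketch helps. The EPS figure and the average payload size below are illustrative assumptions, not QRadar defaults; substitute your own measured values.

```shell
# Rough forwarding-bandwidth estimate. Both inputs are example
# assumptions; replace them with your own measured rates.
EPS=2000               # events per second to forward
AVG_EVENT_BYTES=500    # average event payload in bytes
BITS_PER_SECOND=$(( EPS * AVG_EVENT_BYTES * 8 ))
echo "Sustained forwarding rate: $(( BITS_PER_SECOND / 1000000 )) Mbps"
```

Under these assumptions the schedule needs a sustained 8 Mbps; compare that figure against the bandwidth that is actually available between the data centers.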
Use the following methods to mitigate bandwidth limitations between data centers:
You can deploy a store and forward event collector, such as a QRadar 15XX physical or virtual appliance,
in the remote locations to control bursts of data across the network. Bandwidth is used in the remote
locations, and searches for data occur at the primary data center, rather than at a remote location.
When encryption is enabled, a secure tunnel is created on the client that initiates the connection, by using
an SSH protocol connection. When encryption is enabled on a managed host, an SSH tunnel is created for
all client applications on the managed host. When encryption is enabled on a non-Console managed host,
encryption tunnels are automatically created for databases and other support service connections to the
Console. Encryption ensures that all data between managed hosts is encrypted.
For example, with encryption enabled on an Event Processor, the connection between the Event
Processor and Event Collector is encrypted, and the connection between the Event
Processor and Magistrate is encrypted.
The SSH tunnel between two managed hosts can be initiated from the remote host instead of the local
host. For example, if you have a connection from an Event Processor in a secure environment to an Event
Collector that is outside of the secure environment, and you have a firewall rule that would prevent you
from having a host outside the secure environment connect to a host in the secure environment, you can
switch which host creates the tunnel so that the connection is established from the Event Processor by
selecting the Remote Tunnel Initiation checkbox for the Event Collector.
You cannot reverse the tunnels from your Console to managed hosts.
If you want to enable Network Address Translation (NAT) for a managed host, the network must use
static NAT translation.
Warning: Your firewall might block the managed host from being added because of multiple attempts to log in to
the QRadar Console by using SSH. To resolve this problem, tune your firewall to prevent the managed host from
being blocked.
Source connection: QRadar Flow Collector
Target connection: Event Collector
Description: You can connect an IBM QRadar Flow Collector only to an Event Collector. The
number of connections is not restricted. You can't connect a QRadar Flow Collector to the Event
Collector on a 15xx appliance.

Source connection: Event Collector
Target connection: Event Processor
Description: You can connect an Event Collector to only one Event Processor. You can connect a
non-console Event Collector to an Event Processor on
If you configured IBM QRadar Incident Forensics in your deployment, you can add a QRadar Incident
Forensics managed host.
If you configured IBM QRadar Vulnerability Manager in your deployment, you can add vulnerability
scanners and a vulnerability processor.
If you configured IBM QRadar Risk Manager in your deployment, you can add a managed host.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click System and License Management.
3. In the Display list, select Systems.
4. On the Deployment Actions menu, click Add Host.
5. Configure the settings for the managed host by providing the fixed IP address, and the root
password to access the operating system shell on the appliance.
6. Click Add.
7. Optional: Use the Deployment actions > View Deployment menu to see visualizations of your
deployment. You can download a PNG image or a Microsoft Visio (2010) VDX file of your
deployment visualization.
8. On the Admin tab, click Advanced > Deploy Full Configuration.
Important: QRadar continues to collect events when you deploy the full configuration. When the
event collection service must restart, QRadar does not restart it automatically. A message displays
that gives you the option to cancel the deployment and restart the service at a more convenient
time.
Adding an IPv4-only managed host in a dual-stack
environment
To add an IPv4-only managed host to a dual-stack Console, you must run scripts to prepare both
the managed host and the Console before you can add the managed host to the Console.
About this task
A dual-stack Console is one that supports both IPv4 and IPv6. You cannot add an IPv4-only managed host to
a QRadar® High Availability (HA) deployment.
Managed hosts   QRadar Console   QRadar Console   QRadar Console         QRadar Console
                (IPv6, single)   (IPv6, HA)       (dual-stack, single)   (dual-stack, HA)
IPv4, single    No               No               Yes*                   No
IPv4, HA        No               No               No                     No
IPv6, single    Yes              Yes              Yes                    No
Procedure
1. To enable your QRadar Console for dual-stack deployment, type the following command:
/opt/qradar/bin/setup_v6v4_console.sh ip=<IPv4_address_of_the_Console> netmask=<netmask>
gateway=<gateway>
This example assumes that the IPv4 address of the Console is 192.0.2.2, the subnet mask is
255.255.255.0, and the gateway is 192.0.2.1.
/opt/qradar/bin/setup_v6v4_console.sh ip=192.0.2.2 netmask=255.255.255.0 gateway=192.0.2.1
2. To allow an IPv4-only managed host to be added to your deployment, type the following
command on the Console:
/opt/qradar/bin/add_v6v4_host.sh host=<IP_address_of_the_managed_host>
This example assumes that the IPv4 address of the managed host is 192.0.2.3.
/opt/qradar/bin/add_v6v4_host.sh host=192.0.2.3
3. Add the IPv4-only managed host to the deployment.
Important: QRadar continues to collect events when you deploy the full configuration. When the event
collection service must restart, QRadar does not restart it automatically. A message displays that gives
you the option to cancel the deployment and restart the service at a more convenient time.
Configuring your local firewall
Use the local firewall to manage access to the IBM QRadar managed host from specific devices
that are outside the network. When the firewall list is empty, access to the managed host is
disabled, except through the ports that are opened by default.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click System and License Management.
3. In the Display list, select Systems.
4. Select the host for which you want to configure firewall access settings.
5. From the Actions menu, click View and Manage System.
6. Click the Firewall tab and type the information for the device that needs to connect to the
host.
a. Configure access for devices that are outside of your deployment and need to
connect to this host.
b. Add this access rule.
7. Click Save.
If you change the External Flow Source Monitoring Port parameter in
the QFlow configuration, you must also update your firewall access configuration.
Adding an email server
IBM QRadar uses an email server to distribute alerts, reports, notifications, and event messages.
You can configure an email server for your entire QRadar deployment, or multiple email servers.
Important: QRadar only supports encryption for the email server using STARTTLS.
Important: If you configure the mail server setting for a host as localhost, then the mail messages don't
leave that host.
Procedure
1. On the Admin tab, click Email Server Management.
2. Click Add, and configure the parameters for your email server.
3. Click Save.
Tip: Keep the TLS option set to On to send encrypted email. Sending encrypted email
requires an external TLS certificate.
4. To edit the port for an email server, click the Other Settings ( ) icon for the server, enter
the port number in the Port field, and then click Save.
5. To delete an email server, click the Other Settings icon for the server, and then
click Delete.
6. After you configure an email server, you can assign it to one or more hosts.
a. On the System and License Management page, select a host.
b. Change the Display list to show Systems.
c. Click Actions > View and Manage System.
d. On the Email Server tab, select an email server and click Save.
e. Test the connection to the email server by clicking the Test Connection button.
f. Click Save.
Importing external TLS certificates
You must import an external TLS certificate on any host that sends encrypted email.
Procedure
1. Copy the TLS certificate to the /etc/pki/ca-trust/source/anchors/ directory on the host that
sends encrypted email.
For example, to import a certificate titled TLS_email.crt, type the following command:
openssl s_client -connect <emailServer>:<port> < /dev/null | openssl x509 >
/etc/pki/ca-trust/source/anchors/TLS_email.crt
2. To update the certificates in the certificate authority (CA), type the following commands:
update-ca-trust enable
update-ca-trust extract
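Before you update the CA trust store, it can help to confirm that the copied file actually parses as an X.509 certificate. The sketch below generates a throwaway self-signed certificate purely for illustration; on a QRadar host you would point openssl x509 at the file you copied into /etc/pki/ca-trust/source/anchors/ instead.

```shell
# Sketch: sanity-check a certificate file by printing its subject and
# expiry date. The certificate generated here is a throwaway example;
# replace $CERT with your anchor file, for example
# /etc/pki/ca-trust/source/anchors/TLS_email.crt.
CERT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -subj "/CN=mail.example.com" -days 1 -out "$CERT" 2>/dev/null
openssl x509 -in "$CERT" -noout -subject -enddate
rm -f "$CERT"
```

If openssl x509 reports an error instead of a subject line, the copied file is not a valid PEM certificate and update-ca-trust will not pick it up.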
App Host
An App Host is a managed host that is dedicated to running apps. It provides extra storage, memory, and
CPU resources for your apps without impacting the processing capacity of your Console. Apps such as
User Behavior Analytics with Machine Learning Analytics require more resources than are currently
available on the Console.
You cannot upgrade to V7.3.2 or later with an App Node in your deployment.
You can run all of your apps on an App Host, or on the Console. It's not possible to run some apps
on the App Host and some others on the Console.
Small: 4 CPU cores, 12 GB RAM, 256 GB disk space. Minimum requirements for an App Host. You
can run most apps with the minimum requirements, but not larger apps such as QRadar DNS Analyzer
and User Behavior Analytics with Machine Learning.
Medium: 12 or more CPU cores, 64 GB or more RAM, 500 GB or more disk space. *You can run all
apps that exist today, but this specification does not give you room for future apps.
Large: 24 or more CPU cores, 128 GB or more RAM, 1 TB or more disk space. *You can run all apps
that exist today, and you would have room for future apps.
Users can scale storage and processing power independently of data collection.
As of QRadar V7.2.7, a new data format with native data compression is used. Data is
compressed in memory and is written to disk in a proprietary binary compressed format. The
new data format enables better search performance and more efficient use of system resources
than the previous data format, which did not have native built-in compression.
The following diagram shows an example of some uses for Data Nodes in a deployment.
The following list describes the different elements that you need to consider when you deploy Data
Nodes.
Data clustering
Data Nodes add storage capacity to a deployment, and also improve performance by distributing
data that is collected across multiple storage volumes. When the data is searched, multiple hosts
act as a cluster to do the search. The cluster can improve search performance without requiring you
to add multiple event processors. Data Nodes multiply the storage for each processor.
Note: You can connect a Data Node to only one processor at a time, but a processor can support
multiple Data Nodes.
Deployment considerations
Data Nodes perform similar search and analytic functions as event and flow processors in
a QRadar deployment.
The operational speed on a cluster is affected by the slowest member of a cluster. Data
Node system performance improves if Data Nodes are sized similarly to the Event
Processors and Flow Processors in a deployment. To facilitate similar sizing between Data
Nodes and event and flow processors, Data Nodes are available on some core appliances.
The following table describes hardware information and requirements for the QRadar xx05 appliance:
Description Value
QRadar Event Processor 1605: 20,000 EPS
You can configure the QRadar xx29 as a QRadar Event Collector 1501 (sometimes referred to as a 1529).
The following table describes hardware information and requirements for the QRadar xx29 appliance:
Description Value
QRadar Event Collector 1501: 40,000 EPS
The QRadar xx29-C appliance can be used for the following appliance types:
View hardware information and requirements for the QRadar xx29-C in the following table:
Description Value
QRadar Event Processor 1629: 40,000 EPS
The QRadar xx48 appliance handles the higher levels of performance that are required by enterprise class
clients. For example, you can use the QRadar xx48 appliance for the following requirements:
You want faster processing to search and analyze a large amount of data.
You want to reduce the footprint of an IBM QRadar deployment, so you install QRadar
xx48 appliances to reduce rack space.
The QRadar xx48 supports the following appliances types:
You can configure the QRadar xx48 as a QRadar Event Collector 1501 (sometimes referred to as 1548).
The following table describes hardware information and requirements for the QRadar xx48 appliance:
Description Value
QRadar Event Collector 1501: 80,000 EPS
The QRadar xx48-C appliance handles the higher levels of performance that are required by enterprise
class clients. For example, companies can use the QRadar xx48-C for the following requirements:
A company wants faster processing to search and analyze a large amount of data.
A company wants to reduce the footprint of an IBM QRadar deployment, so they install QRadar
xx48-C appliances to reduce rack space.
The following appliances are examples of appliance types that you can use the QRadar xx48-C for:
You can configure the QRadar xx48-C as a QRadar Event Collector 1501 (sometimes referred to as
1548). The default EPS threshold (capability) on M5 xx48-C appliances is 56,000. This value can be
overridden to values not exceeding 80,000 by completing the following steps.
View hardware information and requirements for the QRadar xx48-C in the following table.
Description Value
QRadar Event Collector 1501: 56,000 EPS (default) 80,000 EPS with override
Ensure that you have a 1 Gbps link and less than 10 ms latency between hosts in the cluster.
Searches that yield many results require more bandwidth.
Appliance compatibility
Data Nodes are compatible with all existing QRadar appliances that have an Event Processor or Flow
Processor component, including All-In-One appliances. Data Nodes are not compatible
with QRadar Incident Forensics PCAP appliances.
Data Nodes use standard TCP/IP networking, and do not require proprietary or specialized
interconnect hardware.
Install each Data Node that you want to add to your deployment the same as you would install any
other QRadar appliance. Associate Data Nodes with either an event or flow processor.
You can attach multiple Data Nodes to a single Event Processor or Flow Processor in a many-to-
one configuration.
When you deploy high availability (HA) pairs with Data Node appliances, install, deploy, and
rebalance data with the HA appliances before you synchronize the HA pair. The combined effect
of the data rebalancing and the replication process that is utilized for HA results in significant
performance degradation. If HA is set up on appliances to which Data Nodes are being introduced,
then disconnect HA on the appliances and then reconnect it when the rebalance of the cluster is
complete.
Decommissioning Data Nodes
Use the System and License Management window to remove Data Nodes from your deployment,
as with any other QRadar appliance. Decommissioning does not erase data on the host, nor does it
move the data to your other appliances. If you need to retain access to the data that was on
the Data Nodes, you must identify a location to move that data to.
Data Rebalancing
Adding a Data Node to a cluster distributes data to each Data Node. If it is possible, data
rebalancing tries to maintain the same percentage of available space on each Data Node.
New Data Nodes added to a cluster initiate more rebalancing from cluster event and flow
processors to achieve efficient disk usage on the newly added Data Node appliances.
Data rebalancing is automatic and concurrent with other cluster activity, such as queries and data
collection. No downtime is experienced during data rebalancing.
Data Nodes offer no performance improvement in the cluster until data rebalancing is complete.
Rebalancing can cause minor performance degradation during search operations, but data
collection and processing continue unaffected.
Important: When you add Data Nodes to a cluster, they must either all be encrypted, or all be
unencrypted. You cannot add both encrypted and unencrypted Data Nodes to the same cluster.
Note: Encrypted data transmission between Data Nodes and Event Processors is not supported.
Management and Operations
Data Nodes are self-managed and require no regular user intervention to maintain normal
operation. QRadar manages activities, such as data backups, high availability, and retention
policies, for all hosts, including Data Node appliances.
Data Node failure
If a Data Node fails, the remaining members of the cluster continue to process data.
When the failed Data Node returns to service, data rebalancing can occur to maintain proper data
distribution in the cluster, and then normal processing resumes. During the downtime, data on the
failed Data Node is unavailable, and I/O errors that occur appear in search results from the log and
network activity viewers in the QRadar user interface.
For catastrophic failures that require appliance replacement or the reinstallation of QRadar,
decommission Data Nodes from the deployment and replace them using standard installation
steps. Copy any data that is not lost in the failure to the new Data Node before you deploy. The
rebalancing algorithm accounts for data that exists on a Data Node, and shuffles only data that was
collected during the failure.
For Data Nodes deployed with an HA pair, a hardware failure causes a failover, and operations
continue to function normally.
SAN Overview
To increase the amount of storage space on your appliance, you can move a portion of your data to an
offboard storage device. You can move your /store, /store/ariel, or /store/backup file systems.
Moving the /store file system to an external device might affect QRadar performance.
After migration, all data I/O to the /store file system is no longer done on the local disk. Before you move
your QRadar data to an external storage device, you must consider the following information:
Searches that are marked as saved are also in the /transient directory. If you experience a local
disk failure, these searches are not saved.
A transient partition that exists before you move your data is likely to remain in existence after the
move, and it can be mounted on an iSCSI, or Fiber Channel storage mount.
Data encryption
Event data in QRadar® is not encrypted when it is stored, but there are a number of things that you can
do to protect data that is stored on the system.
The /store or /store/ariel partitions can be placed on an external device, which uses transparent (to
QRadar) cryptography.
Event data can be protected from tampering by the use of event hashing.
To enable data encryption during QRadar software installations, see Installing RHEL on your system .
Important: To set up encryption on the storage, see the documentation for your storage solution.
QRadar Console
Provides the QRadar product user interface. In distributed deployments, use the QRadar Console to
manage multiple QRadar Incident Forensics Processor hosts.
QRadar Incident Forensics Processor
Provides the QRadar Incident Forensics product interface. The interface delivers tools to retrace
the step-by-step actions of cyber criminals, reconstruct raw network data that is related to a
security incident, search across available unstructured data, and visually reconstruct sessions and
events. You must add QRadar Incident Forensics Processor as a managed host before you can use
the security intelligence forensics capability.
QRadar Incident Forensics Standalone
Provides the QRadar Incident Forensics product user interface. Installing QRadar Incident Forensics
Standalone provides the tools that you need to do forensics investigations. Only forensics investigative
and the related administrative functions are available.
QRadar Network Packet Capture
You can install an optional QRadar Network Packet Capture appliance. If no other packet capture
device is deployed, you can use this appliance to store data that is used by QRadar Incident
Forensics. You can install any number of these appliances on a network tap or subnetwork to
collect the raw packet data. If no packet capture device is attached, you can manually upload the
packet capture files in the user interface or by using FTP. Depending on your network and packet
capture requirements, you can connect up to five packet capture devices to a QRadar Incident
Forensics appliance.
All-in-One deployment
In standalone or all-in-one deployments, you install the IBM QRadar Incident Forensics
Standalone software. These single appliance deployments are similar to installing the QRadar
Console and QRadar Incident Forensics managed host on one appliance, but without log management,
network activity monitoring, or other security intelligence features. For a stand-alone network forensics
solution, install the QRadar Incident Forensics Standalone in small to midsize deployments.
The following diagram shows a basic QRadar Incident Forensics All-in-One deployment.
Figure 1. All-in-one deployment
Distributed deployment
The following diagram shows a QRadar Incident Forensics distributed deployment.
Software versions for all IBM QRadar appliances in a deployment must be the same version and fix level.
Deployments that use different versions of software are not supported.
Figure 2. Distributed deployment
The following diagram shows packet forwarding from a IBM QRadar Flow Collector 1310 with a 10G
Napatech network card to a QRadar Network Packet Capture appliance.
The QRadar Flow Collector uses a dedicated Napatech monitoring card to copy incoming packets from
one port on the card to a second port that connects to a QRadar Network Packet Capture appliance.
Figure 3. Packet forwarding
You attached the cable to port 1 of the Napatech card on the QRadar Flow Collector 1310
appliance.
You attached the cable that is connected to port 2 of the Napatech card, which is the forwarding
port, to the QRadar Network Packet Capture appliance.
Verify layer 2 connectivity by checking for link lights on both appliances.
Procedure
1. Using SSH from your IBM QRadar Console, log in to QRadar Flow Collector as the root user. On
the QRadar Flow Collector appliance, edit the following file.
/opt/qradar/init/apply_tunings
a. Locate the following line, which is around line 137.
apply_multithread_qflow_changes()
{
    APPLIANCEID=`$NVABIN/myver -a`
    if [ "$APPLIANCEID" == "1310" ]; then
        MODELNUM=$(/opt/napatech/bin/AdapterInfo 2>&1 | grep "Active FPGA Image" | cut -d'-' -f2)
        if [ "$MODELNUM" == "9220" ]; then
b. In the AppendToConf lines that follow the code in the preceding step, add these lines:
AppendToConf SV_NAPATECH_FORWARD YES
AppendToConf SV_NAPATECH_FORWARD_INTERFACE_SRCDST "0:1"
These statements enable packet forwarding, and forward packets from port 0 to port 1.
The "RX" packet and byte statistics increment if the card is receiving data.
b. To verify that the Napatech card is transmitting data, type the following command:
/opt/napatech/bin/Statistics -dec -interactive
5. Verify that your QRadar Network Packet Capture is receiving packets from your QRadar Flow
Collector appliance.
a. Using SSH from your QRadar Console, log in to your QRadar Network Packet
Capture appliance as root on port 4477.
b. Verify that the QRadar Network Packet Capture appliance is receiving packets by typing
the following command:
watch -d cat /var/www/html/statisdata/int0.txt
The int0.txt file updates as data flows into your QRadar Network Packet
Capture appliance.
For example, your company is growing and this growth not only increases activity in the network, but it
also requires that you expand your QRadar deployment to other countries. Data retention laws vary from
country to country, so the QRadar deployment must be planned with these regulations in mind.
Your company must collect event data from one of the office locations that has intermittent
connectivity.
Your company must comply with data retention regulations in the countries where data is
collected. For example, if Germany requires that data remains in-country, then that data must not
be stored outside that country.
Your company installs collectors and processors in the German data center to comply with local
data laws.
In the French data center, your company installs collectors, so that data is sent by the collectors to
the head office, which is processed and stored at the head office. Search speeds are increased by
having the processor appliances on the same high-speed network segment as the QRadar Console.
Your company adds a store-and-forward Event Collector that has scheduled and rate-limited
forwarding connections in the remote data center. The scheduled and rate-limited connections
compensate for intermittent network connectivity and ensure that extra bandwidth is not
required during regular business hours.
If you’re constantly searching for data on a remote processor, it’s better to have that processor on the
same high-speed network segment as the QRadar Console. If the bandwidth between the QRadar
Console and the remote processor is low, you might experience latency with your searches, especially
when you're doing multiple concurrent searches.
To get rough estimates of the events per second (EPS) or flows per minute (FPM) that you need to
process in your deployment, use the size of your logs that are collected from firewalls, proxy servers, and
Windows boxes.
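The rough sizing arithmetic can be sketched as follows; the daily log volume and average event size below are made-up example inputs, not measurements from any particular environment.

```shell
# Back-of-the-envelope EPS estimate from daily log volume. The inputs
# are illustrative assumptions; measure your own firewall, proxy, and
# Windows log volumes instead.
DAILY_LOG_GB=40        # total raw logs collected per day, in GB
AVG_EVENT_BYTES=500    # average size of one event record, in bytes
SECONDS_PER_DAY=86400
DAILY_BYTES=$(( DAILY_LOG_GB * 1024 * 1024 * 1024 ))
EPS=$(( DAILY_BYTES / AVG_EVENT_BYTES / SECONDS_PER_DAY ))
echo "Estimated average rate: $EPS EPS"
```

Note that this yields an average rate; size your license and appliances against peak rates, which can be several times higher during spikes such as the morning login period.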
You might need to add flow or event collectors to your deployment under these conditions:
Your data collection requirements exceed the collection capability of the All-in-One appliance.
You must collect events and flows in a different location than where your All-in-One appliance is
installed.
You are monitoring larger, or higher-rate packet-based flow sources that are faster than the 50
Mbps connection on the All-in-One.
A 3128 All-in-One appliance can collect up to 15,000 events per second (EPS) and 300,000 flows per
minute (FPM). If your collection requirements are greater, you might want to add Event
Collectors and Flow Collectors to your deployment. For example, you can add a QRadar Flow Collector
1202, which collects up to 3 Gbps.
An All-in-One appliance processes the events and flows that it collects. By adding Event
Collectors and Flow Collectors, you free processing capacity on the All-in-One appliance for
searches and other security tasks.
Packet-based flow sources require a Flow Collector that is connected either to a Flow Processor, or to an
All-in-One appliance in deployments where there is no Flow Processor appliance. You can collect
external flow sources, such as NetFlow, or IPFIX, directly on a Flow Processor or All-in-One appliance.
Adding remote collectors to a deployment
Add QRadar Event Collectors or QRadar Flow Collectors to expand a deployment when you need to
collect more events locally and collect events and flows from a remote location.
For example, you are a manufacturing company that has a QRadar All-in-One deployment and you add e-
commerce and a remote sales office. You now must monitor for security threats and are also now subject
to PCI audits.
You hire more employees and the Internet usage changes from mostly downloading to two-way traffic
between your employees and the Internet. Here are details about your company.
1. You add the e-commerce platform at your head office, and then you open a remote sales office.
2. You install an Event Collector and a Flow Collector at the remote sales office that sends data over
the Internet to the All-in-One appliance at your head office.
3. You upgrade your EPS license from 1000 EPS to 5000 EPS to meet the requirements for the extra
events that are collected at the remote office.
The following diagram shows an example deployment of when an Event Collector and a Flow
Collector are added at a remote office.
At your remote office, the Event Collector collects data from log sources and the Flow
Collector collects data from routers and switches. The collectors coalesce and normalize the data.
The collectors compress and send data to the All-in-One appliance over the wide area network.
The All-in-One appliance processes, and stores the data.
Your company monitors network activity by using the QRadar web application for searches,
analysis, reporting, and for managing alerts and offenses.
The All-in-One appliance collects and processes events from the local network.
Add Event Processors and Flow Processors to your QRadar deployment to increase processing capacity
and increase storage. Adding processors frees up resources on your QRadar Console by moving the
processing and storage load to dedicated servers.
When you add Event Processors or Flow Processors to an All-in-One appliance, the All-in-One acts as
a QRadar Console. The processing power on the All-in-One appliance is dedicated to managing and
searching the data that is sent by the processors, and data is now stored on the Event Processors and other
storage devices, rather than on the Console.
You typically add Event Processors and Flow Processors to your QRadar deployment for the following
reasons:
As your deployment grows, the workload exceeds the processing capacity of the All-in-One
appliance.
Your security operations center employs more analysts who do more concurrent searches.
The types of monitored data, and the retention period for that data increases, which increases
processing and storage requirements.
As your security analyst team grows, you require better search performance.
Running multiple concurrent QRadar searches and adding more types of monitored log sources
affect the processing performance of your All-in-One appliance. As you increase the number of searches
and the amount of monitored data, add Event Processors and Flow Processors to improve the performance
of your QRadar deployment.
When you scale your QRadar deployment beyond the 15,000 EPS and 300,000 FPM on the most
powerful All-in-One appliance, you must add processor appliances to process that data.
Example: Adding a QRadar Event Processor to your deployment
You can add a QRadar Event Processor 1628, which collects and processes up to 40,000 EPS. You
increase your capacity by another 40,000 EPS every time you add a QRadar Event Processor 1628 to your
deployment. Add a QRadar Flow Processor 1728, which collects and processes up to 1,200,000 FPM.
The QRadar Event Processor 1628 is a collector and a processor. If you have a distributed network, it’s a
good practice to add Event Collectors to distribute the load and to free system resources on the Event
Processor.
In the following diagram, processing capacity is added when an Event Processor and a Flow Processor are added to
a QRadar 3128 (All-in-One), and the following changes take place:
Event and flow processing is moved off the All-in-One appliance to the event and flow
processors.
Event processing capacity increases to 40,000 EPS, which includes the 15,000 EPS that was on
the All-in-One.
Flow processing capacity increases to 1,200,000 FPM, which includes the 300,000 FPM that was
on the All-in-One.
Data that is sent by the event and flow collectors is processed and stored on the event and flow
processors.
You might need to add Event Collectors or Flow Collectors to your deployment under these conditions:
Your data collection requirements exceed the collection capability of your processor.
You must collect events and flows at a different location than where your processor is installed.
You are monitoring packet-based flow sources.
Note: Event Collectors can buffer events, but Flow Collectors can't buffer flows.
Search performance is improved when processors are installed on the same network as the Console.
By adding collectors in remote locations and sending that data back to a processor on the Console's
network, you speed up your QRadar searches.
Adding an appliance to an All-in-One console
Any All-in-One console can become part of a distributed deployment by adding another appliance to
extend the resources and performance of the All-in-One console. Adding more appliances can increase
storage, process more data, and speed up searches. The Console appliance manages the other QRadar
appliances in the network. When you add appliances to an All-in-One console, it becomes a distributed
deployment console.
About this task
Adding hosts allows users to expand on the capabilities, storage, and resources from an All-in-One
appliance to create a distributed deployment.
Procedure
1. Log in to the All-in One Console as an administrator.
2. From the Admin tab, click the System and License Management icon.
3. In the Display list, select Systems.
4. On the Deployment Actions menu, select Add Host.
5. Enter the Host IP and the Host Password (root user password).
6. Optionally, check either Encrypt Host Connections or Network Address Translation and configure
your choice.
7. Click Add.
8. From the Admin tab, click Deploy Changes.
9. Click Continue to restart services.
Results
The host is added to the deployment and the appliance is added to the user interface. After the host is
added, you might need to allocate licenses to the appliance. For more information, see License
management.
When you install QRadar on your own hardware, you may see open ports that are used by
services, daemons, and programs included in Red Hat Enterprise Linux®.
When you mount or export a network file share, you might see dynamically assigned ports that are
required for RPC services, such as rpc.mountd and rpc.rquotad.
QRadar port usage
Review the list of common ports that IBM QRadar services and components use to communicate across
the network. You can use the port list to determine which ports must be open in your network. For
example, you can determine which ports must be open for the QRadar Console to communicate with
remote event processors.
Warning: If you change any common ports, your QRadar deployment might break.
WinCollect agents that remotely poll other Microsoft Windows operating systems might require
additional port assignments.
If the telnet connection is refused, it means that IMQ is not currently running. It is probable that
the system is either starting up or shutting down, or that services were shut down manually.
Searching for ports in use by QRadar
Use the netstat command to determine which ports are in use on the IBM QRadar Console or managed
host. Use the netstat command to view all listening and established ports on the system.
Procedure
1. Using SSH, log in to your QRadar Console as the root user.
2. To display all active connections and the TCP and UDP ports on which the computer is listening,
type the following command:
netstat -nap
3. To search for specific information from the netstat port list, type the following command:
netstat -nap | grep port
Examples:
o To display all ports that match 199, type the following command:
netstat -nap | grep 199
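The grep in the steps above does nothing QRadar-specific; it filters the netstat text line by line. A minimal illustration on canned sample output (the socket lines below are invented for the example):

```shell
# grep filters text line by line; shown here on invented sample lines so
# the behavior is easy to verify. Against a live Console you would pipe
# the real `netstat -nap` output instead of echoing canned text.
sample='tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1234/httpd
udp 0 0 0.0.0.0:199 0.0.0.0:* 4321/snmpd'
matched=$(echo "$sample" | grep 199)
echo "$matched"
```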
Table 1. Public servers that QRadar must access. This table lists descriptions for the IP addresses or hostnames
that QRadar accesses. https://www.ibm.com/support/pages/node/6244622
Docker containers and network interfaces
A Docker network defines a communication trust zone where communication is unrestricted between
containers in that network.
Each network is associated with a bridge interface on the host, and firewall rules are defined to filter
traffic between these interfaces. Typically, containers within a zone that share the same Docker network
and host bridge interface can communicate with each other. An exception to this general rule is that
apps run on the same dockerApps network, but are isolated from each other by the firewall.
Docker interfaces
To view a list of Docker interfaces, type the following command:
docker network ls
The dockerApps interface is used to apply rules for communication between apps.
The dockerInfra interface is used to host service launcher and qoauth. Apps are isolated from most
infrastructure components but they must be able to connect to service launcher and qoauth to
manage secrets and authorization.
"NetworkMode": "appProxy"
This example shows how you use the docker inspect <docker_container_ID> command and
pipe it to less to view more network details:
docker inspect d9b3e58649de | less
The output in this example shows the configuration of the network that is used by the specified
container (d9b3e58649de), and shows the Docker network interface name (dockerApps) and the
IP address of the network that is assigned to the Docker container.
You can use two types of backups: configuration backups and data backups.
Important: Individual QRadar managed hosts do not have their own nightly configuration backup files.
The QRadar Console's configuration backup is a single file that contains a full database backup of all configuration
parameters for all hosts in the deployment. All configuration backups are stored on the QRadar Console by default.
Configuration backups include the following components:
Application configuration
Assets
Custom logos
Custom rules
Event categories
Flow sources
Groups
Log sources
Offenses
Event data
Flow data
Report data
Indexes
The data backup does not include application data. To configure and manage backups for application
data, see Backing up and restoring app data.
Backup QRadar configurations and data
By default, IBM® QRadar® creates a backup archive of your configuration information daily at
midnight. The backup archive includes your configuration information, data, or both from the previous
day. You can customize this nightly backup and create an on-demand configuration backup, as required.
Scheduling nightly backup
Use the Backup Recovery Configuration window to configure a nightly scheduled backup process.
About this task
By default, the nightly backup process includes only your configuration files. You can customize your
nightly backup process to include data from your IBM® QRadar® Console and selected managed hosts.
You can also customize your backup retention period, the backup archive location, the time limit for a
backup to process before timing out, and the backup priority in relation to other QRadar processes.
Note: The nightly backup starts running at midnight in the timezone where the QRadar Console is installed.
If QRadar automatic updates are scheduled to run at the same time, the performance of QRadar might be impacted.
Parameter Description

Backup Repository Path
Type the location where you want to store your backup file. The default location
is /store/backup. This path must exist before the backup process is initiated. If this path
does not exist, the backup process aborts.
If you modify this path, make sure the new path is valid on every system in your deployment.
If you change the path to the backup repository, all files in the previous repository
are automatically moved to the new path.
The directory must match one of the following formats:
/store/backup/*
/mount/*
/mnt/*
/home/*
Active data is stored in the /store directory. If you have both active data and backup
archives stored in the same directory, data storage capacity might easily be reached and
your scheduled backups might fail. We recommend that you specify a storage location on
another system, or copy your backup archives to another system after the backup process
is complete. You can use a Network File System (NFS) storage solution in
your QRadar deployment.

Backup Retention Period (days)
Type or select the length of time, in days, that you want to store backup files. The
default is 7 days.
This period affects only backup files that are generated by the scheduled process.
On-demand backups and imported backup files are not affected by this value.

Nightly Backup Schedule
Select a backup option.

Select the managed hosts you would like to run data backups
This option is displayed only if you select the Configuration and Data Backups option.
All hosts in your deployment are listed. The first host in the list is your Console; it is
enabled for data backup by default, so no check box is displayed. If you have
managed hosts in your deployment, the managed hosts are listed below the Console, and
each managed host includes a check box.
Select the check box for each managed host that you want to run data backups on.
For each host (Console or managed host), you can optionally clear the data items that you
want to exclude from the backup archive.

Configuration Only Backup
Backup Time Limit (min)
Type or select the length of time, in minutes, that you want to allow the backup to run. The
default is 180 minutes. If the backup process exceeds the configured time limit, the backup
process is automatically canceled.
Backup Priority
From this list box, select the level of importance that you want the system to place on
the configuration backup process compared to other processes.

Configuration and Data Backups
Backup Time Limit (min)
Type or select the length of time, in minutes, that you want to allow the backup to run. The
default is 1020 minutes. If the backup process exceeds the configured time limit, the backup is
automatically canceled.
Backup Priority
From the list, select the level of importance that you want the system to place on the data
backup process compared to other processes.
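Because a path outside the supported formats causes the backup to abort, it can be worth pre-checking a proposed Backup Repository Path before you save the configuration. A sketch of such a check (the sample path is invented):

```shell
# Check a proposed backup repository path against the documented formats:
# /store/backup/*, /mount/*, /mnt/*, /home/*
path=/mnt/backups/qradar   # invented example path
case "$path" in
  /store/backup/*|/mount/*|/mnt/*|/home/*) verdict=allowed ;;
  *) verdict=rejected ;;
esac
echo "$path: $verdict"
```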
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. On the toolbar, click Configure.
4. On the Backup Recovery Configuration window, customize your nightly backup.
5. Click Save.
6. Close the Backup Archives window.
7. On the Admin tab, click Deploy Changes.
Creating an on-demand configuration backup archive
If you must back up your configuration files at a time other than your nightly scheduled backup, you can
create an on-demand backup archive. On-demand backup archives include only configuration
information.
About this task
You initiate an on-demand backup archive during a period when IBM® QRadar® has low processing
load, such as after normal office hours. During the backup process, system performance is affected.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. From the toolbar, click On Demand Backup.
4. Enter values for the following parameters:
Option Description
Name Type a unique name that you want to assign to this backup archive. The name can be up
to 100 alphanumeric characters in length. The name can contain following characters:
underscore (_), dash (-), or period (.).
Description Type a description for this configuration backup archive. The description can be up to
255 characters in length.
5. Click Run Backup.
You can start a new backup or restore process only after the on-demand backup is complete.
You can monitor the backup archive process in the Backup Archives window.
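The naming rules above (up to 100 alphanumeric characters, plus underscore, dash, and period) can be checked with a quick pattern match before you submit the backup. A sketch; the sample name is invented:

```shell
# Validate a proposed on-demand backup name against the documented rules:
# 1-100 characters, alphanumeric plus underscore (_), dash (-), period (.).
name=pre-upgrade_backup.2024   # invented example name
if printf '%s' "$name" | grep -Eq '^[A-Za-z0-9_.-]{1,100}$'; then
  verdict=valid
else
  verdict=invalid
fi
echo "$name is $verdict"
```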
Creating an email notification for a failed backup
To receive a notification by email about a backup failure on the IBM QRadar Console or a QRadar Event
Processor, create a rule that is based on the system notification message.
Before you begin
You must configure an email server to distribute system notifications in QRadar.
If a backup fails, you see one of the following backup failure system notifications:
Deleting backup archives
If a backup file is deleted, it is removed from the disk and from the database. Also, the entry is
removed from this list and an audit event is generated to indicate the removal.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. In the Existing Backups section, select the archive that you want to delete.
4. Click Delete.
Restore QRadar configurations and data
Restoring a backup archive is useful if you want to restore previously archived configuration files,
offense data, and asset data on your IBM QRadar system.
You can restore only a backup archive that was created within the same software release and
update level. For example, if you are running QRadar 7.5.0 p1, make sure that the
backup archive was created on a QRadar 7.5.0 p1 Console.
The restore process restores only your configuration information, offense data, and asset data. For
more information, see Restoring data.
If the backup archive originated on a NATed Console system, you can restore that backup
archive only on a NATed system.
You cannot complete a configuration restore on a console in which the IP address matches the IP
address of a managed host in the backup.
Restriction: Your restore might fail if you are taking a configuration from another deployment and run
the qchange_netsetup utility to change the private IP address of the console. The qchange_netsetup utility modifies
the deployed configuration, but not the backup one. When you perform a restore, the backup configuration is read,
and the restore might convert components with the wrong IP address.
If possible, before you restore a configuration backup, run an on-demand backup to preserve the current
environment. The following description is a high-level view of the configuration restore process:
All files are extracted from the backup archive and restored to disk.
Important:
If you are restoring WinCollect data, you must install the WinCollect SFS that matches the
version of WinCollect in your backup before you restore the configuration.
When you do a cross-deployment restore, or when you restore after a factory reinstall, the managed
host that is attached to the original console is automatically pointed to the newly restored
deployment. However, any deployment changes made before the restore (adding or removing
managed hosts) cause the restore process to fail.
For more information about how to back up or restore an archive, see the following topics.
You can restart the Console only after the restore process is complete.
The restore process can take up to several hours; the process time depends on the size of the
backup archive that must be restored. When complete, a confirmation message is displayed.
A window provides the status of the restore process. This window provides any errors for each
host and instructions for resolving the errors.
Type
The type of backup. Only configuration backups can be restored; therefore, this parameter
displays config.

Select All Configuration Items
When selected, this option indicates that all configuration items are included in the restoration of
the backup archive.

Restore Configuration
Lists the configuration items to include in the restoration of the backup archive. To remove
items, clear the check box for each item that you want to remove, or clear the Select
All Configuration Items check box.

Select All Data Items
When selected, this option indicates that all data items are included in the restoration of the
backup archive.

Restore Data
Lists the data items to include in the restoration of the backup archive. All items
are cleared by default. To restore data items, select the check box for each item
that you want to restore.

Table 1. Restore a Backup parameters
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. Select the archive that you want to restore.
4. Click Restore.
5. On the Restore a Backup window, configure the parameters.
Select the Custom Rules Configuration check box to restore the rules and reference data
that is used by apps. Select the Users Configuration check box to restore authorized
tokens that are used by apps.
The following table lists the restore configurations and what is included in each:
Note: The content included in each configuration is not limited to the content that is listed.
Restore results
Restore Configuration Content Included
Important:
When you restore to another system where only partial options are restored, rules might be
restored while their related offenses are not; for example, when you restore the deployment
configuration without offenses.
When you are restoring to a new or rebuilt system, and you had rules that created offenses
indexed on custom properties of the system that the backup was created on, restore the
offenses so that the offense types (offense indexed fields) are restored correctly.
If this is not done, you must edit any rules that create offenses indexed on custom
properties and relink them to the correct property.
o Source IP
o Destination IP
o QID
o Username
o Source MAC
o Destination MAC
o Device
o Hostname
o Source port
o Destination port
o Source IPV6
o Destination IPV6
o Source ASN
o Destination ASN
o Rule
o Application ID
o Source identity
o Destination identity
o Search result
6. Click Restore.
7. Click OK.
8. Click OK.
9. Choose one of the following options:
o If the user interface was closed during the restore process, open a web browser and
log in to IBM QRadar.
o If the user interface was not closed, the login window is displayed. Log in
to QRadar.
10. Follow the instructions on the status window.
What to do next
After you verify that your data is restored to your system, ensure that your DSMs,
vulnerability assessment (VA) scanners, and log source protocols are also restored.
If the backup archive originated on an HA cluster, you must click Deploy Changes to restore
the HA cluster configuration after the restore is complete. If disk replication is enabled, the
secondary host immediately synchronizes data after the system is restored. If the secondary
host was removed from the deployment after a backup, the secondary host displays a failed
status on the System and License Management window.
Restoring a backup archive created on a different QRadar system
Each backup archive includes the IP address information of the system where it was created. When you
restore a backup archive from a different IBM QRadar system, the IP address of the backup archive and
the system that you are restoring are mismatched. You can correct the mismatched IP addresses.
About this task
You can restart the Console only after the restore process is complete. The restore process can take up to
several hours; the process time depends on the size of the backup archive that must be restored. When
complete, a confirmation message is displayed.
A window provides the status of the restore process, and provides any errors for each host and
instructions for resolving the errors.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. Select the archive that you want to restore, and click Restore.
4. On the Restore a Backup window, configure the following parameters and then click Restore.
Parameter Description

Select All Configuration Items
Indicates that all configuration items are included in the restoration of the backup
archive. This check box is selected by default.

Restore Configuration
Lists the configuration items to include in the restoration of the backup archive.
All items are selected by default.

Select All Data Items
Indicates that all data items are included in the restoration of the backup archive.
This check box is selected by default.

Restore Data
Lists the data items to include in the restoration of the backup archive.
All items are cleared by default.

Table 1. Restore a Backup parameters
5. Stop the iptables service on each managed host in your deployment. iptables is a Linux®-
based firewall.
a. Using SSH, log in to the managed host as the root user.
b. For App Host, type the following commands:
systemctl stop docker_iptables_monitor.timer
systemctl stop iptables
c. For all other managed hosts, type the following command:
service iptables stop
d. Repeat for all managed hosts in your deployment.
6. Ensure that the power is off on the original QRadar console that the backup was taken from.
7. On the Restore a Backup window, click Test Hosts Access.
8. After testing is complete for all managed hosts, verify that the status in the Access Status column
indicates a status of OK.
9. If the Access Status column indicates a status of No Access for a host, stop iptables again, and
then click Test Host Access again to attempt a connection.
10. On the Restore a Backup window, configure the parameters.
Important: By selecting the Installed Applications Configuration checkbox, you restore the
install app configurations only. Extension configurations are not restored. Select the Deployment
Configuration checkbox if you want to restore extension configurations.
11. Click Restore.
12. Click OK.
13. Click OK to log in.
14. Choose one of the following options:
a. If the user interface was closed during the user restore process, open a web browser and
log in to QRadar.
b. If the interface was not closed, the login window is displayed. Log in to QRadar.
15. View the results of the restore process and follow the instructions to resolve any errors.
16. Refresh your web browser window.
17. From the Admin tab, select Advanced > Deploy Full Configuration.
QRadar continues to collect events when you deploy the full configuration. When the event
collection service must restart, QRadar does not restart it automatically. A message displays that
gives you the option to cancel the deployment and restart the service at a more convenient time.
18. To enable the IP tables for an App Host, type the following command:
systemctl start docker_iptables_monitor.timer
What to do next
After you verify that your data is restored to your system, you must reapply RPMs for any DSMs,
vulnerability assessment (VA) scanners, or log source protocols.
If the backup archive originated on an HA cluster, you must click Deploy Changes to restore the HA
cluster configuration after the restore is complete. If disk replication is enabled, the secondary host
immediately synchronizes data after the system is restored. If the secondary host was removed from the
deployment after a backup, the secondary host displays a failed status on the System and License
Management window.
Restoring data
You can restore the data on your IBM QRadar Console and managed hosts from backup files. The data
portion of the backup files includes information such as source and destination IP address information,
asset data, event category information, vulnerability data, flow data, and event data.
Each managed host in your deployment, including the QRadar Console, creates all backup files in
the /store/backup/ directory. Your system might include a /store/backup mount from an external SAN or
NAS service. External services provide long-term, offline retention of data, which is commonly required
for compliance regulations, such as PCI.
You know the location of the managed host where the data is backed up.
The /store or /store/ariel directory (or its separate mount point, if your deployment includes
one for that volume) has sufficient space for the data that you want to recover.
You know the date and time for the data that you want to recover.
If your configuration has been changed, before you restore the data backup, you must restore the
configuration backup.
Procedure
1. Use SSH to log in to IBM QRadar as the root user.
2. Go to the /store/backup directory.
3. To list the backup files, type the following command:
ls -l
4. If backup files are listed, go to the root directory by typing the following command:
cd /
Important: The restored files must be in the /store directory. If you type cd instead of cd /, the files
are restored to the /root/store directory.
5. To extract the backup files to their original directory, type the following command:
tar -zxpvPf /store/backup/backup.name.hostname_hostID.target date.backup type.timestamp.tgz
File name variable Description

hostname_hostID
The name of the QRadar system that hosts the backup file, followed by the identifier
for the QRadar system.

target date
The date that the backup file was created. The format of the target date
is day_month_year.
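Because the file name fields are dot-separated, you can pull out the host, date, and backup type with cut when you are scanning a full /store/backup directory. A sketch; the file name below is hypothetical but follows the documented pattern:

```shell
# Pattern: backup.<name>.<hostname_hostID>.<target date>.<backup type>.<timestamp>.tgz
f=backup.nightly.qradar-console_101.12_april_2024.data.1712894400.tgz
host=$(echo "$f" | cut -d. -f3)    # hostname_hostID
tdate=$(echo "$f" | cut -d. -f4)   # target date (day_month_year)
btype=$(echo "$f" | cut -d. -f5)   # backup type (config or data)
echo "$host $tdate $btype"
```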
Results
Daily backup of data captures all data on each host. If you want to restore data on a managed host that
contains only event or flow data, only that data is restored to that host. If you want to maintain the
restored data, increase your data retention settings to prevent the nightly disk maintenance routines from
deleting your restored data.
You can view the restored directories that are created for each hour of the day. If directories are
missing, data might not be captured for that time period.
The QRadar® ISO contains a built-in version of WinCollect. When you restore by using that ISO, it
deploys the WinCollect files that are stored in that ISO, rather than the files from your backup.
To remedy this issue, you must install the WinCollect SFS that matches the version of WinCollect in
your backup before you restore the configuration. Perform the following tasks in this order:
1. Perform QRadar backup.
2. Bring new hardware online and deploy the ISO.
3. Install the WinCollect SFS that matches the version of WinCollect in your backup on the Console.
4. Restore the configuration backup.
The appropriate WinCollect files are deployed with the configuration restore.
Backup and restore applications
IBM QRadar provides a way to back up and restore application configurations separately from the
application data.
Application configurations are backed up as part of the nightly configuration backup. The configuration
backup includes apps that are installed on the QRadar Console and on an App Host. You can restore the
application configuration by selecting the Installed Applications Configuration option when you restore
a backup.
Application data is backed up separately from the application configuration by using an easy-to-use script
that runs nightly. You can also use the script to restore the app data, and to configure backup times and
data retention periods for app data.
This script is on both the QRadar® Console and the App Host if one is installed. The script backs
up app data only if apps are on the current host.
Procedure
1. Use SSH to log in to your Console or your App Host as the root user.
2. Go to the /opt/qradar/bin/ directory.
o To back up data for all installed apps, enter the following command:
./app-volume-backup.py backup
The app-volume-backup.py script runs nightly at 2:30 AM local time to back up all
installed apps. Backup archives are stored in the /store/apps/backup folder. You can
change the backup archives location by editing
the APP_VOLUME_BACKUP_DIR variable
in /store/configservices/staging/globalconfig/nva.conf. You must deploy changes
after you edit this variable.
o To view all data backups for installed apps, enter the following command:
./app-volume-backup.py ls
This command outputs all backup archives that are stored in the backup archives
folder.
o New in 7.5.0: To restore data for a specific application instance, rather than
restoring all instances, enter the following command:
o New in 7.5.0 Update Package 6: To back up data for an individual application
instance, enter the following command:
o By default, all backup archives are retained for one week. The retention process
runs nightly at 2:30 AM local time with the backup.
To perform retention manually, and use the default retention period, enter
the following command:
./app-volume-backup.py retention
You can also set the retention period manually by adding -t (time - defaults
to 1) and -p (period - defaults to 0) switches.
The -p switch accepts three values: 0 for a week, 1 for a day, and 2 for an
hour.
For example, to set the retention period for a backup to 3 weeks, enter the
following command:
./app-volume-backup.py retention -t 3 -p 0
o If you want to change the retention time that is used by the nightly timer, add flags
to the retention command found in the following systemd service file.
/usr/lib/systemd/system/app-data-backup.service
For example, to change the retention period that is used by the nightly retention
process to 5 days, locate the following line:
ExecStart=/opt/qradar/bin/app-volume-backup.py retention
Replace it with:
ExecStart=/opt/qradar/bin/app-volume-backup.py retention -t 5 -p 1
Save your changes, and run the systemctl daemon-reload command for systemd to
apply the changes.
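The unit-file edit in the last option above can be scripted with sed. This is a sketch only: it works on a temporary copy of the service file so that it is safe to run anywhere, whereas on an appliance you would edit /usr/lib/systemd/system/app-data-backup.service itself and then run systemctl daemon-reload.

```shell
# Sketch: append the retention flags -t 5 -p 1 to the ExecStart line.
# A copy is edited here; on an appliance, edit the real unit file and then
# run `systemctl daemon-reload` so that systemd picks up the change.
unit=/tmp/app-data-backup.service
printf 'ExecStart=/opt/qradar/bin/app-volume-backup.py retention\n' > "$unit"
# `&` reinserts the whole matched line, so the flags are appended to it.
sed -i 's|^ExecStart=.*retention$|& -t 5 -p 1|' "$unit"
cat "$unit"
```

The same sed invocation, pointed at the real unit file, reproduces the 5-day example from the text.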
3. App containers are restarted automatically after the restore is complete.
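Earlier in the procedure, the backup location is controlled by the APP_VOLUME_BACKUP_DIR variable in nva.conf. That edit can be sketched as follows; a temporary copy is used here so the sketch is harmless, and the target path /storetmp/app-backups is an illustrative value, not a QRadar default. On an appliance you would edit /store/configservices/staging/globalconfig/nva.conf and then deploy changes.

```shell
# Sketch: retarget the app backup directory by editing APP_VOLUME_BACKUP_DIR.
conf=/tmp/nva.conf
printf 'APP_VOLUME_BACKUP_DIR=/store/apps/backup\n' > "$conf"   # the default location
# /storetmp/app-backups is an example target path for illustration only.
sed -i 's|^APP_VOLUME_BACKUP_DIR=.*|APP_VOLUME_BACKUP_DIR=/storetmp/app-backups|' "$conf"
grep APP_VOLUME_BACKUP_DIR "$conf"
```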
If you do not meet the requirements for the Data Synchronization app, consider the following
alternative solutions. Recovery from data loss is possible when you forward live data, for example
flows and events, from a primary QRadar system to a parallel system at another site.
Primary QRadar Console and backup console
A hardware failure solution, where the backup console is a copy of the primary server with the
same configuration, but stays powered off. Only one console is operational at any one time. If the
primary console fails, you manually power on the backup console, apply the primary
configuration backup, and use the IP address of the primary console. After you restore the
primary server, and before you turn it on, you manually turn off the backup server. If the system
is down for a long time, apply the backup console's configuration backup to the primary server.
Event and flow forwarding
Events and flows are forwarded from a primary site to a secondary site. Identical architectures in
two separate data centers are required.
Distributing the same events and flows to the primary and secondary sites
Distribute the same event and flow data to two live sites by using a load balancer or other
method to deliver the same data to mirrored appliances. Each site has a record of the log data that
is sent.
A backup console requires its own dedicated license key that matches the EPS and
FPM values of the primary console.
The license configuration of the backup console must match the values of the
primary QRadar Console, including the EPS and FPM values of the
primary QRadar Console.
Example: If the primary QRadar Event Processor is licensed for 15K EPS, the
redundant backup console must also be licensed for 15K EPS.
Special failover upgrade parts must be purchased for the backup
console.
From a technical perspective, the licenses for the primary and backup consoles are
identical. However, for compliance reasons, the backup console (and associated
license) must not process live data unless a failure occurs with the
primary QRadar Console.
Data that is collected by the backup console must be copied back to the primary
console when the primary console becomes functional again.
If the primary fails, take the following steps to set up the backup console as the primary QRadar
Console:
3. Restore configuration backup data from the primary console to the backup console.
The backup console functions as the primary console until the primary console is brought
back online. Ensure that both servers are not online at the same time.
The following information is provided only for general guidance and is not intended or designed
as a how-to guide.
This scenario depends on site 1 remaining active. If site 1 fails, data is no longer transmitted to
site 2, but the data at site 2 is current up to the time of failure. If site 1 fails, you
recover your data by manually changing IP addresses and by using a backup and restore
to fail over from site 1 to site 2, switching all QRadar® hosts to site 2.
The following list describes the setup for event and flow forwarding from the primary site to the
secondary site:
There is an identical distributed architecture in two separate data centers, which includes a
primary data center and a secondary data center.
The primary QRadar Console is active and collecting all events and flows from log sources
and is generating correlated offenses.
You configure off-site targets on the primary QRadar Console to enable forwarding of
event and flow data from the primary data center to the event and flow processors in
another data center.
Fast path: Use routing rules instead of off-site targets because the setup is easier.
Periodically, use the content management tool to update content from the primary QRadar
Console to the secondary QRadar Console.
In the case of a failure at site 1, you can use a high-availability (HA) deployment to trigger an
automatic failover to site 2. The secondary HA host on site 2 takes over the role of the primary
HA host on site 1. Site 2 continues to collect, store, and process event and flow data. Secondary
HA hosts that are in a standby state do not have running services, but data is synchronized if
disk replication is enabled.
Note: You can use a load balancer to divide events and to split flows such as NetFlow, J-Flow, and sFlow, but
you can't use a load balancer to split QFlow data. Use external technologies, such as a regenerative tap, to divide
QFlow data and send it to the backup site.
The following diagram shows how site 2 is used as a redundant data store for site 1. Event and
flow data are forwarded from site 1 to site 2.
Figure 1. Event and flow forwarding from site 1 to site 2 for disaster recovery
Event and flow forwarding configuration
For data redundancy, configure IBM® QRadar systems to forward data from one site to a
backup site.
The target system that receives the data from QRadar is known as a forwarding
destination. QRadar systems ensure that all forwarded data is unaltered. Newer versions
of QRadar systems can receive data from earlier versions of QRadar systems. However, earlier
versions cannot receive data from later versions. To avoid compatibility issues, upgrade all
receivers before you upgrade QRadar systems that send data. Follow these steps to set up
forwarding:
A forwarding destination is the target system that receives the event and flow data
from the IBM QRadar primary console. You must add forwarding destinations before
you can configure bulk or selective data forwarding.
After you add one or more forwarding destinations for your event and flow data, you
can create filter-based routing rules to forward large quantities of data.
You use the content management tool to move data from your primary QRadar
Console to the QRadar secondary console. Export security and configuration content
from IBM QRadar into an external, portable format.
Load balancing of events and flows between two sites
When you are running two live IBM QRadar deployments at both a primary and secondary site,
you send event and flow data to both sites. Each site has a record of the log data that is sent. Use
the content management tool to keep the data synchronized between the deployments.
The following diagram shows two live sites, where data from each site is replicated to the other
site.
Figure 1. Load balancing of events and flows between two sites
Restoring configuration data from the primary to the secondary QRadar Console
After you set up the secondary QRadar Console as the destination for the logs, you either add or
import a backup archive from the primary QRadar Console. You can restore a backup archive that
is created on another QRadar host. Log in to the secondary QRadar Console and do a full restore
of the primary console backup archive to the secondary QRadar Console.
Connection data
Topology data
backup-<target date>-<timestamp>.tgz
Where <target date> is the date that the backup file was created, in the
format <day>_<month>_<year>, and <timestamp> is the time that the backup file
was created, in the format <hour>_<minute>_<second>.
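The naming scheme can be reproduced with date(1); this is only a sketch of the format described above, using the corresponding strftime fields.

```shell
# Build a name in the backup-<day>_<month>_<year>-<hour>_<minute>_<second>.tgz format.
name="backup-$(date +%d_%m_%Y)-$(date +%H_%M_%S).tgz"
echo "$name"
```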
Backing up your data
Automatic backup occurs daily, at 3:00 AM, or you can start the backup process manually.
Procedure
1. Using SSH, log in to your IBM® QRadar® SIEM Console as the root user.
2. Using SSH from the QRadar Console, log in to QRadar Risk Manager as the root user.
3. Start a QRadar Risk Manager backup by typing the following command:
/opt/qradar/bin/dbmaint/risk_manager_backup.sh
Results
The script that is used to start the backup process might take several minutes to start.
The following message is an example of the output that is displayed, after the script completes the backup process:
Fri Sep 11 10:14:41 EDT 2015
- Risk Manager Backup complete,
wrote /store/qrm_backups/backup-2015-09-11-10-14-39.tgz
Restoring data
You can use a restore script to restore data from a QRadar Risk Manager backup.
Before you begin
The QRadar Risk Manager appliance and the backup archive must be the same version of QRadar Risk Manager.
If the script detects a version difference between the archive and the QRadar Risk Manager managed host, an
error is displayed.
About this task
Use the restore script to specify the archive that you are restoring to QRadar Risk Manager. This process
requires you to stop services on QRadar Risk Manager. Stopping services logs off all QRadar Risk
Manager users and stops multiple processes.
The following table describes the parameters that you can use to restore a backup archive.
Option Description
-f Overwrites any existing QRadar Risk Manager data on your system with the data in the restore file.
This parameter also allows the script to overwrite any existing device configurations in Configuration
Source Management with the device configurations from the backup file.
-w Does not delete directories before you restore QRadar Risk Manager data.
Procedure
1. Using SSH, log in to your IBM® QRadar SIEM Console as the root user.
2. Using SSH from the QRadar SIEM Console, log in to QRadar Risk Manager as the root user.
3. Stop hostcontext by typing systemctl stop hostcontext.
4. Type the following command to restore a backup archive to QRadar Risk Manager:
/opt/qradar/bin/risk_manager_restore.sh -r /store/qrm_backups/<backup>
Where <backup> is the QRadar Risk Manager archive that you want to restore.
For example, backup-2012-09-11-10-14-39.tgz.
5. Start hostcontext by typing systemctl start hostcontext.
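The stop/restore/start sequence above can be collected into one sketch. The commands are echoed rather than executed, so the sketch is safe to run anywhere; on a real appliance you would remove the `run=echo` line. The archive name is the example from step 4.

```shell
# Print the restore sequence (remove `run=echo` to execute on a real appliance).
run=echo
archive=/store/qrm_backups/backup-2012-09-11-10-14-39.tgz
$run systemctl stop hostcontext
$run /opt/qradar/bin/risk_manager_restore.sh -r "$archive"
$run systemctl start hostcontext
```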
Parameter Description

Backup Repository Path
Type the location where you want to store your backup file. The default location
is /store/backup. This path must exist before the backup process is initiated; if the path does
not exist, the backup process aborts.
If you modify this path, make sure that the new path is valid on every system in your deployment.
If you change the path to the backup repository, all files in the previous repository
are automatically moved to the new path.
/store/backup/*
/mount/*
/mnt/*
/home/*
Active data is stored in the /store directory. If you have both active data and backup archives
stored in the same directory, data storage capacity might easily be reached and your scheduled
backups might fail. We recommend that you specify a storage location on another system, or copy
your backup archives to another system after the backup process is complete. You can use a
Network File System (NFS) storage solution in your QRadar deployment. For more
information on using NFS, see the Offboard Storage Guide.

Backup Retention Period (days)
Type or select the length of time, in days, that you want to store backup files. The default is 7
days.
This period affects only backup files that are generated as a result of a scheduled process. On-
demand backups and imported backup files are not affected by this value.

Nightly Backup Schedule
Select a backup option.

Select the managed hosts you would like to run data backups:
This option is displayed only if you select the Configuration and Data Backups option.
All hosts in your deployment are listed. The first host in the list is your Console; it is enabled
for data backup by default, so no check box is displayed. If you have managed hosts in
your deployment, the managed hosts are listed below the Console, and each managed host
includes a check box.
Select the check box for the managed hosts that you want to run data backups on.
For each host (Console or managed host), you can optionally clear the data items that you want to
exclude from the backup archive.

Configuration Only Backup

Backup Time Limit (min)
Type or select the length of time, in minutes, that you want to allow the backup to run. The default is
180 minutes. If the backup process exceeds the configured time limit, the backup process is
automatically canceled.

Backup Priority
From this list box, select the level of importance that you want the system to place on the
configuration backup process compared to other processes.
A priority of medium or high has a greater impact on system performance.

Data Backup

Backup Time Limit (min)
Type or select the length of time, in minutes, that you want to allow the backup to run. The default is
1020 minutes. If the backup process exceeds the configured time limit, the backup is automatically
canceled.

Backup Priority
From the list, select the level of importance that you want the system to place on the data backup
process compared to other processes.
A priority of medium or high has a greater impact on system performance.

Table 1. Backup Recovery Configuration parameters
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. On the toolbar, click Configure.
4. On the Backup Recovery Configuration window, customize your nightly backup.
5. Click Save.
6. Close the Backup Archives window.
7. On the Admin tab, click Deploy Changes.
Creating an on-demand configuration backup archive
If you must back up your configuration files at a time other than your nightly scheduled backup, you
can create an on-demand backup archive. On-demand backup archives include only configuration
information.
During the backup process, system performance is affected.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Backup and Recovery.
3. From the toolbar, click On Demand Backup.
4. Enter values for the following parameters:
Option Description
Name Type a unique name that you want to assign to this backup archive. The name can be up to
100 alphanumeric characters in length. The name can contain following characters:
underscore (_), dash (-), or period (.).
Description Type a description for this configuration backup archive. The description can be up to 255
characters in length.
5. Click Run Backup.
You can start a new backup or restore process only after the on-demand backup is complete.
You can monitor the backup archive process in the Backup Archives window.
Data retention
Retention buckets define how long event and flow data is retained in IBM® QRadar®.
As QRadar receives events and flows, each one is compared against the retention bucket filter criteria.
When an event or flow matches a retention bucket filter, it is stored in that retention bucket until the
deletion policy time period is reached. The default retention period is 30 days; when that period
elapses, the data is immediately deleted.
Retention buckets are sequenced in priority order from the top row to the bottom row. A record is stored
in the bucket that matches the filter criteria with highest priority. If the record does not match any of your
configured retention buckets, the record is stored in the default retention bucket, which is always located
below the list of configurable retention buckets.
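The priority-ordered matching described above can be illustrated with a toy shell function. The bucket names, filters, and the sample IP addresses below are invented for the example and are not QRadar defaults; the point is only that filters are tried top to bottom and unmatched records fall into the default bucket.

```shell
# Toy sketch of retention-bucket matching: filters are checked in priority
# order, the first match wins, and anything unmatched goes to the default.
pick_bucket() {
  case "$1" in
    10.*)      echo "bucket-1 (10.0.0.0/8 filter)" ;;    # highest-priority bucket
    192.168.*) echo "bucket-2 (192.168.0.0/16 filter)" ;;
    *)         echo "default bucket" ;;                  # always last in the list
  esac
}
pick_bucket 10.1.2.3      # matches the first filter
pick_bucket 203.0.113.9   # matches no filter, so the default bucket is used
```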
Tenant data
You can configure up to 10 retention buckets for shared data, and up to 10 retention buckets for each
tenant.
When data comes into the system, the data is assessed to determine whether it is shared data or whether
the data belongs to a tenant. Tenant-specific data is compared to the retention bucket filters that are
defined for that tenant. When the data matches a retention bucket filter, the data is stored in that retention
bucket until the retention policy time period is reached.
If you don't configure retention buckets for the tenant, the data is automatically placed in the default
retention bucket for the tenant. The default retention period is 30 days, unless you configure a tenant-
specific retention bucket.
Changes to the retention bucket filters are applied immediately to incoming data only. For example, if
you configured a retention bucket to retain all data from source IP address 10.0.0.0/8 for 1 day, and you
later edit the filter to retain data from source IP 192.168.0.1, the change is not retroactive. Immediately
upon changing the filter, the retention bucket has 24 hours of 10.0.0.0/8 data, and all data that is collected
after the filter change is 192.168.0.1 data.
The retention policy on the bucket is applied to all data in the bucket, regardless of the filters criteria.
Using the previous example, if you changed the retention policy from 1 day to 7 days, both the 10.0.0.0/8
data and the 192.168.0.1 data in the bucket is retained for 7 days.
The Distribution of a retention bucket indicates the retention bucket usage as a percentage of total data
retention in all your retention buckets. The distribution is calculated on a per-tenant basis.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Data sources section, click Event Retention or Flow Retention.
3. If you configured tenants, in the Tenant list, select the tenant that you want the retention bucket to
apply to.
Note: To manage retention policies for shared data in a multi-tenant configuration, choose N/A in
the Tenant list.
4. To configure a new retention bucket, follow these steps:
a. Double-click the first empty row in the table to open the Retention Properties window.
b. Configure the retention bucket parameters.
Learn more about retention bucket parameters:
Properties Description

Keep data placed in this bucket for
The retention period that specifies how long the data is to be kept. When the retention period is
reached, data is deleted according to the Delete data in this bucket parameter. QRadar does not
delete data within the retention period.

Delete data in this bucket
Select Immediately after the retention period has expired to delete data as soon as it exceeds
the Keep data placed in this bucket for period. The data is deleted at the next scheduled disk
maintenance process, regardless of disk storage requirements.
Select When storage space is required to keep data that matches the Keep data placed in
this bucket for parameter in storage until the disk monitoring system detects that storage is
required.
Deletions that are based on storage space begin when the free disk space drops to 15% or
less, and the deletions continue until the free disk space reaches 18% or the policy time frame that
is set in the Keep data placed in this bucket for field runs out. For example, if the used disk
space reaches 85% for records, data is deleted until the used percentage drops to 82%. When
storage is required, only data that matches the Keep data placed in this bucket for field is
deleted.
If the bucket is set to Delete data in this bucket: immediately after the retention period
has expired, no disk space checks are done, and the deletion task immediately removes any
data that is past the retention period.

Description
Type a description for the retention bucket.

Current Filters
Configure the filter criteria that each piece of data is to be compared against.
c. Click Add Filter after you specify each set of filter criteria.
d. Click Save.
5. To edit an existing retention bucket, select the row from the table and click Edit.
6. To delete a retention bucket, select the row from the table and click Delete.
7. Click Save.
Incoming data that matches the retention policy properties is immediately stored in the retention
bucket.
You cannot move the default retention bucket. It always resides at the bottom of the list.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Data sources section, click Event Retention or Flow Retention.
3. If you configured tenants, in the Tenant list, select the tenant for the retention buckets that you
want to reorder.
Note: To manage retention policies for shared data in a multi-tenant configuration, choose N/A in
the Tenant list.
4. Select the row that corresponds to the retention bucket that you want to move, and
click Up or Down to move it to the correct location.
5. Click Save.
Enabling and disabling a retention bucket
When you configure and save a retention bucket, it is enabled by default. You can disable a bucket to tune
your event or flow retention.
About this task
When you disable a bucket, any new events or flows that match the requirements for the disabled bucket
are stored in the next bucket that matches the event or flow properties.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Data sources section, click Event Retention or Flow Retention.
3. If you configured tenants, in the Tenant list, select the tenant for the retention bucket that you
want to change.
Note: To manage retention policies for shared data in a multi-tenant configuration, choose N/A in
the Tenant list.
4. Select the retention bucket you want to disable, and then click Enable/Disable.
Deleting a Retention Bucket
When you delete a retention bucket, only the criteria that defines the bucket is deleted. The events or
flows that were stored in the bucket are collected by the default retention bucket. The default retention
period is 30 days; when that period elapses, the data is immediately deleted.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Data sources section, click Event Retention or Flow Retention.
3. If you configured tenants, in the Tenant list, select the tenant for the retention bucket that you
want to delete.
Note: To manage retention policies for shared data in a multi-tenant configuration, choose N/A in
the Tenant list.
4. Select the retention bucket that you want to delete, and then click Delete.
TASK: 1.5 Configure custom SNMP and email templates
SUBTASKS:
Customizing the SNMP trap output
IBM® QRadar® uses SNMP to send traps that provide information when rule conditions are met.
By default, QRadar uses the QRadar management information base (MIB) to manage the devices in the
communications network. However, you can customize the output of the SNMP traps to adhere to another
MIB.
Important: SNMPv3 rule responses are sent out as SNMP informs and not traps.
Procedure
1. Use SSH to log in to QRadar as the root user.
2. Go to the /opt/qradar/conf directory and make backup copies of the following files:
o eventCRE.snmp.xml
o offenseCRE.snmp.xml
3. Open the configuration file for editing.
o To edit the SNMP parameters for event rules, open the eventCRE.snmp.xml file.
o To edit the SNMP parameters for offense rules, open the offenseCRE.snmp.xml file.
4. To change the trap that is used for SNMP trap notification, update the following text with the
appropriate trap object identifier (OID):
<creSNMPTrap version="3" OID="1.3.6.1.4.1.20212.1.1" name="eventCRENotification">
5. Use the following table to help you update the variable binding information.
Each variable binding associates a particular MIB object instance with its current value.
Value types include alphanumeric characters with a minimum and maximum range.
Table 1. Value types for variable binding
6. For each of the value types, include any of the following fields:
Field Description Example
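The back-up-then-edit workflow for the trap OID can be sketched with cp and sed. A temporary copy is used here so the sketch is harmless to run; on an appliance you would work on /opt/qradar/conf/eventCRE.snmp.xml, and the replacement OID 1.3.6.1.4.1.99999.1.1 is a made-up example value.

```shell
# Sketch: back up eventCRE.snmp.xml, then swap the trap OID with sed.
f=/tmp/eventCRE.snmp.xml
printf '<creSNMPTrap version="3" OID="1.3.6.1.4.1.20212.1.1" name="eventCRENotification">\n' > "$f"
cp "$f" "$f.bak"   # keep a backup copy of the file before editing it
# 1.3.6.1.4.1.99999.1.1 is an illustrative OID, not a real assignment.
sed -i 's|OID="1\.3\.6\.1\.4\.1\.20212\.1\.1"|OID="1.3.6.1.4.1.99999.1.1"|' "$f"
grep -o 'OID="[^"]*"' "$f"
```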
You customize the content that is included in the email notification by editing the alert-config.xml file.
You must create a temporary directory where you can safely edit your copy of the files, without the risk
of overwriting the default files. After you edit and save the alert-config.xml file, you must run a script that
validates your changes. The validation script automatically applies your changes to a staging area. You
must deploy the full configuration to rebuild the configuration files for all appliances.
Procedure
1. Use SSH to log in to the QRadar® Console as the root user.
2. Create a new temporary directory to use to safely edit copies of the default files.
3. Type the following command to copy the files that are stored in the custom_alerts directory to the
temporary directory:
cp /store/configservices/staging/globalconfig/templates/custom_alerts/*.* <directory_name>
The <directory_name> is the name of the temporary directory that you created.
If the file does not exist in the staging directory, you might find it in
the /opt/qradar/conf/templates/custom_alerts directory.
4. Confirm that the files were copied successfully:
a. To list the files in the directory, type ls -lah.
b. Verify that the alert-config.xml file is listed.
5. Open the alert-config.xml file for editing.
6. Add a new <template> element for the new offense template.
a. Required: Enter offense for the template type value.
<templatetype>offense</templatetype>
b. Type a name for the offense template.
For example, <templatename>Default offense template</templatename>
If you have more than one template, ensure that the template name is unique.
The following lists provide the values that you can use in the offense template. $Label
values provide the label for the item and the $Value values provide the data.
Offense parameters
$Value.DefaultSubject
$Value.Intro
$Value.OffenseId
$Value.OffenseStartTime
$Value.OffenseUrl
$Value.OffenseMRSC
$Value.OffenseDescription
$Value.EventCounts
$Label.OffenseSourceSummary
$Value.OffenseSourceSummary
$Label.TopSourceIPs
$Value.TopSourceIPs
$Label.TopDestinationIPs
$Value.TopDestinationIPs
$Label.TopLogSources
$Value.TopLogSources
$Label.TopUsers
$Value.TopUsers
$Label.TopCategories
$Value.TopCategories
$Label.TopAnnotations
$Value.TopAnnotations
$Label.ContributingCreRules
$Value.ContributingCreRules
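Assembling the values above into a template, a minimal offense template might look like the following sketch. It mirrors the structure of the flow sample later in this document; the template name and the choice of parameters are illustrative, and the ${...} wrapping follows that sample's syntax and is an assumption here.

```xml
<template>
  <templatename>Default offense template</templatename>
  <templatetype>offense</templatetype>
  <active>true</active>
  <filename></filename>
  <subject>${Value.DefaultSubject}</subject>
  <body>
    Offense ${Value.OffenseId} started at ${Value.OffenseStartTime}
    ${Value.OffenseDescription}
    ${Label.TopSourceIPs}: ${Value.TopSourceIPs}
    ${Label.TopDestinationIPs}: ${Value.TopDestinationIPs}
    Details: ${Value.OffenseUrl}
  </body>
  <from></from>
  <to></to>
  <cc></cc>
  <bcc></bcc>
</template>
```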
You must create a temporary directory where you can safely edit your copy of the files, without the risk
of overwriting the default files. After you edit and save the alert-config.xml file, you must run a script that
validates your changes. The validation script automatically applies your changes to a staging area. You
must deploy the full configuration to rebuild the configuration files for all appliances.
Important: For IBM QRadar on Cloud, you must open a ticket with IBM Support to get a copy of the alert-
config.xml file. You must open another ticket to apply the updated alert-config.xml file to your QRadar on
Cloud instance.
Procedure
1. Use SSH to log in to the QRadar Console as the root user.
2. Create a new temporary directory to use to safely edit copies of the default files.
3. To copy the files that are stored in the custom_alerts directory to the temporary directory, type the
following command:
cp /store/configservices/staging/globalconfig/templates/custom_alerts/*.* <directory_name>
The <directory_name> is the name of the temporary directory that you created.
4. Confirm that the files were copied successfully:
a. To list the files in the directory, type ls -lah.
b. Verify that the alert-config.xml file is listed.
5. Open the alert-config.xml file for editing.
6. Edit the contents of the <template> element.
a. Required: Specify the type of template to use. Valid options are event or flow.
<templatetype>event</templatetype>
<templatetype>flow</templatetype>
b. Type a name for the email template:
<templatename>Default flow template</templatename>
If you have more than one template, ensure that the template name is unique.
c. Set the <active> element to true:
<active>true</active>
d. Edit the parameters in the <body> or <subject> elements to include the information that you
want to see.
Important: The <active></active> property must be set to true for each event and flow
template type that you want to appear as an option in QRadar. There must be at least one
active template for each type.
You must also ensure that the <filename></filename> property is left empty.
Notification parameters that you can use in the template:
Common Parameters Event Parameters Flow Parameters
SourceIP SourceDSCP
Destination SourcePrecedence
DestinationPort DestinationTOS
DestinationIP DestinationDSCP
DestinationUserName SourceASN
Protocol DestinationASN
StartTime InputIFIndex
Duration OutputIFIndex
StopTime FirstPacketTime
EventCount LastPacketTime
SourceV6 TotalSourceBytes
DestinationV6 TotalDestinationBytes
UserName TotalSourcePackets
DestinationNetwork TotalDestinationPackets
SourceNetwork SourceQOS
Severity DestinationQOS
CustomProperty SourcePayload
CustomPropertiesList
CalculatedProperty
CalculatedPropertiesList
AQLCustomProperty
AqlCustomPropertiesList
LogSourceId
LogSourceName
Note: If you do not want to retrieve the entire list when you use the CustomProperties,
CalculatedProperties, or AqlCustomProperties parameter, you can select a specific
property by using the following tags:
Custom Property: ${body.CustomProperty("<custom_property_name>")}
Calculated Property: ${body.CalculatedProperty("<calculated_property_name>")}
AQL Custom Property: ${body.AqlCustomProperty("<AQL_custom_property_name>")}
7. Optional: To create multiple email templates, copy and paste the following sample email template
in the <template> element in the alert-config.xml file. Repeat Step 6 for each template that you add.
Sample email template:
<template>
<templatename>Default Flow</templatename>
<templatetype>flow</templatetype>
<active>true</active>
<filename></filename>
<subject>${RuleName} Fired </subject>
<body>
The ${AppName} event custom rule engine sent an automated response:
${StartTime}
Protocol: ${Protocol}
QID: ${Qid}
Payload: ${Payload}
CustomPropertiesList: ${CustomPropertiesList}
</body>
<from></from>
<to></to>
<cc></cc>
<bcc></bcc>
</template>
Note: Currently, the DomainID for multi-tenancy or overlapping IP addresses isn’t available in
the custom email templates.
8. Save and close the alert-config.xml file.
9. Validate the changes by typing the following command.
/opt/qradar/bin/runCustAlertValidator.sh <directory_name>
The <directory_name> parameter is the name of the temporary directory that you created.
If the script validates the changes successfully, the following message is displayed: File alert-config.xml was deployed successfully to staging!
10. Deploy the changes in QRadar.
a. Log in to QRadar.
b. On the navigation menu ( ), click Admin.
c. Click Advanced > Deploy Full Configuration.
Important: QRadar continues to collect events when you deploy the full configuration.
When the event collection service must restart, QRadar does not restart it automatically. A
message is displayed that gives you the option to cancel the deployment and restart the
service at a more convenient time.
TASK: 1.6 Manage network hierarchy
TASK: 1.7 Use and manage reference data
Network objects are containers for Classless Inter-Domain Routing (CIDR) addresses. Any IP address
that is defined in a CIDR range in the network hierarchy is considered to be a local address. Any IP
address that is not defined in a CIDR range in the network hierarchy is considered to be a remote address.
A CIDR can belong only to one network object, but subsets of a CIDR range can belong to another
network object. Network traffic matches the most exact CIDR. A network object can have multiple CIDR
ranges assigned to it.
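The most-exact-CIDR rule can be sketched as a longest-prefix match. The network object names and ranges below are hypothetical, and this is only an illustration of the matching rule, not QRadar's internal implementation:

```python
import ipaddress

# Hypothetical network objects (name -> CIDR ranges); not QRadar data.
NETWORK_OBJECTS = {
    "Corporate": ["10.0.0.0/8"],
    "Engineering": ["10.1.0.0/16"],
    "BuildServers": ["10.1.5.0/24"],
}

def classify(ip):
    """Return the network object with the most exact (longest-prefix)
    CIDR that contains ip, or 'remote' if no defined CIDR matches."""
    addr = ipaddress.ip_address(ip)
    best_len, best_name = -1, "remote"
    for name, cidrs in NETWORK_OBJECTS.items():
        for cidr in cidrs:
            net = ipaddress.ip_network(cidr)
            if addr in net and net.prefixlen > best_len:
                best_len, best_name = net.prefixlen, name
    return best_name
```

Because 10.1.5.0/24 is more specific than 10.0.0.0/8, an address inside both ranges is classified against the /24 object.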
Some of the default building blocks and rules in QRadar use the default network hierarchy objects. Before
you change a default network hierarchy object, search the rules and building blocks to understand how the
object is used and which rules and building blocks might need adjustments after you modify the object. It
is important to keep the network hierarchy, rules, and building blocks up to date to prevent false offenses.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Network Hierarchy.
3. From the menu tree on the Network Views window, select the area of the network in which you
want to work.
4. To add network objects, click Add and complete the following fields:
Group: The network group in which to add the network object. Select from the Group list, or click Add a New Group.
Tip: When you add a network group, you can use periods in network group names to define network group hierarchies. For example, if you enter the group name A.B.C, you create a three-tier hierarchy with B as a subnode of A, and C as a subnode of B.
Restriction: The lengths of the name and the group combined must not be more than 255 characters.
IP/CIDR(s): Type an IP address or CIDR range for the network object, and click Add. You can add multiple IP addresses and CIDR ranges.
Country/Region: The country or region in which the network object is located.
Longitude and Latitude: The geographic location (longitude and latitude) of the network object. These fields are codependent.
5. Click Create.
6. Repeat the steps to add more network objects, or click Edit or Delete to work with existing
network objects.
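The dotted group-name convention from step 4 (A.B.C creating a three-tier hierarchy) can be illustrated with a short sketch; the parsing here is an assumption for illustration, not QRadar's implementation:

```python
def build_hierarchy(group_names):
    """Build a nested dict from dotted group names such as 'A.B.C',
    where each dot introduces a subnode of the part before it."""
    tree = {}
    for name in group_names:
        node = tree
        for part in name.split("."):
            # Descend into (or create) the subnode for this tier.
            node = node.setdefault(part, {})
    return tree
```

For example, the single name "A.B.C" yields A with subnode B, which in turn has subnode C.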
Types of reference data collections
IBM® QRadar® has different types of reference data collections that can handle different levels of data
complexity. The most common types are reference sets and reference maps.
If you want to use the same reference data in both QRadar SIEM and QRadar Risk Manager, use a
reference set. You can't use other types of reference data collections with QRadar Risk Manager.
Reference set: A collection of unique values. Use a reference set to compare a property value against a list, such as IP addresses or user names. Example: To verify whether a login ID that was used to log in to QRadar is assigned to a user, create a reference set with the LoginID parameter.
Reference map: A collection of data that maps a unique key to a value. Use a reference map to verify a unique combination of two property values. Example: To correlate user activity on your network, create a reference map that uses the LoginID parameter as a key, and the Username as a value.
Reference map of sets: A collection of data that maps a key to multiple values. Every key is unique and maps to one reference set. Use a reference map of sets to verify a combination of two property values against a list. Example: To test for authorized access to a patent, create a map of sets that uses a custom event property for Patent ID as the key, and the Username parameter as the value. Use the map of sets to populate a list of authorized users.
Reference map of maps: A collection of data that maps one key to another key, which is then mapped to a single value. Every key is unique and maps to one reference map. Use a reference map of maps to verify a combination of three property values. Example: To test for network bandwidth violations, create a map of maps that uses the Source IP parameter as the first key, the Application parameter as the second key, and the Total Bytes parameter as the value.
Reference table: A collection of data that maps one key to another key, which is then mapped to a single value. The second key is assigned a data type. Use a reference table to verify a combination of three property values when one of the properties is a specific data type. Example: Create a reference table that stores Username as the first key, Source IP as the second key with an assigned cidr data type, and Source Port as the value.
Table 1. Types of reference data collections
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Reference Set Management.
3. Select a reference set and click View Contents.
4. Click the Content tab to view information about each data element.
Tip: Use the search field to filter for all elements that match a keyword. You can't search for data
in the Time To Live column.
Learn more about the data elements:
The following table describes the information that is shown for each data element in the reference
set.
Domain: Domain-specific reference data can be viewed by tenant users who have access to the domain, MSSP Administrators, and users who do not have a tenant assignment. Users in all tenants can view shared reference data.
Value: The data element that is stored in the reference set. For example, the value might show user names or IP addresses.
Origin: Shows the user name when the data element is added manually, and the file name when the data was added by importing it from an external file. Shows the rule name when the data element is added in response to a rule.
Time to Live: The time that is remaining until this element is removed from the reference set.
Date Last Seen: The date and time that this element was last detected on your network.
5. Click the References tab to view the rules that use the reference set in a rule test or in a rule
response.
Rule Name: Name of the rule that is configured to use the reference set.
Type: Shows event, flow, common, or offense to indicate the type of data that the rule is tested against.
Enabled: A rule must be enabled for the custom rule engine to evaluate it.
When you edit a reference set, you can change the data values, but you can't change the type of data that
the reference set contains.
Before a reference set is deleted, QRadar® runs a dependency check to see whether the reference set has
rules that are associated with it.
Note: If you use techniques to obfuscate data on the event properties that you want to compare to the reference set
data, use an alphanumeric reference set and add the obfuscated data values.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Reference Set Management.
3. To add a reference set:
a. Click Add and configure the parameters.
Learn more about reference set parameters:
The following table describes each of the parameters that are used to configure a reference
set.
Name: The maximum length of the reference set name is 255 characters.
Type: Select the data types for the reference elements. You can't edit the Type parameter after you create a reference set. The IP type stores IPv4 addresses. The Alphanumeric (Ignore Case) type automatically changes any alphanumeric value to lowercase. To compare obfuscated event and flow properties to the reference data, you must use an alphanumeric reference set.
Time to Live of elements: Specifies when reference elements expire. If you select the Lives Forever default setting, the reference elements don't expire. If you specify an amount of time, indicate whether the time-to-live interval is based on when the data was first seen, or was last seen. QRadar removes expired elements from the reference set periodically (by default, every 5 minutes).
When elements expire: Specifies how expired reference elements are logged in the qradar.log file when they are removed from the reference set.
b. Click Create.
4. Click Edit or Delete to work with existing reference sets.
Tip: To delete multiple reference sets, use the Quick Search text box to search for the reference
sets that you want to delete, and then click Delete Listed.
When you use an external file to populate the reference data collection, the first non-comment line in the
file identifies the column names in the reference data collection. Each line after that is a data record that
gets added to the collection. While the data type for the reference collection values is specified when the
collection is created, each key is an alphanumeric string.
The following examples show how to format data in an external file that is used to populate reference data collections.

Reference map:
key1,data
key1,value1
key2,value2

Reference map of sets:
key1,data
key1,value1
key1,value2

Reference map of maps:
key1,key2,data
map1,key1,value1
map1,key2,value2

Table 1. Formatting data in an external file to be used for populating reference data collections
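As a sketch of how such a file maps onto a collection, the following parses map-of-sets records (header line first, then key,value rows). This illustrates the file format only; it is not QRadar's loader, and the comment convention is an assumption:

```python
from collections import defaultdict

def parse_map_of_sets(lines):
    """Parse external-file lines into (column_names, {key: set_of_values}).
    The first non-comment line names the columns; each later line is a
    key,value record whose value is added to that key's set."""
    records = [ln.strip() for ln in lines
               if ln.strip() and not ln.lstrip().startswith("#")]
    header, *rows = records
    data = defaultdict(set)
    for row in rows:
        key, value = row.split(",", 1)  # split on the first comma only
        data[key].add(value)
    return header.split(","), dict(data)
```

Feeding it the map-of-sets example above yields one key, key1, holding the set {value1, value2}.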
You can also create reference data collections by using the /reference_data endpoint in
the QRadar RESTful API.
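For instance, a reference set can be created with a POST to the /reference_data/sets endpoint, passing name and element_type as query parameters. The sketch below only assembles the request; the host and token are placeholders, and you should verify the exact parameter names against the interactive API documentation for your release:

```python
from urllib.parse import urlencode

def build_create_set_request(console_host, name, element_type, sec_token):
    """Assemble the method, URL, and headers for creating a reference
    set through the QRadar REST API (illustrative sketch)."""
    query = urlencode({"name": name, "element_type": element_type})
    url = "https://%s/api/reference_data/sets?%s" % (console_host, query)
    # QRadar API requests authenticate with a SEC token header.
    headers = {"SEC": sec_token, "Accept": "application/json"}
    return "POST", url, headers
```

The returned triple can then be passed to any HTTP client to issue the call.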
Procedure
1. Using SSH, log in to IBM QRadar as the root user.
2. Go to the /opt/qradar/bin directory.
3. To create the reference data collection, type the following command:
./ReferenceDataUtil.sh create name [SET | MAP | MAPOFSETS | MAPOFMAPS | REFTABLE] [ALN | NUM | IP | PORT | ALNIC | DATE] [-timeoutType=[FIRST_SEEN | LAST_SEEN]] [-timeToLive=]
4. To populate the map with data from an external file, type the following command:
./ReferenceDataUtil.sh load name filename [-encoding=...] [-sdf=" ... "]
Example
Here are some examples of how to use the command line to create different types of reference data
collections:
Create a map of sets that contains port values that will age out 3 hours after they were last seen:
Create a map of maps that contains numeric values that will age out 3 hours 15 minutes after they
were first seen:
Use the following examples to help you create queries to extract data from your reference data.
Use reference tables to get external metadata for user names that show up in events
SELECT
REFERENCETABLE('user_data','FullName',username) AS 'Full Name',
REFERENCETABLE('user_data','Location',username) AS 'Location',
REFERENCETABLE('user_data','Manager',username) AS 'Manager',
UNIQUECOUNT(username) AS 'Userid Count',
UNIQUECOUNT(sourceip) AS 'Source IP Count',
COUNT(*) AS 'Event Count'
FROM events
WHERE QIDNAME(qid) ILIKE '%logon%'
GROUP BY "Full Name", "Location", "Manager"
LAST 1 days
Use the reference table to get external data such as the full name, location, and manager name for users
who logged in to the network in the last 24 hours.
Get the global user IDs for users in events who are flagged for suspicious activity
SELECT
REFERENCEMAP('GlobalID_Mapping',username) AS 'Global ID',
REFERENCETABLE('user_data','FullName', 'Global ID') AS 'Full Name',
UNIQUECOUNT(username),
COUNT(*) AS 'Event count'
FROM events
WHERE RULENAME(creEventlist)
ILIKE '%suspicious%'
GROUP BY "Global ID"
LAST 2 days
In this example, individual users have multiple accounts across the network. The organization requires a
single view of a user's activity. Use reference data to map local user IDs to a global ID. The query returns
the user accounts that are used by a global ID for events that are flagged as suspicious.
Use a reference map lookup to extract global user names for user names that are returned
in events
SELECT
QIDNAME(qid) as 'Event name',
starttime AS Time,
sourceip AS 'Source IP',
destinationip AS 'Destination IP',
username AS 'Event Username',
REFERENCEMAP('GlobalID_Mapping', username) AS 'Global User'
FROM events
WHERE "Global User" = 'John Ariel'
LAST 1 days
Use the reference map to look up the global user names for user names that are returned in events. Use
the WHERE clause to return only events for the global user John Ariel. John Ariel might have several
different user names, but they are all mapped to one global user; for example, an external identity
mapping system can map several user names to the same global user.
Monitoring high network utilization by users
SELECT
LONG(REFERENCETABLE('PeerGroupStats', 'average',
REFERENCEMAP('PeerGroup',username)))
AS PGave,
LONG(REFERENCETABLE('PeerGroupStats', 'stdev',
REFERENCEMAP('PeerGroup',username)))
AS PGstd,
SUM(sourcebytes+destinationbytes) AS UserTotal
FROM flows
WHERE flowtype = 'L2R'
GROUP BY UserTotal
HAVING UserTotal > (PGave + 3*PGstd)
Returns user names whose flow utilization exceeds the peer-group average by more than three standard deviations.
You need a reference table that stores network utilization statistics for peer groups, and a reference map that assigns each user name to a peer group.
If you copy and paste a query example that contains single or double quotation marks from the AQL
Guide, you must retype the quotation marks to be sure that the query parses.
element_type: query parameter (required); data type String; MIME type text/plain; value String; sample <one of: ALN, NUM, IP, PORT, ALNIC, DATE>
timeout_type: query parameter (optional); data type String; MIME type text/plain; value String; sample <one of: UNKNOWN, FIRST_SEEN, LAST_SEEN>
c. Click Try It Out! to finish creating the reference data collection and to view the results.
5. To create a new reference map, follow these steps:
a. Click /maps.
b. Click POST and enter the relevant information in the Value fields.
Learn more about the parameters to create a reference map:
The following table provides information about the parameters that are required to create a
reference map:
element_type: query parameter (required); data type String; MIME type text/plain; value String; sample <one of: ALN, NUM, IP, PORT, ALNIC, DATE>
timeout_type: query parameter (optional); data type String; MIME type text/plain; value String; sample <one of: UNKNOWN, FIRST_SEEN, LAST_SEEN>
c. Click Try It Out! to finish creating the reference data collection and to view the results.
6. To create a new reference map of sets, follow these steps:
a. Select /map_of_sets.
b. Click POST and enter the relevant information in the Value fields.
Learn more about the parameters to create a reference map of sets:
The following table provides information about the parameters that are required to create a
reference map of sets:
element_type: query parameter (required); data type String; MIME type text/plain; value String; sample <one of: ALN, NUM, IP, PORT, ALNIC, DATE>
timeout_type: query parameter (optional); data type String; MIME type text/plain; value String; sample <one of: UNKNOWN, FIRST_SEEN, LAST_SEEN>
c. Click Try It Out! to finish creating the reference data collection and to view the results.
7. To create a new reference table or map of maps, follow these steps:
a. Click /tables.
b. Click POST and enter the relevant information in the Value fields.
Learn more about the parameters to create a reference table or a map of maps:
The following table provides information about the parameters that are required to create a
reference table or a map of maps:
Parameter Type Data Type Value MIME Type Sample
c. Click Try It Out! to finish creating the reference data collection and to view the results.
TASK: 1.8 Manage automatic update
SUBTASKS:
Automatic updates
Using IBM® QRadar®, you can either replace your existing configuration files or integrate the updated
files with your existing files.
The QRadar console must be connected to the internet to receive updates. If your console is not connected
to the internet, you must configure an internal update server.
Download software updates from IBM Fix Central (www.ibm.com/support/fixcentral/). The following types of updates are available:
Configuration updates, which include configuration file changes, vulnerability, QID map, and
security threat information updates.
DSM updates, which include corrections to parsing issues, scanner changes, and protocol updates.
Minor updates, which include items such as extra online help content or updated scripts.
You can select Auto Restart Service to allow automatic updates that require the user interface to restart.
A user interface disruption occurs when the service restarts. Alternatively, you can manually install the
updates from the Check for Updates window.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the System Configuration section, click Auto Update.
3. Click Change Settings.
4. On the Basic tab, select the schedule for updates.
a. In the Configuration Updates section, select the method that you want to use for updating
your configuration files.
To merge your existing configuration files with the server updates without
affecting your custom signatures, custom entries, and remote network
configurations, select Auto Integrate.
To override your customizations with server settings, select Auto Update.
b. In the DSM, Scanner, Protocol Updates section, select an option to install updates.
c. In the Major Updates section, select an option for receiving major updates for new
releases.
d. In the Minor Updates section, select an option for receiving patches for minor system
issues.
e. If you want to deploy update changes automatically after updates are installed, select
the Auto Deploy check box.
f. If you want to restart the user interface service automatically after updates are installed,
select the Auto Restart Service check box.
5. Click the Advanced tab to configure the update server and backup settings.
a. In the Web Server field, type the web server from which you want to obtain the updates.
The default web server is https://auto-update.qradar.ibmcloud.com/.
b. In the Directory field, type the directory location where the web server stores the
updates.
The default directory is autoupdates/.
c. Optional: Configure the settings for the proxy server.
If the application server uses a proxy server to connect to the internet, you must configure
the proxy server. If you are using an authenticated proxy, you must provide the username
and password for the proxy server.
d. In the Backup Retention Period list, type or select the number of days that you want to
store files that are replaced during the update process.
The files are stored in the location that is specified in the Backup Location. The minimum
is one day and the maximum is 65535 years.
e. In the Backup Location field, type the location where you want to store backup files.
f. In the Download Path field, type the directory path location to which you want to store
DSM, minor, and major updates.
The default directory path is /store/configservices/staging/updates.
6. Click Save.
Problem
IBM® is migrating QRadar SIEM auto update servers to a new location in the IBM Cloud®.
This notice is intended to remind administrators that they must change their auto update
configuration to use a new IBM Cloud® web server to avoid interruptions with daily and
weekly software updates. Administrators who use IP-based firewall rules in their organization
must also update their corporate firewall rules to allow traffic to the IBM Cloud auto update
web server.
Resolving The Problem
Notice: Administrators must change their auto update server configuration before 30 November
2020 to avoid interruptions with their auto update downloads. A benign error message can be
displayed in the user interface when you update your configuration, as described in APAR
IJ29298.
About
IBM® is migrating the QRadar® auto update servers to the IBM Cloud to better serve updates
globally, starting on 27 July 2020. Administrators with firewalls that use IP-based rules will
experience interruptions to auto update downloads after 30 November 2020. This server change
allows a single host name and IP address to be used through IBM Cloud® clusters for all
QRadar auto updates globally.
Affected versions
All QRadar® products and versions are impacted by this change.
IMPORTANT: Administrators who fail to update their corporate firewalls might experience an
interruption in service after 30 November 2020. QRadar® Support recommends that all
administrators update their QRadar Console's auto update settings during a maintenance
window and confirm that auto updates complete successfully.
Summary
Administrators can update their QRadar auto update servers to use the new server location
starting on 27 July 2020. Administrators must contact their corporate firewall team to ensure
that any IP-based firewall rules are updated before 30 November 2020 to use the new static IP
address 169.47.251.244. QRadar Support recommends that administrators add both the
auto update host name and the static IP address to their corporate firewall policy rules to allow
traffic.
Asset data usually comes from one of the following asset data sources:
Events
Event payloads, such as those created by DHCP or authentication servers, often contain user logins, IP
addresses, host names, MAC addresses, and other asset information. This data is immediately provided
to the asset database to help determine which asset the asset update applies to.
Events are the primary cause for asset growth deviations.
Flows
Flow payloads contain communication information such as IP address, port, and protocol that is
collected over regular, configurable intervals. At the end of each interval, the data is provided to the
asset database, one IP address at a time.
Because asset data from flows is paired with an asset based on a single identifier, the IP address,
flow data is never the cause of asset growth deviations.
User interface
Users who have the Assets role can import or provide asset information directly to the asset database.
Asset updates that are provided directly by a user are for a specific asset. Therefore the asset
reconciliation stage is bypassed.
Asset updates that are provided by users do not introduce asset growth deviations.
When you view the asset profile, some fields might be blank. Blank fields exist when the system did not
receive this information in an asset update, or the information exceeded the asset retention period. The
default retention period is 120 days. An IP address that appears as 0.0.0.0 indicates that the asset does not
contain IP address information.
Incoming asset data workflow
IBM® QRadar® uses identity information in an event payload to determine whether to create a new asset
or update an existing asset.
1. QRadar receives the event. The asset profiler examines the event payload for identity information.
2. If the identity information includes a MAC address, a NetBIOS host name, or a DNS host name
that are already associated with an asset in the asset database, then that asset is updated with any
new information.
3. If the only available identity information is an IP address, the system reconciles the update to the
existing asset that has the same IP address.
4. If an asset update has an IP address that matches an existing asset but the other identity
information does not match, the system uses other information to rule out a false-positive match
before the existing asset is updated.
5. If the identity information does not match an existing asset in the database, then a new asset is
created based on the information in the event payload.
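The decision order above can be sketched as follows. The dictionary model and field names are simplifications for illustration; real reconciliation, including the false-positive checks in step 4, is more involved:

```python
def reconcile(asset_db, update):
    """Apply an identity update to a list of asset dicts, matching on
    definitive identifiers first, then IP address, else creating a
    new asset. Returns the asset that received the update."""
    # Definitive identifiers: MAC address, NetBIOS name, DNS host name.
    for key in ("mac", "netbios", "dns"):
        value = update.get(key)
        if value:
            for asset in asset_db:
                if asset.get(key) == value:
                    asset.update(update)  # merge the new identity data
                    return asset
    # Fall back to IP address only (QRadar treats this match cautiously
    # to avoid false positives; that extra check is omitted here).
    if update.get("ip"):
        for asset in asset_db:
            if asset.get("ip") == update["ip"]:
                asset.update(update)
                return asset
    # No match: create a new asset from the update.
    asset_db.append(dict(update))
    return asset_db[-1]
```

A second update carrying the same MAC address lands on the existing asset and enriches it, rather than creating a duplicate.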
When you import asset profile data in CSV format, the file must be in the following format:
ip,name,weight,description
The following table describes the parameters that you must configure.
IP: Specifies any valid IP address in the dot decimal format; for example, 192.168.5.34.
Name: Specifies the name of the asset, up to 255 characters in length; for example, WebServer01. Commas are not valid in this field because they invalidate the import process.
Weight: Specifies a number 0 - 10, which indicates the importance of the asset on your network. A value of 0 denotes low importance, while 10 denotes high importance.
Description: Specifies a textual description for this asset, up to 255 characters in length. This value is optional.
Table 1. Asset profile import CSV format parameters
The CSV import process merges any asset profile information that is stored in your QRadar® system.
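A row in that format can be checked before import with a sketch like the following; the validation mirrors the rules in the table above, but the function itself is illustrative and not part of QRadar:

```python
import ipaddress

def validate_asset_row(line):
    """Validate one ip,name,weight,description record and return it as
    a dict; raises ValueError when a field breaks the import rules."""
    parts = line.rstrip("\n").split(",")
    if len(parts) != 4:
        # A comma inside a field produces extra parts, which is why
        # commas invalidate the import.
        raise ValueError("expected ip,name,weight,description")
    ip, name, weight, description = parts
    ipaddress.IPv4Address(ip)                 # dot-decimal IP address
    if not 0 < len(name) <= 255:
        raise ValueError("name must be 1-255 characters")
    if not 0 <= int(weight) <= 10:
        raise ValueError("weight must be 0-10")
    if len(description) > 255:
        raise ValueError("description is limited to 255 characters")
    return {"ip": ip, "name": name,
            "weight": int(weight), "description": description}
```

Running it against the documented example row (192.168.5.34, WebServer01, weight 5) succeeds, while a name containing a comma is rejected.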
<estimated IP addresses per day> is the number of IP addresses that a single asset might
accumulate in one day under normal conditions
<retention time (days)> is the preferred amount of time to retain the asset IP addresses
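As a rough sizing sketch (an assumption for illustration, not an official QRadar formula), the two quantities above bound how many IP addresses a single asset might accumulate over the retention window:

```python
def ip_retention_capacity(estimated_ips_per_day, retention_days):
    """Approximate number of IP addresses a single asset might hold
    during the retention period under normal conditions."""
    return estimated_ips_per_day * retention_days
```

For example, an asset that normally picks up 2 IP addresses per day with a 120-day retention period could hold roughly 240 addresses.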
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Assets section, click Asset Profiler Configuration.
3. Click Asset Profiler Configuration.
4. Adjust the configuration values and click Save.
5. Deploy the changes into your environment for the updates to take effect.
Each asset update must contain trusted information about a single asset. When QRadar receives an asset
update, the system determines the asset to which the update applies.
Asset reconciliation is the process of determining the relationship between asset updates and the related
asset in the asset database. Asset reconciliation occurs after QRadar receives the update but before the
information is written to the asset database.
Identity information
Every asset must contain at least one piece of identity data. Subsequent updates that contain one or more
pieces of that same identity data are reconciled with the asset that owns that data. Updates that are based
on IP addresses are handled carefully to avoid false-positive asset matches. False positive asset matches
occur when one physical asset is assigned ownership of an IP address that was previously owned by
another asset in the system.
When multiple pieces of identity data are provided, the asset profiler prioritizes the information from the most
deterministic to the least in the following order:
MAC address
NetBIOS host name
DNS host name
IP address
MAC addresses, NetBIOS host names, and DNS host names are unique and therefore are considered as
definitive identity data. Incoming updates that match an existing asset only by the IP address are handled
differently than updates that match more definitive identity data.
In domain-aware environments, the asset reconciliation exclusion rules track the behavior of asset data
separately for each domain.
Asset merging
Asset merging is the process where the information for one asset is combined with the information for
another asset under the premise that they are actually the same physical asset.
Asset merging occurs when an asset update contains identity data that matches two different asset
profiles. For example, a single update that contains a NetBIOS host name that matches one asset profile
and a MAC address that matches a different asset profile might trigger an asset merge.
Some systems can cause high volumes of asset merging because they have asset data sources that
inadvertently combine identity information from two different physical assets into a single asset update.
Some examples of these systems include the following environments:
Assets that have many IP addresses, MAC addresses, or host names show deviations in asset growth and
can trigger system notifications.
Asset growth deviations occur when the number of asset updates for a single device grows beyond the
limit that is set by the retention threshold for a specific type of the identity information. Proper handling
of asset growth deviations is critical to maintaining an accurate asset model.
At the root of every asset growth deviation is an asset data source whose data is untrustworthy for
updating the asset model. When a potential asset growth deviation is identified, you must look at the
source of the information to determine whether there is a reasonable explanation for the asset to
accumulate large amounts of identity data. The cause of an asset growth deviation is specific to an
environment.
Consider a virtual private network (VPN) server in a Dynamic Host Configuration Protocol (DHCP)
network. The VPN server is configured to assign IP addresses to incoming VPN clients by proxying
DHCP requests on behalf of the client to the network's DHCP server.
From the perspective of the DHCP server, the same MAC address repeatedly requests many IP address
assignments. In the context of network operations, the VPN server is delegating the IP addresses to the
clients, but the DHCP server can't distinguish when a request is made by one asset on behalf of another.
The DHCP server log, which is configured as a QRadar log source, generates a DHCP acknowledgment
(DHCP ACK) event that associates the MAC address of the VPN server with the IP address that it
assigned to the VPN client. When asset reconciliation occurs, the system reconciles this event by MAC
address, which results in a single existing asset that grows by one IP address for every DHCP ACK event
that is parsed.
Eventually, one asset profile contains every IP address that was allocated to the VPN server. This asset
growth deviation is caused by asset updates that contain information about more than one asset.
Threshold settings
When an asset in the database reaches a specific number of properties, such as multiple IP addresses or
MAC addresses, QRadar blocks that asset from receiving more updates.
The Asset Profiler threshold settings specify the conditions under which an asset is blocked from updates.
The asset is updated normally up to the threshold value. When the system collects enough data to exceed
the threshold, the asset shows an asset growth deviation. Future updates to the asset are blocked until the
growth deviation is rectified.
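The blocking behavior can be modeled in miniature; the threshold value below is hypothetical (the real limits come from the Asset Profiler threshold settings):

```python
class AssetProfile:
    """Toy model of an asset that stops accepting updates after its
    IP-address count exceeds a growth threshold."""
    IP_THRESHOLD = 75  # hypothetical limit, not a QRadar default

    def __init__(self):
        self.ips = set()
        self.blocked = False

    def add_ip(self, ip):
        """Record an IP address; returns False when the asset is blocked."""
        if self.blocked:
            return False  # updates are blocked until the deviation is fixed
        self.ips.add(ip)
        if len(self.ips) > self.IP_THRESHOLD:
            self.blocked = True  # growth deviation: block future updates
        return True
```

The asset accepts updates normally until the count crosses the threshold, after which every further update is rejected, mirroring the behavior described above.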
The system notification messages include links to reports to help you identify the assets that have growth
deviations.
A mobile device that travels from office to office frequently and is assigned a new IP address
whenever it logs in.
A device that connects to a public Wi-Fi network with short IP address leases, such as at a university
campus, might collect large volumes of asset data over a semester.
QRadar might mistakenly report this activity as an asset growth deviation. If the asset growth is
legitimate, you can take steps to ensure that the asset data is handled properly. For more information,
see c_qradar_adm_prevent_asset_grwth_deviat.html.
Example: How configuration errors for log source extensions can cause asset growth deviations
Customized log source extensions that are improperly configured can cause asset growth deviations.
You configure a customized log source extension to provide asset updates to IBM® QRadar® by parsing
user names from the event payload that is on a central log server. You configure the log source extension
to override the event host name property so that the asset updates that are generated by the custom log
source always specify the DNS host name of the central log server.
Instead of QRadar receiving an update that has the host name of the asset that the user logged in to, the
log source generates many asset updates that all have the same host name.
In this situation, the asset growth deviation is caused by one asset profile that contains many IP addresses
and user names.
Use the information in the notification payload to identify the assets that are contributing to the asset
growth deviation and determine what is causing the abnormal growth. The notification provides a link to
a report of all assets that experienced deviating asset growth over the past 24 hours.
After you resolve the asset growth deviation in your environment, you can run the report again.
1. Click the Log Activity tab and click Search > New Search.
2. Select the Deviating Asset Growth: Asset Report saved search.
3. Use the report to identify and repair inaccurate asset data that was created during the deviation.
Asset exclusion rules monitor asset data for consistency and integrity. The rules track specific pieces of
asset data over time to ensure that they are consistently being observed with the same subset of data
within a reasonable time.
For example, if an asset update includes both a MAC address and a DNS host name, the MAC address is
associated with that DNS host name for a sustained period. Subsequent asset updates that contain that
MAC address also contain that same DNS host name when one is included in the asset update. If the
MAC address suddenly is associated with a different DNS host name for a short period, the change is
monitored. If the MAC address changes again within a short period, the MAC address is flagged as
contributing to an instance of deviating or abnormal asset growth.
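The consistency check described above can be sketched as follows. This is illustrative code only, and the one-hour observation window is an assumed value, not QRadar's actual interval:

```python
# Hypothetical sketch of an asset exclusion rule (illustrative only):
# a MAC address whose DNS host name changes twice within a short window
# is flagged as contributing to deviating asset growth.
WINDOW = 3600  # assumed observation window in seconds

def observe(state, mac, hostname, ts):
    """Track MAC-to-hostname pairings; flag rapid repeated changes."""
    prev = state.get(mac)
    if prev is None:
        state[mac] = {"hostname": hostname, "changed_at": None, "flagged": False}
        return False
    if hostname != prev["hostname"]:
        recently_changed = (
            prev["changed_at"] is not None and ts - prev["changed_at"] < WINDOW
        )
        if recently_changed:
            prev["flagged"] = True  # second change inside the window
        prev["hostname"] = hostname
        prev["changed_at"] = ts
    return prev["flagged"]

state = {}
observe(state, "aa:bb", "host-a.example.com", 0)    # first sighting
observe(state, "aa:bb", "host-b.example.com", 100)  # one change: monitored
flagged = observe(state, "aa:bb", "host-c.example.com", 200)  # second change
print(flagged)  # True: flagged as deviating asset growth
```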
If your blocklists are populating too aggressively, you can tune the asset reconciliation exclusion
rules that populate them.
If you want to add the data to the asset database, you can remove the asset data from the blocklist
and add it to the corresponding asset allowlist. Adding asset data to the allowlist prevents it from
inadvertently reappearing on the blocklist.
You can adjust the retention time based on the type of asset identity data that is in the event. For example,
if multiple IP addresses are merging under one asset, you can change the Asset IP Retention period from
120 days to a lower value.
When you change the asset retention period for a specific type of asset data, the new retention period is
applied to all asset data in QRadar. Existing asset data that already exceeds the new threshold is removed
when the deployment is complete. To ensure that you can always identify named hosts even when the
asset data is beyond the retention period, the asset retention cleanup process does not remove the last
known host name value for an asset.
Before you determine how many days to retain the asset data, be aware that a longer retention period
increases the probability that stale data will contribute to asset growth deviation messages.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Asset Profiler Configuration.
3. Click Asset Profiler Retention Configuration.
4. Adjust the retention values and click Save.
5. Deploy the changes into your environment for the updates to take effect.
Apps overview
After a developer creates an app, IBM certifies and publishes it in the IBM Security App
Exchange. QRadar administrators can then browse, download, and install apps into QRadar to address
specific security requirements.
The IBM Security App Exchange is a community-based sharing hub that you use to share apps across IBM
Security products. By participating in App Exchange, you can use the rapidly assembled, innovative workflows,
visualizations, analytics, and use cases that are packaged into apps to address specific security requirements.
Easy-to-use solutions are developed by partners, consultants, and developers to address key security challenges.
To detect and remediate threats, use these shared security components, from real-time correlation and behavioral
modeling to custom responses and reference data.
Note: The combined memory requirements of all the apps that are installed on a QRadar Console cannot exceed 10
percent of the total available memory, or the apps won't work. If you exceed the 10 percent memory allocation
and want to run more apps, use a dedicated appliance for your apps (the AppNode appliance for QRadar V7.3.1, or
the AppHost appliance for QRadar V7.3.2 or later).
The IBM QRadar Assistant app helps you to manage and update your app and content extension
inventory, view app and content extension recommendations, follow the QRadar Twitter feed, and get
links to useful information. The app is automatically installed with QRadar V7.3.2 or later.
The following diagram shows the workflow for an app and the role that is typically responsible for the
work.
Apps create or add new functions in QRadar by providing new tabs, API methods, dashboard
items, menus, toolbar buttons, configuration pages, and more within the QRadar user interface.
You download apps from the IBM Security App Exchange. Apps that are created by using the
GUI Application Framework Software Development Kit integrate with the QRadar user interface
to deliver new security intelligence capabilities or extend the current functions.
Every downloaded file from the IBM Security App Exchange is known as an extension. An
extension can consist of an app or security product enhancement (content extension) that is
packaged as an archive (.zip) file, which you can deploy on QRadar by using the Extensions
Management tool on the Admin tab.
Who can create an app?
You can use the GUI Application Framework Software Development Kit to create apps. For more
information about the GUI Application Framework Software Development Kit, see the IBM
Security QRadar App Framework Guide.
How do I share my app?
Only certified content is shared in the IBM Security App Exchange, a new platform for
collaborating where you can respond quickly and address your security and platform enhancement
requirements. In the IBM Security App Exchange, you can find available apps, discover their
purpose, and what they look like, and learn what other users say about the apps.
How do I get an app that I downloaded into QRadar?
If your app requires a minimum memory allocation, you must specify this allocation as part of
your app manifest. The default allocation is 200 MB.
What is the difference between an app, a content extension, and a content pack?
Extension
From within QRadar, an extension is a term that is used for everything that you download from
the IBM Security App Exchange. Sometimes that extension contains individual content items,
such as custom AQL functions or custom actions, and sometimes the extension contains an app
that is developed by using the GUI App Framework Software Development Kit. You use
the Extensions Management tool to install extensions.
App
An app is content that is created when you use the GUI App Framework Software Development
Kit. The app extends or creates new functions in QRadar.
Content extension
A content extension is typically used to update QRadar security template information or add new
content such as rules, reports, searches, logos, reference sets, custom properties. Content
extensions are not created by using the GUI Application Framework Software Development Kit.
Content extensions differ from content packs: you download content packs (RPM files) from IBM Fix
Central (www.ibm.com/support/fixcentral), whereas you download content extensions from the IBM Security
App Exchange.
Application data is backed up separately from the application configuration by using an easy-to-use script
that runs nightly. You can also use the script to restore the app data, and to configure backup times and
data retention periods for app data.
This script is on both the QRadar® Console and the App Host if one is installed. The script backs up app
data only if apps are on the current host.
Procedure
1. Use SSH to log in to your Console or your App Host as the root user.
2. Go to the /opt/qradar/bin/ directory.
3. To back up app data manually, enter the following command:
./app-volume-backup.py backup
The app-volume-backup.py script runs nightly at 2:30 AM local time to back up all
installed apps. Backup archives are stored in the /store/apps/backup folder. You can
change the backup archives location by editing the APP_VOLUME_BACKUP_DIR variable
in /store/configservices/staging/globalconfig/nva.conf. You must deploy changes after you
edit this variable.
o To view all data backups for installed apps, enter the following command:
./app-volume-backup.py ls
This command outputs all backup archives that are stored in the backup archives folder.
o To restore a backup archive, enter the following command:
./app-volume-backup.py restore
o New in 7.5.0: To restore data for a specific application instance, rather than restoring all
instances, enter the following command:
o New in 7.5.0 Update Package 6: To back up data for an individual application instance,
enter the following command:
o New in 7.5.0 Update Package 6: To restore data for an individual application instance,
enter the following command:
o By default, all backup archives are retained for one week. The retention process runs
nightly at 2:30 AM local time with the backup.
To perform retention manually, and use the default retention period, enter the
following command:
./app-volume-backup.py retention
You can also set the retention period manually by adding -t (time - defaults to 1)
and -p (period - defaults to 0) switches.
The -p switch accepts three values: 0 for a week, 1 for a day, and 2 for an hour.
For example, to set the retention period for a backup to 3 weeks, enter the
following command:
./app-volume-backup.py retention -t 3 -p 0
o If you want to change the retention time that is used by the nightly timer, add flags to the
retention command found in the following systemd service file.
/usr/lib/systemd/system/app-data-backup.service
For example, to change the retention period that is used by the nightly retention process to
5 days, locate the following line:
ExecStart=/opt/qradar/bin/app-volume-backup.py retention
Replace it with:
ExecStart=/opt/qradar/bin/app-volume-backup.py retention -t 5 -p 1
Save your changes, and run the systemctl daemon-reload command for systemd to apply the
changes.
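The retention arithmetic behind the -t and -p switches can be sketched as follows. The unit mapping is taken from the description above; the function itself is illustrative and is not how the script is actually implemented:

```python
# Sketch of how -t (count) and -p (unit) combine into a retention window.
# Semantics are assumed from the switch descriptions above, not read from
# the app-volume-backup.py script itself.
UNIT_SECONDS = {0: 7 * 24 * 3600,  # 0 = weeks
                1: 24 * 3600,      # 1 = days
                2: 3600}           # 2 = hours

def retention_seconds(t=1, p=0):
    """Defaults match the switches: -t defaults to 1, -p defaults to 0."""
    return t * UNIT_SECONDS[p]

print(retention_seconds())       # default: one week (604800 seconds)
print(retention_seconds(3, 0))   # -t 3 -p 0: three weeks
print(retention_seconds(5, 1))   # -t 5 -p 1: five days
```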
App containers are restarted automatically after the restore is complete.
a. Protocol
b. Address
c. Port
d. User name
e. Password
Authorized By
The name of the user or administrator that authorized the addition of the service.
Authentication Token
Removed in 7.4.3. The token that is associated with this authorized service.
User Role
The user role that is associated with this authorized service.
Security Profile
The security profile that is associated with this authorized service.
Expires
The date and time that the authorized service expires. By default, the authorized service is valid for 30 days.
For example, log sources can provide large volumes of asset identity information to the asset database.
They provide IBM® QRadar® with near real-time changes to asset information and they can keep your
asset database current. But log sources are most often the source of asset growth deviations and other
asset-related anomalies.
When a log source sends incorrect asset data to QRadar, try to fix the log source so that the data it sends
is usable by the asset database. If the log source cannot be fixed, you can build an identity exclusion
search that blocks the asset information from entering the asset database.
You can also use an identity exclusion search with the filter Identity Username Is Any Of Anonymous Logon to
ensure that you are not updating assets that are related to service accounts or automated services.
Differences between identity exclusion searches and blocklists
While identity exclusion searches appear to have similar functionality to asset blocklists, there are
significant differences.
Blocklists can specify only raw asset data, such as MAC addresses and host names, that is to be excluded.
Identity exclusion searches filter out asset data based on search fields like log source, category, and event
name.
Blocklists do not account for the type of data source that is providing the data, whereas identity exclusion
searches can be applied to events only. Identity exclusion searches can block asset updates based on
common event search fields, such as event type, event name, category, and log source.
The filters that you create for the search must match events that you want to exclude, not the events that
you want to keep.
You might find it helpful to run the search against events that are already in the system. However, when
you save the search, you must select Real Time (streaming) in the Timespan options. If you do not
choose this setting, the search does not match any results when it runs against the live stream of events
that are coming into QRadar.
When you update the saved identity exclusion search without changing the name, the identity exclusion
list that is used by the Asset Profiler is updated. For example, you might edit the search to add more
filtering of the asset data that you want to exclude. The new values are included and the asset exclusion
starts immediately after the search is saved.
Procedure
1. Create a search to identify the events that do not provide asset data to the asset database.
a. On the Log Activity tab, click Search > New Search.
b. Create the search by adding search criteria and filters to match the events that you want to
exclude from asset updates.
c. In the Time Range box, select Real Time (streaming) and then click Filter to run the
search.
d. On the search results screen, click Save Criteria and provide the information for the saved
search.
Note: You can assign the saved search to a search group. An Identity Exclusion search
group exists in the Authentication, Identity and User Activity folder.
e. Click OK to save the search.
2. Identify the search that you created as an identity exclusion search.
a. On the navigation menu, click Admin.
b. In the System Configuration section, click Asset Profiler Configuration.
c. Click Manage Identity Exclusion at the bottom of the screen.
d. Select the identity exclusion search that you created from the list of searches on the left
and click the add icon (>).
Tip: If you can't find the search, type the first few letters into the filter at the top of the list.
e. Click Save.
3. On the Admin tab, click Deploy changes for the updates to take effect.
You can set the following types of restrictions on event and flow searches:
The number of records that are processed by the Ariel query server.
Note: Ariel search stops when the record limit is reached, but all in-progress search results are
returned to the search manager and are not truncated. Therefore, the search result set is often
larger than the specified record limit.
User-based restrictions
User-based restrictions define limits for an individual user, and they take precedence over role and tenant
restrictions.
For example, your organization hires university students to work with the junior analysts in your SOC.
The students have the same user role as the other junior analysts, but you apply more restrictive user-
based restrictions until the students are properly trained in building QRadar® queries.
Role-based restrictions
Role-based restrictions allow you to define groups of users who require different levels of access to
your QRadar deployment. By setting role-based restrictions, you can balance the needs of different types
of users.
Tenant-based restrictions
As an MSSP, you might define standard resource restrictions based on a set of criteria that each tenant is
compared to. For example, the standard configuration for a medium-sized tenant might include resource
restrictions that limit searches to accessing only the last 14 days of data and a maximum of 10,000 records
returned.
Execution time restrictions specify the maximum amount of time that a query runs before data is
returned.
Time span restrictions specify the time span of the data to be searched.
Record limit restrictions specify the number of data records that are returned by a search query.
Users who run searches that are limited by resource restrictions see a resource restriction icon next
to the search criteria.
Note: The search result set is often larger than the specified record limit. When the record limit is reached, the
search manager signals all search participants to stop (there are multiple search participants even on a single
system), but results continue to accumulate until the search fully stops on all participants. All search results are
added to the result set.
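The overshoot described in the note can be simulated with a toy model. This is a hypothetical sketch; the participant count and batch size are invented values, not QRadar internals:

```python
# Hypothetical sketch of why a result set exceeds the record limit:
# the stop signal is sent when the limit is reached, but each search
# participant finishes its in-progress batch before stopping.
RECORD_LIMIT = 1000
BATCH = 300  # assumed per-participant batch size

def run_search(participants):
    results = 0
    stop = False
    while not stop:
        for _ in range(participants):
            results += BATCH            # each participant returns a full batch
            if results >= RECORD_LIMIT:
                stop = True             # signal all participants to stop
    return results                      # often larger than RECORD_LIMIT

total = run_search(participants=3)
print(total)  # 1800: results kept accumulating after the stop signal
```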
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Resource Restrictions.
3. If your deployment has tenants that are configured, click Role or Tenant to specify the type of
restrictions to set.
4. Double-click the role or tenant that you want to set restrictions for.
5. To set restrictions for all users who are assigned to the user role or tenant, follow these steps:
a. Click the summary row at the top to open the Edit Restriction dialog box.
b. Click Enabled for the type of restriction that you want to set, and specify the restriction
values.
c. Click Save.
6. To set restrictions for a specific user, follow these steps:
a. Double-click the user that you want to set the restrictions for.
To search for a user, type the user name in the filter field.
b. Click Enabled for the type of restriction that you want to set, and specify the restriction
values.
c. Click Save.
Optimize event and flow payload searches by enabling a payload index on the Log
Activity and Network Activity tabs.
Provide a faster initial deployment and easier tuning by automatically or manually adding servers
to building blocks.
Configure responses to event, flow, and offense conditions by creating or modifying custom rules
and anomaly detection rules.
Ensure that each host in your network creates offenses that are based on the most current rules,
discovered servers, and network hierarchy.
Payload indexing
Use the Quick Filter function, which is available on the Log Activity and Network Activity tabs, to
search event and flow payloads.
To optimize the Quick Filter, you can enable a payload index Quick Filter property.
Enabling payload indexing might decrease system performance. Monitor the index statistics after you
enable payload indexing on the Quick Filter property.
Enabling payload indexing
You can optimize event and flow payload searches by enabling a payload index on the Log
Activity and Network Activity Quick Filter property.
Procedure
1. Click the Admin tab.
2. In the System Configuration section, click Index Management.
3. In the Search field, type quick filter.
4. Right-click the Quick Filter property that you want to index.
5. Click Enable Index.
6. Click Save, and then click OK.
7. Optional: To disable a payload index, choose one of the following options:
o Click Disable Index.
o Right-click a property and select Disable Index from the menu.
Servers and building blocks
IBM QRadar automatically discovers and classifies servers in your network, providing a faster initial
deployment and easier tuning when network changes occur.
To ensure that the appropriate rules are applied to the server type, you can add individual devices or entire
address ranges of devices. You can manually enter server types that do not conform to unique protocols into their
respective Host Definition building block. Adding server types to building blocks reduces the need for further
false positive tuning.
The Server Discovery function uses the asset profile database to discover several types of servers on your
network. The Server Discovery function lists automatically discovered servers and you can select which
servers you want to include in building blocks.
By using building blocks, you can reuse specific rule tests in other rules. You can reduce the number of false
positives by using building blocks to tune QRadar and enable extra correlation rules.
Adding servers to building blocks automatically
The Server Discovery function uses the asset profile database to discover different server types that are
based on port definitions. Then, you can select the servers to add to a server-type building block for rules.
Procedure
1. Click Assets > Server Discovery.
2. In the Server Type list, select the server type that you want to discover.
When you clean the SIM model, all existing offenses are closed. Cleaning the SIM model does not affect
existing events and flows.
Creating a custom rule
IBM® QRadar® includes rules that detect a wide range of activities, including excessive firewall denies,
multiple failed login attempts, and potential botnet activity. You can also create your own rules to detect unusual
activity.
When you define rule tests, test against the smallest data possible. Testing in this way helps rule test
performance and ensures that you don't create expensive rules. To optimize performance, start with broad
categories that narrow the data that is evaluated by the rule test. For example, start with a rule test for a
specific log source type, network location, flow source, or context (R2L, L2R, L2L). Any mid-level tests
might include IP addresses, port traffic, or any other associated test. The rule must test payload and regex
expressions last.
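The ordering advice above can be sketched in code. This is an illustrative example, not QRadar's rule engine; the log source type, context value, and regex are invented. Because Python's `and` short-circuits, the expensive payload regex only runs for events that pass the cheap, broad filters first:

```python
import re

# Illustrative sketch (not QRadar internals): order rule tests so cheap,
# broad filters run first and the expensive payload regex runs last.
SUSPICIOUS = re.compile(r"Failed password for \w+")  # expensive payload test

def rule_matches(event):
    return (
        event["log_source_type"] == "LinuxServer"  # cheap: narrows data first
        and event["context"] == "R2L"              # cheap: remote-to-local only
        and SUSPICIOUS.search(event["payload"]) is not None  # expensive: last
    )

event = {"log_source_type": "LinuxServer", "context": "R2L",
         "payload": "Failed password for root from 198.51.100.7"}
print(rule_matches(event))                        # True
print(rule_matches({**event, "context": "L2L"}))  # False; the regex never runs
```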
Similar rules are grouped by category. For example, Audit, Exploit, DDoS, Recon, and more. When you
delete an item from a group, the rule or building block is only deleted from the group; it remains available
on the Rules page. When you delete a group, the rules or building blocks of that group remain available
on the Rules page.
Procedure
1. From the Offenses, Log Activity, or Network Activity tabs, click Rules.
2. From the Display list, select Rules to create a new rule.
3. Optional: From the Display list, select Building Blocks to create a new rule by using building
blocks.
4. From the Actions list, select a rule type.
Each rule type tests against incoming data from different sources in real time. For example, event
rules test incoming log source data and offense rules test the parameters of an offense to trigger
more responses.
5. In the Rule Wizard window, select the Skip this page when running this rules wizard checkbox
and click Next.
If you select the Skip this page when running this rules wizard checkbox, the Welcome page
does not appear each time that you start.
6. On the Rule Test Stack Editor page, in the Rule pane, type a unique name that you want to assign
to this rule in the Apply text box.
7. From the list box, select Local or Global.
o If you select Local, all rules are processed on the Event Processor on which they were
received and offenses are created only for the events that are processed locally.
o If you select Global, all matching events are sent to the QRadar Console for processing
and therefore, the QRadar Console uses more bandwidth and processing resources.
Learn more about Local and Global rules:
Global rule tests
Use global rules to detect things like multiple user login failures, where the events from that user
might appear on multiple Event Processors. For example, if you configured a local rule for five
login failures in 10 minutes from the same user name, all five of those login failures must appear on
the same Event Processor. Therefore, if three login failures were on one Event Processor and two
were on another, no offense is generated. However, if you set this rule to Global, it generates an
offense.
8. From the Test Group list, select one or more tests that you want to add to this rule. The CRE
evaluates rule tests line-by-line in order. The first test is evaluated and when true, the next line is
evaluated until the final test is reached.
If you want to select the when the event matches this AQL filter query test for a new event rule,
click the add (+) icon. In the Rule pane, click This and enter an AQL WHERE clause query in
the Enter an AQL filter query text box.
Learn more about using rules for events that are not detected:
The following rule tests can be triggered individually, but rule tests in the same rule test stack are
not acted upon.
o when the event(s) have not been detected by one or more of these log source types for
this many seconds
o when the event(s) have not been detected by one or more of these log sources for this
many seconds
o when the event(s) have not been detected by one or more of these log source groups
for this many seconds
These rule tests are not activated by an incoming event, but instead are activated when a specific
event is not seen for a specific time interval that you configured. QRadar uses a watcher task that
periodically queries the last time that an event was seen (last seen time), and stores this time for
the event, for each log source. The rule is triggered when the difference between this last seen
time and the current time exceeds the number of seconds that is configured in the rule.
9. To export the configured rule as a building block to use with other rules, click Export as Building
Block.
10. On the Rule Responses page, configure the responses that you want this rule to generate.
Learn more about rule response page parameters:
Table 1. Event, Flow and Common Rule, and Offense Rule Response page parameters
Severity
Select this checkbox to assign a severity level to the event, where 0 is the lowest and 10 is the
highest. The severity is displayed in the Annotation pane of the event details.
Credibility
Select this checkbox to assign credibility to the log source. For example, is the log source noisy
or expensive? The range is 0 (lowest) to 10 (highest) and the default is 10. Credibility is displayed
in the Annotation pane of the event details.
Relevance
Select this checkbox to assign relevance to the weight of the asset. For example, how much do
you care about the asset? The range is 0 (lowest) to 10 (highest) and the default is 10. Relevance
is displayed in the Annotation pane of the event details.
Bypass further rule correlation events
Select this checkbox to have a matching event or flow bypass all other rules in the rule engine
and prevent it from creating an offense. The event is written to storage for searching and
reporting.
Dispatch New Event
Select this checkbox to dispatch a new event in addition to the original event or flow. The new
event is processed like all other events in the system. The Dispatch New Event parameters are
displayed when you select this checkbox. By default, the checkbox is clear.
Email
Select this checkbox to change the Email Locale setting from the System Settings on
the Admin tab.
Send to Local Syslog
By default, this checkbox is clear.
Note: Only normalized events can be logged locally on an appliance. If you want to send raw
event data, you must use the Send to Forwarding Destinations option to send the data to a remote
syslog host.
To add, edit, or delete a forwarding destination, click the Manage Destinations link.
Notify
Select this checkbox to display events that are generated as a result of this rule in the
System Notifications item on the Dashboard tab.
Add to Reference Data
To use this rule response, you must create the reference data collection.
Remove from Reference Set
Select this checkbox to remove data from a reference set. To remove data from a reference set:
o From the first list box, select the property of the event or flow that you
want to remove. Options include all normalized or custom data.
o From the second list box, select the reference set from which you want to
remove data.
The Remove from Reference Set rule response provides the following function:
Refresh
Click Refresh to refresh the first list box to ensure that the list is current.
Remove from Reference Data
To use this rule response, you must have a reference data collection.
Execute Custom Action
Select this checkbox to write scripts that do specific actions in response to network events. For
example, you might write a script to create a firewall rule that blocks a particular source IP
address from your network in response to repeated login failures. You add and configure custom
actions by using the Define Actions icon on the Admin tab.
Response Limiter
Select this checkbox to configure the frequency with which you want this rule to respond.
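The "events not detected" rule tests described in step 8 rely on a watcher task that compares the last seen time for an event against the current time. The following hypothetical sketch (not QRadar internals; log source names and timestamps are invented) illustrates that behavior:

```python
# Hypothetical sketch of the watcher task behavior (illustrative only):
# the rule fires when the gap between the last seen time for an event
# and the current time exceeds the configured number of seconds.
last_seen = {}  # (log_source, event) -> last seen timestamp

def record_event(log_source, event, ts):
    last_seen[(log_source, event)] = ts

def watcher_check(log_source, event, now, threshold_seconds):
    """Return True when the event has not been detected within the window."""
    seen = last_seen.get((log_source, event))
    return seen is not None and now - seen > threshold_seconds

record_event("firewall-01", "heartbeat", ts=1000)
print(watcher_check("firewall-01", "heartbeat",
                    now=1300, threshold_seconds=600))  # False: seen recently
print(watcher_check("firewall-01", "heartbeat",
                    now=1700, threshold_seconds=600))  # True: rule fires
```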
Edit the following building blocks to reduce the number of offenses that are generated by high volume
traffic servers:
BB:HostDefinition: VA Scanner Source IP
BB:HostDefinition: Network Management Servers
BB:HostDefinition: Virus Definition and Other Update Servers
BB:HostDefinition: Proxy Servers
BB:NetworkDefinition: NAT Address Range
BB:NetworkDefinition: Trusted Network
To edit building blocks, you must add the IP address or IP addresses of the server or servers into the
appropriate building blocks.
Procedure
1. Click the Offenses tab.
2. On the navigation menu, click Rules.
3. From the Display list, select Building Blocks.
4. Double-click the building block that you want to edit.
5. Update the building block.
6. Click Finish.
o VA Scanners
o Authorized Scanners
BB:HostDefinition: Virus Definition and Other Update Servers
Edit the and when either the source or destination IP is one of the following test to include the IP
addresses of virus protection and update function servers.
BB:CategoryDefinition: Countries with no Remote Access
Edit the and when the source is located in test to include geographic locations that you want to prevent
from accessing your network. This change enables the use of rules, such as Anomaly: Remote Access from
Foreign Country, to create an offense when successful logins are detected from remote locations.
BB:ComplianceDefinition: GLBA Servers
Edit the and when either the source or destination IP is one of the following test to include the IP
addresses of servers that are used for Gramm-Leach-Bliley Act (GLBA) compliance. By populating this
building block, you can use rules such as Compliance: Excessive Failed Logins to Compliance IS, which
create offenses for compliance and regulation-based situations.
BB:ComplianceDefinition: HIPAA Servers
Edit the and when either the source or destination IP is one of the following test to include the IP
addresses of servers that are used for Health Insurance Portability and Accountability Act (HIPAA)
compliance. By populating this building block, you can use rules, such as Compliance: Excessive Failed
Logins to Compliance IS, which create offenses for compliance and regulation-based situations.
BB:ComplianceDefinition: SOX Servers
Edit the and when either the source or destination IP is one of the following test to include the IP
addresses of servers that are used for SOX (Sarbanes-Oxley Act) compliance. By populating this building
block, you can use rules, such as Compliance: Excessive Failed Logins to Compliance IS, which create
offenses for compliance and regulation-based situations.
BB:ComplianceDefinition: PCI DSS Servers
Edit the and when either the source or destination IP is one of the following test to include the IP
addresses of servers that are used for PCI DSS (Payment Card Industry Data Security Standards)
compliance. By populating this building block, you can use rules such as Compliance: Excessive Failed
Logins to Compliance IS, which create offenses for compliance and regulation-based situations.
BB:NetworkDefinition: Broadcast Address Space
Edit the and when either the source or destination IP is one of the following test to include the broadcast
addresses of your network. This change removes false positive events that might be caused by the use of
broadcast messages.
BB:NetworkDefinition: Client Networks
Edit the and when the local network is test to include workstation networks that users are operating.
BB:NetworkDefinition: Server Networks
Edit the when the local network is test to include any server networks.
BB:NetworkDefinition: Darknet Addresses
Edit the and when the local network is test to include the IP addresses that are considered to be a dark
net. Any traffic or events that are directed towards a dark net are considered suspicious.
BB:NetworkDefinition: DLP Addresses
Edit the and when any IP is a part of any of the following test to include the remote services that might
be used to obtain information from the network. This change can include services, such as WebMail hosts
or file sharing sites.
BB:NetworkDefinition: DMZ Addresses
Edit the and when the local network is test to include networks that are considered to be part of the
network's DMZ.
BB:PortDefinition: Authorized L2R Ports
Edit the and when the destination port is one of the following test to include common outbound ports
that are allowed on the network.
BB:NetworkDefinition: Watch List Addresses
Edit the and when the local network is test to include the remote networks that are on a watch list. This
change helps to identify events from hosts that are on a watch list.
BB:FalsePositive: User Defined Server Type False Positive Category
Edit this building block to include any categories that you want to consider as false positives for hosts
that are defined in the BB:HostDefinition: User Defined Server Type building block.
BB:FalsePositive: User Defined Server Type False
Edit this building block to include any events that you want to consider as false positives for hosts that
are defined in the BB:HostDefinition: User
Positive Events Defined Server Type building block.
Edit this building block to include the IP address of your custom server type.
After you add the servers, you must add any events or categories that you want
BB:HostDefinition: User to consider as false positives to this server, as defined in
Defined Server Type the BB:FalsePositives: User Defined Server Type False Positive Category or
the BB:False Positives: User Defined Server Type False Positive Events
building blocks.
Table 1. Building blocks to edit.
You can include a CIDR range or subnet in any of the building blocks instead of listing the IP
addresses. For example, 192.168.1/24 includes addresses 192.168.1.0 to 192.168.1.255. You can
also include CIDR ranges in any of the BB:HostDefinition building blocks.
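As a quick illustration of the CIDR arithmetic above, the range that a /24 network covers can be checked with Python's standard ipaddress module. Note that ipaddress expects all four octets, so the shorthand 192.168.1/24 is written here as 192.168.1.0/24:

```python
import ipaddress

# A /24 network spans 256 addresses, from .0 through .255.
network = ipaddress.ip_network("192.168.1.0/24")

print(network.num_addresses)     # 256
print(network[0], network[-1])   # 192.168.1.0 192.168.1.255
```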
Tip: For more information, see the IBM QRadar Administration Guide.
Use the IBM QRadar Use Case Manager to review your building blocks. Download the app at the IBM Security App Exchange (https://exchange.xforce.ibmcloud.com/hub/extension/bf01ee398bde8e5866fe51d0e1ee684a).
Creating an OR condition within the CRE
You can create a conditional OR test within the Custom Rules Engine (CRE), which gives you more options for defining your rules by using building blocks. When two building blocks are combined in this way, both building blocks are loaded when the test is applied.
Procedure
1. Click the Offenses tab.
2. On the navigation menu, click Rules.
3. From the Actions list, select one of the following options:
o New Event Rule: Select this option to configure a rule for events.
o New Flow Rule: Select this option to configure a rule for flows.
o New Common Rule: Select this option to configure a rule for events and flows.
o New Offense Rule: Select this option to configure a rule for offenses.
4. Read the introductory text and then click Next.
You are prompted to Choose the source from which you want this rule to generate.
The default is the rule type that you selected on the Offenses tab.
5. Optional: Select the rule type that you want to apply to the rule, and then click Next.
6. From the Type to filter menu, select the test that corresponds to the option that you chose in step 3, and then click the (+) icon for the test.
o when an event matches any|all of the following rules
o when a flow matches any|all of the following rules
o when a flow or an event matches any|all of the following rules
o when the offense matches any of the following offense rules
For example, if you select New Event Rule in step 3, select when an event matches any|all of
the following rules.
7. In the Rule pane, select the test that corresponds to the option that you chose in step 3, and then click rules.
o and when an event matches any of the following rules
o and when a flow matches any of the following rules
o and when a flow or an event matches any of the following rules
o and when the offense matches any of the following offense rules
For example, if you select New Event Rule in step 3, select and when an event matches any of
the following rules, and then click rules.
8. From the Select the rule(s) to match and click 'Add' field, select multiple building blocks by
holding down the Ctrl key and click Add +.
If you select the offense rule, select building blocks from the Select an offense rule and click
'Add' field.
9. Click Submit.
To ensure reliable system performance, you must consider the following guidelines:
To tune CRE rules, increase the rule threshold by doubling the numeric parameters and the time
interval.
Consider modifying rules to use the local network context rather than the remote network context.
When you edit a rule with the attach events for the next 300 seconds option enabled, wait 300
seconds before you close the related offenses.
The following table provides information on how to tune false positives according to these differing
scenarios.
6. Click Tune.
QRadar® prevents you from selecting Any Events/Flow(s) and Any Source To Any Destination, because that combination would create a custom rule that prevents QRadar from creating offenses.
False positives configuration
Manage the configuration of false positives to minimize their impact on legitimate threats and
vulnerabilities. To prevent IBM® QRadar® from generating an excessive number of false positives, you
can tune false positive events and flows to prevent them from creating offenses.
False positive rule chains
The first rule to execute in the custom rules engine (CRE) is FalsePositive:False Positive Rules and Building
Blocks. When it loads, all of its dependencies are loaded and tested.
When an event or flow successfully matches the rule, it bypasses all other rules in the CRE to prevent it
from creating an offense.
The tests in a rule are executed in the order that they are displayed in the user interface. The most memory-intensive tests for the CRE are the payload and regular expression searches. To ensure that these tests run against a smaller subset of data and execute faster, first include one of the following tests:
when the event(s) were detected by one or more of these log source types
when the event QID is one of the following QIDs
when the source IP is one of the following IP addresses
when the destination IP is one of the following IP addresses
when the local IP is one of the following IP addresses
when the remote IP is one of the following IP addresses
when either the source or destination IP is one of the following IP addresses
when the event(s) were detected by one or more of these log sources
You can further optimize QRadar® by exporting common tests to building blocks. A building block executes once per event, as opposed to the same tests executing multiple times when they are included individually in several rules.
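The ordering principle can be sketched in Python. This is an illustrative model only, not the CRE implementation, and the log source types and payloads are invented for the example: a cheap property test short-circuits the rule so that the expensive regular-expression search runs against far fewer events.

```python
import re

events = [
    {"log_source_type": "FirewallA", "payload": "DENY tcp 10.0.0.5 -> 203.0.113.9"},
    {"log_source_type": "LinuxOS",   "payload": "Failed password for invalid user root"},
]

FAILED_LOGIN = re.compile(r"Failed password for .*user (\w+)")

def rule_matches(event):
    # Cheap property test first: most events are discarded here without
    # ever running the expensive regular-expression search on the payload.
    if event["log_source_type"] != "LinuxOS":
        return False
    return FAILED_LOGIN.search(event["payload"]) is not None

print([e["log_source_type"] for e in events if rule_matches(e)])  # ['LinuxOS']
```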
Example output:
Data can be found in /root/CustomRule-2022-03-02-905476050.tar.gz
3. Use WinSCP or an equivalent tool to move the CustomRule-yyyy-mm-dd-
seconds.tar.gz file to your local laptop or workstation.
4. Use a compression utility to extract the CustomRule-yyyy-mm-dd-seconds.tar.gz file to
a .tar file.
5. Extract the .tar file a second time to access the Expensive Custom Rules report text file.
The output contains a .txt file, .xml file, and a folder named "reports".
6. Open CustomRule-yyyy-mm-dd-seconds.txt file in any spreadsheet program as
a CSV file.
Alternatively, the output can be analyzed by using the command line:
Decompress the generated file:
tar -xvzf <file_name>
For example:
tar -xvzf CustomRule-2022-10-24-17-37.tar.gz
Move to the new decompressed folder:
cd CustomRule-2022-10-24-17-37
Open the reports/AverageTestTime-<date_stamp>.report and reports/AverageResponseTime-<date_stamp>.report files:
head reports/AverageTestTime-* reports/AverageResponseTime-* | sed 's/com.*\,name\=//g'
Example report:
Output example if the file is being reviewed by using the command line:
Note: The sequence in which the rules are laid out can make a difference in performance. Information on best practices for writing rules can be found in Everything you need to know about QRadar Rules (for beginners and experts).
Part 4: Alternate issues that can cause Custom Rule error messages
Verify whether any rules are configured as Global rules. In some cases, Global rules can cause excessive events to be processed by the Console, resulting in the MPC queue maxing out. In this case, change any Global rules to Local rules, if possible.
Verify how many Managed Hosts (MHs) are in the deployment. Sometimes, when there are several MHs in the deployment and the console has only a 1k EPS license, it might be necessary to get a 5k EPS console license from the licensing team. In this case, request a 5k EPS console license from q1pd@us.ibm.com.
Payload-related tests that use "Pattern" or "Curly" regex-based calls can be expensive. Payload tests can end up scanning every event if you are not careful. Try to filter on log source, log source type, and perhaps an IP address before doing payload tests. The order of tests makes a difference in the rule or CRE.
Host and Port profiles are also expensive, especially if the asset database, port profile, or
vulnerabilities data is large. For more information on host and port profiles, see
the tuning guide.
If the X-force data feed is enabled, X-force lookups can cause the error message.
Custom rules
IBM® QRadar® includes rules that detect a wide range of activities, including excessive firewall denies,
multiple failed login attempts, and potential botnet activity. You can also create your own rules to detect
unusual activity.
What are custom rules?
Rule types
Each of the event, flow, common, and offense rule types tests against incoming data from different sources in real time. There are multiple types of rule tests. Some check for simple properties from the data set. Other rule tests are more complicated: they track multiple event, flow, and offense sequences over a period of time and use counters on one or more parameters before a rule response is triggered.
Event rules
Test against incoming log source data that is processed in real time by the QRadar Event Processor. You create an event rule to detect a single event or event sequences. For example, to monitor your network for unsuccessful login attempts, access to multiple hosts, or a reconnaissance event followed by an exploit, you create an event rule. It is common for event rules to create offenses as a response.
Flow rules
Test against incoming flow data that is processed by the QRadar Flow Processor. You can create a flow
rule to detect a single flow or flow sequences. It is common for flow rules to create offenses as a
response.
Common rules
Test against event and flow data. For example, you can create a common rule to detect events and flows
that have a specific source IP address. It is common for common rules to create offenses as a response.
Offense rules
Test the parameters of an offense to trigger more responses. For example, a response is generated when an offense occurs during a specific date and time. An offense rule processes offenses only when changes are made to the offense, for example, when new events are added or when the system schedules the offense for reassessment. It is common for offense rules to email a notification as a response.
Managing rules
You can create and edit rules, assign rules to groups, and delete groups of rules. By categorizing your rules or building blocks into groups, you can efficiently view and track your rules. For example, you can view all rules that are related to compliance.
Domain-specific rules
If a rule has a domain test, you can restrict that rule so that it is applied only to events that are happening within a specified domain. An event that has a domain tag that is different from the domain that is set on the rule does not trigger a response.
To create a rule that tests conditions across the entire system, set the domain condition to Any Domain.
Rule conditions
Most rule tests evaluate a single condition, like the existence of an element in a reference data collection
or testing a value against a property of an event. For complex comparisons, you can test event rules by
building an Ariel Query Language (AQL) query with WHERE clause conditions. You can use all of the
WHERE clause functions to write complex criteria that can eliminate the need to run numerous individual
tests. For example, use an AQL WHERE clause to check whether inbound SSL or web traffic is being
tracked on a reference set.
You can run tests on the property of an event, flow, or offense, such as source IP address, severity of
event, or rate analysis.
With functions, you can use building blocks and other rules to create a multi-event, multi-flow, or multi-offense function. You can connect rules by using functions that support Boolean operators, such as OR and AND. For example, if you want to connect event rules, you can use the when an event matches any|all of the following rules function.
TASK: 2.4 Index management
SUBTASKS:
2.4.1. Enable indexes
2.4.2. Enable payload indexing
2.4.3. Describe the use of index management
• What is an index
• Reading the index management interface
2.4.4. Configure retention period for payload indexes
Enabling indexes
The Index Management window lists all event and flow properties that can be indexed and provides statistics for
the properties. Toolbar options allow you to enable and disable indexing on selected event and flow properties.
Modifying database indexing might decrease system performance. Ensure that you monitor the statistics
after you enable indexing on multiple properties.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Index Management.
3. Select one or more properties from the Index Management list.
4. Choose one of the following options:
5. Click Save.
6. Click OK.
Results
In lists that include event and flow properties, indexed property names are appended with the following
text: [Indexed]. Examples of such lists include the search parameters on the Log Activity and Network
Activity tab search criteria pages, and the Add Filter window.
Index management
Use Index Management to control database indexing on event and flow properties. To improve the speed
of searches in IBM® QRadar®, narrow the overall data by adding an indexed field in your search query.
An index is a set of items that specify information about data in a file and its location in the file system.
Data indexes are built in real-time as data is streamed or are built upon request after data is collected.
Searching is more efficient because systems that use indexes don't have to read through every piece of
data to locate matches. The index contains references to unique terms in the data and their locations.
Because indexes use disk space, you trade storage space for faster search times.
Indexing event and flow properties is the first step in optimizing your searches. You can enable indexing on any property that is listed in the Index Management window, and you can enable indexing on more than one property. When a search starts in QRadar, the search engine first filters the data set by indexed properties.
The indexed filter eliminates portions of the data set and reduces the overall data volume and number of
event or flow logs that must be searched. Without any filters, QRadar takes more time to return the results
for large data sets.
For example, you might want to find all the logs in the past six months that match the text: The operation
is not allowed. By default, QRadar stores full text indexing for the past 30 days. Therefore, to complete a
search from the last 6 months, the system must reread every payload value from every event or flow in
that time frame to find matches. Your results display faster when you search with an indexed value filter
such as a Log Source Type, Event Name, or Source IP.
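The effect of an indexed filter can be sketched with a toy example in Python. This illustrates the concept only, not QRadar's Ariel implementation, and the event data is invented: the index maps each value of an indexed property to record positions, so only matching records are read instead of every payload.

```python
from collections import defaultdict

# Toy event store of (log source type, payload) records.
events = [
    ("FirewallA", "DENY tcp 10.0.0.5 -> 203.0.113.9"),
    ("LinuxOS",   "The operation is not allowed"),
    ("FirewallA", "PERMIT udp 10.0.0.7 -> 198.51.100.2"),
]

# Build an index over the Log Source Type property.
index = defaultdict(list)
for pos, (log_source_type, _payload) in enumerate(events):
    index[log_source_type].append(pos)

# Indexed filter first: only the one "LinuxOS" payload is inspected,
# instead of rereading every payload in the store.
hits = [pos for pos in index["LinuxOS"] if "not allowed" in events[pos][1]]
print(hits)  # [1]
```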
For each indexed property, the Index Management window shows statistics such as:
The percentage of saved searches running in your deployment that include the indexed property
The volume of data that is written to the disk by the index during the selected time frame
To enable payload indexing, you must enable indexing on the Quick Filter property.
Enabling indexes
The Index Management window lists all event and flow properties that can be indexed and
provides statistics for the properties. Toolbar options allow you to enable and disable indexing on
selected event and flow properties.
Enabling payload indexing to optimize search times
You use the Quick Filter feature on the Log Activity and Network Activity tab to search event
and flow payloads by using a text string. To optimize event and flow search times, enable payload
indexing on the Quick Filter property.
Restriction: Payload indexing increases disk storage requirements and might affect system
performance. Enable payload indexing if your deployment meets the following conditions:
The event and flow processors are at less than 70% disk usage.
The event and flow processors are running at less than 70% of the maximum events per second (EPS) or flows per minute (FPM) rating.
Your virtual and physical appliances require a minimum of 24 GB of RAM to enable full payload
indexing. However, 48 GB of RAM is suggested.
The minimum and suggested RAM values apply to all QRadar systems, such as 16xx, 17xx, or 18xx appliances, that are processing events or flows.
About this task
The retention values reflect the time spans that you are typically searching. The minimum retention period is 1
day and the maximum is 2 years.
Note: Quick Filter searches that use a time frame outside of the Payload Index Retention setting can trigger slow and resource-intensive system responses; for example, a search over the last 30 hours when the payload index retention is set to 1 day.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click System Settings.
3. In the Database Settings section, select a retention time period from the Payload Index
Retention list.
4. Click Save.
5. Close the System Settings window.
6. On the Admin tab, click Deploy Changes.
What to do next
If you retain payload indexes longer than the default value, extra disk space is used. After you select a greater
value in the Payload Index Retention field, monitor system notifications to ensure that you do not fill disk
space.
Scheduled search
Use the Scheduled search option to schedule a search and view the results.
You can schedule a search that runs at a specific time of day or night. If you schedule a search to run in the night, you can investigate the results in the morning. Unlike reports, you have the option of grouping the search results and investigating further. For example, you can search on the number of failed logins in your network group. If the result is typically 10 and the result of the search is 100, you can group the search results for easier investigation. To see which user has the most failed logins, you can group by user name. You can continue to investigate further.
You can schedule a search on events or flows from the Reports tab. You must select a previously saved
set of search criteria for scheduling.
1. Create a report
Note: QRadar® does not support reports based on AQL searches that contain subselect
statements.
o Generate an offense.
You can choose the create an individual offense option or the add result to an existing
offense option.
You can view the results of your scheduled search from the Offenses tab.
If you create an individual offense, an offense is generated each time that the report is run. If you
add the saved search result to an existing offense, an offense is created the first time that the report
runs. Subsequent report runs append to this offense. If no results are returned, the system does not
append or create an offense.
To view the most recent search result in the Offense Summary window, double-click a scheduled
search offense in the offense list. To view the list of all scheduled search runs, click Search
Results in the Last 5 Search Results pane.
Attention: The XFORCE_IP_CONFIDENCE function does not work in AQL advanced searches in languages
other than English.
The Advanced Search field has auto completion and syntax highlighting.
Use auto completion and syntax highlighting to help create queries. For information about supported web
browsers, see Supported web browsers.
Note: If you use a quick filter on the Log Activity tab, you must refresh your browser window before you run an
advanced search.
Accessing Advanced Search
Access the Advanced Search option from the Search toolbar that is on the Network Activity and Log
Activity tabs to type an AQL query.
Select Advanced Search from the list box on the Search toolbar.
Expand the Advanced Search field by following these steps:
3. Press Enter.
You can right-click any value in the search result and filter on that value.
Double-click any row in the search result to see more detail.
All searches, including AQL searches, are included in the audit log.
AQL search string examples
The following table provides examples of AQL search strings.
Check an IP address against an X-Force category with a confidence value:
select * from events where XFORCE_IP_CONFIDENCE('Spam',sourceip)>3

Retrieve the X-Force IP categories that are associated with an IP:
select sourceip, XFORCE_IP_CATEGORY(sourceip) as IPcategories from events where XFORCE_IP_CATEGORY(sourceip) IS NOT NULL
For more information about functions, search fields and operators, see the Ariel Query Language guide.
AQL search string examples
Use the Ariel Query Language (AQL) to retrieve specific fields from the events, flows, and simarc
tables in the Ariel database.
Converting a saved search to an AQL string
Convert a saved search to an AQL string and modify it to create your own searches and quickly find the data that you want. This approach is faster than typing the search criteria from scratch. You can also save the search for future use.
If you change a column set in a previously saved search, and then save the search criteria using the same
name, previous accumulations for time series charts are lost.
Procedure
1. Choose one of the following options:
o Click the Log Activity tab.
o Click the Network Activity tab.
2. Perform a search.
3. Click Save Criteria.
4. Enter values for the parameters:
Search Name
Type the unique name that you want to assign to this search criteria.

Assign Search to Group(s)
Select the check box for the group to which you want to assign this saved search. If you do not select a group, this saved search is assigned to the Other group by default. For more information, see Managing search groups.

Manage Groups
Click Manage Groups to manage search groups. For more information, see Managing search groups.

Specific Interval
Select this option and, from the calendar, select the date and time range that you want to filter for.

Include in my Quick Searches
Select this check box to include this search in your Quick Search list box on the toolbar.

Include in my Dashboard
Select this check box to include the data from your saved search on the Dashboard tab. For more information about the Dashboard tab, see Dashboard management.
Note: This parameter is only displayed if the search is grouped.

Set as Default
Select this check box to set this search as your default search.

Share with Everyone
Select this check box to share these search requirements with all users.
Restriction: You must have the Admin security profile to share search requirements.
5. Click OK.
Every firewall device that is assigned to a specific address range in the past week
A series of PDF files that were sent by a Gmail account in the past five days
All records in a two-month period that exactly match a hyphenated user name
Limitations
Quick filter searches operate on raw event or flow log data and don't distinguish between the fields. For
example, quick filter searches return matches for both source IP address and destination IP address, unless
you include terms that can narrow the results.
Search terms are matched in sequence from the first character in the payload word or phrase. The search
term user matches user_1 and user_2, but does not match the following phrases: ruser, myuser,
or anyuser.
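The prefix-matching behavior described above can be modeled with a few lines of Python. This is a sketch of the behavior, not the actual search engine:

```python
def quick_filter_match(term, payload_tokens):
    # Terms are matched from the first character of each payload token,
    # so "user" matches "user_1" but not "ruser" or "myuser".
    return any(token.startswith(term) for token in payload_tokens)

print(quick_filter_match("user", ["user_1", "user_2"]))           # True
print(quick_filter_match("user", ["ruser", "myuser", "anyuser"])) # False
```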
Quick filter searches use the English locale. Locale is a setting that identifies language or geography and
determines formatting conventions such as collation, case conversion, character classification, the
language of messages, date and time representation, and numeric representation.
The locale is set by your operating system. You can configure QRadar® to override the operating system
locale setting. For example, you can set the locale to English and the QRadar Console can be set
to Italiano (Italian).
If you use Unicode characters in your quick filter search query, unexpected search results might be
returned.
If you choose a locale that is not English, you can use the Advanced search option in QRadar for
searching event and payload data.
How does Quick Filter search and payload tokens work?
Text that is in the payload is split into words, phrases, symbols, or other elements. These tokens are delimited by
space and punctuation. The tokens don't always match user-specified search terms, which cause some search
terms not to be found when they don't match the generated token. The delimiter characters are discarded but
exceptions exist such as the following exceptions:
Periods that are not followed by white space are included as part of the token.
For example, 192.0.2.0:56 is tokenized as host token 192.0.2.0 and port token 56.
Words are split at hyphens, unless the word contains a number, in which case, the token is not
split and the numbers and hyphens are retained as one token.
Internet domain names and email addresses are preserved as a single token.
File names and URL names that contain more than one underscore are split before a period (.).
If you use hurricane_katrina_ladm118.jpg as a search term, it is split into the following tokens:
hurricane
katrina_ladm118.jpg
Search the payload for the full search term by placing double quotation marks around the search
term: "hurricane_katrina_ladm118.jpg"
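A greatly simplified tokenizer that reproduces the host-and-port example above might look like the following Python sketch. The real tokenizer handles many more cases, such as the hyphen and underscore rules, which are omitted here:

```python
import re

def tokenize(text):
    # Split on whitespace, then on delimiters such as ':', ',' or ';'.
    # Periods inside a token (for example, in an IP address) are kept,
    # matching the "periods not followed by white space" rule above.
    tokens = []
    for word in text.split():
        tokens.extend(part for part in re.split(r"[:,;]+", word) if part)
    return tokens

print(tokenize("192.0.2.0:56"))  # ['192.0.2.0', '56']
```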
Lazy search cannot be used by users with non-administrator security profiles on networks where domains
are configured.
Procedure
1. To do a lazy search for quick filters, do these steps:
a. On the Log Activity tab, in the Quick Filter field, enter a value.
b. From the View list, select a time range.
2. To do a lazy search for basic searches, do these steps:
a. On the Log Activity tab, click Search > New Search.
b. Select a Recent time range or set a Specific Interval.
c. Ensure that Order by field value is set to Start Time and the Results Limit field value
is 1000 or less. Aggregated columns must not be included in the search.
d. Enter a value for the Quick Filter parameter and click Add Filter.
3. To disable lazy search completely, do these steps:
a. Click the System Settings on the Admin tab.
b. In the System Settings window, remove any values from the Default Search Limit field.
If you delete the base search criteria for saved subsearch criteria, you still have access to the saved subsearch criteria. If you then add a filter, the subsearch searches the entire database because the search function no longer bases the search on a previously searched data set.
Adding filters to improve search performance
When you search for event or flow information, you can improve performance by adding filters to search
fields that are indexed.
The following table provides information about the fields that are indexed:
Username
Source or Destination IP
Destination Port
Has Identity
Log Activity tab (Events)
Device Type
Device ID
Category
Matches Custom Rule
Application
Destination Port
Table 1. Log viewer and flow viewer indexed fields
Procedure
1. Click the Log Activity tab, or the Network Activity tab.
2. On the toolbar, click Add Filter.
3. From the first list, select an index filter.
4. From the second list, select the modifier that you want to use.
5. Type or select the information for your filter. The controls that are displayed depend on the index
filter that you added.
6. Click Add Filter.
What to do next
You can monitor the performance of your search by expanding the Current Statistics option on
the Search page. The page displays the volume of data that loads from data files and indexes. If your search does
not display a count in the index file count, then add an indexed filter to the search.
Procedure
1. Click the Offenses tab.
2. From the Search list, select New Search.
3. On the Offense Source pane, select the custom property in the Offense Type list.
The Offense Type list shows only normalized fields and custom properties that are used as rule
indexes. You cannot use Offense Source to search DateTime properties.
4. Optional: To search for offenses that have a specific value in the custom property capture result,
type the value that you want to search for in the filter box.
5. Configure other search parameters to satisfy your search requirements.
6. Click Search.
Results
All offenses that meet the search criteria are shown in the offense list. When you view the offense summary, the
custom property that you searched on is shown in the Offense Type field. The custom property capture result is
shown in the Custom Property Value field in the Offense Source Summary pane.
Note: If you select 1 day, then after 24 hours from that time, your search results
will be cleared for all users. The default is 1 day.
Results: You can now modify the default Search Results Retention Period.
Forward
Data is forwarded to the specified forwarding destination. Data is also stored in the database and processed by the Custom Rules Engine (CRE).

Drop
Data is dropped. The data is not stored in the database and is not processed by the CRE. This option is not available if you select the Offline option. Any events that are dropped are credited back 100% to the license.

Bypass Correlation
Data bypasses the CRE, but it is stored in the database. This option is not available if you select the Offline option. The Bypass Correlation option does not require an entitlement for QRadar® Data Store. Bypass correlation allows events that are received in batches to bypass real-time rules. You can use the events in analytic apps and for historical correlation runs. For historical correlation runs, the events can be replayed as though they were received in real time.

Log Only (Exclude Analytics)
Events are stored and flagged in the database as Log Only and bypass the CRE. These events are not available for historical correlation, and are credited back 100% to the license. This option is not available for flows or if you select the Offline option.
Important: Do not choose this option for internal events because they are automatically credited back 100% to the license. These events still bypass correlation if they match the routing rule but are not flagged as Log Only.
The Log Only option requires an entitlement for QRadar Data Store. After the entitlement is purchased and the Log Only option is selected, events that match the routing rule are stored to disk and are available to view and for searches. The events bypass the custom rule engine and no real-time correlation or analytics occur. The events can't contribute to offenses and are ignored when historical correlation runs. Some apps also ignore Log Only events (https://www.ibm.com/support/docview.wss?uid=swg22009471).
Table 1. Rule routing options
The following table describes different routing option combinations that you can use. These options are not
available in offline mode.
Forward and Drop
Data is forwarded to the specified forwarding destination. Data is not stored in the database and is not processed by the CRE. Any events that are dropped are credited back 100% to the license.

Forward and Bypass Correlation
Data is forwarded to the specified forwarding destination. Data is stored in the database, but it is not processed by the CRE.

Forward and Log Only (Exclude Analytics)
Events are forwarded to the specified forwarding destination. Events are stored and flagged in the database as Log Only and bypass the CRE. These events are not available for historical correlation, and are credited back 100% to the license.
If data matches multiple rules, the safest routing option is applied. For example, if data matches a rule that is configured to drop data and a rule that is configured to bypass CRE processing, the data is not dropped. Instead, the data bypasses the CRE and is stored in the database.
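The "safest option wins" rule can be sketched as follows. The full four-way ranking and the function name are illustrative assumptions; the documentation only states that dropping loses to the other options.

```python
# Hypothetical sketch of "safest routing option wins" when several rules match.
# Only the Drop-vs-Bypass ordering is stated in the documentation; the rest of
# this ranking is an assumption for illustration.
SAFETY_RANK = {
    "Drop": 0,                # data is lost entirely: least safe
    "Log Only": 1,            # stored, but excluded from analytics
    "Bypass Correlation": 2,  # stored, skips the CRE only
    "Forward": 3,             # stored, processed, and forwarded
}

def resolve_routing(matched_options):
    """Return the safest routing option among all matched rules."""
    return max(matched_options, key=SAFETY_RANK.__getitem__)
```

For example, `resolve_routing(["Drop", "Bypass Correlation"])` returns `"Bypass Correlation"`, matching the behavior described above.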
Configuring routing rules to forward data
About this task
You can configure routing rules to forward data in either online or offline mode:
In Online mode, your data remains current because forwarding is done in real time. If the
forwarding destination becomes unreachable, any data that is sent to that destination is not
delivered, resulting in missing data on that remote system. To ensure that delivery is successful,
use offline mode.
In Offline mode, all data is first stored in the database and then sent to the forwarding destination.
This mode ensures that no data is lost; however, delays in data forwarding can occur.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Routing Rules.
3. On the toolbar, click Add.
4. In the Routing Rule window, type a name and description for your routing rule.
5. In the Mode field, select one of the following options: Online or Offline.
6. In the Forwarding Event Collector or Forwarding Event Processor list, select the event
collector from which you want to forward data.
Learn more about the forwarding appliance:
Forwarding Event Collector
Specifies the Event Collector that you want this routing rule to process data from. This option
displays when you select the Online option.
Note: Online/Realtime forwarding is not impacted by any Rate Limit or Scheduling
configurations that might be configured on a Store and Forward (15xx) event collector.
Forwarding Event Processor
Specifies the Event Processor that you want this routing rule to process data from. This option is
displayed when you select the Offline option.
Restriction: This option is not available if Drop is selected from the Routing Options pane.
7. In the Data Source field, select which data source you want to route: Events or Flows.
The labels for the next section change based on which data source you select.
QRadar collects data from the following flow source types:
NetFlow
IPFIX
sFlow
J-Flow
Packeteer
Napatech interface
Network interface
External sources do not require as much CPU utilization to process, so you can send the flows directly to
a Flow Processor. In this configuration, you might have a dedicated Flow Collector and a Flow Processor,
both receiving and creating flow data.
If your Flow Collector collects flows from multiple sources, you can assign each flow source a distinct
name. A distinct name helps to distinguish the external flow data from other sources.
QRadar SIEM can forward external flow source data by using the spoofing or non-spoofing method:
Spoofing
Resends the inbound data that is received from a flow source to a secondary destination.
To configure the spoofing method, configure the flow source so that the Monitoring Interface is
set to the management port on which the data is received.
When you use a specific interface, the Flow Collector uses a promiscuous mode capture to collect
the flow data, rather than the default UDP listening port (2055). This way, the Flow
Collector can capture and forward the data.
Non-Spoofing
For the non-spoofing method, configure the Monitoring Interface parameter in the flow source
configuration as Any.
The Flow Collector opens the listening port, which is the port that is configured as
the Monitoring Port, to accept the flow data. The data is processed and forwarded to another
flow source destination.When the data is forwarded, the source IP address of the flow becomes the
IP address of the QRadar SIEM system, not the original router that sent the data.
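As a sketch of why the source IP address changes in the non-spoofing case, a minimal UDP relay illustrates the behavior: re-sending a datagram from an ordinary socket stamps the relay's own address on the packet. The ports and addresses below are invented examples, not QRadar configuration.

```python
import socket

# Minimal sketch of the non-spoofing relay behavior described above: the
# collector listens on its monitoring port and re-sends each datagram from
# its own socket, so forwarded packets carry this host's IP as the source
# address instead of the original router's.

def relay_once(listen_sock, send_sock, destination):
    """Receive one flow datagram and forward it to the secondary destination."""
    data, original_source = listen_sock.recvfrom(65535)
    # Sending from a normal UDP socket rewrites the source IP to this host:
    # the non-spoofing case. Spoofing would require raw sockets instead.
    send_sock.sendto(data, destination)
    return data, original_source

# Typical setup (Monitoring Interface "Any" = bind to all interfaces):
#   listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   listener.bind(("0.0.0.0", 2055))   # default NetFlow monitoring port
#   sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   relay_once(listener, sender, ("198.51.100.20", 2055))
```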
Procedure
1. Log in to the QRadar® Console as an administrator.
2. Click the Admin tab.
3. In the Flows section, click Flow Sources, and click Add.
4. Configure the flow source details.
a. In the Flow Source Name field, type a descriptive name.
b. In the Target Flow Collector field, select a flow collector or accept the value provided.
c. In the Flow Source Type list, select Netflow v.1/v.5/v.7/v.9/IPFIX.
d. In the Monitoring Interface, select the network interface that supplies the flow traffic.
e. In the Monitoring Port field, select a port or accept the value provided.
f. In the Linking Protocol list, select the protocol to use.
g. To forward flows, select the Enable Flow Forwarding checkbox and configure the
settings.
5. Click Save.
6. On the Admin tab, click Deploy Changes.
Procedure
1. On the navigation menu, click Admin.
2. In the Data Sources section, under Flows, click Flow Sources.
3. Do one of the following actions:
o To add a flow source, click Add.
o To edit a flow source, select the flow source and click Edit.
4. To create this flow source from an existing flow source, select the Build from existing flow
source check box, and select a flow source from the Use as Template list.
5. In the Flow Source Name field, type a name.
Tip: If the external flow source is also a physical device, use the device name as the flow source
name. If the flow source is not a physical device, use a recognizable name.
For example, if you want to use IPFIX traffic, enter ipf1. If you want to use NetFlow traffic,
enter nf1.
6. Select a flow source from the Flow Source Type list and configure the properties.
o If you select the Flowlog File option, ensure that you configure the location of the Flowlog
file for the Source File Path parameter.
o If you select the JFlow, Netflow, Packeteer FDR, or sFlow options in the Flow Source
Type parameter, ensure that you configure an available port for the Monitoring
Port parameter.
The default port for the first NetFlow flow source that is configured in your network is
2055. For each additional NetFlow flow source, the default port number increments by 1.
For example, the default port for the second NetFlow flow source is 2056.
o If you select the Napatech Interface option, enter the Flow Interface that you want to
assign to the flow source.
Restriction: The Napatech Interface option is displayed only if you installed
the Napatech Network Adapter on your system.
o If you select the Network Interface option, for the Flow Interface, configure only one
log source for each Ethernet interface.
Restriction: You cannot send different flow types to the same port.
7. If traffic on your network is configured to take alternate paths for inbound and outbound traffic,
select the Enable Asymmetric Flows check box.
8. Click Save.
9. On the Admin tab, click Deploy Changes.
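The default monitoring-port rule described in step 6 can be sketched as a one-line calculation; the function name is illustrative.

```python
# Sketch of the default monitoring-port assignment for NetFlow flow sources:
# the first flow source defaults to port 2055, and each additional one
# increments the default by 1.
FIRST_NETFLOW_PORT = 2055

def default_netflow_port(flow_source_index):
    """Default monitoring port for the Nth NetFlow flow source (1-based)."""
    return FIRST_NETFLOW_PORT + (flow_source_index - 1)
```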
Flow sources
IBM® QRadar® Network Insights uses two types of flow sources.
Flow source for incoming network traffic
The QRadar Network Insights host processes raw traffic from a network interface flow source.
When you add a QRadar Network Insights host, an input flow source is automatically created for all non-
management interfaces that are available on the host.
For Napatech network interfaces, flow sources are auto-detected and enabled by default.
They cannot be edited, disabled, or deleted.
For non-Napatech network interfaces, the auto-detected flow sources are disabled by
default. You must enable them if you want to use them for monitoring network flows.
When you install IBM QRadar, a default_Netflow flow source is automatically added to the
deployment. This flow source is enabled by default.
New flow sources are created as you add QRadar Flow Collectors and Flow Processors to the
deployment.
QRadar Network Insights exports the network traffic flow records to an IPFIX flow source that is
running elsewhere in your QRadar deployment.
Flow source example
In the following example, a QRadar Network Insights host (qnihw1) is connected to a QRadar
Console (qradarhw1). You can edit or change the enabled status for some of the flow sources in the
example; for the others, you cannot edit, delete, or change the enabled status.
Protocol
Protocols provide the capability of collecting a set of data files by using various connection options.
These connections pull the data back, or passively receive data, into the event pipeline in QRadar®.
Then, the corresponding DSM parses and normalizes the data.
DSM
A DSM is a code module that parses received events from multiple log sources and converts them to a
standard taxonomy format that can be displayed as output. Each type of log source has a corresponding
DSM.
Log source
A log source is a data source that creates an event log. For more information, see Introduction to log
source management .
Gateway log sources support the following protocols:
HTTP Receiver
Kafka
TCP Multiline Syslog
TLS Syslog
UDP Multiline Syslog
Tip: To provide the best fit for the generic data, use the Universal DSM when you configure your gateway log
source.
A gateway log source does not use a DSM. It delegates the DSM parsing to stand-alone Syslog log
sources that have an appropriate identifier and DSM. These log sources are a collector log source (the
gateway) and a parser log source. Parser log sources match data that comes in from the gateway and do
not actively collect the events themselves.
Before you create your gateway log source, you must know what types of data you expect to collect from
the data gateway. Data gateways can collect many data types and QRadar does not support all data types
by default. To parse the data correctly, a DSM must exist that can handle the events that you are
collecting. Even if QRadar supports the event’s source, if the gateway returns it in an unexpected format,
the DSM might not parse it. For example, if the data gateway returns an event in a JSON format, but the
DSM expects a LEEF format, you might need a custom DSM to parse the data.
A gateway log source works in the same way as other log sources by using its selected protocol to reach
out and collect events. The difference between a gateway log source and other log sources occurs when
the collected events are ready to be posted. A normal log source attempts to force the events to be parsed
by the selected DSM. A gateway log source sends the events as a Syslog payload with a default identifier
set to either 0.0.0.0 or to the connected Services IP address.
When an event is posted to the event pipeline as a Syslog payload, the events are handled by log source
auto detection. If an existing dummy log source with the provided identifier exists, the event is handled
by that log source, regardless of whether the event parses with that DSM. If no existing dummy log
source exists, the event is parsed by DSMs that support auto detection. If the event correctly parses with a
DSM, then it updates the identifier to “IP or Host @ DSM” and creates a log source.
Log sources that are automatically created do not have their identifier set by the protocol. These log
sources identifiers are in the “IP or Host @ DSM Type” format. To match to an automatically created
dummy log source, the Syslog payload must have an identifier that is the same IP or Host, and the
selected DSM must be able to parse it. The default identifier is sent as [IP or Host], not “IP or Host @
DSM Type”. For the identifier to be updated with the DSM Type, it must parse with that DSM Type. If
you use the default settings, events that cannot be parsed are sent directly to sim-generic.
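The auto-detection decision flow described above can be sketched as follows. The `existing_sources` mapping and the DSM objects are illustrative stand-ins for internal state, not QRadar APIs.

```python
# Hypothetical sketch of how a gateway-posted syslog payload is routed.
def route_event(payload, identifier, existing_sources, autodetect_dsms):
    """Decide which log source handles a payload posted with an identifier."""
    if identifier in existing_sources:
        # A log source with this identifier always wins, whether or not
        # its DSM can actually parse the event.
        return existing_sources[identifier]
    for dsm in autodetect_dsms:
        if dsm.parses(payload):
            # Auto detection creates a log source and updates the
            # identifier to the "IP or Host @ DSM" form.
            new_identifier = f"{identifier} @ {dsm.name}"
            existing_sources[new_identifier] = new_identifier
            return new_identifier
    return "sim-generic"  # events that cannot be parsed fall through
```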
Tip:
Manually created log sources that have an identifier that matches the identifier of the Syslog payload are
used even if the DSM of the log source fails to parse the event.
To configure a gateway log source, enable the Use As A Gateway Log Source option for the selected
protocol. If you enable this option, the events are sent to the event pipeline and are autodetected. To get
the maximum value from this feature, use the Log source identifier pattern.
Log source identifier pattern
Use the log source identifier pattern to customize the identifier that is sent with payloads when an
event is posted. You can choose identifiers for your event formats and event types.
To use the full capabilities of the log source identifier pattern, you must have a basic
understanding of regular expressions (regex) and how to use them.
The log source identifier pattern accepts a list of key-value pairs. A key is an identifier that is
represented as a regex. The value is a regex that matches to event data. When the regex matches to
the data within the event, then it posts the event with the identifier that is associated with that
value.
Basic example
Key1 = value1
Key2 = value2
Key3 = value3
In this example, if an event contains a value of “value1”, the identifier for that event is Key1. If an
event contains a value of “value2”, the identifier is Key2. If an event contains both “value1” and
“value2”, the first identifier, Key1, is matched.
In this example, you must manually create three Syslog log sources, one for each of the identifier
options.
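A minimal sketch of the first-match-wins behavior in the basic example, assuming each value is matched as a regular expression against the event payload; the function name is illustrative.

```python
import re

# The key/value pairs from the basic example; values are matched as regexes
# against the event data, and the first matching pair supplies the identifier.
PATTERN = [
    ("Key1", "value1"),
    ("Key2", "value2"),
    ("Key3", "value3"),
]

def identifier_for(event, pattern=PATTERN):
    for key, value_regex in pattern:
        if re.search(value_regex, event):
            return key  # first match wins
    return None
```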
Regex example
Key1 = value1
Key2 = \d\d\d (where \d\d\d is a regex that matches to three consecutive digits.)
\1 = \d(\d) (where \d(\d) is a regex that matches to two consecutive digits and captures the second digit. \1
references the first captured value in the value field.)
In this example, the following statements are true:
\d represents a "digit".
If an event contains a value of “value1”, the identifier for that event is Key1.
If an event contains “111” or “652” or any three consecutive digits, the identifier would be
Key2.
If an event contains two consecutive digits, such as “11”, “73”, or “24”, the identifier is \1.
The regex saves the second digit (“1”, “3”, and “4” in the examples) for future use with the
parentheses. The value that is saved by the regex is used later as the identifier. If you set
the key to “\1”, the key matches the first saved value in the regex. In this case, the
identifier is not a hardcoded value; instead, it can be any of ten values (0 - 9). Three identifiers
are in the sample events (“1”, “3”, and “4”).
You must create a log source with the identifiers Key1 and Key2, and a log source for each
possible Key1 value. In this example, for the events to go to the correct log source, you must
create three Syslog log sources. For the log sources, one has the identifier set to “1”, one has the
identifier set to “3” and one has the identifier set to “4”. To capture all of the possible identifiers,
you need ten Syslog log sources. Each log source corresponds to a single digit.
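The regex example, including the captured-digit key, can be sketched like this. Python's `Match.expand` applies the \1 backreference; the function name is illustrative.

```python
import re

# Sketch of the regex example above: when the key contains a backreference
# such as \1, the text captured by the value regex becomes the identifier.
def resolve_identifier(event):
    pairs = [("Key1", r"value1"), ("Key2", r"\d\d\d"), (r"\1", r"\d(\d)")]
    for key, value_regex in pairs:
        match = re.search(value_regex, event)
        if match:
            # Expand \1-style backreferences in the key with the captured
            # groups; plain keys are returned unchanged.
            return match.expand(key) if "\\" in key else key
    return None
```

For instance, `resolve_identifier("code 652")` returns `"Key2"`, and `resolve_identifier("x 24 y")` returns `"4"`, the captured second digit.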
Most gateway-supported services, such as Microsoft Azure Event Hubs and Google Pub/Sub,
offer ways to separate the data at the source. Keep your data in separate sources to
reduce the complexity on the QRadar side. For example, if you want to collect Windows
logs, Linux® logs, and audit logs, use three separate gateway log sources to simplify the
configuration. If you collect all of those logs in one source, QRadar must identify the
events and associate them with the correct log source.
Hardcode the regex if possible
If you hardcode the key, all of the events that match the value’s regex are collected by a
single log source. This action requires less effort to create and maintain log sources, and
they are easier to monitor.
Before you save and enable your log source, use online tools to test the regex.
Use the Event Retriever to display the assigned log source identifier in QRadar. You can
also use it to test that your regex and identifier are matching correctly.
Workspace
The Workspace shows you raw event data. Use sample event payloads to test the behavior of the log
source type; the Workspace area then shows you the data that you capture in real time.
All sample events are sent from the workspace to the DSM simulator, where properties are parsed and
QID maps are looked up. The results are displayed in the Log Activity Preview section. Click the edit
icon to open in edit mode.
In edit mode, you can paste up to 100,000 characters of event data into the workspace or edit data directly.
When you edit properties on the Properties tab, matches in the payload are highlighted in the workspace.
Custom properties and overridden system properties are also highlighted in the Workspace.
New in 7.4.1: You can specify a custom delimiter that makes it easier for QRadar® to ingest multiline
events. To ensure that your event is kept intact as a single multiline event, select the Override event
delimiter checkbox to separate the individual events based on another character or sequence of
characters. For example, if your configuration is ingesting multiline events, you can add a special
character to the end of each distinct event in the Workspace, and then identify this special character as
the event delimiter.
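The delimiter idea can be sketched as a simple split; the `##END##` marker is an invented example of such a special character sequence.

```python
# Sketch of the override-event-delimiter behavior: a custom marker at the end
# of each event keeps multiline events intact. "##END##" is an invented marker.
DELIMITER = "##END##"

def split_events(raw):
    """Split a raw stream into events, preserving newlines inside each event."""
    return [e.strip("\n") for e in raw.split(DELIMITER) if e.strip()]
```

A stream containing two multiline events separated by the marker yields two intact events, each keeping its internal line breaks.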
New in 7.4.2: QRadar can suggest regular expressions (regex) when you enter event data in
the Workspace. If you are not familiar with creating regex expressions, use this feature to generate your
regex. Highlight the payload text that you want to capture and in the Properties tab, click Suggest
Regex. The suggested expression appears in the Expression field. Alternatively, you can click
the Regex button in the Workspace and select the property that you want to write an expression for.
If QRadar cannot generate a suitable regex for your data sample, a system message appears.
Tip: The regex generator works best for fields in well-structured event payloads. If your payload consists of
complex data from natural language or unstructured events, the regex generator might not be able to parse it and
does not return a result.
Click the configure icon to select which columns to show or to hide in the Log Activity Preview window,
and to reorder the columns.
Properties
The Properties tab contains the combined set of system and custom properties that constitute a DSM
configuration. Configuring a system property differs from configuring a custom property. You can
override a property by selecting the Override system behaviour check box and defining the expression.
Note: If you override the Event Category property, you must also override the Event ID property.
Matches in the payload are highlighted in the event data in the workspace. The highlighting color is
two-toned, depending on what you capture. For example, the orange highlighting represents the capture group
value, while the bright yellow highlighting represents the rest of the regex that you specified. The
feedback in the workspace shows whether you have the correct regex. If an expression is in focus, the
highlighting in the workspace reflects only what that expression can match. If the overall property is in
focus, then the highlighting turns green and shows what the aggregate set of expressions can match,
taking into account the order of precedence.
In the Format String field, capture groups are represented by using the $<number> notation. For
example, $1 represents the first capture group from the regex, $2 is the second capture group, and so on.
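A sketch of the $<number> substitution, with an invented regex and payload; the function name is illustrative.

```python
import re

# Sketch of the Format String notation: $1, $2, ... are replaced with the
# corresponding regex capture groups. The sample regex and payload are invented.
def apply_format(regex, format_string, payload):
    match = re.search(regex, payload)
    if not match:
        return None
    result = format_string
    for i, group in enumerate(match.groups(), start=1):
        result = result.replace(f"${i}", group)
    return result
```

For example, `apply_format(r"user=(\w+) src=(\S+)", "$1@$2", "user=alice src=10.0.0.5")` returns `"alice@10.0.0.5"`.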
You can add multiple expressions to the same property, and you can assign precedence by dragging and
dropping the expressions to the top of the list.
A warning icon beside any of the properties indicates that no expression was added.
You can configure Auto Property Discovery for structured data that is in JSON format. By default, log
source types have Auto Property Discovery turned off.
When you enable Auto Property Discovery on the Configuration tab, the property discovery engine
automatically generates new properties to capture all fields that are present in the events that are received by a
log source type. You can configure the number of consecutive events to be inspected for new properties in
the Discovery Completion Threshold field. Newly discovered properties appear in the Properties tab, and are
made available for use in the rules and search indexes. However, if no new properties are discovered before the
threshold is reached, the discovery process is considered complete and Auto Property Discovery for that log
source type is disabled. You can manually enable Auto Property Discovery on the Configuration tab at any time.
Note: To continuously inspect events for a log source type, you must make sure that you set the Discovery
Completion Threshold value to 0.
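The threshold behavior can be sketched as a counter of consecutive events that yield no new properties; representing events as field dictionaries and the function name are illustrative assumptions.

```python
# Sketch of the Discovery Completion Threshold: discovery completes after N
# consecutive events with no new properties; a threshold of 0 never completes.
def run_discovery(events, known_properties, threshold):
    """Return (discovered property names, whether discovery completed)."""
    no_new_streak = 0
    for event in events:  # each event is a dict of field name -> value
        new_fields = set(event) - known_properties
        if new_fields:
            known_properties |= new_fields
            no_new_streak = 0
        else:
            no_new_streak += 1
            if threshold > 0 and no_new_streak >= threshold:
                return known_properties, True  # auto-discovery is disabled
    return known_properties, False  # threshold 0: keep inspecting events
```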
The following standard connection options pull data into the event pipeline:
JDBC
FTP
SFTP
SCP
The following standard connection options passively receive data into the event pipeline:
Syslog
HTTP Receiver
SNMP
QRadar also supports proprietary vendor-specific protocol API calls, such as Amazon Web Services.
Akamai Kona REST API protocol configuration options
To receive events from your Akamai Kona Platform, configure a log source to use the Akamai
Kona REST API protocol.
Amazon AWS S3 REST API protocol configuration options
The Amazon AWS S3 REST API protocol is an active outbound protocol that collects AWS
CloudTrail logs from Amazon S3 buckets.
Amazon Web Services protocol configuration options
The Amazon Web Services (AWS) protocol is an outbound/active protocol for IBM QRadar that
collects AWS CloudWatch Logs, Amazon Kinesis Data Streams, and Amazon Simple Queue
Service (SQS) messages.
Apache Kafka protocol configuration options
IBM QRadar uses the Apache Kafka protocol to read streams of event data from topics in a Kafka
cluster that uses the Consumer API. A topic is a category or feed name in Kafka where messages
are stored and published. The Apache Kafka protocol is an outbound or active protocol, and can
be used as a gateway log source by using a custom log source type.
Blue Coat Web Security Service REST API protocol configuration options
To receive events from Blue Coat Web Security Service, configure a log source to use the Blue
Coat Web Security Service REST API protocol.
Centrify Redrock REST API protocol configuration options
The Centrify Redrock REST API protocol is an outbound/active protocol for IBM Security
QRadar that collects events from Centrify Identity Platform.
Cisco Duo protocol configuration options
To receive authentication events from Cisco Duo, configure a log source to use the Cisco Duo
protocol.
Cisco Firepower eStreamer protocol configuration options
To collect events in IBM QRadar from a Cisco Firepower eStreamer (Event Streamer) service,
configure a log source to use the Cisco Firepower eStreamer protocol.
Cisco NSEL protocol configuration options
To monitor NetFlow packet flows from a Cisco Adaptive Security Appliance (ASA), configure
the Cisco Network Security Event Logging (NSEL) protocol source.
EMC VMware protocol configuration options
To receive event data from the VMWare web service for virtual environments, configure a log
source to use the EMC VMware protocol.
Forwarded protocol configuration options
To receive events from another Console in your deployment, configure a log source to use the
Forwarded protocol.
Google Cloud Pub/Sub protocol configuration options
The Google Cloud Pub/Sub protocol is an outbound/active protocol for IBM QRadar that collects
Google Cloud Platform (GCP) logs.
Google G Suite Activity Reports REST API protocol options
The Google G Suite Activity Reports REST API protocol is an active outbound protocol for IBM
QRadar that retrieves logs from Google G Suite.
HCL BigFix SOAP protocol configuration options (formerly known as IBM BigFix)
To receive Log Event Extended Format (LEEF) formatted events from HCL BigFix® appliances,
configure a log source that uses the HCL BigFix SOAP protocol.
HTTP Receiver protocol configuration options
To collect events from devices that forward HTTP or HTTPS requests, configure a log source to
use the HTTP Receiver protocol.
IBM Cloud Object Storage protocol configuration options
The IBM Cloud Object Storage protocol for IBM QRadar is an outbound or active protocol that
collects logs that are contained in objects from IBM Cloud Object Storage buckets.
IBM Fiberlink REST API protocol configuration options
To receive Log Event Extended Format (LEEF) formatted events from IBM MaaS360®
appliances, configure a log source that uses the IBM Fiberlink® REST API protocol.
IBM Security Randori REST API protocol configuration options
To receive events from IBM Security Randori, configure a log source to communicate with
the IBM Security Randori REST API protocol.
IBM Security QRadar EDR REST API protocol configuration options
To receive events from IBM Security QRadar EDR, configure a log source to communicate with
the IBM Security QRadar EDR REST API protocol.
IBM Security Verify Event Service protocol configuration options
To receive events from IBM Security Verify, configure a log source in IBM QRadar to use the
IBM Security Verify Event Service protocol.
JDBC protocol configuration options
QRadar uses the JDBC protocol to collect information from tables or views that contain event data
from several database types.
JDBC - SiteProtector protocol configuration options
You can configure log sources to use the Java™ Database Connectivity (JDBC) - SiteProtector
protocol to remotely poll IBM Proventia® Management SiteProtector® databases for events.
Juniper Networks NSM protocol configuration options
To receive Juniper Networks NSM and Juniper Networks Secure Service Gateway (SSG) logs
events, configure a log source to use the Juniper Networks NSM protocol.
Juniper Security Binary Log Collector protocol configuration options
You can configure a log source to use the Security Binary Log Collector protocol. With this
protocol, Juniper appliances can send audit, system, firewall, and intrusion prevention system
(IPS) events in binary format to QRadar.
Log File protocol configuration options
To receive events from remote hosts, configure a log source to use the Log File protocol.
Microsoft Azure Event Hubs protocol configuration options
The Microsoft Azure Event Hubs protocol is an outbound and active protocol for IBM Security
QRadar that collects events from Microsoft Azure Event Hubs.
Microsoft Defender for Endpoint SIEM REST API protocol configuration options
Configure a Microsoft Defender® for Endpoint SIEM REST API protocol to receive events from
supported Device Support Modules (DSMs).
Microsoft DHCP protocol configuration options
To receive events from Microsoft DHCP servers, configure a log source to use the Microsoft
DHCP protocol.
Microsoft Exchange protocol configuration options
To receive SMTP, OWA, and message tracking events from Microsoft Windows
Exchange 2007, 2010, 2013, and 2016 servers, configure a log source to use the Microsoft
Exchange protocol.
Microsoft Graph Security API protocol configuration options
To receive events from the Microsoft Graph Security API, configure a log source in IBM
QRadar to use the Microsoft Graph Security API protocol.
Microsoft IIS protocol configuration options
You can configure a log source to use the Microsoft IIS protocol. This protocol supports a single
point of collection for W3C format log files that are located on a Microsoft IIS web server.
Microsoft Security Event Log protocol configuration options
Support for the Windows Event Log protocols ended on 31 October 2022.
MQ protocol configuration options
To receive messages from a message queue (MQ) service, configure a log source to use the MQ
protocol. The protocol name displays in IBM QRadar as MQ JMS.
Office 365 Message Trace REST API protocol configuration options
The Office 365 Message Trace REST API protocol for IBM Security QRadar collects message
trace logs from the Message Trace REST API. This active outbound protocol is used to collect
Office 365 email logs.
Office 365 REST API protocol configuration options
The Office 365 REST API protocol for IBM Security QRadar is an active outbound protocol.
Okta REST API protocol configuration options
To receive events from Okta, configure a log source in IBM QRadar by using the Okta REST API
protocol.
OPSEC/LEA protocol configuration options
To receive events on port 18184, configure a log source to use the OPSEC/LEA protocol.
Oracle Database Listener protocol configuration options
To remotely collect log files that are generated from an Oracle database server, configure a log
source to use the Oracle Database Listener protocol source.
PCAP Syslog Combination protocol configuration options
To collect events from Juniper SRX Series Services Gateway or Juniper Junos OS Platform that
forward packet capture (PCAP) data, configure a log source to use the PCAP Syslog Combination
protocol.
RabbitMQ protocol configuration options
To receive messages from a Cisco AMP DSM, configure a log source to use the RabbitMQ
protocol.
SDEE protocol configuration options
You can configure a log source to use the Security Device Event Exchange (SDEE)
protocol. QRadar uses the protocol to collect events from appliances that use SDEE servers.
Seculert Protection REST API protocol configuration options
To receive events from Seculert, configure a log source to use the Seculert Protection REST API
protocol.
SMB Tail protocol configuration options
You can configure a log source to use the SMB Tail protocol. Use this protocol to watch events on
a remote Samba share and receive events from the Samba share when new lines are added to the
event log.
SNMPv2 protocol configuration options
You can configure a log source to use the SNMPv2 protocol to receive SNMPv2 events.
SNMPv3 protocol configuration options
You can configure a log source to use the SNMPv3 protocol to receive SNMPv3 events.
Sophos Enterprise Console JDBC protocol configuration options
To receive events from Sophos Enterprise Consoles, configure a log source to use the Sophos
Enterprise Console JDBC protocol.
Sourcefire Defense Center eStreamer protocol options
Sourcefire Defense Center eStreamer protocol is now known as Cisco Firepower eStreamer
protocol.
Syslog Redirect protocol overview
The Syslog Redirect protocol is an inbound/passive protocol that is used as an alternative to the
Syslog protocol. Use this protocol when you want QRadar to identify the specific device name
that sent the events. QRadar can passively listen for Syslog events by using TCP or UDP on any
unused port that you specify.
TCP Multiline Syslog protocol configuration options
The TCP Multiline Syslog protocol is a passive inbound protocol that uses regular expressions to
identify the start and end pattern of multiline events.
TLS Syslog protocol configuration options
Configure a TLS Syslog protocol log source to receive encrypted syslog events from up to 50
network devices that support TLS Syslog event forwarding for each listener port.
UDP multiline syslog protocol configuration options
To create a single-line syslog event from a multiline event, configure a log source to use the UDP
multiline protocol. The UDP multiline syslog protocol uses a regular expression to identify and
reassemble the multiline syslog messages into a single event payload.
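The reassembly idea behind the multiline syslog protocols can be sketched as follows. This is an illustrative model only, not QRadar's implementation; the `EventID=` start pattern is a hypothetical example, and your devices' multiline events will need their own regular expression.

```python
import re

# Hypothetical start-of-event marker: each new event begins on a line
# that contains "EventID=<number>" (assumed format, for illustration).
START_PATTERN = re.compile(r"EventID=\d+")

def reassemble(lines):
    """Group incoming syslog lines into single-event payloads.

    A new event begins on any line that matches START_PATTERN; all
    following lines are appended to that event until the next match.
    """
    events, current = [], []
    for line in lines:
        if START_PATTERN.search(line) and current:
            events.append(" ".join(current))
            current = []
        current.append(line)
    if current:
        events.append(" ".join(current))
    return events

raw = [
    "EventID=100 login failed",
    "  user=alice",
    "  reason=bad password",
    "EventID=101 login ok",
    "  user=bob",
]
print(reassemble(raw))  # two single-line event payloads
```

The same start-pattern approach underlies the TCP Multiline Syslog protocol, which additionally supports an end pattern.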
VMware vCloud Director protocol configuration options
To collect events from VMware vCloud Director virtual environments, create a log source that
uses the VMware vCloud Director protocol, which is an active outbound protocol.
Adding a log source to receive events
Use the QRadar® Log Source Management app to add new log sources to receive events from your
network devices or appliances.
Before you begin
Download and install a device support module (DSM) that supports the log source. A DSM is a software
application that contains the event patterns that are required to identify and parse events. The events are parsed
from the original format of the event log to the format that QRadar can use. You can install a DSM from IBM®
Fix Central (https://www-945.ibm.com/support/fixcentral/). For more information, see the DSM Configuration
Guide.
You can also create a custom log source type without a DSM.
Procedure
1. In the QRadar Log Source Management app, click + New Log Source.
2. Click Single Log Source.
3. On the Select a Log Source Type page, select a log source type and click Select Protocol Type.
4. On the Select a Protocol Type page, select a protocol and click Configure Log Source
Parameters.
5. On the Configure the Log Source parameters page, configure the log source parameters and
click Configure Protocol Parameters.
6. On the Configure the protocol parameters page, configure the protocol-specific parameters.
7. Optional: If the server certificates for the protocol are uploaded to the centralized certificate store,
select the certificate from the Server Certificate Store Alias list.
If your log source requires a server certificate that is not uploaded to the centralized certificate
store, and you have System Administrator permission, you can upload the certificate from
the IBM QRadar Certificate Management app.
o If the QRadar Certificate Management app is installed, in the Server Certificate Store
Alias list, select Upload new certificate. The Certificate Management app opens.
o If the QRadar Certificate Management app is not installed, in the Server Certificate Store
Alias list, select Download Certificate Management app to open the IBM Security App
Exchange and download the app.
8. Optional: If your configuration can be tested, the Test Protocol Parameters option is listed in the
Steps pane. When you test your configuration, you can identify any errors with your protocol
parameters. For more information, see Testing log sources. To test your configuration, follow
these steps:
o Click Test Protocol Parameters, and then click Start Test.
o To fix any errors, click Configure Protocol Parameters.
On the Configure the protocol parameters page, configure the protocol-specific
parameters, then test your protocol again.
If your configuration can be tested, but you don't want to test it, click Skip Test and
Finish.
9. Click Finish.
Results
Your log source is listed on the Log Sources page.
If you include a comma in a parameter, enclose the value in double quotation marks.
For example, a multitude of similar events created during a Denial Of Service attack can be
converted from hundreds of thousands of events into only a few dozen records, while
maintaining the count of the number of actual events received.
Event coalescing starts after three events have been found with matching properties within a 10
second window. Additional events that occur within the 10 second period are coalesced
together, with a count of the events noted. For each record containing coalesced events, only the
payload of the first coalesced event is retained.
For example, suppose QRadar receives 1,005 events within a 10 second window, each with the
same QRadar Identifier (QID), Source IP, Destination IP, Destination port, and Username, and
coalescing is enabled for the log source. The Log Activity tab
represents these 1,005 events as follows:
The first three events display in Log Activity as individual events with unique event
payloads.
Event 1: full event details saved.
Event 2: full event details saved.
Event 3: full event details saved.
The fourth event contains a unique payload; however, the remaining 1,001 events that
also arrived within that 10 second period are coalesced and only the original payload for
the fourth event is retained.
Event 4: full event details saved.
Event 5 - 1,005: event details are NOT saved to disk for each individual event. The count
for the fourth event displays in the user interface as Multiple (1001). The value in
parentheses (x) indicates that the log source coalesced that number of events with a
duplicate QRadar Identifier (QID), Source IP, Destination IP, Destination port, and
Username into the fourth event, because they all occurred within the 10 second
window. Events 5 - 1,005 do not have unique event payloads stored on disk.
Note: User-created rules that count events still update the event count even when
events are coalesced. For example, if a user creates a rule with the test "and when at
least 5 events are seen with the same Username", the events that are represented by the
Multiple (x) field still update the count that QRadar tracks, and coalesced data can trigger an
offense or rule response.
QRadar provides the ability to disable coalescing if you need to retain all event
payloads. You can disable coalescing at the system level or for each log source.
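The coalescing behavior described above can be sketched in code. This is an illustrative model, not QRadar's actual implementation: events sharing the same five properties within a 10 second window keep their own payloads for the first four occurrences, and later duplicates only increment a counter on the fourth record.

```python
WINDOW = 10  # seconds
KEEP = 4     # events 1-4 keep their own payload; 5+ coalesce into event 4

def coalesce(events):
    """events: iterable of (ts, qid, src, dst, dport, user, payload).

    Returns the records that would be stored, as [payload, extra_count]
    pairs; a nonzero extra_count mimics the 'Multiple (x)' display on
    the fourth record.
    """
    stored = []
    windows = {}  # key -> [window_start_ts, events_in_window, index_of_4th_record]
    for ts, qid, src, dst, dport, user, payload in events:
        key = (qid, src, dst, dport, user)
        w = windows.get(key)
        if w is None or ts - w[0] > WINDOW:
            # start a new 10-second window for this property combination
            w = windows[key] = [ts, 0, None]
        w[1] += 1
        if w[1] <= KEEP:
            stored.append([payload, 0])       # full payload saved to disk
            if w[1] == KEEP:
                w[2] = len(stored) - 1        # remember the 4th record
        else:
            stored[w[2]][1] += 1              # bump Multiple (x) counter
    return stored
```

Feeding the model 1,005 identical events within one window yields four stored records, with the fourth carrying a count of 1,001, matching the example above.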
Port - Protocol - Description
445 - TCP - Microsoft Directory Services for file transfers that use Windows share
49152 - 65535 - TCP - Default dynamic port range for TCP/IP. Note: Exchange servers are configured for a port range of 6005 - 58321 by default.
The MSEVEN protocol uses port 445. The NETBIOS ports (137 - 139) can be used for host name
resolution. When the WinCollect agent polls a remote event log by using MSEVEN6, the initial
communication with the remote machine occurs on port 135 (dynamic port mapper), which assigns the
connection to a dynamic port. The default port range for dynamic ports is between port 49152 and port
65535, but might differ depending on the server type. For example, Exchange servers are
configured for a port range of 6005 – 58321 by default.
To allow traffic on these dynamic ports, enable the following two inbound rules on the Windows
server that is being polled:
Procedure
1. On your desktop, select Start > Control Panel.
2. Click the System and Security icon.
3. Click Allow a program through Windows Firewall.
4. If prompted, click Continue.
5. Click Change Settings.
6. From the Allowed programs and features pane, select Remote Event Log Management.
Depending on your network, you might need to select more network types.
7. Click OK.
To change the column order, hover over the icon ( ), and drag the columns to arrange
them in the order that you want them to display.
3. Each filter shows a count of the number of options for that filter. Each option in a filter also has a
count, which indicates how many log sources match the option for the filters that you apply. If the
filters exclude log sources that match the option, this number might decrease as you apply more
filters.
Exporting events
You can export events in Extensible Markup Language (XML) or Comma-Separated Values (CSV)
format.
Procedure
1. Click the Log Activity tab.
2. Optional. If you are viewing events in streaming mode, click the Pause icon to pause streaming.
3. From the Actions list box, select one of the following options:
o Export to XML > Visible Columns - Select this option to export only the columns that
are visible on the Log Activity tab. This is the recommended option.
o Export to XML > Full Export (All Columns) - Select this option to export all event
parameters. A full export can take an extended period of time to complete.
o Export to CSV > Visible Columns - Select this option to export only the columns that are
visible on the Log Activity tab. This is the recommended option.
o Export to CSV > Full Export (All Columns) - Select this option to export all event
parameters. A full export can take an extended period of time to complete.
4. If you want to resume your activities while the export is in progress, click Notify When Done.
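An exported CSV file can be post-processed with any standard CSV reader. The snippet below is a hedged sketch: the column names mirror the Log Activity columns described later in this document but are assumptions, not a guaranteed export schema. It also illustrates the rule that values containing commas are enclosed in double quotation marks.

```python
import csv
import io

# Hypothetical fragment of an "Export to CSV > Visible Columns" file;
# column names are assumed for illustration only.
exported = io.StringIO(
    'Start Time,Event Name,Log Source,Event Count,Source IP,Username\n'
    '2024-05-01 09:00:01,Login Failure,LinuxServer1,1,10.0.0.5,alice\n'
    '2024-05-01 09:00:02,"Login Failure, repeated",LinuxServer1,3,10.0.0.5,alice\n'
)

reader = csv.DictReader(exported)
# Summing Event Count accounts for coalesced records, which represent
# more than one underlying event.
total = sum(int(row["Event Count"]) for row in reader)
print(total)  # 4
```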
Current Filters - The top of the table displays the details of the filters that are applied to the search results. To clear these filter values, click Clear Filter. Note: This parameter is only displayed after you apply a filter.
View - From this list box, you can select the time range that you want to filter for.
Current Statistics - When not in Real Time (streaming) or Last Minute (auto refresh) mode, current statistics are displayed, including:
Note: Click the arrow next to Current Statistics to display or hide the statistics.
Total Results - Specifies the total number of results that matched your search criteria.
Data Files Searched - Specifies the total number of data files searched during the specified time span.
Compressed Data Files Searched - Specifies the total number of compressed data files searched within the specified time span.
Index File Count - Specifies the total number of index files searched during the specified time span.
Duration - Specifies the duration of the search.
Note: Current statistics are useful for troubleshooting. When you contact Customer Support to troubleshoot events, you might be asked to supply current statistical information.
Charts Displays configurable charts that represent the records that are matched by the time interval and
grouping option. Click Hide Charts if you want to remove the charts from your display. The charts are
only displayed after you select a time frame of Last Interval (auto refresh) or above, and a grouping
option to display. For more information about configuring charts, see Chart management.
Note: If you use Mozilla Firefox as your browser and an ad blocker browser extension is installed,
charts do not display. To display charts, you must remove the ad blocker browser extension. For
more information, see your browser documentation.
Offenses icon - Click this icon to view details of the offense that is associated with this event. For more information, see Chart management. Note: Depending on your product, this icon might not be available. You must have IBM® QRadar® SIEM.
Start Time - Specifies the time of the first event, as reported to QRadar by the log source.
Event Name - Specifies the normalized name of the event.
Log Source - Specifies the log source that originated the event. If there are multiple log sources that are associated with this event, this field specifies the term Multiple and the number of log sources.
Event Count - Specifies the total number of events that are bundled in this normalized event. Events are bundled when many of the same type of event for the same source and destination IP address are detected within a short time.
Time - Specifies the date and time when QRadar received the event.
Low Level Category - Specifies the low-level category that is associated with this event. For more information about event categories, see the IBM QRadar Administration Guide.
Source IP - Specifies the source IP address of the event. Note: If you select the Normalized (With IPv6 Columns) display, refer to the Source IPv6 parameter for IPv6 events.
Destination Port - Specifies the destination port of the event.
Username - Specifies the user name that is associated with this event. User names are often available in authentication-related events. For all other types of events where the user name is not available, this field specifies N/A.
Magnitude - Specifies the magnitude of this event. Variables include credibility, relevance, and severity. Point your mouse over the magnitude bar to display values and the calculated magnitude.
Note: IPv4 events display 0:0:0:0:0:0:0:0 in the Source IPv6 and Destination IPv6 columns.
Procedure
1. Click the Log Activity tab.
2. Optional: From the Display list box, select Normalized (With IPv6 Columns).
The Normalized (With IPv6 Columns) display shows source and destination IPv6 addresses for
IPv6 events.
3. From the View list box, select the time frame that you want to display.
4. Click the Pause icon to pause streaming.
5. Double-click the event that you want to view in greater detail. For more information, see Event
details.
If the export directory does not contain enough space, the export of event, flow, and offense data is
canceled.
Exporting flows
You can export flows in Extensible Markup Language (XML) or Comma Separated Values (CSV)
format. The length of time that is required to export your data depends on the number of parameters
specified.
Procedure
1. Click the Network Activity tab.
2. Optional. If you are viewing flows in streaming mode, click the Pause icon to pause streaming.
3. From the Actions list box, select one of the following options:
o Export to XML > Visible Columns - Select this option to export only the columns that
are visible on the Network Activity tab. This is the recommended option.
o Export to XML > Full Export (All Columns) - Select this option to export all flow
parameters. A full export can take an extended period of time to complete.
o Export to CSV > Visible Columns - Select this option to export only the columns that are
visible on the Network Activity tab. This is the recommended option.
o Export to CSV > Full Export (All Columns) - Select this option to export all flow
parameters. A full export can take an extended period of time to complete.
4. If you want to resume your activities, click Notify When Done.
Results
When the export is complete, you receive a notification. If you did not select the Notify
When Done icon, the Status window is displayed.
Scan policies
A scan policy provides you with a central location to configure specific scanning requirements.
You must have the correct license capabilities to perform the following scanning operations. You can use
scan policies to specify scan types, ports to be scanned, vulnerabilities to scan for and scanning tools to
use. In IBM QRadar Vulnerability Manager, a scan policy is associated with a scan profile and is used to
control a vulnerability scan. You use the Scan Policies list on the Details tab of the Scan Profile
Configuration page to associate a scan policy with a scan profile.
You can create a new scan policy or copy and modify a pre-configured policy that is distributed
with QRadar Vulnerability Manager.
Pre-configured scan policies
The following pre-configured scan policies are distributed with QRadar Vulnerability Manager:
Full scan, Discovery scan, Database scan, Patch scan, PCI scan, Web scan
A description of each pre-configured scan policy is displayed on the Scan Policies page.
Scan policy automatic updates for critical vulnerabilities
As part of IBM QRadar Vulnerability Manager daily automatic updates, you receive new scan
policies for tasks such as detecting zero-day vulnerabilities on your assets.
Use scan policies that are delivered by automatic update to create scan profiles to scan for specific
vulnerabilities. To view all scan policies on your system, go to Administrative > Scan
Policies on the Vulnerabilities tab.
Do not edit scan policies that are delivered by automatic update, because your changes might be
overwritten by later updates. Instead, create a copy of the policy and edit the copy.
If you delete a scan policy that is delivered by automatic update, it can be recovered only
by QRadar customer support.
Modifying a pre-configured scan policy
In IBM QRadar Vulnerability Manager, you can copy a pre-configured scan policy and modify the
policy to your exact scanning requirements.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Policies.
3. On the Scan Policies page, click a pre-configured scan policy.
4. On the toolbar, click Edit.
5. Click Copy.
6. In the Copy scan policy window, type a new name in the Name field and click OK.
7. Click the copy of your scan policy and on the toolbar, click Edit.
8. In the Description field, type new information about the scan policy.
Important: If you modify the new scan policy, you must update the information in the
description.
9. To modify your scan policy, use the Port Scan, Vulnerabilities, Tool Groups,
or Tools tabs.
Restriction: Depending on the Scan Type that you select, you cannot use all the tabs on
the Scan Policy window.
Configuring a scan policy
In IBM QRadar Vulnerability Manager, you can configure a scan policy to meet any specific
requirements for your vulnerability scans. You can copy and rename a preconfigured scan policy
or you can add a new scan policy. You can't edit a preconfigured scan policy.
Before you begin
You must have the correct license capabilities to perform the following scanning operations.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Policies.
3. On the toolbar, click Add.
4. Type the name and description of your scan policy.
To configure a scan policy, you must at least configure the mandatory fields in the New
Scan Policy window, which are the Name and Description fields.
5. From the Scan Type list, select the scan type.
6. To manage and optimize the asset-discovery process, click the Asset Discovery tab.
7. To manage the ports and protocols that are used for a scan, click the Port Scan tab.
8. To include specific vulnerabilities in your patch scan policy, click the Vulnerabilities tab.
Note: The Vulnerabilities tab is available only when you select a patch scan.
9. To include or exclude tool groups from your scan policy, click the Tool Groups tab.
Note: The Tool Groups tab is available only when you select a zero-credentialed full-scan
or full-scan plus policy.
10. To include or exclude tools from a scan policy, click the Tools tab.
Note: The Tools tab is available only when you select a zero-credentialed Full Scan or Full
Scan Plus policy.
Important: If you do not modify the tools or tool groups, and you select the Full option as
your scan type, then all the tools and tool groups that are associated with a full scan are
included in your scan policy.
11. Click Save.
3. Change the retention value or check that it is suitable for your needs. The default asset IP retention
value is 120 days.
QRadar Vulnerability Manager and QRadar Risk
Manager licenses
IBM QRadar Vulnerability Manager and IBM QRadar Risk Manager are combined into one offering and
both are enabled through a single base license. The combined offering provides an integrated network
scanning and vulnerability management workflow. With the base license, you are entitled to use QRadar
Vulnerability Manager to view vulnerability dashboards, vulnerabilities detected by third-party
scanners, QRadar Vulnerability Manager filters, and QRadar Vulnerability Manager reports. You can
integrate QRadar Risk Manager with up to 50 standard configuration sources. If you are entitled to
either QRadar Vulnerability Manager or QRadar Risk Manager, you are automatically entitled to the base
license allowance for the other product. You require extra licenses to integrate with more than 50
configuration sources.
Extending the QRadar Vulnerability Manager temporary license period
By default, when you install IBM QRadar SIEM, you can see the Vulnerabilities tab because a
temporary license key is also installed. When the temporary license expires, you can extend it for
an extra four weeks.
Procedure
1. On the navigation menu ( ), click Admin.
2. Click the Vulnerability Manager icon in the Try it out area.
3. To accept the end-user license agreement, click OK.
When the extended license period is finished, you must wait six months before you can
activate the temporary license again. To have permanent access to QRadar Vulnerability
Manager, you must purchase a license.
The Common Vulnerability Scoring System (CVSS) score is calculated from the following metric groups:
Base
Temporal
Environmental
Base
The base score severity range is 0 - 10 and represents the inherent characteristics of the vulnerability. The base
score has the largest bearing on the final CVSS score, and can be further divided into the following subscores:
Impact
The impact subscore represents metrics for confidentiality impact, integrity impact, and the
availability impact of a successfully exploited vulnerability.
Exploitability
In CVSS v2, the exploitability subscore represents metrics for Access Vector, Access Complexity,
and Authentication. The subscore measures how the vulnerability is accessed, the complexity of
the attack, and the number of times an attacker must authenticate to successfully exploit a
vulnerability.
In CVSS v3, the exploitability subscore represents metrics for Attack Vector, Attack Complexity,
Privileges Required, User Interaction, and Scope. The subscore measures how the vulnerability is
accessed, the complexity of the attack, any required privileges, the interaction needed between the
attacker and another user, and the impact on resources beyond the vulnerable component.
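The base score calculation can be illustrated with a short sketch of the CVSS v3.1 base formula for the Scope: Unchanged case. The metric weights come from the CVSS v3.1 specification; the rounding step approximates the specification's Roundup function with `math.ceil`, and this sketch omits the Scope: Changed variant.

```python
import math

# CVSS v3.1 metric weights (Scope: Unchanged), per the CVSS v3.1 spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact weights

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, Scope: Unchanged only."""
    # Impact subscore: confidentiality, integrity, availability
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    # Exploitability subscore: how the vulnerability is accessed
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # Roundup: smallest number, to one decimal place, >= the value
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

A vector with no confidentiality, integrity, or availability impact scores 0.0, since the impact subscore gates the whole base score.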
Temporal
The temporal score represents the characteristics of a vulnerability threat that change over time, and consists of
the following metrics:
Exploitability
The availability of techniques or code that can be used to exploit the vulnerability, which changes
over time.
Remediation Level
The level of remediation that is available for the vulnerability, such as an official fix, a temporary
fix, or a workaround.
Report Confidence
The level of confidence in the existence of the vulnerability and the credibility of its technical
details.
Environmental
The environmental score represents characteristics of the vulnerability that are impacted by the user's
environment. Configure the following environmental metrics to highlight the vulnerabilities of important
or critical assets by applying higher environmental metrics. Apply the highest scores to the most
important assets because losses that are associated with these assets have greater consequences for the
organization.
Collateral Damage Potential
The potential for loss of life or physical assets through the damage or theft of this asset, or the
economic loss of productivity or revenue.
Integrity Requirement
This metric indicates the level of impact to the loss of integrity when a vulnerability is
successfully exploited on this asset.
Availability Requirement
The level of impact to the asset's availability when a vulnerability is successfully exploited on this
asset.
When you deploy your vulnerability processor to a managed host, all vulnerabilities are processed on the
managed host.
Restriction: After you deploy processing to a dedicated QRadar Vulnerability Manager managed host, any scan
profiles or scan results that are associated with a QRadar console processor are not displayed. You can continue to
search and view vulnerability data on the Manage Vulnerabilities pages.
Field - Description
NAT Group - If the managed host is on the same subnet as the QRadar Console, select the QRadar Console that is on the NATed network. If the managed host is not on the same subnet as the QRadar Console, select the managed host that is on the NATed network.
Public IP - The managed host uses this IP address to communicate with other managed hosts in different networks that use NAT.
Configure access to an IBM hosted scanner and scan your DMZ
You can configure access to an IBM hosted scanner and scan the assets in your DMZ.
Run an automatic update after you add the scanner or other managed host with scanning
capabilities. Alternatively, you can scan after the default daily scheduled automatic update runs. If
the automatic updates for other scanners are run earlier, then the automatic updates for all the
scanners might not be fully synchronized until the next daily update.
Scanning the assets in your DMZ
In IBM QRadar Vulnerability Manager, you can connect to an external scanner and scan the assets
in your DMZ for vulnerabilities.
If you want to scan the assets in the DMZ for vulnerabilities, you do not need to deploy a scanner
in your DMZ. You must configure QRadar Vulnerability Manager with a hosted IBM scanner that
is located outside your network.
Detected vulnerabilities are processed by the processor on either your QRadar console or QRadar
Vulnerability Manager managed host.
Procedure
1. Configure your network and assets for external scans.
2. Configure QRadar Vulnerability Manager to scan your external assets.
1. Configuring your network and assets for external scans
To scan the assets in a DMZ network, you must configure your network and
inform IBM of the assets that you want to scan.
2. Configuring QRadar Vulnerability Manager to scan your external assets
To scan the assets in your DMZ, you must configure IBM QRadar Vulnerability
Manager by using the System and License Management tool on the Admin tab.
Supported web browsers
For the features in IBM QRadar products to work properly, you must use a supported web
browser.
The following table lists the supported web browser versions.
Web browser - Supported versions
64-bit Mozilla Firefox - Latest
64-bit Microsoft Edge - Latest
64-bit Google Chrome - Latest
Table 1. Supported web browsers for QRadar products
The Microsoft Internet Explorer web browser is no longer supported on QRadar 7.4.0 or later.
Scan configuration
In IBM® QRadar® Vulnerability Manager, all network scanning is controlled by the scan profiles that
you create. You can create multiple scan profiles and configure each profile differently depending on the
specific requirements of your network.
Scan profiles
Specify the network nodes, domains, or virtual domains that you want to scan.
Specify the network assets that you want to exclude from scans.
Create operational windows, which define the times at which scans can run.
Security profiles must be updated with an associated domain. Domain-level restrictions are
not applied until the security profiles are updated, and the changes are deployed.
o To scan your network by using a predefined set of scanning criteria, select a scan type
from the Scan Policies list.
o If you configured centralized credentials for assets, click the Use Centralized
Credentials check box.
4. Click Save.
Creating an external scanner scan profile
In IBM QRadar Vulnerability Manager, you can configure scan profiles to use a hosted scanner to
scan assets in your DMZ.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, click Administrative > Scan Profiles.
3. On the toolbar, click Add.
When you create a scan profile, the only mandatory fields are Name and IP Addresses on
the Details tab of the Scan Profile Configuration page. To create an external scanner
profile, you must also follow the remaining steps in this procedure.
4. To configure the external scanner profile, complete the following steps:
a. From the Scan Server list, select IbmExternalScanner.
b. From the Scan Policies list, select Full Scan or Web Scan.
c. Click the Domain and Web App tab. In the Virtual Webs pane, enter the domain
and IP address information for the websites and applications that you want to scan.
d. Click Save.
Important: Authenticated scans are not conducted from the external scanner.
Creating a benchmark profile
To create Center for Internet Security compliance scans, you must configure benchmark profiles.
You use CIS compliance scans to test for Windows and Red Hat Enterprise Linux® CIS
benchmark compliance.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, click Administrative > Scan Profiles.
3. On the toolbar, click Add Benchmark.
4. If you want to use pre-defined centralized credentials, select the Use Centralized
Credentials checkbox.
Credentials that are used to scan Linux operating systems must have root privileges.
Credentials that are used to scan Windows operating systems must have administrator
privileges.
5. If you are not using dynamic scanning, select a QRadar® Vulnerability Manager scanner
from the Scan Server list.
6. To enable dynamic scanning, click the Dynamic server selection checkbox.
If you configured domains in the Domain Management window in the Admin tab, you can
select a domain from the Domain list. Only assets within the CIDR ranges and domains
that are configured for your scanners are scanned.
7. In the When To Scan tab, set the run schedule, scan start time, and any pre-defined
operational windows.
8. In the Email tab, define what information to send about this scan and to whom to send it.
9. If you are not using centralized credentials, add the credentials that the scan requires in
the Additional Credentials tab.
Credentials that are used to scan Linux operating systems must have root privileges.
Credentials that are used to scan Windows operating systems must have administrator
privileges.
By default, scans complete a fast scan by using the Transmission Control Protocol (TCP)
and User Datagram Protocol (UDP) protocol. A fast scan includes most ports in the range
1 - 1024.
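The idea of a fast scan over ports 1 - 1024 can be illustrated with a toy TCP connect probe. This is not how QRadar Vulnerability Manager scans (its scanners are far more capable, and this sketch covers TCP only, omitting UDP probing); it only shows the concept of probing a port range.

```python
import socket

def tcp_probe(host, ports, timeout=0.2):
    """Return the subset of ports that accept a TCP connection.

    A toy version of the 'fast scan' idea: attempt a TCP connect to
    each port and record the ones that succeed.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Probe a small slice of the default fast-scan range against localhost;
# the output depends on which services are running locally.
print(tcp_probe("127.0.0.1", range(20, 26)))
```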
The scanning process requires a scan profile. The scan profile determines the scanning
configuration options that are used when the scan runs.
To view a scan profile in the Run Vulnerability Scan window, you must select the On
Demand Scanning Enabled check box in the Details tab on the Scan Profile
Configuration page.
Important: The scan profile that you select might be associated with multiple scan targets
or IP address ranges. However, when you use the right-click option, only the asset that you
select is scanned.
6. Click Scan Now.
7. Click Close Window.
8. To review the progress of your right-click scan, in the navigation pane, click Scan Results.
Right-click scans are identified by the prefix RC:.
Scan profile details
In IBM QRadar Vulnerability Manager, you can describe your scan, select the scanner that you
want to use, and choose from a number of scan policy options.
Scan profile details are specified in the Details tab, in the Scan Profile Configuration page.
Use
Specifies that the profile uses pre-defined credentials. Centralized credentials are defined in
Centralized
the Admin > System Configuration > Centralized Credentials window.
Credentials
The scanner that you select depends on your network configuration. For example, to scan
DMZ assets, then select a scanner that has access to that area of your network.
Scan Server
The Controller scan server is deployed with the vulnerability processor on
your QRadar console or QRadar Vulnerability Manager managed host.
Restriction: You can have only 1 vulnerability processor in your deployment. However, you can
deploy multiple scanners either on dedicated QRadar Vulnerability Manager managed host scanner
appliances or QRadar managed hosts.
Enables on-demand asset scanning for the profile. Use the right-click menu on the Assets page to run
On Demand on-demand vulnerability scanning. By selecting this option, you also make the profile available to use
Scanning if you want to trigger a scan in response to a custom rule event.
Scan The pre-configured scanning criteria about ports and protocols. For more information,
Policies see Scan policies.
Table 1. Scan profile details configuration options
Scan scheduling
In IBM QRadar Vulnerability Manager, you can schedule the dates and times to scan your network assets
for known vulnerabilities.
Scan scheduling is controlled by using the When To Scan pane, in the Scan Profile Configuration page. A
scan profile that is configured with a manual setting must be run manually. However, scan profiles that
are not configured as manual scans can also be run manually. When you select a scan schedule, you can
further refine your schedule by using operational windows to configure allowed scan times.
Use cron expressions to create schedules, for example, 9:00 AM every Monday through Friday, or
3:30 AM the first Friday of every month. Cron expressions give you the ability to create irregular
scan schedules. Schedule a maximum of one scan per day.
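To make the cron examples concrete, here is a minimal matcher for a standard five-field cron expression. It is a simplified sketch, not QRadar's scheduler: it ANDs all five fields (classic cron ORs day-of-month with day-of-week) and omits step syntax such as `*/5`.

```python
from datetime import datetime

def _field_matches(field, value):
    """Match one cron field: supports '*', comma lists, ranges, numbers."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr, when):
    """expr: 'minute hour day-of-month month day-of-week' (0 = Sunday)."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, when.minute)
            and _field_matches(hour, when.hour)
            and _field_matches(dom, when.day)
            and _field_matches(month, when.month)
            # Python: Monday=0; cron: Sunday=0, so shift by one
            and _field_matches(dow, (when.weekday() + 1) % 7))

# "9:00 AM every Monday through Friday" from the text above
print(cron_matches("0 9 * * 1-5", datetime(2024, 1, 1, 9, 0)))  # True (a Monday)
```

The second example in the text, 3:30 AM on the first Friday of the month, needs the scheduler's own day-of-month and day-of-week combination rules, which this sketch does not attempt.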
Advanced scan schedules that are configured to run between 1:00 AM and 1:59 AM on the 27th of
March 2016 are skipped and do not run. All subsequent scans run at the scheduled times.
Scanning domains monthly
In IBM QRadar Vulnerability Manager, you can configure a scan profile to scan the domains on
your network each month.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Profiles.
3. On the toolbar, click Add.
When you create a scan profile, the only mandatory fields are Name and IP Addresses on
the Details tab of the Scan Profile Configuration page. To set up monthly scans, you must
also follow the remaining steps in this procedure.
4. Click the When To Scan pane.
5. In the Run Schedule list, select Monthly.
6. In the Start Time field, select a start date and time for your scan.
7. In the Day of the month field, select a day each month that your scan runs.
8. Click the Domain and Web App tab.
9. In the Domains field, type the URL of the asset that you want to scan and click (>).
10. Click Save.
11. Optional: During and after the scan, you can monitor scan progress and review completed
scans.
Scheduling scans of new unscanned assets
In IBM QRadar Vulnerability Manager, you can configure scheduled scans of newly discovered,
unscanned network assets.
Procedure
1. Click the Assets tab.
2. In the navigation pane, click Asset Profiles, then on the toolbar click Search > New
Search.
3. To specify your newly discovered, unscanned assets, complete the following steps in
the Search Parameters pane:
a. Select Days Since Asset Found, Less than 2, and then click Add Filter.
b. Select Days Since Asset Scanned, Greater than 2, and then click Add Filter.
c. Click Search.
4. On the toolbar, click Save Criteria and complete the following steps:
a. In the Enter the name of this search field, type the name of your asset search.
b. Click Include in my Quick Searches.
c. Click Share with Everyone.
d. Click OK.
5. Click the Vulnerabilities tab.
6. In the navigation pane, select Administrative > Scan Profiles.
7. On the toolbar, click Add.
When you create a scan profile, the only mandatory fields are Name and IP Addresses on
the Details tab of the Scan Profile Configuration page. To schedule scans for unscanned
assets, you must also follow the remaining steps in this procedure.
8. In the Include Saved Searches pane, select your saved asset search from the Available
Saved Searches list and click (>).
9. Click the When To Scan pane and in the Run Schedule list, select Weekly.
10. In the Start Time fields, type or select the date and time that you want your scan to run on
each selected day of the week.
11. Select the check boxes for the days of the week that you want your scan to run.
12. Click Save.
Reviewing your scheduled scans in calendar format
In IBM QRadar Vulnerability Manager, the scheduled scan calendar provides a central location
where you can review information about scheduled scans.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, click Administrative > Scheduled Scans.
3. Optional: Hover your mouse over a scheduled scan to display information about it.
For example, you can view the time that a scan took to complete.
You can exclude a specific host or range of hosts that must never be scanned. For example, you might
restrict a scan from running on critical servers that are hosting your production applications. You might
also want to configure your scan to target only specific areas of your network.
QRadar Vulnerability Manager integrates with QRadar by providing the option to scan the assets that
form part of a saved asset search.
Scan targets
You can specify your scan targets by defining a CIDR range, IP address, IP address range, or a
combination of all three.
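A target specification of this form can be expanded programmatically. The following sketch, assuming IPv4 targets and a hypothetical helper name (`expand_targets` is not a QRadar function), shows how the three notations combine in one comma-separated list:

```python
import ipaddress

def expand_targets(spec):
    """Expand a comma-separated target list of CIDR ranges,
    single IPs, and dashed IP ranges into individual addresses."""
    addresses = []
    for part in (p.strip() for p in spec.split(",")):
        if "/" in part:                       # CIDR range
            net = ipaddress.ip_network(part, strict=False)
            addresses.extend(str(h) for h in net.hosts())
        elif "-" in part:                     # dashed IP range
            start, end = (ipaddress.ip_address(a) for a in part.split("-"))
            addresses.extend(str(ipaddress.ip_address(i))
                             for i in range(int(start), int(end) + 1))
        else:                                 # single IP address
            addresses.append(str(ipaddress.ip_address(part)))
    return addresses

print(expand_targets("192.168.1.0/30, 10.0.0.5, 10.0.0.7-10.0.0.8"))
# ['192.168.1.1', '192.168.1.2', '10.0.0.5', '10.0.0.7', '10.0.0.8']
```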
Domain scanning
You can add domains to your scan profile to test for DNS zone transfers on each of the domains that you
specify.
A host can use a DNS zone transfer to request and receive a full zone transfer for a domain. Zone
transfer is a security issue because DNS data can be used to decipher the topology of your network. The data
that is contained in a DNS zone transfer is sensitive, and therefore any exposure of the data might be
perceived as a vulnerability. The information that is obtained might be used for malicious exploitation
such as DNS poisoning or spoofing.
When you configure a scan exclusion in a scan profile configuration, the exclusion applies only to the
scan profile.
Virtual webs
You can configure a scan profile to scan different URLs that are hosted on the same IP address.
When you scan a virtual web, QRadar Vulnerability Manager checks each web page for SQL injection
and cross site scripting vulnerabilities.
Excluding assets from all scans
In IBM QRadar Vulnerability Manager, scan exclusions specify the assets in your network that are not
scanned.
About this task
Scan exclusions apply to all scan profile configurations and might be used to exclude scanning activity
from unstable or sensitive servers. Use the IP Addresses field on the Scan Exclusion page to enter the IP
addresses, IP address ranges, or CIDR ranges that you want to exclude from all scanning. To access
the Scan Exclusion page:
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, click Administrative > Scan Exclusions.
3. On the toolbar, select Actions > Add.
Note: You can also use the Excluded Assets section of
the Vulnerabilities > Administrative > Scan Profiles > Add > Domain and Web App tab to
exclude assets from an individual scan profile.
Managing scan exclusions
In IBM QRadar Vulnerability Manager you can update, delete, or print scan exclusions.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, click Administrative > Scan Exclusions.
3. From the list on the Scan Exclusions page, click the Scan Exclusion that you want to modify.
4. On the toolbar, select an option from the Actions menu.
5. Depending on your selection, follow the on-screen instructions to complete this task.
Scan protocols and ports
In IBM QRadar Vulnerability Manager, you can choose different scan protocols and scan various port
ranges.
You can configure your scan profile port protocols by using TCP and UDP scan options.
Configure scanning protocols and the ports that you want to scan on the Port Scan tab of an existing or
new scan policy configuration window.
Note: You can also configure port scanning from the How To Scan tab in the Scan Profile Configuration window
but this option is only enabled for backwards compatibility. Do not use the How To Scan tab to configure new port
scans.
Scanning a full port range
In IBM QRadar Vulnerability Manager, you can scan the full port range on the assets that you
specify.
About this task
Create a scan policy to specify the ports that you want to scan, and then add this scan policy to a scan
profile, which you use to run the scan.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Policies.
3. On the toolbar, click Add to create a new scan policy or Edit to edit an existing policy.
4. Click the Settings tab.
a. Enter a name and description for the scan policy.
b. Select the scan type.
5. Click the Port Scan tab.
6. In the Protocol field, select a protocol. The default value is TCP & UDP.
Note: UDP port scans are much slower than TCP port scans because of the way that UDP
works. A UDP port scan can take up to 24 hours to scan all ports (1-65535) on an asset.
7. In the Range field, type 1-65535.
Restriction: Port ranges must be entered as single ports or dash-separated ranges, in ascending,
non-overlapping order, with multiple entries separated by commas. For example:
1-1024, 1055, 2000-65535.
8. In the Timeout (m) field, type the time in minutes after which you want the scan to cancel
if no scan results are discovered.
Important: You can type any value in the range 1 - 500. Do not enter too short a time;
otherwise, the port scan cannot detect all open ports. Scan results that are
discovered before the timeout period are displayed.
9. Optional: Configure more options on the other tabs if you want to use the scan policy to
complete more tasks.
10. Click Save.
11. From the Scan Profiles page, create a new scan profile.
a. Add the scan policy that you saved.
b. Configure the remaining parameters for the scan profile and save.
c. From the Scan Profiles page, select the new scan profile, and then click Run on the
toolbar to run the scan.
For more information about creating a scan profile, see Creating a scan profile.
Note: You can also configure port scanning from the How To Scan tab in the Scan Profile
Configuration window but this option is only enabled for backwards compatibility. Do not
use the How To Scan tab to configure new port scans.
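The port range restriction in step 7 can be checked with a small validator. This is a sketch with a hypothetical helper name (`validate_port_ranges` is not part of QRadar), enforcing ascending, non-overlapping, comma-delimited entries within 1-65535:

```python
def validate_port_ranges(spec):
    """Check a port specification such as '1-1024, 1055, 2000-65535':
    comma-delimited entries, each a single port or lo-hi range,
    ascending and non-overlapping, within 1-65535."""
    last_end = 0
    for part in (p.strip() for p in spec.split(",")):
        lo, _, hi = part.partition("-")
        try:
            start = int(lo)
            end = int(hi) if hi else start
        except ValueError:
            return False
        if not (1 <= start <= end <= 65535):
            return False
        if start <= last_end:          # overlapping or out of order
            return False
        last_end = end
    return True

print(validate_port_ranges("1-1024, 1055, 2000-65535"))  # True
print(validate_port_ranges("1-1024, 80"))                # False (80 overlaps 1-1024)
```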
Scanning assets with open ports
In IBM QRadar Vulnerability Manager, you can configure a scan profile to scan assets with open
ports.
Procedure
1. Click the Assets tab.
2. In the navigation pane, click Asset Profiles then on the toolbar, click Search > New
Search.
3. To specify assets with open ports, configure the following options in the Search
Parameters pane:
a. Select Assets With Open Port, Equals any of 80 and click Add Filter.
b. Select Assets With Open Port, Equals any of 8080 and click Add Filter.
c. Click Search.
4. On the toolbar, click Save Criteria and configure the following options:
a. In the Enter the name of this search field, type the name of your asset search.
b. Click Include in my Quick Searches.
c. Click Share with Everyone and click OK.
5. Click the Vulnerabilities tab.
6. In the navigation pane, select Administrative > Scan Profiles.
7. On the toolbar, click Add.
When you create a scan profile, the only mandatory fields are Name and IP Addresses on
the Details tab of the Scan Profile Configuration page. To scan assets with open ports, you
must also follow the remaining steps in this procedure.
8. On the Details tab, select your saved asset search from the Available Saved Searches list
and click >.
When you include a saved asset search in your scan profile, the assets and IP addresses
associated with the saved search are scanned.
9. Click the When To Scan pane and in the Run Schedule list, select Manual.
10. Click the What To Scan pane.
11. Click Save.
Configuring a permitted scan interval
In IBM QRadar Vulnerability Manager, you can create an operational window to specify the times that a
scan can run.
About this task
If a scan does not complete within the operational window, it is paused and continues when the
operational window reopens. To configure an operational window:
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, click Administrative > Operational Window.
3. On the toolbar, click Actions > Add.
4. Enter a name for the operational window in the Name field.
5. Choose an operational window schedule from the Schedule list.
6. Optional: Select the times when scanning is permitted.
7. Optional: Select your timezone.
8. If you selected Weekly from the Schedule list, then click the desired days of the week check
boxes in the Weekly area.
9. If you selected Monthly from the Schedule list, then select a day from the Day of the month list.
10. Click Save.
Operational windows can be associated with scan profiles by using the When To Scan tab on
the Scan Profile Configuration page.
If you assign two overlapping operational windows to a scan profile, the scan profile runs from the
beginning of the earliest operational window to the end of the later operational window. For
example, if you configure two daily operational windows for the periods 1 a.m. to 6 a.m. and 5
a.m. to 9 a.m., the scan runs between 1 a.m. and 9 a.m.
For operational windows that do not overlap, the scan starts from the earliest operational window
and pauses if there's a gap between the operational windows, and then resumes at the beginning of
the next operational window.
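The behavior described above can be modeled as interval merging: overlapping windows combine into one permitted interval, and gaps between non-overlapping windows become pauses. A minimal sketch, with hours as numbers and a hypothetical helper name:

```python
def effective_window(windows):
    """Merge overlapping operational windows (start, end) into the
    intervals during which a scan is allowed to run."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:   # overlaps the previous window
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(w) for w in merged]

# 1 a.m.-6 a.m. and 5 a.m.-9 a.m. overlap, so the scan runs 1 a.m.-9 a.m.
print(effective_window([(1, 6), (5, 9)]))       # [(1, 9)]
# Non-overlapping windows: the scan pauses during the gap.
print(effective_window([(1, 4), (6, 9)]))       # [(1, 4), (6, 9)]
```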
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Operational Window.
3. On the toolbar, select Actions > Add.
4. Type a name for your operational window, configure a permitted time interval and
click Save.
5. In the navigation pane, select Administrative > Scan Profiles.
6. On the toolbar, click Add.
When you create a scan profile, the only mandatory fields are Name and IP Addresses on
the Details tab of the Scan Profile Configuration page. To configure scanning during
permitted times, you must also follow the remaining steps in this procedure.
7. Click the When To Scan tab.
8. In the Run Schedule list, select Daily.
9. In the Start Time fields, type or select the date and time that you want your scan to run
each day.
10. In the Operational Windows pane, select your operational window from the list and click
(>).
11. Click Save.
Managing operational windows
In IBM QRadar Vulnerability Manager, you can edit, delete, and print operational windows.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Operational Window.
3. Select the operational window that you want to edit.
4. On the toolbar, select an option from the Actions menu.
5. Follow the instructions in the user interface.
Restriction: You cannot delete an operational window that is associated with a scan
profile. You must first disconnect the operational window from the scan profile.
Disconnecting an operational window
If you want to delete an operational window that is associated with a scan profile, you must
disconnect the operational window from the scan profile.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Profiles.
3. Select the scan profile that you want to edit.
4. On the toolbar, click Edit.
5. Click the When To Scan pane.
6. Select the relevant option from the Run Schedule list as required.
7. In the Name field, select the operational window that you want to disconnect and click (<).
8. Click Save.
Dynamic vulnerability scans
In IBM QRadar Vulnerability Manager, you can configure a scan to use certain vulnerability scanners for
specific CIDR ranges in your network. For example, your scanners might have access only to certain
areas of your network.
During a scan, QRadar Vulnerability Manager determines which scanner to use for each CIDR, IP
address, or IP range that you specify in your scan profile.
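One plausible way to model this per-target selection is a most-specific-match lookup over scanner CIDR assignments. The scanner names and CIDR table below are hypothetical, and most-specific matching is an assumption for illustration, not QRadar's documented rule:

```python
import ipaddress

# Hypothetical scanner-to-CIDR assignments; "controller" handles the rest.
SCANNERS = {
    "scanner-emea": ["10.1.0.0/16"],
    "scanner-apac": ["10.2.0.0/16", "192.168.0.0/24"],
}

def select_scanner(ip, default="controller"):
    """Pick the scanner whose most specific CIDR range contains ip,
    falling back to the Controller scanner."""
    addr = ipaddress.ip_address(ip)
    best, best_len = default, -1
    for name, cidrs in SCANNERS.items():
        for cidr in cidrs:
            net = ipaddress.ip_network(cidr)
            if addr in net and net.prefixlen > best_len:
                best, best_len = name, net.prefixlen
    return best

print(select_scanner("10.1.5.9"))      # scanner-emea
print(select_scanner("172.16.0.1"))    # controller
```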
Dynamic scanning and domains
If you configured domains in the Domain Management window on the Admin tab, you can associate
scanners with the domains that you added.
For example, you might associate different scanners each with a different domain, or with different CIDR
ranges within the same domain. QRadar dynamically scans the configured CIDR ranges that contain the
IP addresses you specify on all domains that are associated with the scanners on your system. Assets with
the same IP address on different domains are scanned individually if the CIDR range for each domain
includes that IP address. If an IP address is not within a configured CIDR range for a scanner
domain, QRadar scans the domain that is configured for the Controller scanner for the asset.
Setting up dynamic scanning
To use dynamic scanning, you must do the following actions:
1. Add vulnerability scanners to your QRadar Vulnerability Manager deployment.
2. Associate the vulnerability scanners with CIDR ranges in your network.
3. Configure a scan of multiple CIDR ranges and enable Dynamic server selection on
the Details tab of the Scan Profile Configuration page.
Associating vulnerability scanners with CIDR ranges
In IBM QRadar Vulnerability Manager, to do dynamic scanning, you must associate vulnerability
scanners with different segments of your network.
Before you begin
You must add extra vulnerability scanners to your deployment.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scanners.
Attention: By default, the Controller scanner is displayed. The Controller scanner is part
of the QRadar Vulnerability Manager processor that is deployed on either
your QRadar Console or on a dedicated QRadar Vulnerability Manager processing
appliance. You can assign a CIDR range to the Controller scanner, but you must deploy
extra scanners to use dynamic scanning.
3. Click a scanner on the Scanners page.
4. On the toolbar, click Edit.
Restriction: You cannot edit the name of the scanner. To edit a scanner name,
click Admin > System and License Management > Deployment Actions > Manage
Vulnerability Deployment.
5. In the CIDR field, type a CIDR range or multiple CIDR ranges that are separated by
commas.
6. Click Save.
Scanning CIDR ranges with different vulnerability scanners
In IBM QRadar Vulnerability Manager, you can scan areas of your network with different
vulnerability scanners.
Before you begin
You must configure your network CIDR ranges to use the different vulnerability scanners in
your QRadar Vulnerability Manager deployment.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Profiles.
3. On the toolbar, click Add.
4. Click the Dynamic server selection check box.
If you configured domains in the Admin > Domain Management window, you can select
a domain from the Domain list. Only assets within the domain you selected are scanned.
5. Optional: Add more CIDR ranges.
6. Click Save.
7. Click the check box on the row that is assigned to your scan on the Scan Profiles page and
click Run.
Scan policies
A scan policy provides you with a central location to configure specific scanning requirements.
You can use scan policies to specify scan types, ports to be scanned, vulnerabilities to scan for and
scanning tools to use. In IBM QRadar® Vulnerability Manager, a scan policy is associated with a scan
profile and is used to control a vulnerability scan. You use the Scan Policies list on the Details tab of
the Scan Profile Configuration page to associate a scan policy with a scan profile.
You can create a new scan policy or copy and modify a pre-configured policy that is distributed
with QRadar Vulnerability Manager.
Pre-configured scan policies
The following pre-configured scan policies are distributed with QRadar Vulnerability Manager:
Full scan, Discovery scan, Database scan, Patch scan, PCI scan, Web scan
A description of each pre-configured scan policy is displayed on the Scan Policies page.
Scan policy automatic updates for critical vulnerabilities
As part of IBM QRadar Vulnerability Manager daily automatic updates, you receive new scan
policies for tasks such as detecting zero-day vulnerabilities on your assets.
Use scan policies that are delivered by automatic update to create scan profiles to scan for specific
vulnerabilities. To view all scan policies on your system, go to Administrative > Scan Policies on
the Vulnerabilities tab.
Do not edit scan policies that are delivered by automatic update, because your changes might be
overwritten by later updates. Instead, create a copy of the policy and edit the copy.
If you delete a scan policy that is delivered by automatic update, it can be recovered only
by QRadar customer support.
Modifying a pre-configured scan policy
In IBM QRadar Vulnerability Manager, you can copy a pre-configured scan policy and modify the
policy to your exact scanning requirements.
Procedure
1. Click the Vulnerabilities tab.
2. In the navigation pane, select Administrative > Scan Policies.
3. On the Scan Policies page, click a pre-configured scan policy.
4. On the toolbar, click Edit.
5. Click Copy.
6. In the Copy scan policy window, type a new name in the Name field and click OK.
7. Click the copy of your scan policy and on the toolbar, click Edit.
8. In the Description field, type new information about the scan policy.
Important: If you modify the new scan policy, you must update the information in the
description.
9. To modify your scan policy, use the Port Scan, Vulnerabilities, Tool Groups,
or Tools tabs.
Restriction: Depending on the Scan Type that you select, you cannot use all the tabs on
the Scan Policy window.
Configuring a scan policy
In IBM QRadar Vulnerability Manager, you can configure a scan policy to meet any specific
requirements for your vulnerability scans. You can copy and rename a preconfigured scan policy
or you can add a new scan policy. You can't edit a preconfigured scan policy.
IBM QRadar Vulnerability Manager is a network scanning platform that detects vulnerabilities within
the applications, systems, and devices on your network.
QRadar Vulnerability Manager uses security intelligence to help you manage and prioritize your network
vulnerabilities. For example, you can use QRadar Vulnerability Manager to continuously monitor
vulnerabilities, improve resource configuration, and identify software patches. You can also prioritize
security gaps by correlating vulnerability data with network flows, log data, firewall data, and intrusion
prevention system (IPS) data.
You can maintain real-time visibility of the vulnerabilities that are detected by third-party scanners.
Third-party scanners are integrated with QRadar and include HCL BigFix®, Guardium®, AppScan®,
Nessus, nCircle, and Rapid7.
Unless otherwise noted, all references to QRadar Vulnerability Manager refer to IBM QRadar
Vulnerability Manager. All references to QRadar refer to IBM QRadar SIEM and IBM QRadar Log
Manager and all references to SiteProtector refer to IBM Security SiteProtector.
Categories of QRadar Vulnerability Manager vulnerability checks
IBM QRadar Vulnerability Manager checks for multiple types of vulnerabilities in your network.
Software features
Misconfiguration
Vendor flaws
Software features
Some software settings for systems or applications are designed to aid usability, but these settings can
introduce risk to your network. For example, the Microsoft NetBIOS protocol is useful in internal
networks, but if it is exposed to the Internet or an untrusted network segment, it introduces risk to your
network.
The following examples are software features or commands that can expose your network to risk:
ICMP time stamp or netmask requests
Sendmail expand or verify commands
Ident protocol services that identify the owner of a running process.
Misconfiguration
In addition to identifying misconfigurations in default settings, QRadar Vulnerability Manager can
identify a broader range of misconfigurations such as in the following cases:
SMTP Relay
Unrestricted NetBios file sharing
DNS zone transfers
FTP World writable directories
Default administration accounts that have no passwords
NFS World exportable directories
Vendor flaws
Vendor flaws is a broad category that includes events such as buffer overflows, string format issues,
directory traversals, and cross-site scripting. Vulnerabilities that require a patch or an upgrade fix are
included in this category.
Database checks
Web server checks
Web application server checks
Common web scripts checks
Custom web application checks
DNS server checks
Mail server checks
Application server checks
Wireless access point checks
Common service checks
Obsolete software and systems
The following table describes some checks that are made by QRadar Vulnerability Manager.

Port scan
Scans for active hosts and the ports and services that are open on each active host. Returns OS information.

Web app
Checks each web application and web page on a web server by using the following checks: File upload, Information disclosure, Injection (XSS/script/HTML), Privilege escalation, Denial of service, Default passwords.
Note: Authenticated web app scanning is not supported. For example, if authentication is required to access the site, you can't run web app tests.

Database
Compromised user names and passwords, Denial of service, Admin rights.

Web server
Known vulnerabilities, exploits, and configuration issues on web servers; Denial of service; Default admin passwords; PHP; Weak password encryption.

Domain name system (DNS)
Denial of service.

Application server
Denial of service, Information disclosure, Default user names and passwords.

Windows patch scanning
Collects registry key entries, Windows services, installed Windows applications, and patched Microsoft bugs.

UNIX patch scanning
Collects details of installed RPMs.
SQL injection vulnerabilities occur when poorly written programs accept user-provided data in a
database query without validating the input, typically on web pages that serve dynamic
content. By testing for SQL injection vulnerabilities, QRadar Vulnerability Manager verifies that
the required protection is in place to prevent these exploits from occurring.
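The vulnerability class being tested can be demonstrated with a small sqlite3 example (hypothetical table and data; this illustrates the flaw itself, not QRadar's test procedure). Concatenating input into SQL text lets an injected OR clause match every row, while a parameterized query treats the input as a literal value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"

# Vulnerable: user input is concatenated into the SQL text, so the
# injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)        # [('alice',)] -- data leaked

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)              # [] -- no match, injection neutralized
```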
Cross-Site Scripting vulnerabilities can allow malicious users to inject code into web pages that
are viewed by other users. HTML and client-side scripts are examples of code that might be
injected into web pages. An exploited cross-site scripting vulnerability can be used by attackers to
bypass access controls such as the same origin policy. QRadar Vulnerability Manager tests for
varieties of persistent and non-persistent cross-site scripting vulnerabilities to ensure that the web
application is not susceptible to this threat.
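Escaping user-supplied input before rendering it is the standard defense that such tests probe for. A minimal Python illustration using the standard library (the page fragment is hypothetical):

```python
from html import escape

payload = '<script>alert("xss")</script>'

# Unescaped: the payload would execute if this HTML were rendered.
unsafe_html = "<p>Hello, " + payload + "</p>"

# Escaped: the browser renders the markup as inert text.
safe_html = "<p>Hello, " + escape(payload) + "</p>"
print(safe_html)
# <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```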
QRadar Vulnerability Manager includes thousands of checks that cover default configurations, CGI
scripts, installed and supporting applications, and the underlying operating systems and devices.
For in-depth web application scanning, QRadar Vulnerability Manager integrates with IBM® Security
AppScan® to provide greater web application visibility to your vulnerabilities.
Network device scanning
QRadar Vulnerability Manager includes the SNMP plug-in that supports scanning of network
devices. QRadar Vulnerability Manager supports SNMP V1 and SNMP V2. SNMP V3 is not
supported. QRadar Vulnerability Manager uses a dictionary of known community defaults for various
SNMP-enabled devices. You can customize the dictionary.
External scanner checks
The external scanner scans the following OWASP (Open Web Application Security Project) CWEs (Common
Weakness Enumerations):
Directory Listing
Path Traversal, Windows File Parameter Alteration, UNIX File Parameter Alteration, Poison Null
Byte Windows Files Retrieval, Poison Null Byte UNIX Files Retrieval
Cross-Site Scripting, DOM-Based Cross-Site Scripting
SQL Injection, Blind SQL Injection, Blind SQL Injection (Time Based)
Autocomplete HTML Attribute Not Disabled for Password Field
Unencrypted Login Request, Unencrypted Password Parameter
Remote Code Execution, Parameter System Call Code Injection, File Parameter Shell Command
Injection, Format String Remote Command Execution
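Checks such as Path Traversal and Poison Null Byte probe whether a file parameter can escape the web root. The following is a minimal sketch of the defense those checks test for, with a hypothetical web root and helper name:

```python
import posixpath

BASE = "/var/www/html"   # hypothetical web root

def resolve(requested):
    """Join a user-supplied path to the web root and reject any
    result that escapes it (e.g. via ../ sequences or null bytes)."""
    if "\x00" in requested:                 # poison null byte
        return None
    full = posixpath.normpath(posixpath.join(BASE, requested.lstrip("/")))
    if full != BASE and not full.startswith(BASE + "/"):
        return None                         # escaped the web root
    return full

print(resolve("images/logo.png"))           # /var/www/html/images/logo.png
print(resolve("../../etc/passwd"))          # None
```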
Database scanning
QRadar Vulnerability Manager detects vulnerabilities on major databases by using unauthenticated
scanning of target hosts. In addition, QRadar Vulnerability Manager targets several databases by using
plug-ins.
Operating system checks

The following table shows which operating systems support vulnerability scanning, patch scanning, and configuration checks.

Operating system | Vulnerability scanning | Patch scanning | Configuration
CISCO | No | No | No
AS/400® / iSeries | No | No | No
Microsoft Windows 10
Microsoft Windows 8.1
Microsoft Windows 8
Microsoft Windows 7
Microsoft Windows Vista
Microsoft Windows Server 2016
Microsoft Windows Server 2012 R2
Microsoft Windows Server 2012
Microsoft Windows Server 2008 R2
Microsoft Windows Server 2008
Microsoft Windows Server 2003
CentOS versions 3 - 7
IBM AIX versions 4-7
RHEL versions 3 - 7
SUSE versions 10 - 11
Ubuntu versions 6-14
Red Hat 9
Solaris versions 2.6, 7 - 10
You can create a new dashboard, manage your existing dashboards, and modify the display settings of
each vulnerability dashboard item.
QRadar Vulnerability Manager deployments
Locate and manage the vulnerabilities in your network. Enhance your network security by integrating
add-on features such as HCL BigFix® and IBM Security SiteProtector.
Important: The IBM QRadar Vulnerability Manager scanner is end of life (EOL) in 7.5.0 Update Package 6, and
is no longer supported in any version of IBM QRadar.
IBM QRadar Vulnerability Manager discovers vulnerabilities on your network devices, applications, and
software, adds context to the vulnerabilities, prioritizes asset risk in your network, and supports the
remediation of discovered vulnerabilities.
You can integrate QRadar Risk Manager for added protection, which provides network topology, active
attack paths, high-risk assets, and risk-score adjustment on assets that is based on policy compliance.
QRadar Vulnerability Manager and QRadar Risk Manager are combined into one offering, and both are
enabled through a single base license.
Depending on the product that you install, and whether you upgrade IBM QRadar or install a new system,
the Vulnerabilities tab might not be displayed. Access IBM QRadar Vulnerability Manager by using
the Vulnerabilities tab. If you install IBM QRadar SIEM, the Vulnerabilities tab is enabled by default
with a temporary license key. If you install QRadar Log Manager, the Vulnerabilities tab is not enabled.
You can use the Try it Out option to try out QRadar Vulnerability Manager for 30 days. You can
purchase the license for QRadar Vulnerability Manager separately and enable it by using a license key.
QRadar Vulnerability Manager components
The following information describes the QRadar Vulnerability Manager Processor.
The scan processor is responsible for scheduling and managing scans, and for delegating work to
the scanners that might be distributed throughout your network.
The vulnerability processor provides a scanning component by default. If required, you can move
the vulnerability processor to a different managed host in your deployment.
If you add a 600 managed host appliance, and QRadar Vulnerability Manager is used for the first
time, then the scan processor is assigned to the 600 managed host appliance.
The scanning processor is governed by the processing license, which determines the maximum
number of assets that can be processed by QRadar Vulnerability Manager.
The scan processor can run on the QRadar Console or a managed host.
The following information describes the QRadar Vulnerability Manager scanner.
You can deploy a dedicated QRadar Vulnerability Manager scanner appliance, which is a
610 appliance.
You can deploy a scanner on a QRadar Console or on the following managed hosts: Flow
Collector, Flow Processor, Event Collector, Event Processor, or Data Node.
The number of assets that you can scan with a scanner is determined by the scanner capacity and
is not impacted by licensing.
Scan jobs are completed by a processor and a scanner component. The following diagram shows the scan
components and the processes that run.
1. You create a scan job by specifying parameters such as IP addresses of assets, type of scan, and
required credentials for authenticated scans.
2. The scan job is accepted by the processor, logged, and added to the database along with
scheduling information to determine when the job runs.
3. The scheduler component manages the scheduling of scans. When the scheduler initiates a scan, it
determines the list of tools that are required and queues them for invocation, and then the tools are
assigned to the relevant scanner.
4. Scanners continuously poll the scan processor for scan tools that they must run, by sending a unique
scanner ID. When the scheduler has queued tools that are relevant to a specific scanner, the tools
are sent to that scanner for invocation.
QRadar Vulnerability Manager uses an attack tree methodology to manage scans and to determine
which tools are launched. The phases are: asset discovery, port/service discovery, service scan,
and patch scan.
5. The dispatcher runs and manages each scan tool in the list. For each tool that is run, the dispatcher
sends a message to the processor that indicates when a scan tool starts and finishes.
6. The output from the scan tool is read by the result writer, which then passes these results back to
the processor.
7. The result dispatcher processes the raw results from the scan tools and records them into the
Postgres database.
8. The result exporter finds completed scans in the processor database and exports the results to
the QRadar Console.
9. The exported results are added to the QRadar database where users can view and manage the scan
results.
All-in-one deployment
You can run QRadar Vulnerability Manager from an All-in-one system, where the scanning and processing
functions are on the Console. The following information describes what you can do with a basic setup:
Manage scan data from third-party scanners that are integrated with QRadar.
Expanding a deployment
As your deployment grows, you might need to move the processing function off the QRadar Console to
free up resources, and you might want to deploy scanners closer to your assets.
The following list describes reasons to add scanners to your deployment:
To scan assets in a different geographic region than the QRadar Vulnerability Manager processor.
If you want to scan many assets concurrently within a short time frame.
You might want to add a scanner to avoid scanning through a firewall that is a log source. You
might also consider connecting the scanner directly to the network by adding an interface on the
scanner host that bypasses the firewall.
The following diagram shows a scanning deployment with external scanning and scanners deployed on
managed hosts.
Detection of newly published vulnerabilities that are not present in any scan results
Compliance assessment
Risk policies that are based on vulnerability data and risk scores that help you quickly identify
high-risk vulnerabilities.
Visibility into potential exploit paths from potential threats and untrusted networks through the
network topology view.
Topology visualization
Visibility into what vulnerabilities are blocked by firewalls and Intrusion Prevention Systems
(IPS).
You must install IBM QRadar Console before you set up and configure the QRadar Risk
Manager appliance. It is a good practice to install QRadar and QRadar Risk Manager on the same
network switch.
You require only one QRadar Risk Manager appliance per deployment.
The following diagram shows a deployment that has a scanner and QRadar Risk Manager.
Figure 1. Scanning deployment with Risk Manager
Use Risk Manager to complete the following tasks:
o If you are parsing known events, select your log source type from the list.
o If you are parsing stored events, click Create New. Enter a name for your log source type
in the Log Source Type Name field and click Save.
4. In the Properties tab, select the Override system properties checkbox for the properties that you
want to edit.
Identity properties for event mappings
Identity data is a special set of system properties that includes Identity Username, Identity IP, Identity
NetBIOS Name, Identity Extended Field, Identity Host Name, Identity MAC, Identity Group
Name.
When identity properties are populated by a DSM, the identity data is forwarded to the asset profiler
service that runs on the IBM® QRadar® console. The asset profiler is used to update the asset model,
either by adding new assets or by updating the information on existing assets, including the Last
User and User Last Seen asset fields when an Identity Username is provided.
IBM QRadar DSMs populate identity data only for certain events: those that establish an
association or disassociation between identity properties. This restriction preserves performance and
limits identity generation to events that provide new or useful information for asset updates. For
example, a login event establishes a new association between a user name and an asset (an IP
address, a MAC address, a host name, or a combination of them). The DSM generates identity data for
any login event that it parses, but subsequent events of other types that involve the same user provide
no new association information, so the DSM does not generate identity data for those event types.
Similarly, the DSMs for DHCP services can generate identity data for DHCP assignment events because these
events establish an association between an IP address and a MAC address. DSMs for DNS services
generate identity information for events that represent DNS lookups because these events establish an
association between an IP address and a host name or DNS name.
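The association idea described above can be sketched as follows. The event shapes and field names here are illustrative, not the DSM API: a DHCP lease event ties an IP address to a MAC address, a DNS lookup ties an IP address to a host name, and merging the two builds up a single asset's identity.

```python
# Sketch of identity association (illustrative event shapes, not the DSM API):
# a DHCP lease event ties an IP to a MAC; a DNS lookup ties an IP to a host
# name; merging them accumulates one asset's identity record.
asset = {}

def apply_identity(event):
    if event['type'] == 'dhcp-lease':
        asset[event['ip']] = {**asset.get(event['ip'], {}), 'mac': event['mac']}
    elif event['type'] == 'dns-lookup':
        asset[event['ip']] = {**asset.get(event['ip'], {}), 'hostname': event['host']}

apply_identity({'type': 'dhcp-lease', 'ip': '10.0.0.5', 'mac': 'aa:bb:cc:00:11:22'})
apply_identity({'type': 'dns-lookup', 'ip': '10.0.0.5', 'host': 'ws05.example.com'})
print(asset['10.0.0.5'])
```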
You can use the DSM Editor to override the behavior of the identity properties. However, unlike
other system properties, an overridden identity property has no effect unless it is linked to specific
Event ID and Event Category combinations (event mappings). When identity property overrides are
configured, you can go to the Event Mappings tab and select an event mapping to configure specific
identity properties for that event. Only identity properties that are available and captured by the
configured property regex or JSON expression are populated for an event.
Note: The Identity Username property is unique and cannot be independently configured. If any identity
properties are enabled for a particular event mapping, then the Identity Username property is
automatically populated for the event from the available Username property value.
Referencing capture strings by using format string
fields
Use the Format String field on the Property Configuration tab to reference capture groups that you
defined in the regex. Capture groups are referenced in their order of precedence.
About this task
A capture group is any part of a regular expression (regex) that is enclosed within parentheses. A
capture group is referenced with $n notation, where n is the number of the capture group. You can
define multiple capture groups.
For example, you have a payload with company and host name variables.
"company":"ibm", "hostname":"localhost.com"
"company":"ibm", "hostname":"johndoe.com"
You can combine the company and host name values from the payload into output such as
ibm.localhost.com by using capture groups.
Procedure
1. In the regex field, enter the following regular expression: "company":"(.*?)".*"hostname":"(.*?)"
2. In the Format String field, enter the capture group reference $1.$2, where $1 is the value of the
company variable (in this case ibm) and $2 is the value of the host name in the payload. The
following output is given:
ibm.localhost.com
ibm.johndoe.com
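The capture-group behavior above can be reproduced with any regex engine. This is a minimal sketch in Python: group 1 plays the role of $1 (the company), group 2 plays the role of $2 (the host name), and the two are joined with a dot. The payload strings are the samples from the text.

```python
import re

# Sketch of the $1.$2 format string: group 1 is the company, group 2 is the
# host name, joined with a dot. Payloads are the samples from the text.
pattern = re.compile(r'"company":"(.*?)".*"hostname":"(.*?)"')

payloads = [
    '"company":"ibm", "hostname":"localhost.com"',
    '"company":"ibm", "hostname":"johndoe.com"',
]
for payload in payloads:
    m = pattern.search(payload)
    print(f'{m.group(1)}.{m.group(2)}')  # $1.$2
```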
Field-based properties
Use a field-based property to hide user names, group names, host names, and NetBIOS names.
Expressions that use field-based properties obfuscate all instances of the data string. The data is hidden
regardless of its log source, log source type, event name, or event category.
If the same data value exists in more than one of the fields, the data is obfuscated in all fields that contain
the data even if you configured the profile to obfuscate only one of the four fields. For example, if you
have a host name that is called IBMHost and a group name that is called IBMHost, the value IBMHost is
obfuscated in both the host name field and the group name field even if the data obfuscation profile is
configured to obfuscate only host names.
Regular expressions
Use a regular expression to obfuscate one data string in the payload. The data is hidden only if it matches
the log source, log source type, event name, or category that is defined in the expression.
You can use high-level and low-level categories to create a regular expression that is more specific than a field-
based property. For example, you can use the following regex patterns to parse user names:
Example regex patterns and the values that they match:

usrName=([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,20})$
Matches: john_smith@EXAMPLE.com, jon@example.com, jon@us.example.com

usrName=^([a-zA-Z])[a-zA-Z_-]*[\w_-]*[\S]$|^([a-zA-Z])[0-9_-]*[\S]$|^[a-zA-Z]*[\S]$
Matches: johnsmith, Johnsmith123, john_smith123, john123_smith, john-smith

usrName=(\S+)
Matches: any non-white space after the equal sign (=). This regular expression is non-specific and
can lead to system performance issues.
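You can check which sample values a pattern from the table captures before you use it in an expression. This sketch tests the first (email-style) pattern from the table; the payload strings are illustrative.

```python
import re

# Hedged example: test the first user-name pattern from the table against
# sample payloads. The pattern is copied from the table; payloads are
# illustrative.
EMAIL_USER = re.compile(
    r'usrName=([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@'
    r'([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,20})$'
)

for payload in ('usrName=jon@example.com', 'usrName=jon@us.example.com',
                'usrName=not an email'):
    m = EMAIL_USER.search(payload)
    print(payload, '->', m.group(1) if m else 'no match')
```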
You must be an administrator and have the private key and the password for the key before you can
deobfuscate data. The private key must be on your local computer.
After you toggle the auto-deobfuscation setting, you must refresh the browser window and
reload the event details page for the changes to appear.
Although you can see expressions that were created in earlier versions, you cannot edit or disable
them in the Data Obfuscation Management window. You must manually disable them and create a
data obfuscation profile that contains the revised expressions.
Ensure that you disable old expressions before you create new expressions that obfuscate the same data.
Multiple expressions that obfuscate the same data cause the data to be obfuscated twice. To decrypt data
that is obfuscated multiple times, each keystore that is used in the obfuscation process must be applied in
the order that the obfuscation occurred.
Procedure
1. Use SSH to log in to your QRadar Console as the root user.
2. Edit the obfuscation expressions .xml configuration file that you created when you configured the
expressions.
3. For each expression that you want to disable, change the Enabled attribute to false.
4. To disable the expressions, run the obfuscation_updater.sh script by typing the following
command:
obfuscation_updater.sh [-p <path_to_private_key>] [-e <path_to_obfuscation_xml_config_file>]
The obfuscation_updater.sh script is in the /opt/qradar/bin directory, but you can run the script
from any directory on your QRadar Console.
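Step 3 above (flipping the Enabled attribute) can be done with any XML tool. This is a hedged sketch: the element names in the sample document are illustrative, and you should match them to the file that you actually created when you configured the expressions; only the Enabled attribute is taken from the text.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: flip every Enabled="true" attribute to "false" in an
# obfuscation expressions file. The element names below are illustrative;
# match them to the file you created when you configured the expressions.
xml_text = '''<expressions>
  <expression name="mask-usernames" Enabled="true"/>
  <expression name="mask-hostnames" Enabled="false"/>
</expressions>'''

root = ET.fromstring(xml_text)
for elem in root.iter():
    if elem.get('Enabled') == 'true':
        elem.set('Enabled', 'false')

print(ET.tostring(root, encoding='unicode'))
```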
Data obfuscation is the process of strategically hiding data from QRadar users. You can hide custom
properties, normalized properties such as user names, or you can hide the content of a payload, such as
credit card or social security numbers.
The expressions in the data obfuscation profile are evaluated against the payload and normalized
properties. If the data matches the obfuscation expression, the data is hidden in QRadar. The data might
be hidden to all users, or only to users belonging to particular domains or tenants. Affected users who try
to query the database directly cannot see the sensitive data. The data can be restored to its original
form only by uploading the private key that was generated when the data obfuscation profile was created.
To ensure that QRadar can still correlate the hidden data values, the obfuscation process is deterministic.
It displays the same set of characters each time the data value is found.
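Deterministic masking of this kind can be sketched with a keyed hash: the same input always yields the same masked token, so correlation still works, while only the key holder can build a reverse-lookup table. This is an assumption about the general technique, not QRadar's actual algorithm; the key and token length are illustrative.

```python
import hmac, hashlib, string

# Sketch of deterministic masking (a generic technique, NOT QRadar's actual
# algorithm): identical inputs always produce identical masked tokens, so
# correlation still works on the hidden values.
SECRET_KEY = b'example-keystore-secret'   # stands in for the profile keystore
ALPHABET = string.ascii_uppercase + string.digits

def obfuscate(value: str, length: int = 12) -> str:
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    return ''.join(ALPHABET[b % len(ALPHABET)] for b in digest[:length])

# Identical inputs map to identical tokens; different inputs diverge.
print(obfuscate('jsmith') == obfuscate('jsmith'))   # True
print(obfuscate('jsmith') != obfuscate('jdoe'))     # True
```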
How does data obfuscation work?
Before you configure data obfuscation in your IBM QRadar deployment, you must understand how it
works for new and existing offenses, assets, rules, and log source extensions.
Existing event data
When a data obfuscation profile is enabled, the system masks the data for each event as it is received
by QRadar. Events that are received by the appliance before data obfuscation is configured remain in the
original unobfuscated state. The older event data is not masked and users can see the information.
Assets
When data obfuscation is configured, the asset model accumulates data that is masked while the pre-
existing asset model data remains unmasked.
To prevent someone from using unmasked data to trace the obfuscated information, purge the asset model
data to remove the unmasked values. QRadar then repopulates the asset database with obfuscated values.
Offenses
To ensure that offenses do not display data that was previously unmasked, close all existing offenses by
resetting the SIM model. For more information, see Resetting SIM.
Rules
You must update rules that depend on data that was previously unmasked. For example, rules that are
based on a specific user name do not fire when the user name is obfuscated.
Log source extensions
Log source extensions that change the format of the event payload can cause issues with data obfuscation.
A profile that is enabled immediately begins obfuscating data as defined by the enabled
expressions in the profile. The enabled profile is automatically locked. Only the user who has the
private key can disable or change the profile after it is enabled.
To ensure that obfuscated data can be traced back to an obfuscation profile, you cannot delete a
profile that was enabled, even after you disable it.
Locked profiles
A profile is automatically locked when you enable it, or you can lock it manually.
A locked profile has the following restrictions:
The following table shows examples of profiles that are locked or unlocked:

Scenario: Profile A is locked. It was created by using keystore A. Profile B is also created by using
keystore A.
Result: Profile B is automatically locked.

Scenario: Profile A is created and enabled.
Result: Profile A is automatically locked.

Scenario: Profile A, Profile B, and Profile C are currently locked. All were created by using
keystore A. Profile B is selected and Lock/Unlock is clicked.
Result: Profile A, Profile B, and Profile C are all unlocked.

Table 1. Locked profile examples
1. Create a data obfuscation profile and download the system-generated private key. Save the key in
a secure location.
2. Create the data obfuscation expressions to target the data that you want to hide.
3. Enable the profile so that the system begins to obfuscate the data.
4. To read the data in QRadar, upload the private key to deobfuscate the data.
Creating a data obfuscation profile
IBM QRadar uses data obfuscation profiles to determine which data to mask, and to ensure that
the correct keystore is used to unmask the data.
About this task
You can create a profile that creates a new keystore or you can use an existing keystore. If you
create a keystore, it must be downloaded and stored in a secure location. Remove the keystore
from the local system and store it in a location that can be accessed only by users who are
authorized to view the unmasked data.
Configuring profiles that use different keystores is useful when you want to limit data access to
different groups of users. For example, create two profiles that use different keystores when you
want one group of users to see user names and another group of users to see host names.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Data Sources section, click Data Obfuscation Management.
3. To create a new profile, click Add and type a unique name and description for the profile.
4. To create a new keystore for the profile, complete these steps:
a. Click System generate keystore.
b. In the Provider list box, select IBMJCE.
c. In the Algorithm list box, select JCE and select whether to generate 512-bit or
1024-bit encryption keys.
In the Keystore Certificate CN box, the fully qualified domain name for
the QRadar server is auto-populated.
d. In the Keystore password box, enter the keystore password.
The keystore password is required to protect the integrity of the keystore. The
password must be at least 8 characters in length.
e. In the Verify keystore password box, retype the password.
5. To use an existing keystore with the profile, complete these steps:
a. Click Upload keystore.
b. Click Browse and select the keystore file.
c. In the Keystore password box, type the password for the keystore.
6. Click Submit.
7. Download the keystore.
Remove the keystore from your system and store it in a secure location.
Creating data obfuscation expressions
The data obfuscation profile uses expressions to specify which data to hide from IBM
QRadar users. The expressions can use either field-based properties or regular expressions.
About this task
After an expression is created, you cannot change the type. For example, you cannot create a
property-based expression and then later change it to a regular expression.
You cannot hide a normalized numeric field, such as port number or an IP address.
Multiple expressions that hide the same data cause data to be hidden twice. To decrypt data that is
hidden multiple times, each keystore that is used in the obfuscation process must be applied in the
order that the obfuscation occurred.
Procedure
1. On the navigation menu ( ), click Admin.
2. In the Data Sources section, click Data Obfuscation Management.
3. Click the profile that you want to configure, and click View Contents.
You cannot configure profiles that are locked.
4. To create a new data obfuscation expression, click Add and type a unique name and
description for the expression.
5. Select the Enabled check box to enable the expression.
6. Optional: To apply the obfuscation expression to specific domains or tenants, select them
from the Domain field. Or select All Domains to apply the obfuscation expression to all
domains and tenants.
7. To create a field-based expression, click Field Based and select the field type to obfuscate.
8. To create a regular expression, click RegEx and configure the regex properties.
9. Click Save.
Anomaly detection rules require a saved search that is grouped around a common parameter, and a time
series graph that is enabled. Typically the search needs to accumulate data before the anomaly rule returns
any result that identifies patterns for anomalies, thresholds, or behavior changes.
Anomaly rules
Test event and flow traffic for short-term changes when compared against a longer timeframe.
Examples include new services or applications that appear in a network, a web server that crashes,
or firewalls that all start to deny traffic.
Example: You want to be notified when one of your firewall devices is reporting more often than it usually does
because your network might be under attack. You want to be notified when you receive twice as many events in 1
hour. You follow these steps:
1. Create and save a search that groups by log source, and displays only the count column.
2. Apply the saved search to an anomaly rule, and add the rule test, and when the average value
(per interval) of count over the last 1 hour is at least 100% different from the average value
(per interval) of the same property over the last 24 hours.
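The rule test in step 2 can be sketched as a simple comparison of short-term and long-term averages. This is an illustration of the test's logic, not QRadar's implementation; the interval counts and the synthetic traffic values are illustrative.

```python
# Sketch of the rule test above: flag a log source when its average events
# per interval over the last hour is at least 100% above its 24-hour average.
# Interval counts and traffic values are illustrative.
def is_anomalous(counts_24h, intervals_per_hour=60, threshold=1.0):
    """counts_24h: per-interval event counts, oldest first, covering 24 hours."""
    recent = counts_24h[-intervals_per_hour:]
    avg_recent = sum(recent) / len(recent)
    avg_day = sum(counts_24h) / len(counts_24h)
    return avg_recent >= avg_day * (1 + threshold)

baseline = [100] * (23 * 60)           # a steady day of traffic
spike = [250] * 60                     # the last hour more than doubles
print(is_anomalous(baseline + spike))  # True
```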
Threshold rules
Test events or flows for activity that is greater than or less than a specified range. Use these rules to detect
bandwidth usage changes in applications, failed services, changes in the number of users connected to a
VPN, and large outbound transfers.
Example: A user who was involved in a previous incident has a large outbound transfer.
When a user is involved in a previous offense, automatically set the Rule response to add to the Reference
set. If you have a watch list of users, add them to the Reference set. Tune acceptable limits within the
Threshold rule.
A reference set, WatchUsers, and Key:username are required for your search.
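The threshold idea above can be sketched as a watch-list membership check plus a tunable limit. The user names and the byte limit are illustrative stand-ins for the WatchUsers reference set and the tuned threshold.

```python
# Sketch of the threshold rule above: alert when a watched user's outbound
# transfer exceeds a tunable limit. Names and the limit are illustrative
# stand-ins for the WatchUsers reference set and the tuned threshold.
WATCH_USERS = {'jsmith', 'jdoe'}          # stands in for the WatchUsers reference set
OUTBOUND_LIMIT_BYTES = 500 * 1024**2      # 500 MB, a tunable acceptable limit

def should_alert(username: str, outbound_bytes: int) -> bool:
    return username in WATCH_USERS and outbound_bytes > OUTBOUND_LIMIT_BYTES

print(should_alert('jsmith', 600 * 1024**2))  # True
print(should_alert('guest', 600 * 1024**2))   # False
```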
Behavioral rules
Test events or flows for volume changes that occur in regular patterns to detect outliers. For example, a
mail server that has an open relay and suddenly communicates with many hosts, or an IPS (intrusion
prevention system) that starts to generate numerous alert activities.
A behavioral rule learns the rate or volume of a property over a pre-defined season. The season defines
the baseline comparison timeline for what you're evaluating. When you set a season of 1 week, the
behavior of the property over that 1 week is learned and then you use rule tests to alert you to any
significant changes.
After a behavioral rule is set, the season adjusts automatically. When the data in the season is learned, it is
continually evaluated so that business growth is profiled within the season; you do not have to change
your rules. The longer a behavioral rule runs, the more accurate it becomes. You can then adjust the rule
responses to capture more subtle changes.
The following table describes the behavioral rule test parameter options.

Season
The most important value. The season defines the baseline behavior of the property that you are
testing and is used by the other rule tests. To define a season, consider the type of traffic that you
are monitoring. For example, for network traffic or processes that include human interaction, 1 week
is a good season timeframe. For tracking automated services where patterns are consistent, you
might want to create a season as short as 1 day to define that pattern of behavior.

Current traffic level
Weight of the original data, with seasonal changes and random error accounted for. This rule test
asks the question, "Is the data the same as yesterday at the same time?" The weight must be in the
range of 1 to 100. A higher value places more weight on the previously recorded value.

Current traffic trend
Weight of changes in the data for each time interval. This rule test asks the question, "How much
does the data change when it compares this minute to the minute before?" The weight must be in the
range of 1 to 100. A higher value places more weight on traffic trends than on the calculated behavior.

Current traffic behavior
Weight of the seasonal effect for each period. This rule test asks the question, "Did the data increase
the same amount from week 2 to week 3 as it did from week 1 to week 2?" The weight must be in the
range of 1 to 100. A higher value places more weight on the learned behavior.

Predicted value
Use predicted values to scale baselines to make alerting more or less sensitive. The sensitivity must
be in the range of 1 to 100. A value of 1 indicates that the measured value cannot be outside the
predicted value. A value of 100 indicates that the traffic can be more than four times larger than the
predicted value.

Table 1. Behavioral rule test definitions
The forecast value for the (n+1)th interval is calculated by using the following formula:

F(n+1) = B(n) + T(n) + T(n+1-s)

where F(n+1) is the predicted value, B(n) is the base value for interval n, T(n) is the trend value for
interval n, T(n+1-s) is the trend value from one season ago, and s is the number of intervals within
the season.

B(n+1) = (0.2 + 0.3*(<Current traffic level>/100.0))*(value(n+1) - T(n+1-s)) + (1 - (0.2 + 0.3*(<Current traffic level>/100.0)))*T(n)
T(n+1) = (0.2 + 0.3*(<Current traffic trend>/100.0))*(B(n+1) - B(n)) + (1 - (0.2 + 0.3*(<Current traffic trend>/100.0)))*T(n)
D(n+1) = (0.2 + 0.3*(<Current traffic behavior>/100.0))*|value(n+1) - F(n+1)| + (1 - (0.2 + 0.3*(<Current traffic behavior>/100.0)))*D(n+1-s)
The behavioral rule produces an alert for the interval if the following expression is false:
During the first season, the behavioral rule learns for future calculations and doesn't produce any alerts.
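The forecasting scheme that behavioral rules are described with resembles additive seasonal exponential smoothing (Holt-Winters style): a base, a trend, and a per-interval seasonal offset, each updated with a smoothing weight. The sketch below implements that general technique, not QRadar's exact formulas; the smoothing weights alpha/beta/gamma stand in for the Current traffic level/trend/behavior parameters, and the series is synthetic.

```python
# Sketch of additive seasonal exponential smoothing (Holt-Winters style),
# the family of forecasts that behavioral rules are described with. The
# alpha/beta/gamma weights stand in for the Current traffic level/trend/
# behavior parameters; this is an illustration, not QRadar's implementation.

def holt_winters_additive(series, season_len, alpha=0.5, beta=0.3, gamma=0.4):
    """Return one-step-ahead forecasts for each interval after the first season."""
    s = season_len
    # Learn from the first season: base = mean, trend = 0, and seasonal
    # offsets = deviation of each interval from the mean (no alerts yet).
    base = sum(series[:s]) / s
    trend = 0.0
    seasonal = [x - base for x in series[:s]]
    forecasts = []
    for n in range(s, len(series)):
        forecasts.append(base + trend + seasonal[n % s])  # F = B + T + seasonal
        value = series[n]
        prev_base = base
        base = alpha * (value - seasonal[n % s]) + (1 - alpha) * (base + trend)
        trend = beta * (base - prev_base) + (1 - beta) * trend
        seasonal[n % s] = gamma * (value - base) + (1 - gamma) * seasonal[n % s]
    return forecasts

# A perfectly repeating "weekly" pattern is forecast almost exactly once the
# first season is learned.
pattern = [10.0, 20.0, 30.0, 40.0]
series = pattern * 8
fc = holt_winters_additive(series, season_len=4)
errors = [abs(f - x) for f, x in zip(fc, series[4:])]
print(max(errors[-4:]))  # near zero once the season is learned
```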
5. On the Rule Test Stack Editor page, in the enter rule name here field, type a unique name that
you want to assign to this rule.
6. To apply your rule by using the default test, select the first rule in the anomaly Test Group list.
You might need to set the accumulated property parameter to the property that you selected from
the Value to Graph list when you saved the search criteria. If you want to see a result sooner, set
the percentage to a lower value, such as 10%, and change last 24 hours to a shorter time period,
such as 1 hour. Because anomaly detection tests aggregated fields in real time to alert you to
anomalous network activity, you might want to adjust these values to match the volume of events
or flows in your network traffic.
7. Add a test to a rule.
a. To filter the options in the Test Group list, type the text that you want to filter for in
the Type to filter field.
b. From the Test Group list, select the type of test that you want to add to this rule.
c. Optional: To identify a test as an excluded test, click and at the beginning of the test in the
Rule pane. The and is displayed as and not.
d. Click the underlined configurable parameters to customize the variables of the test.
e. From the dialog box, select values for the variable, and then click Submit.
8. To test the total selected accumulated properties for each event or flow group, disable Test the
[Selected Accumulated Property] value of each [group] separately.
9. In the groups pane, enable the groups you want to assign this rule to.
10. In the Notes field, type any notes that you want to include for this rule, and then Click Next.
11. On the Rule Responses page, configure the responses that you want this rule to generate.
Learn more about rule response page parameters for anomaly detection rules:
The following table provides the Rule Response page parameters if the rule type is Anomaly.
Dispatch New Event
Specifies that this rule dispatches a new event with the original event or flow, which is processed
like all other events in the system. By default, this check box is selected and cannot be cleared.

Offense Naming
If you want the Event Name information to contribute to the name of the offense, select the This
information should contribute to the name of the associated offense(s) option. If you want the
configured Event Name to set or replace the name of the offense, select the This information
should set or replace the name of the associated offense(s) option.
Note: After you replace the name of the offense, the name won't change until the offense is closed.
For example, if an offense is associated with more than one rule, and the last event doesn't trigger
the rule that is configured to override the name of the offense, the offense's name won't be updated
by the last event. Instead, the offense name remains the name that is set by the override rule.

Severity
The severity level that you want to assign to the event. The range is 0 (lowest) to 10 (highest) and
the default is 5. The severity is displayed in the Annotations pane of the event details.

Credibility
The credibility that you want to assign to the log source. For example, is the log source noisy or
expensive? Using the list boxes, select the credibility of the event. The range is 0 (lowest) to 10
(highest) and the default is 5. Credibility is displayed in the Annotations pane of the event details.

Relevance
The relevance that you want to assign to the weight of the asset. For example, how much do you
care about the asset? Using the list boxes, select the relevance of the event. The range is 0 (lowest)
to 10 (highest) and the default is 5. Relevance is displayed in the Annotations pane of the event
details.

Ensure that the dispatched event is part of an offense
As a result of this rule, the event is forwarded to the magistrate. If an offense exists, this event is
added. If no offense was created on the Offenses tab, a new offense is created.

Send to Local SysLog
Select this check box if you want to log the event or flow locally. By default, the check box is clear.
Note: Only normalized events can be logged locally on a QRadar® appliance. If you want to send
raw event data, you must use the Send to Forwarding Destinations option to send the data to a
remote syslog host.

Add to Reference Data
Adds events that are generated as a result of this rule to a reference set. You must be an
administrator to add data to a reference set. To use this rule response, you must create the
reference data collection.

Remove from Reference Set
If you want this rule to remove data from a reference set, select this check box. To remove data
from a reference set, follow these steps:
a. From the first list, select the property of the event or flow that you want to remove.
b. From the second list, select the reference set from which you want to remove the specified data.

Remove from Reference Data
To use this rule response, you must have a reference data collection.

Execute Custom Action
You can write scripts that do specific actions in response to network events. For example, you
might write a script to create a firewall rule that blocks a particular source IP address from your
network in response to repeated login failures. Select this check box and select a custom action
from the Custom action to execute list. You add and configure custom actions by using the Define
Actions icon on the Admin tab.

Publish on the IF-MAP Server
If the IF-MAP parameters are configured and deployed in the system settings, select this option to
publish the offense information on the IF-MAP server.

Response Limiter
Select this check box and use the list boxes to configure the frequency with which you want this
rule to respond.

Enable Rule
Select this check box to enable this rule. By default, the check box is selected.
To edit building blocks, you must add the IP address or IP addresses of the server or servers into the
appropriate building blocks.
Procedure
1. Click the Offenses tab.
2. On the navigation menu, click Rules.
3. From the Display list, select Building Blocks.
4. Double-click the building block that you want to edit.
5. Update the building block.
6. Click Finish.
The following table describes editable building blocks.
o VA Scanners
o Authorized Scanners
Building Block Description

BB:HostDefinition: Virus Definition and Other Update Servers
Edit the "and when either the source or destination IP is one of the following" test to include the IP
addresses of virus protection and update function servers.

BB:CategoryDefinition: Countries with no Remote Access
Edit the "and when the source is located in" test to include geographic locations that you want to
prevent from accessing your network. This change enables the use of rules, such as Anomaly: Remote
Access from Foreign Country, to create an offense when successful logins are detected from remote
locations.

BB:ComplianceDefinition: GLBA Servers
Edit the "and when either the source or destination IP is one of the following" test to include the IP
addresses of servers that are used for Gramm-Leach-Bliley Act (GLBA) compliance. By populating
this building block, you can use rules, such as Compliance: Excessive Failed Logins to Compliance
IS, which create offenses for compliance and regulation-based situations.

BB:ComplianceDefinition: HIPAA Servers
Edit the "and when either the source or destination IP is one of the following" test to include the IP
addresses of servers that are used for Health Insurance Portability and Accountability Act (HIPAA)
compliance. By populating this building block, you can use rules, such as Compliance: Excessive
Failed Logins to Compliance IS, which create offenses for compliance and regulation-based
situations.

BB:ComplianceDefinition: SOX Servers
Edit the "and when either the source or destination IP is one of the following" test to include the IP
addresses of servers that are used for SOX (Sarbanes-Oxley Act) compliance. By populating this
building block, you can use rules, such as Compliance: Excessive Failed Logins to Compliance IS,
which create offenses for compliance and regulation-based situations.

BB:ComplianceDefinition: PCI DSS Servers
Edit the "and when either the source or destination IP is one of the following" test to include the IP
addresses of servers that are used for PCI DSS (Payment Card Industry Data Security Standard)
compliance. By populating this building block, you can use rules, such as Compliance: Excessive
Failed Logins to Compliance IS, which create offenses for compliance and regulation-based
situations.

BB:NetworkDefinition: Broadcast Address Space
Edit the "and when either the source or destination IP is one of the following" test to include the
broadcast addresses of your network. This change removes false positive events that might be caused
by the use of broadcast messages.

BB:NetworkDefinition: Client Networks
Edit the "and when the local network is" test to include workstation networks that users are operating.

BB:NetworkDefinition: Server Networks
Edit the "when the local network is" test to include any server networks.

BB:NetworkDefinition: Darknet Addresses
Edit the "and when the local network is" test to include the IP addresses that are considered to be a
darknet. Any traffic or events that are directed toward a darknet are considered suspicious.

BB:NetworkDefinition: DLP Addresses
Edit the "and when any IP is a part of any of the following" test to include the remote services that
might be used to obtain information from the network. This change can include services, such as
WebMail hosts or file sharing sites.

BB:NetworkDefinition: DMZ Addresses
Edit the "and when the local network" test to include networks that are considered to be part of the
network's DMZ.

BB:PortDefinition: Authorized L2R Ports
Edit the "and when the destination port is one of the following" test to include common outbound
ports that are allowed on the network.

BB:NetworkDefinition: Watch List Addresses
Edit the "and when the local network is" test to include the remote networks that are on a watch list.
This change helps to identify events from hosts that are on a watch list.

BB:FalsePositive: User Defined Server Type False Positive Category
Edit this building block to include any categories that you want to consider as false positives for hosts
that are defined in the BB:HostDefinition: User Defined Server Type building block.

BB:FalsePositive: User Defined Server Type False Positive Events
Edit this building block to include any events that you want to consider as false positives for hosts
that are defined in the BB:HostDefinition: User Defined Server Type building block.

BB:HostDefinition: User Defined Server Type
Edit this building block to include the IP address of your custom server type. After you add the
servers, you must add any events or categories that you want to consider as false positives to this
server, as defined in the BB:FalsePositive: User Defined Server Type False Positive Category or
the BB:FalsePositive: User Defined Server Type False Positive Events building blocks.

Table 1. Building blocks to edit.
You can include a CIDR range or subnet in any of the building blocks instead of listing the IP
addresses. For example, 192.168.1.0/24 includes the addresses 192.168.1.0 to 192.168.1.255. You can
also include CIDR ranges in any of the BB:HostDefinition building blocks.
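The range arithmetic above can be verified with Python's standard ipaddress module:

```python
import ipaddress

# A /24 covers 256 addresses: 192.168.1.0 through 192.168.1.255.
net = ipaddress.ip_network("192.168.1.0/24")
first, last = net[0], net[-1]
print(first, last, net.num_addresses)  # 192.168.1.0 192.168.1.255 256

# Membership test, e.g. to check whether a host falls inside the range:
print(ipaddress.ip_address("192.168.1.42") in net)  # True
```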
QRadar Use Case Manager app
Use the guided tips in the IBM® QRadar® Use Case Manager app to help you ensure that IBM QRadar is
optimally configured to accurately detect threats throughout the attack chain.
QRadar Use Case Manager includes a use case explorer that offers flexible reports that are related to your
rules. QRadar Use Case Manager also exposes pre-defined mappings to system rules and helps you map
your own custom rules to MITRE ATT&CK tactics and techniques.
Explore rules through visualization and generated reports
Explore the rules through different filters to ensure that they work as intended.
Generate reports from predefined templates, such as searches based on rule response and actions,
log source coverage, and many others.
Customize reports to see only the information that is critical to your analysis.
Gain tuning recommendations unique to your environment right within the app.
Identify the topmost offense-generating or CRE-generating rules, and then follow the guide to
tune them.
Reduce the number of false positives by reviewing the most common configuration steps. Easily
update network hierarchy, building blocks, and server discovery based on recommendations.
Visually understand your ability to detect threats based on ATT&CK tactics and techniques.
View predefined QRadar tactic and technique mappings and add your own custom mappings to
help ensure complete coverage.
Use new insights to prioritize the rollout of new use cases and apps to effectively strengthen your
security posture.
Known issues
Information about known issues in the IBM QRadar Use Case Manager app is listed in the known issues table.
Important: When an issue is resolved, the table is updated to reflect which version contains the fix.
Video demonstrations
Watch video tutorials to learn how to use the workflows and features in IBM QRadar Use Case Manager.
Video demonstrations on YouTube
Tutorials and general overview of QRadar Use Case Manager.
Version 3.0.0
Important: QRadar Use Case Manager 3.2.0 or later is not supported on QRadar 7.4.0.
QRadar 7.3.2 or later supports QRadar Use Case Manager 3.1.0 or earlier.
QRadar on Cloud supports QRadar Use Case Manager 1.1.0 or later.
Important: On QRadar on Cloud, you must use a SaaS Admin token during configuration, and assign
the Security Administrator role to your administrative users.
Supported browsers
The QRadar Use Case Manager app is supported on Google Chrome and Mozilla Firefox.
Supported languages
The following languages are supported based on QRadar user preferences: English, Simplified Chinese,
Traditional Chinese, French, German, Korean, Portuguese, Russian, Spanish, Italian, and Japanese.
Installation and configuration checklist for QRadar Use Case Manager
As you install the IBM QRadar Use Case Manager app, review and complete all of the necessary tasks on
the installation checklist.
The following phases of an attack are represented in the MITRE ATT&CK framework:

MITRE ATT&CK Tactic Description

Reconnaissance
Gather information that can be used to plan future operations. This tactic displays in the MITRE
reports only when the PRE platform is selected in your user preferences.

Resource Development
Establish resources to support malicious operations. This tactic displays in the MITRE reports only
when the PRE platform is selected in your user preferences.
Tactics, techniques, and sub-techniques
Tactics represent the goal of an ATT&CK technique or sub-technique. For example, an adversary might
want to get credential access to your network.
Techniques represent how an adversary achieves their goal. For example, an adversary might dump
credentials to get credential access to your network.
Sub-techniques provide a more specific description of the behavior an adversary uses to achieve their
goal. For example, an adversary might dump credentials by accessing the Local Security Authority (LSA)
Secrets.
Workflow for MITRE ATT&CK mapping and visualization
Create your own rule and building block mappings in IBM® QRadar® Use Case Manager, or
modify IBM QRadar default mappings to map your custom rules and building blocks to specific tactics
and techniques.
Save time and effort by editing multiple rules or building blocks at the same time, and by sharing rule-
mapping files between QRadar instances. Export your MITRE mappings (custom and IBM default) as a
backup of custom MITRE mappings in case you uninstall the app and then decide later to reinstall it. For
more information, see Uninstalling QRadar Use Case Manager.
After you finish mapping your rules and building blocks, organize the rule report and then visualize the
data through diagrams and heat maps. Current and potential MITRE coverage data is available in the
following reports: Detected in timeframe report, Coverage map and report, and Coverage summary
and trend.
Editing MITRE mappings in a rule or building block
Create your own rule and building block mappings or modify IBM QRadar default mappings to
map your custom rules and building blocks to specific tactics and techniques.
Editing MITRE mappings in multiple rules or building blocks
Save time and effort by editing multiple rules or building blocks at the same time. Export your
mappings to a JSON file to share with other colleagues.
Sharing MITRE-mapping files
Save time and effort when mapping rules and building blocks to tactics and techniques by sharing
rule-mapping files between QRadar instances.
Visualizing MITRE tactic and technique coverage in your environment
Visualize the coverage of MITRE ATT&CK tactics and techniques that the rules provide in IBM
QRadar. After you organize the rule report, you can visualize the data through diagrams and heat
maps and export the data to share with others.
Visualizing MITRE coverage summary and trends
The MITRE summary and trend reports provide an overview of the different tactics that are
covered by QRadar Use Case Manager. You can analyze the summary data in table, bar, and radar
charts. Only the number of enabled mappings to enabled rules are counted in the charts because
disabled mappings don't contribute to your security posture.
Visualizing MITRE tactics and techniques that are detected in a specific timeframe
See which MITRE ATT&CK tactics and techniques were detected in your environment based on
the offenses that were updated within a specific timeframe. QRadar Use Case Manager displays a
list of the offenses and their related rules that were found within that timeframe, along with the
tactics and techniques that are mapped to those rules.
MITRE heat map calculations
The colors in the MITRE heat maps are calculated based on the number of rule mappings to a
tactic or technique plus the level of mapping confidence (low, medium, or high).
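The heat-map calculation is described only qualitatively above. As a purely illustrative sketch (the weights and the formula are assumptions, not the app's actual implementation), a score that combines the mapping count with the confidence level might look like:

```python
# Illustrative only: one way to score a heat-map cell from the number of
# rule mappings and their confidence levels. The weights are assumed, and
# this is not QRadar Use Case Manager's actual formula.
CONFIDENCE_WEIGHT = {"low": 1, "medium": 2, "high": 3}


def heat_score(mapping_confidences):
    """Score one tactic/technique cell.

    mapping_confidences: list of confidence strings, one per rule mapping.
    """
    return sum(CONFIDENCE_WEIGHT[c] for c in mapping_confidences)


print(heat_score(["high", "low"]))  # 4 with the assumed weights
print(heat_score([]))               # 0: no mappings, no coverage
```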
Investigating tuning findings
Sometimes, rules or building blocks might be incorrectly defined. Use the Tuning Finding report to
investigate whether the rule or building block needs to be edited for more robust information, or whether
the rule is working as designed. Then, you can hide the finding if it is not relevant (for example, you set
the rule up that way) or if it is a false positive.
On the Configuration page, set the threshold for the number of reference set elements that triggers a
finding when it is exceeded. The default is 5000.
Procedure
1. From the QRadar Use Case Manager main menu, click Tuning Finding Report.
2. To scan for updated rule findings from IBM QRadar, click the Refresh findings report icon. It
might take a few minutes to update the report.
Tip: The scan results automatically update overnight. Update the rules manually when you make
changes and you want to verify that they updated correctly in QRadar Use Case Manager.
3. Sort the Tuning Finding column to investigate similar finding types together.
4. To hide findings in the report, select the checkboxes for the relevant findings and click Hide
findings.
The selected tuning findings are flagged as hidden. When you hide a finding, it no longer appears in
the rule details when you access the rule from any other report. Any warning icons for the rule are
also hidden in the other reports where the rule might appear.
5. Fine-tune the view by showing or hiding the selected findings.
Click Show hidden findings to see all the findings in the report.
6. To clear the hidden flags from a finding so that you always see it in the report, click Show hidden
findings. Select the checkboxes for the relevant findings, and then click Remove hidden flag.
The hidden findings are updated to display in the report.
7. Click the rule name to further investigate and edit if necessary on the Rule Details page. Full
details appear in the Tuning Finding section.
a. Click Edit in rule wizard to make your changes. For example, if a rule is enabled but a
rule in its dependency tree is disabled, you can enable the disabled rule in the rule wizard.
b. Select the finding and click Hide findings.
c. In the Tuning Finding report, click the refresh icon to update the report display.
8. After you make your changes, refresh the report to see that your changes are implemented.
Accessing report data by using APIs
As an alternative to using the interface in IBM QRadar Use Case Manager, you can use APIs to interact
with the data. Use the interactive API documentation interface to test the APIs before you use them.
About this task
The following public APIs are available for use:
Use Case Explorer: Generate and download report data in CSV or JSON formats.
Log Source Coverage: Get information about rule-log source type activity and coverage.
MITRE Endpoints: Get information about rule mappings. Create custom adversary groups and
map them to existing tactics and techniques. Upload custom MITRE group-technique files.
Tip: Make sure that the mapping file is in JSON format and follows the required data format.
Each time you upload a custom mapping file, the previous file is replaced.
Procedure
1. From the Admin tab, click Apps > QRadar Use Case Manager > API Docs.
2. Select an endpoint and click Try it out.
3. Click Execute to send the API request to your console and receive a properly formatted HTTPS
response.
4. Review and gather the information that you need to integrate with QRadar.
Public API endpoints
IBM QRadar Use Case Manager provides APIs that you can use to interact with the data.
Public Use Case Manager API workflows
Use these workflows to download report data to CSV or JSON files. For JSON format, you can
generate either a paginated response or a full report file. Paginated responses are useful when a
client, such as the user interface, needs JSON and must page through the results, requesting them
page by page rather than as the whole result set. Full report files are faster to download than the
paginated JSON body, but pagination is not available in the file download format.
Downloading report data
Example API workflow script
Use the script in this example to download a Use Case Explorer report in CSV format.
Use Case Explorer filters
Use these filters in the example script to download a Use Case Explorer report in CSV format.
Report column codes for report APIs
Use the report column codes in the tables in the following APIs: POST
/api/use_case_explorer/{reportId}/download_csv, POST
/api/use_case_explorer/{reportId}/download_json, or GET
/api/use_case_explorer/{reportId}/result.
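As an illustration of the report APIs listed above, the following Python sketch builds and calls the CSV download endpoint. The console URL, the SEC token header value, and the report ID are placeholders for this example; only the endpoint path is taken from the list above:

```python
"""Sketch: download a Use Case Explorer report in CSV format.

The endpoint path comes from the API list above; the console URL,
authorized-service token, and report ID are placeholders.
"""
import ssl
import urllib.request


def report_csv_url(console: str, report_id: str) -> str:
    # POST /api/use_case_explorer/{reportId}/download_csv
    return f"{console}/api/use_case_explorer/{report_id}/download_csv"


def download_csv(console: str, token: str, report_id: str) -> bytes:
    req = urllib.request.Request(
        report_csv_url(console, report_id),
        method="POST",
        headers={"SEC": token},  # QRadar authorized-service token header
    )
    with urllib.request.urlopen(
        req, context=ssl.create_default_context()
    ) as resp:
        return resp.read()


# Example call (placeholders, adjust for your deployment):
# csv_bytes = download_csv("https://qradar.example.com", "YOUR-TOKEN", "1")
```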
The Server Discovery function uses the asset profile database to discover several types of servers on your
network. The Server Discovery function lists automatically discovered servers and you can select which
servers you want to include in building blocks.
Using building blocks, you can reuse specific rule tests in other rules. You can reduce the number of false
positives by using building blocks to tune QRadar and to enable extra correlation rules.
Adding servers to building blocks automatically
The Server Discovery function uses the asset profile database to discover different server types
that are based on port definitions. Then, you can select the servers to add to a server-type building
block for rules.
Procedure
1. Click Assets > Server Discovery.
2. In the Server Type list, select the server type that you want to discover.
Open the drawer to see a list of apps that are queued for download or installation. You can place
up to five apps in the download queue at one time. To keep the download drawer open, click the
pin icon.
From Version 2.3.0, you can also view the list of currently installed QRadar extensions and
contents.
Watson Integration
On the Watson Integration page, you can learn how IBM QRadar Advisor with Watson works
with QRadar to investigate and respond to threats. You can review requirements for your QRadar system
to install and run the Advisor app. You can also download and install QRadar Advisor with Watson
directly from the Assistant app.
For more information about QRadar Advisor with Watson, see the Advisor documentation
here: https://www.ibm.com/docs/en/qradar-common?topic=apps-qradar-advisor-watson-app
Configuring the QRadar Assistant app
The QRadar Assistant app is included in QRadar installations of version 7.3.1 and later. You can
download the app from the IBM Security App Exchange (https://exchange.xforce.ibmcloud.com/)
for those versions. An Admin user must configure certain options to enable the Assistant app in
your QRadar environment.
Managing installed extensions
You can use the Installed section on the Applications page to view and manage currently installed
QRadar extensions and contents.
Managing multitenant apps
IBM QRadar Assistant app 3.0.0 supports multitenant environments in QRadar 7.4.0 Fix Pack 1 or
later.
Downloading apps with the QRadar Assistant app
On the Applications page, you can download and install apps from the IBM X-Force® Exchange.
Phone Home
QRadar Assistant app V 1.1.5 and later include a new Phone Home feature, which anonymously
shares data with the IBM X-Force Exchange to monitor QRadar and detect issues early.
Configure URL access on firewalls
If your IBM QRadar Console is behind a restricted firewall, you must allow traffic to specific
URLs to use the full features of the Assistant app.
Running the Assistant app in offline mode
Administrators can run the QRadar Assistant app V3.1.0 or later in offline mode to manage installed
extensions.
Installing extensions by using an admin level authorized service token
Administrators can create an admin level authorized service token that enables IBM QRadar
Assistant to install extensions in the background.
Security content packs contain enhancements to specific types of security content. Often, they
include content for third-party integrations or operating systems. For example, a security content
pack for a third-party integration might contain new custom event properties that make
information in the event payload searchable for the log source and available for reporting.
Extensions
IBM and other vendors write security extensions that enhance or extend QRadar capabilities. An
extension can contain apps; content items, such as custom rules, report templates, and saved searches;
or updates to existing content items. For example, an extension might include an app that adds
a tab in QRadar that provides visualizations for an offense.
On IBM Security App Exchange, extensions are known as apps. You can download QRadar apps
from IBM Security App Exchange and use the Extensions Management tool to install them.
Apps are not available as part of an auto-update.
Sources of security content
QRadar content is available from the following sources:
IBM Security App Exchange
IBM Security App Exchange (https://apps.xforce.ibmcloud.com) is an app store and portal where you
can browse and download QRadar extensions. It is a new way to share code, visualizations, reports,
rules, and applications.
IBM Fix Central
IBM Fix Central (www.ibm.com/support/fixcentral) provides fixes and updates to your system software,
hardware, and operating system. You can download security content packs and extensions from IBM Fix
Central.
QRadar deployments
You export custom content from a QRadar deployment as an extension and then import it into another
system when you want to reuse the content. For example, you can export content from your development
environment to your production environment. You can use the content management script to export all
content, or you can choose to export only some custom content.
This screen appears only if any of the apps in the extension that you are installing are
configured to request authentication for background processes.
c. If the extension does not include a digital signature, or it is signed but the signature is not
associated with the IBM Security Certificate Authority (CA), you must confirm that you
still want to install it. Click Install to proceed with the installation.
d. Review the changes that the installation makes to the system.
e. Select Preserve Existing Items or Replace Existing Items to specify how to handle
existing content items.
Note: If the extension contains overridden system rules, select Replace Existing Items to
ensure that the rules are imported correctly.
f. Click Install.
g. Review the installation summary and click OK.
IBM QRadar Security Threat Monitoring Content Extension
The IBM® QRadar® Security Threat Monitoring Content Extension on the IBM Security App
Exchange (https://exchange.xforce.ibmcloud.com/hub) contains rules, building blocks, and custom
properties that are intended for use with X-Force® feed data.
The X-Force data includes a list of potentially malicious IP addresses and URLs with a corresponding
threat score. You use the X-Force rules to automatically flag any security event or network activity data
that involves the addresses, and to prioritize the incidents before you begin to investigate them.
The following list shows examples of the types of incidents that you can identify using the X-Force rules:
when the [source IP|destinationIP|anyIP] is part of any of the following [remote network
locations]
when [this host property] is categorized by X-Force as [Anonymization Servers|Botnet C&C|
DynamicIPs|Malware|ScanningIPs|Spam] with confidence value [equal to] [this amount]
when [this URL property] is categorized by X-Force as [Gambling|Auctions|Job Search|
Alcohol|Social Networking|Dating]
QRadar downloads approximately 30 MB of IP reputation data per day when you enable the X-Force
Threat Intelligence feed for use with the IBM QRadar Security Threat Monitoring Content Extension.
Installing the IBM QRadar Security Threat Monitoring Content Extension application
The IBM QRadar Security Threat Monitoring Content Extension application contains IBM
QRadar content, such as rules, building blocks, and custom properties, that are designed specifically for
use with X-Force data. The enhanced content can help you to identify and to remediate undesirable
activity in your environment before it threatens the stability of your network.
Before you begin
Download the IBM QRadar Security Threat Monitoring Content Extension application from the IBM Security App
Exchange.
Procedure
1. On the Admin tab, go to Apps > Pulse - Dashboard and click the Pulse - Dashboard app icon.
2. On the Pulse Dashboard Templates page, click Synchronize to set the new or updated dashboard
content in QRadar Pulse, and then close the page.
When you uninstall a content extension, any rules, custom properties, and reference data that were
installed by the content extension are removed or reverted to their previous state. Saved searches can't be
reverted. They can only be removed.
For example, if you've edited custom rules in an app that you now want to uninstall, you can preserve the
changes you made for each customized rule. If the custom rule previously existed on the system, you can
revert the rule to its previous state. If the custom rule didn't previously exist, you can remove it.
Note:
If you have introduced an outside dependency on a content extension that is installed by the
app, QRadar® doesn't remove that piece of content when you uninstall the app. For example, if you
create a custom rule that uses one of the app's custom properties, that custom property isn't removed when
you uninstall the app.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Extensions Management.
3. Select the extension that you want to uninstall and click Uninstall.
QRadar checks for any applications, rules, custom properties, reference data, and saved searches
that are installed by the content extension that can be removed.
4. If you have manually altered any rules, custom properties, or reference data after you installed the
app, choose whether to Preserve or Remove/Revert that content extension.
5. Click Uninstall, and then click OK.
Procedure
1. From the navigation menu on the Threat Intelligence dashboard, click the Feeds Downloader icon.
2. Click Add Threat Feed, and then click Add TAXII Feed.
3. On the Add TAXII Feed window, click the Connection tab, and configure the following options:
Option Description

TAXII Endpoint
Type the URL of the TAXII server that you want to use. Existing TAXII endpoints in your
deployment appear in a list. If you choose an existing endpoint, the corresponding options are
prepopulated.
Note: For getting collections from TAXII 2.0 servers, see TAXII™ API - Collections.

Authentication Method
Select the authentication method that you want to use and complete the corresponding options based
on your choice. The available authentication methods vary depending on the TAXII version that you
select.
1. From the navigation menu on the Threat Intelligence dashboard, click the Feeds Downloader icon.
2. In the Configured Threat Intelligence Feeds section, click Poll Now on the relevant feed to
override the predefined polling interval.
The polling start time, date, and status are displayed during polling. The end date is displayed after
the polling is finished.
Refetching data from TAXII data collections
If you recently started to follow a new collection in the XFE Public Collections I follow list, you can
refetch all of its previous data. Without refetching, you get only the data that is added to the collection
from the time you started to follow it.
That date is set as the new start date for the feed.
The signature counts are reset to 0.
Data that is already collected by this feed remains intact in the reference set.
Procedure
1. Click the Admin tab.
2. Click Plug-ins > Threat Intelligence > STIX/TAXII Configuration.
3. In the Configured Threat Intelligence Feeds section, click Refetch on the relevant feed.
4. On the Refetch data page, select the timeframe for refetching the data, and click Refetch.
You must have an IBM Advanced Threat Protection Feed license to use associated capabilities
such as the Am I Affected scan.
What's new in QRadar Threat Intelligence
Learn about the new features in each IBM QRadar Threat Intelligence app release.
Known issues
Learn about the known issues in each QRadar Threat Intelligence app release.
Supported threat languages and specifications
IBM QRadar Threat Intelligence supports different methods for sharing threat intelligence
information.
The following table lists the supported versions of threat languages and specifications.
Threat language or specification   Supported version
STIX 1.2
TAXII 1.1
Table 1. Supported threat languages and specifications
Note:
IBM QRadar Threat Intelligence does not support SWIFT feeds due to the extra authentication
process required by the SWIFT feeds compared to normal feeds.
All remote network and service groups have group levels and leaf object levels. You can edit remote
network and service groups by adding objects to existing groups or changing preexisting properties to suit
your environment.
If you move an existing object to another group, the object name moves from the existing group to the
newly selected group. However, when the configuration changes are deployed, the object data that is
stored in the database is lost and the object ceases to function. To resolve this issue, create a new view
and re-create the object under the other group.
You can group remote networks and services for use in the custom rules engine, flow, and event searches.
You can also group networks and services in IBM® QRadar® Risk Manager, if it is available.
Default remote network groups
IBM QRadar includes default remote network groups.
Group Description
website (http://www.team-cymru.org/Services/Bogons/bogon-bn-nonagg.txt).
Specifies traffic that originates from known hostile networks.
You must configure this group to classify traffic that originates from trusted
networks.
Classifies traffic that originates from networks that you want to monitor.
Watchlists
This group is blank by default.
Table 1. Default remote network groups
Groups and objects that include superflows are only for informational purposes and cannot be edited.
Groups and objects that include bogons are configured by the automatic update function.
Note: You can use reference sets instead of remote networks to provide some of this functionality. Although you
can assign a confidence level to an IP value in a reference table, reference sets are used only with single IPs and
cannot be used with CIDR ranges. You can use a CIDR value after a remote network update, but not with weight or
confidence levels.
Default remote service groups
IBM QRadar includes the default remote service groups.
Parameter Description

IRC_Servers
Specifies traffic that originates from addresses commonly known as chat servers.

Online_Services
Specifies traffic that originates from addresses commonly known as online services that might involve
data loss.

Porn
Specifies traffic that originates from addresses commonly known to contain explicit pornographic
material.

Proxies
Specifies traffic that originates from commonly known open proxy servers.

Reserved_IP_Ranges
Specifies traffic that originates from reserved IP address ranges.

Spam
Specifies traffic that originates from addresses commonly known to produce SPAM or unwanted
email.

Spy_Adware
Specifies traffic that originates from addresses commonly known to contain spyware or adware.

Superflows
Specifies traffic that originates from addresses commonly known to produce superflows.

Warez
Specifies traffic that originates from addresses commonly known to contain pirated software.

Table 2. Default remote service groups
Guidelines for network resources
Given the complexities and network resources that are required for IBM QRadar SIEM in large structured
networks, follow the suggested guidelines.
The following list describes some of the suggested practices that you can follow:
Bundle objects and use the Network Activity and Log Activity tabs to analyze your network
data.
Fewer objects create less input and output to your disk.
Typically, for standard system requirements, do not exceed 200 objects per group.
More objects might impact your processing power when you investigate your traffic.
A connection between an internal endpoint and a known botnet command and control
Procedure
1. Use SSH to log in to the QRadar Console.
2. Open the /etc/httpd/conf.d/ssl.conf file in a text editor.
3. Add the following lines before </VirtualHost>:
ProxyRemote https://license.xforce-security.com/ http://PROXY_IP:PROXY_PORT
ProxyRemote https://update.xforce-security.com/ http://PROXY_IP:PROXY_PORT
4. Replace PROXY_IP and PROXY_PORT with the IP address and port of the corporate proxy server that allows an anonymous connection to the X-Force security servers.
5. Save the changes to the ssl.conf file.
6. Restart the Apache server by typing the following command:
apachectl restart
Restarting the Apache server on the QRadar Console logs out all users and the managed
hosts might produce error messages. Restart the Apache server during scheduled
maintenance windows.
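The edit in steps 2 - 3 can be sketched in code. The following Python snippet is a sketch only: the proxy address and port are placeholder values, and the sample text stands in for the real /etc/httpd/conf.d/ssl.conf. It inserts the ProxyRemote directives immediately before the closing </VirtualHost> tag:

```python
# Sketch: insert the ProxyRemote directives before </VirtualHost> in ssl.conf.
# PROXY_IP and PROXY_PORT are placeholders for the corporate proxy server; the
# sample text below stands in for the real /etc/httpd/conf.d/ssl.conf.

PROXY_IP = "192.0.2.10"   # placeholder proxy address (TEST-NET range)
PROXY_PORT = "3128"       # placeholder proxy port

directives = [
    f"ProxyRemote https://license.xforce-security.com/ http://{PROXY_IP}:{PROXY_PORT}",
    f"ProxyRemote https://update.xforce-security.com/ http://{PROXY_IP}:{PROXY_PORT}",
]

def add_proxy_remotes(conf_text):
    """Return conf_text with the directives inserted before </VirtualHost>."""
    out = []
    for line in conf_text.splitlines():
        if line.strip() == "</VirtualHost>":
            out.extend(directives)   # insert just before the closing tag
        out.append(line)
    return "\n".join(out) + "\n"

sample = "<VirtualHost _default_:443>\nSSLEngine on\n</VirtualHost>\n"
patched = add_proxy_remotes(sample)
print(patched)
```

Running the same insertion against a copy of the real file lets you diff the result before you overwrite ssl.conf and restart the Apache server.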
Preventing X-Force data from being downloaded locally
QRadar downloads approximately 30 MB of IP reputation data per day. To stop QRadar from
downloading the X-Force data to your local system, disable the X-Force Threat Intelligence feed.
Before you begin
Before you disable the X-Force feed, ensure that the X-Force rules are disabled, and that you are
not using X-Force functions in saved searches.
About this task
After the X-Force Threat Intelligence feed is disabled, the X-Force content is still visible
in QRadar, but you cannot use the X-Force rules or add X-Force functions to AQL searches.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click System Settings.
3. Select No in the Enable X-Force Threat Intelligence Feed field.
IBM QRadar Security Threat Monitoring Content Extension
The IBM QRadar Security Threat Monitoring Content Extension on the IBM Security App
Exchange (https://exchange.xforce.ibmcloud.com/hub) contains rules, building blocks, and custom
properties that are intended for use with X-Force feed data.
The X-Force data includes a list of potentially malicious IP addresses and URLs with a corresponding
threat score. You use the X-Force rules to automatically flag any security event or network activity data
that involves the addresses, and to prioritize the incidents before you begin to investigate them.
The following list shows examples of the types of incidents that you can identify using the X-Force rules:
when the [source IP|destination IP|any IP] is part of any of the following [remote network locations]
when [this host property] is categorized by X-Force as [Anonymization Servers|Botnet C&C|Dynamic IPs|Malware|Scanning IPs|Spam] with confidence value [equal to] [this amount]
when [this URL property] is categorized by X-Force as [Gambling|Auctions|Job Search|Alcohol|Social Networking|Dating]
QRadar downloads approximately 30 MB of IP reputation data per day when you enable the X-Force
Threat Intelligence feed for use with the IBM QRadar Security Threat Monitoring Content Extension.
Installing the IBM QRadar Security Threat Monitoring Content Extension application
The IBM QRadar Security Threat Monitoring Content Extension application contains IBM
QRadar content, such as rules, building blocks, and custom properties, that are designed
specifically for use with X-Force data. The enhanced content can help you to identify and to
remediate undesirable activity in your environment before it threatens the stability of your
network.
Before you begin
Download the IBM QRadar Security Threat Monitoring Content Extension application from the IBM
Security App Exchange
You can also use the right-click lookup option to submit IP addresses or URL data from QRadar searches,
offenses, and rules to a public or private collection. The collection stores the information in one place as
you use the data for more research.
Collections also contain a section that serves as a wiki-style notepad, where you can add comments or any
free text that is relevant. You can use the collection to save X-Force reports, text comments, or any other
content. An X-Force report has both a version of the report from the time that it was saved and a link to
the current version of the report.
Installing the IBM X-Force Exchange plug-in
Install the IBM X-Force Exchange plug-in on your QRadar Console so that you have right-click
functionality to access data in IBM X-Force Exchange.
Before you begin
This procedure requires a web server restart from the Admin tab to load the plug-in after the RPM is
installed. Restarting the web server logs out all QRadar users, so it is advised that you install this plug-in
during scheduled maintenance.
Guest users are not able to use all features on the X-Force Exchange website.
6. Close the browser window after the initial login to the IBM X-Force Exchange website.
Problem
GeoLite2 data is required to resolve geographic locations from IP addresses in QRadar. As of
30 December 2019, a MaxMind account must be configured by the administrator in QRadar
System Settings. The default userid and license key values can no longer be used to receive
geographic data updates.
Symptom
Administrators who use the default userid and license value receive a '401 Unauthorized' error
in QRadar when geographic data attempts to update. When this issue occurs,
the geodata_update.sh utility displays the following error message:
[Example-console-lab#/]/opt/qradar/bin/geodata_update.sh
Result: 401 Unauthorized at /opt/qradar/bin/geoipupdate-pureperl.pl line 222, <$fh> line 37
Cause
The default geographic account included in QRadar requires a unique userid and license key to
allow updates after 30 December 2019. Administrators must create a MaxMind account and
update Admin > System Settings > Geographic Settings to continue to receive geographic
location lookup data for IP addresses.
Environment
QRadar V7.3.1 and later.
o Click Confirm.
o Record the Account and User ID and License Key information.
Important: If you exit the license key screen without recording the information,
you must generate a new license key from Step 4.
8. Configure in QRadar
o Log in to QRadar as an administrator.
o Click the Admin tab.
o Click the System Settings icon.
o Navigate to Geographic Settings.
o Update the User ID and License Key values from Step 5.
o Click Save.
o From the Admin tab, click Deploy Changes.
Results
After the deploy changes completes, geographic data settings are updated for the QRadar
deployment. Administrators can confirm their MaxMind geographic settings from the command
line of the QRadar Console.
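For comparison, MaxMind's stand-alone geoipupdate client reads the same kind of account values from a GeoIP.conf file. The fragment below only illustrates the Account/User ID and License Key recorded in Step 5; the values and edition IDs are placeholders, and in QRadar you enter these values in System Settings rather than editing a file:

```
# GeoIP.conf fragment for MaxMind's geoipupdate client (placeholder values)
AccountID 123456
LicenseKey 0123456789abcdef
EditionIDs GeoLite2-City GeoLite2-Country
```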
Saved searches that were created by a deleted user remain associated with the user until you delete the
searches.
Procedure
1. On the Log Activity or Network Activity tab, click Search > Manage Search Results.
2. Click the Status column to sort the saved searches.
3. Select the saved searches with a status of “ERROR!”, then click Delete.
Before you can create a user account, you must ensure that the required user role and security profile are
created.
About this task
You can create multiple user accounts that include administrative privileges; however, only a user role with Administrator Manager privileges can create other administrative user accounts.
Procedure
1. On the Admin tab, click Users.
The User Management window opens.
2. Click Add.
3. Enter values for the following parameters:
Parameter Description

User Name
  Enter a unique username for the new user. The username must contain 1 - 60 characters.
User Description
  Enter a description for the user. The description cannot contain more than 2048 characters.
Email
  Enter an email address to be associated with this user. The address cannot contain more than 255 characters, and cannot contain spaces.
New Password
  Enter a new password for the user to gain access. The password must meet the minimum length and complexity requirements that are enforced by the password policy.
Confirm New Password
  Enter the new password again.
User Role
  Select a role for this user from the list.
Security Profile
  Select a security profile for this user from the list.
Override System Inactivity Timeout
  Enable this setting to configure the inactivity timeout threshold for the user account.
4. Click Save.
5. Close the User Management window.
6. On the Admin tab, click Deploy Changes.
If the user with the disabled account attempts to log in, a message informs them that the user name and
password are no longer valid. Items that they created, such as saved searches and reports, remain
associated with the user.
Procedure
1. On the Admin tab, click Users.
2. In the User Management window, select the user account that you want to disable.
You can use the Advanced Filter to search by User Role or Security Profile.
3. Click Edit.
4. From the User Details window, select Disabled from the User Role list.
5. Click Save.
6. Close the User Management window.
7. On the Admin tab, click Deploy Changes.
QRadar® includes one default security profile for administrative users. The Admin security profile
includes access to all networks, log sources, and domains.
Before you add user accounts, you must create more security profiles to meet the specific access
requirements of your users.
Domains
Security profiles must be updated with an associated domain. You must define domains on the Domain
Management window before the Domains tab is shown on the Security Profile Management window.
Domain-level restrictions are not applied until the security profiles are updated, and the changes are
deployed.
Domain assignments take precedence over all settings on the Permission Precedence, Networks,
and Log Sources tabs.
If the domain is assigned to a tenant, the tenant name appears in brackets beside the domain name in
the Assigned Domains window.
Permission precedence
Permission precedence determines which security profile components to consider when the system
displays events in the Log Activity tab and flows in the Network Activity tab.
Choose from the following restrictions when you create a security profile:
No Restrictions - This option does not place restrictions on which events are displayed in the Log
Activity tab, and which flows are displayed in the Network Activity tab.
Network Only - This option restricts the user to view only events and flows that are associated
with the networks that are specified in this security profile.
Log Sources Only - This option restricts the user to view only events that are associated with the
log sources that are specified in this security profile.
Networks AND Log Sources - This option allows the user to view only events and flows that are
associated with the log sources and networks that are specified in this security profile.
For example, if the security profile allows access to events from a log source but the destination
network is restricted, the event is not displayed in the Log Activity tab. The event must match
both requirements.
Networks OR Log Sources - This option allows the user to view events and flows that are
associated with either the log sources or networks that are specified in this security profile.
For example, if a security profile allows access to events from a log source but the destination network is
restricted, the event is displayed on the Log Activity tab if the permission precedence is set to Networks
OR Log Sources. If the permission precedence is set to Networks AND Log Sources, the event is not
displayed on the Log Activity tab.
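The precedence options above reduce to a small piece of boolean logic. This sketch models it in Python; the event and profile structures, and the network and log source names, are assumptions for illustration, not QRadar's internal data model:

```python
# Illustrative model of security-profile permission precedence. The event and
# profile shapes here are assumptions for the sketch, not QRadar's data model.

def event_visible(event, profile):
    """Return True if an event is displayed under the profile's precedence setting."""
    net_ok = event["network"] in profile["networks"]
    src_ok = event["log_source"] in profile["log_sources"]
    mode = profile["precedence"]
    if mode == "No Restrictions":
        return True
    if mode == "Network Only":
        return net_ok
    if mode == "Log Sources Only":
        return src_ok
    if mode == "Networks AND Log Sources":
        return net_ok and src_ok
    if mode == "Networks OR Log Sources":
        return net_ok or src_ok
    raise ValueError("unknown precedence mode: " + mode)

# The example from the text: the log source is permitted, but the destination
# network is restricted (hypothetical names).
event = {"network": "DMZ", "log_source": "Firewall-01"}
profile = {
    "networks": ["Internal"],
    "log_sources": ["Firewall-01"],
    "precedence": "Networks AND Log Sources",
}

print(event_visible(event, profile))   # hidden under AND
profile["precedence"] = "Networks OR Log Sources"
print(event_visible(event, profile))   # shown under OR
```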
Permission precedence for offense data
Security profiles automatically use the Networks OR Log Sources permission when offense data is
shown. For example, if an offense has a destination IP address that your security profile permits you to
see, but the security profile does not grant permissions to the source IP address, the Offense
Summary window shows both the destination and source IP addresses.
Creating a security profile
To add user accounts, you must first create security profiles to meet the specific access requirements of
your users.
About this task
IBM® QRadar® SIEM includes one default security profile for administrative users. The Admin security
profile includes access to all networks, log sources, and domains.
To select multiple items on the Security Profile Management window, hold the Control key while you
select each network or network group that you want to add.
If, after you add networks, log sources, or domains, you want to remove one or more items before you save the configuration, select the item and click the Remove (<) icon. To remove all items, click Remove All.
Procedure
1. On the Admin tab, click Security Profiles.
2. On the Security Profile Management window toolbar, click New.
3. Configure the following parameters:
a. In the Security Profile Name field, type a unique name for the security profile. The security profile name must contain 3 - 30 characters.
b. Optional: Type a description of the security profile. The maximum number of characters is 255.
4. Click the Permission Precedence tab.
5. In the Permission Precedence Setting pane, select a permission precedence option. See Permission
precedence.
6. Configure the networks that you want to assign to the security profile:
a. Click the Networks tab.
b. From the navigation tree in the left pane of the Networks tab, select the network that you
want this security profile to have access to.
c. Click the Add (>) icon to add the network to the Assigned Networks pane.
d. Repeat for each network you want to add.
7. Configure the log sources that you want to assign to the security profile:
a. Click the Log Sources tab.
b. From the navigation tree in the left pane, select the log source group or log source you
want this security profile to have access to.
c. Click the Add (>) icon to add the log source to the Assigned Log Sources pane.
d. Repeat for each log source you want to add.
8. Configure the domains that you want to assign to the security profile:
Domains must be configured before the Domains tab appears.
a. Click the Domains tab.
b. From the navigation tree in the left pane, select the domain that you want this security
profile to have access to.
c. Click the Add (>) icon to add the domain to the Assigned Domains pane.
d. Repeat for each domain that you want to add.
9. Click Save.
Note: The log sources and domains that are assigned to the security profile must match. If the log sources and domains do not match, you cannot save the security profile.
10. Close the Security Profile Management window.
11. On the Admin tab, click Deploy Changes.
Editing a security profile
You can edit an existing security profile to update which networks and log sources a user can access and
the permission precedence.
About this task
To quickly locate the security profile you want to edit on the Security Profile Management window, type
the security profile name in the Type to filter text box.
Procedure
1. On the Admin tab, click Security Profiles.
2. In the left pane, select the security profile that you want to edit.
3. On the toolbar, click Edit.
4. Update the parameters as necessary.
5. Click Save.
6. If the Security Profile Has Time Series Data window opens, select one of the following options:
Option Description

Keep Old Data and Save
  Select this option to keep previously accumulated time series data. If you choose this option, users with this security profile might see previous data that they no longer have permission to see when they view time series charts.
Hide Old Data and Save
  Select this option to hide the time series data. If you choose this option, time series data accumulation restarts after you deploy your configuration changes.
User roles
A user role defines the functions that a user can access in IBM® QRadar®.
During the installation, four default user roles are defined: Admin, All, WinCollect, and Disabled.
Before you add user accounts, you must create the user roles to meet the permission requirements of your
users.
Users who are assigned an administrative user role cannot edit their own account. This restriction applies
to the default Admin user role. Another administrative user must make any account changes.
Procedure
1. Click the Admin tab.
2. In the User Management section, click User Roles and then click New.
3. In the User Role Name field, type a unique name.
4. Select the permissions that you want to assign to the user role.
The permissions that are visible on the User Role Management window depend on
which QRadar components are installed.
Important: If you select a user role that has Admin privileges, you must also grant that user role
the Admin security profile. See Creating a security profile.
Permission Description

Admin
  Grants administrative access to the user interface. You can grant specific Admin permissions:

  Administrator Manager
    Grants users permission to create and edit other administrative user accounts.
  Remote Networks and Services Configuration
    Grants users access to the Remote Networks and Services icon on the Admin tab.
  System Administrator
    Grants users permission to access all areas of the user interface. Users with this access cannot edit other administrator accounts.
  Manage Local Only
    Grants permission to assign and manage Local Only authentication. For more information about Local Only authentication, see Assigning Local Only authentication.

Network Activity
  Grants access to all the functions in the Network Activity tab. You can grant specific access to the following permissions:

  Maintain Custom Rules
    Grants permission to create or edit rules that are displayed on the Network Activity tab.
  Manage Time Series
    Grants permission to configure and view time series data charts.
  User Defined Flow Properties
    Grants permission to create custom flow properties.
  View Custom Rules
    Grants permission to view custom rules. If the user role does not also have the Maintain Custom Rules permission, the user role cannot create or edit custom rules.
  View Flow Content
    Grants permission to view source payload and destination payload in the flow data details.

Assets
  This permission is displayed only if IBM QRadar Vulnerability Manager is installed on your system. Grants access to the functions in the Assets tab. You can grant specific permissions:

  Perform VA Scans
    Grants permission to complete vulnerability assessment scans. For more information about vulnerability assessment, see the Managing Vulnerability Assessment Guide.
  Remove Vulnerabilities
    Grants permission to remove vulnerabilities from assets.
  Server Discovery
    Grants permission to discover servers.
  View VA Data
    Grants permission to view vulnerability assessment data. For more information about vulnerability assessment, see the Managing Vulnerability Assessment Guide.

Risk Manager
  Grants users permission to access QRadar Risk Manager functions. QRadar Risk Manager must be activated.

IP Right Click Menu Extensions
  Grants permission to options that are added to the right-click menu.

QRadar Log Source Management
  Grants permission to the QRadar Log Source Management app.

Pulse - Dashboard
  Grants permission to dashboards in the IBM QRadar Pulse app.

Pulse - Threat Globe
  Grants permission to the Threat Globe dashboard in the IBM QRadar Pulse app.

QRadar Assistant
  Grants permission to the IBM QRadar Assistant app.

QRadar Use Case Manager
  Grants permission to the QRadar Use Case Manager app.
5. In the Dashboards section of the User Role Management page, select the dashboards that you want
the user role to access, and click Add.
Tip: A dashboard displays no information when the user role does not have permission to view
dashboard data. If a user modifies the displayed dashboards, the defined dashboards for the user
role appear at the next login.
6. Click Save and close the User Role Management window.
7. On the Admin tab menu, click Deploy Changes.
Editing a user role
You can edit an existing role to change the permissions that are assigned to the role.
About this task
To quickly locate the user role you want to edit on the User Role Management window, you can type a
role name in the Type to filter text box.
Procedure
Deleting a user role
If a user role is no longer required, you can delete the user role.
If user accounts are assigned to the user role you want to delete, you must reassign the user accounts to
another user role. The system automatically detects this condition and prompts you to update the user
accounts.
You can quickly locate the user role that you want to delete on the User Role Management window. Type
a role name in the Type to filter text box, which is located above the left pane.
Procedure
1. On the Admin tab, click User Roles.
2. In the left pane of the User Role Management window, select the role that you want to delete.
3. On the toolbar, click Delete.
4. Click OK.
o If user accounts are assigned to this user role, the Users are Assigned to this User
Role window opens. Go to Step 7.
o If no user accounts are assigned to this role, the user role is successfully deleted. Go to
Step 8.
User authentication
When authentication is configured and a user enters an invalid username and password combination, a
message is displayed to indicate that the login was invalid.
If the user attempts to access the system multiple times with invalid information, the user must wait the
configured amount of time before they can attempt to access the system again. You can configure console
settings to determine the maximum number of failed logins and other related settings.
System authentication - Users are authenticated locally. System authentication is the default
authentication type.
RADIUS authentication - Users are authenticated by a Remote Authentication Dial-in User
Service (RADIUS) server. When a user attempts to log in, QRadar encrypts the password only,
and forwards the username and password to the RADIUS server for authentication.
TACACS authentication - Users are authenticated by a Terminal Access Controller Access
Control System (TACACS) server. When a user attempts to log in, QRadar encrypts the username
and password, and forwards this information to the TACACS server for authentication. TACACS
Authentication uses Cisco Secure ACS Express as a TACACS server. QRadar supports up to
Cisco Secure ACS Express 4.3.
Removed in 7.4.2 Microsoft Active Directory - Users are authenticated by a Lightweight
Directory Access Protocol (LDAP) server that uses Kerberos.
LDAP - Users are authenticated by an LDAP server.
SAML single sign-on authentication - Users can easily integrate QRadar with your corporate
identity server to provide single sign-on, and eliminate the need to maintain QRadar local users.
Users who are authenticated to your identity server can automatically authenticate to QRadar.
They don't need to remember separate passwords or type in credentials every time they
access QRadar.
Prerequisite checklist for external authentication providers
Before you can configure an external authentication type, you must complete the following tasks:
Ensure that all users have appropriate user accounts and roles to allow authentication with the
vendor servers.
Ensure that your external provider is trustworthy because you are delegating an important security
decision to this external provider. A compromised provider might allow access to
your QRadar system to unintended parties.
Ensure that the connection to the external provider is secure. Choose only secure communications
protocols, by using LDAPS instead of LDAP, for example.
Consider whether you want to enable a local authentication fallback in case the external provider
is unavailable. If the external provider is compromised, it might be used as a denial of access
attack.
The decision to configure an external authentication provider applies to all administrator and non-
administrator QRadar users.
If you enable auto-provisioning of QRadar accounts, a compromised provider might be used to
force the creation of a rogue QRadar account, so use caution when you are combining these
features.
QRadar users that do not have an entry in the external provider are relying on the fallback feature
to check the local password. A compromised external authentication provider might be used to
create a “shadow” for an existing QRadar account, providing an alternative password for
authentication.
Local authentication fallback
Each non-administrator user can be configured for local authentication fallback. Local authentication
fallback is turned off by default. If enabled, a non-administrator QRadar user can access the system by
using the locally stored password even if the external provider is unavailable, or if the password in the
external provider is locked out or is unknown to the user. This also means that a
rogue QRadar administrator might change the locally stored password and log in as that user, so ensure
that your QRadar administrators are trustworthy. This is also the case if an external authentication
provider is not configured.
The default administrator account, named admin, is always configured for local authentication fallback by
default. This prevents the administrative user from being locked out of the system, but also means you
must ensure that the configured external authentication provider has the correct entry for the admin user,
and that the password is known only to the authorized QRadar administrator. If you cannot maintain
control of the admin entry in the external authentication provider, disable the admin account
within QRadar to prevent unauthorized users from logging in to QRadar as admin. When you enable auto-
provisioning, such as when you use LDAP group authentication, any user account that matches the LDAP
query is created or reactivated with the appropriate roles as mapped. To prevent this from happening,
disable auto-provisioning by using LDAP local.
For other privileged QRadar users (users with the admin role), you can choose on a user-by-user basis
whether to enable local authentication fallback. The ENABLE_FALLBACK_ALL_ADMINS setting
(disabled by default) can be used to force all privileged users to use local authentication fallback. If local
authentication fallback is configured, the same considerations apply as for the admin account.
When you configure an external authentication provider and create a new user, that user doesn't
automatically have a local password set for QRadar. If a user needs a local password, then you must
configure local authentication fallback for that user. Local authentication fallback allows a user to
authenticate locally if external authentication fails for any reason, including invalid passwords. Fallback
users can then access QRadar regardless of the state of the external authentication.
Even if local authentication fallback is enabled for a user account, QRadar first attempts to authenticate
the user to the external authentication module before it attempts local authentication. When external
authentication fails, QRadar automatically attempts to authenticate locally if local authentication fallback
is enabled for that user. User accounts cannot be configured only to authenticate locally when an external
authentication provider is configured. For this reason, it is important that all QRadar user accounts
correspond to an external authentication provider account of the same name associated with the same
authorized user.
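The order of checks described above can be sketched as follows. This is a simplified illustration; the function names, credential stores, and user names are assumptions, not QRadar internals:

```python
# Simplified model of the authentication order described above: external
# authentication is always tried first; local authentication is attempted only
# when external authentication fails AND fallback is enabled for that user.

def authenticate(user, password, external_check, local_check, fallback_enabled):
    """Return "external", "local", or "denied" to show which path granted access."""
    if external_check(user, password):
        return "external"
    if fallback_enabled.get(user, False) and local_check(user, password):
        return "local"
    return "denied"

# Hypothetical credential stores for the example.
external_passwords = {"alice": "ldap-pass"}
local_passwords = {"alice": "local-pass", "admin": "admin-pass"}
fallback = {"alice": True, "admin": True}   # admin is always fallback-enabled

ext = lambda u, p: external_passwords.get(u) == p
loc = lambda u, p: local_passwords.get(u) == p

print(authenticate("alice", "ldap-pass", ext, loc, fallback))   # external path
print(authenticate("alice", "local-pass", ext, loc, fallback))  # local fallback
print(authenticate("bob", "anything", ext, loc, fallback))      # denied
```

Note how a user without a fallback entry is denied outright when external authentication fails, which mirrors why every QRadar account should correspond to an external provider account of the same name.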
Ensure that the external authentication provider is trustworthy, as this configuration outsources a security
decision and a rogue authentication admin can allow unauthorized access to your QRadar system. Make
this connection in a secure way, by using the secure version of protocols (for example by using LDAPS
rather than LDAP).
Local authentication fallback is not available with SAML authentication. No users are able to authenticate
locally when you use SAML authentication.
When offboarding users, disable local authentication fallback for that user before you remove their
authentication access from the external authentication provider.
Manage Local Only Authentication setting capability: Not enabled

User
  Can create new users with the same Local Only setting as their own.
  Can create or delete users or authorized services with user roles that do not contain the Manage Local Only Authentication setting capability.
  Cannot modify a user's Local Only setting.
  Cannot modify a user or authenticated service role to assign or remove the Manage Local Only Authentication setting capability.
  Cannot assign user roles with the Manage Local Only Authentication setting capability when their original role is deleted.
  Cannot delete user roles with the Manage Local Only Authentication setting capability.

Authorized Service
  Can create users without the Local Only setting enabled.
  Can create or delete users or authorized services with user roles that do not contain the Manage Local Only Authentication setting capability.
  Cannot modify a user's Local Only setting.
  Cannot modify a user or authenticated service role to assign or remove the Manage Local Only Authentication setting capability.
If users or authorized services must not inherit the Manage Local Only Authentication capability, then a new user role must be assigned.
The default administration account is automatically set to Manage Local Only authentication.
Only an Administration manager or a System manager with the Manage Local Only authentication
role can create or delete a user role with Manage Local Only authentication.
Procedure
1. On the Admin tab, click Users.
2. Locate the user to be assigned Local Only authentication and switch the Local Only authentication
to On.
The user can use only Local Only authentication to log in.
Configuring system authentication
You can configure local authentication on your IBM QRadar system. You can specify length, complexity,
and expiry requirements for local passwords.
About this task
The local authentication password policy applies to local passwords for administrative users. The policy
also applies to non-administrative users if no external authentication is configured.
When the local authentication password policy is updated, users are prompted to change their password if
they log in with a password that does not meet the new requirements.
Procedure
1. On the Admin tab, click Authentication.
2. Click Authentication Module Settings.
3. Optional: From the Authentication Module list, select System Authentication.
System authentication is the default authentication module. If you change from another
authentication module, then you must deploy QRadar before you do the next steps.
4. Click Save Authentication Module.
5. Click Home.
6. Click Local Password Policy Configuration.
7. Select the password complexity settings for local authentication.
Learn more about password complexity settings:
Minimum Password Length: Specifies the minimum number of characters that must be in a password. Important: To provide adequate security, passwords should contain at least 8 characters.

Use Complexity Rules: Requires that passwords meet a number of complexity rules, such as containing uppercase characters, lowercase characters, special characters, or numbers.

Number of rules required: The number of complexity rules that individual passwords must meet. Must be between one and the number of enabled complexity rules. For example, if all four complexity rules are enabled and individual passwords must meet any three of them, then enter 3.

Contain an uppercase character: Requires that passwords contain at least one uppercase character.

Contain a lowercase character: Requires that passwords contain at least one lowercase character.

Contain a digit: Requires that passwords contain at least one number.

Contain a special character: Requires that passwords contain at least one space or other character that is not a letter or number (for example, "$%&'()*,-./:;<=>?@[\]_`|~).

Not contain repeating characters: Disallows more than two repeating characters. For example, abbc is allowed but abbbc is not allowed.

Password History: Prevents passwords from being reused for a number of days. The number of days is calculated by Unique password count multiplied by Days before password will expire.

Unique password count: This parameter displays when Password History is selected. The number of password changes before a previous password can be reused.

Days before password will expire: This parameter displays when Password History is selected. The number of days before a password must be changed.
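As a quick illustration (not QRadar code), the way these settings combine can be sketched in Python. The function names, defaults, and rule set below are hypothetical stand-ins that mirror the descriptions above, including the Password History window (Unique password count multiplied by Days before password will expire):

```python
import re

# Hypothetical validator that mirrors the password complexity settings above.
# QRadar enforces these server-side; this sketch only illustrates the logic.
COMPLEXITY_RULES = {
    "uppercase": lambda pw: re.search(r"[A-Z]", pw) is not None,
    "lowercase": lambda pw: re.search(r"[a-z]", pw) is not None,
    "digit": lambda pw: re.search(r"[0-9]", pw) is not None,
    # A space or any other non-alphanumeric character counts as special.
    "special": lambda pw: re.search(r"[^A-Za-z0-9]", pw) is not None,
}

def meets_policy(pw, min_length=8, rules_required=3, max_repeat=2):
    if len(pw) < min_length:
        return False
    # "Not contain repeating characters": more than two identical
    # characters in a row (e.g. "abbbc") is disallowed.
    if re.search(r"(.)\1{%d}" % max_repeat, pw):
        return False
    satisfied = sum(check(pw) for check in COMPLEXITY_RULES.values())
    return satisfied >= rules_required

def password_history_days(unique_password_count, days_before_expiry):
    # Password History: reuse is blocked for count * expiry days.
    return unique_password_count * days_before_expiry
```

For example, with a unique password count of 5 and a 30-day expiry, a previous password cannot be reused for 150 days.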
CHAP: Challenge Handshake Authentication Protocol (CHAP) establishes a Point-to-Point Protocol (PPP) connection between the user and the server.
MSCHAP: Microsoft Challenge Handshake Authentication Protocol (MSCHAP) authenticates remote Windows workstations.
PAP: Password Authentication Protocol (PAP) sends clear text between the user and the server.
d. In the Shared Secret field, type the shared secret that QRadar uses to encrypt RADIUS
passwords for transmission to the RADIUS server.
5. Click Save Authentication Module.
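For background (this is protocol detail, not part of the QRadar configuration), the CHAP option above follows RFC 1994: the client proves knowledge of the shared secret by returning the MD5 hash of the challenge identifier, the secret, and the server's random challenge, so the secret itself never crosses the wire, unlike PAP. A minimal sketch:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP response: MD5(Identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server side: issue a random challenge, then verify the peer's response
# by recomputing the hash with the shared secret it holds.
challenge = os.urandom(16)
response = chap_response(0x01, b"shared-secret", challenge)
assert response == chap_response(0x01, b"shared-secret", challenge)
```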
Configuring TACACS authentication
You can configure TACACS authentication on your IBM QRadar system.
Procedure
1. On the Admin tab, click Authentication.
2. Click Authentication Module Settings.
3. From the Authentication Module list, select TACACS Authentication.
4. Configure the parameters:
a. In the TACACS Server field, type the host name or IP address of the TACACS server.
b. In the TACACS Port field, type the port of the TACACS server.
c. From the Authentication Type list box, select the type of authentication you want to
perform.
ASCII: American Standard Code for Information Interchange (ASCII) sends the user name and password in clear text.
PAP: Password Authentication Protocol (PAP) sends clear text between the user and the server. PAP is the default authentication type.
CHAP: Challenge Handshake Authentication Protocol (CHAP) establishes a Point-to-Point Protocol (PPP) connection between the user and the server.
MSCHAP: Microsoft Challenge Handshake Authentication Protocol (MSCHAP) authenticates remote Windows workstations.
d. In the Shared Secret field, type the shared secret that QRadar uses to encrypt TACACS
passwords for transmission to the TACACS server.
5. Click Save Authentication Module.
What to do next
For TACACS user authentication, you must create a local QRadar user account that is the same as the
TACACS account on the authentication server.
Configuring Active Directory authentication
Removed in 7.4.2: You can configure Microsoft Active Directory authentication on your IBM
QRadar system.
About this task
Important: As of QRadar 7.4.2, you can no longer use Kerberos-based Active Directory (AD) authentication. For
more information, see https://www.ibm.com/support/pages/node/6253911.
Procedure
1. On the Admin tab, click Authentication.
2. Click Authentication Module Settings.
3. From the Authentication Module list, select Active Directory.
4. Click Add, and configure parameters for the Active Directory Repository.
Repository ID: The Repository ID is an identifier or alias that uniquely represents the server that is entered in the Server URL field and the domain from the Domain field. Use the Repository ID when you enter your login details. For example, you might use AD_1 to represent server_A on Domain_A in one Active Directory repository, and AD_2 to represent server_B on Domain_A in your second repository.

Server URL: The URL that is used to connect to the LDAP server. For example, type ldaps://host_name:port. Note: If you specify a secure LDAP connection, the password is secure but the user name is passed in clear text.
5. Enter the user name and password that you use to authenticate with the repository.
6. To test connectivity to the repository, click Test Connection.
Note: When you enable Active Directory, ensure that port 88 is open to allow Kerberos
connections from the QRadar Console.
7. To edit or remove a repository, select the repository, and then click Edit or Remove.
8. Click Save.
Users can log in by using the Domain\user or Repository_ID\user login formats.
A login request that uses the Repository_ID\user format is attempted on the specific server that is linked to a specific domain (for example, Server A on Domain A), which is more specific than the Domain\user format.
A login request that uses the Domain\user format is attempted on each server that is linked to the specified domain until a login succeeds, because there might be more than one server on a specific domain.
Note: For Active Directory user authentication, you must create a local QRadar user account that
is the same as the Active Directory (AD) account on the authentication server.
9. On the Admin page, click Deploy Changes.
LDAP authentication
You can configure IBM QRadar to use supported Lightweight Directory Access Protocol (LDAP)
providers for user authentication and authorization.
In geographically dispersed environments, performance can be negatively impacted if the LDAP server
and the QRadar Console are not geographically close to each other. For example, user attributes can take
a long time to populate if the QRadar Console is in North America and the LDAP server is in Europe.
You can use the LDAP authentication with an Active Directory server.
User attribute values are case-sensitive. The mapping of group names to user roles and security
profiles is also case-sensitive.
Procedure
1. On the Admin tab, click Authentication.
2. Click Authentication Module Settings.
3. From the Authentication Module list, select LDAP.
4. Click Add and complete the basic configuration parameters.
There are three configuration types and each has specific requirements for the Server
URL, SSL Connection, and TLS Authentication parameters:
Secure LDAP (LDAPS)
The Server URL parameter must use ldaps:// as the protocol, and specify an LDAP over
SSL encrypted port (typically 636). For example, ldaps://ldap1.example.com:636
If you use the Global Catalog because you have multiple domains, use port 3269. For
example, ldaps://ldap1.example.com:3269
The SSL Connection parameter must be set to “True” and the TLS
Authentication parameter must be set to “False”.
LDAP with StartTLS
The Server URL parameter must use ldap:// as the protocol, and specify an LDAP
unencrypted port that supports the StartTLS option (typically 389). For
example, ldap://ldap1.example.com:389
The SSL Connection parameter must be set to “False” and the TLS Authentication must
be set to “True”.
TLS 1.2 using StartTLS is not the same as the LDAP SSL port.
TLS Authentication does not support referrals, so referrals must be set to “ignore”, and the
LDAP server must include a complete structure to search.
Unencrypted
An unencrypted LDAP configuration is not recommended.
The Server URL parameter must use the ldap:// protocol and specify an unencrypted port
(typically 389). For example, ldap://ldap1.example.com:389
The SSL Connection parameter and the TLS Authentication parameter must both be set
to “False”.
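The three configuration types above pair the URL scheme with fixed SSL Connection and TLS Authentication values. The pairing can be sketched as a small consistency check in Python (illustrative only; the function name is an invention, not QRadar code):

```python
def classify_ldap_config(server_url: str, ssl_connection: bool, tls_authentication: bool) -> str:
    """Return the configuration type implied by the settings, or raise if inconsistent."""
    if server_url.startswith("ldaps://"):
        if ssl_connection and not tls_authentication:
            return "LDAPS"         # SSL port, typically 636 (3269 for Global Catalog)
    elif server_url.startswith("ldap://"):
        if not ssl_connection and tls_authentication:
            return "StartTLS"      # unencrypted port (typically 389), upgraded via StartTLS
        if not ssl_connection and not tls_authentication:
            return "Unencrypted"   # not recommended
    raise ValueError("Inconsistent LDAP connection settings")

print(classify_ldap_config("ldaps://ldap1.example.com:636", True, False))   # LDAPS
print(classify_ldap_config("ldap://ldap1.example.com:389", False, True))    # StartTLS
```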
Repository ID: The Repository ID is an alias for the User Base DN (distinguished name) that you use when you enter your login details to avoid having to type a long string. When you have more than one repository in your network, you can place the User Base DN before the user name or you can use the shorter Repository ID. For example, if the User Base DN is CN=Users,DC=IBM,DC=com, you can create a repository ID such as UsersIBM as an alias for it, and type the short repository ID UsersIBM instead of the complete User Base DN. (Figure 1 shows an LDAP repository that is configured with a repository ID as an alias for the User Base DN.) When you enter your user name on the login page, you can enter UsersIBM\<username> instead of typing the full User Base DN. Note: The Repository ID and User Base DN must be unique.

Search entire base: Select True to search all subdirectories of the specified Directory Name (DN). Select False to search only the immediate contents of the Base DN; the subdirectories are not searched. This search is faster than one that searches all directories.
LDAP User Field: The user field identifier that you want to search on. You can specify multiple user fields in a comma-separated list to allow users to authenticate against multiple fields. For example, if you specify uid,mailid, a user can be authenticated by providing either their user ID or their mail ID.

User Base DN: The Distinguished Name (DN) of the node where the search for a user starts. The User Base DN becomes the start location for loading users. For performance reasons, ensure that the User Base DN is as specific as possible. For example, if all of your user accounts are on the directory server in the Users folder, and your domain name is ibm.com, the User Base DN value would be cn=Users,dc=ibm,dc=com.
Referral: Select Ignore or Follow to specify how referrals are handled.

Anonymous bind: Use anonymous bind to create a session with the LDAP directory server that doesn't require that you provide authentication information.

Authenticated bind: Use authenticated bind when you want the session to require a valid user name and password combination. A successful authenticated bind authorizes the authenticated user to read the list of users and roles from the LDAP directory during the session. For increased security, ensure that the user ID that is used for the bind connection does not have permissions to do anything other than read the LDAP directory. Provide the Login DN and Password. For example, if the login name is admin and the domain is ibm.com, the Login DN would be cn=admin,dc=ibm,dc=com.
Table 2. LDAP bind connections
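The DN examples above (cn=Users,dc=ibm,dc=com for the User Base DN and cn=admin,dc=ibm,dc=com for the Login DN) follow one pattern: a cn component for the container or login name, plus one dc component per label of the dotted domain name. A throwaway helper to illustrate the composition:

```python
def build_dn(cn: str, domain: str) -> str:
    """Compose a DN from a common name and a dotted domain, e.g. ('Users', 'ibm.com')."""
    components = [f"cn={cn}"] + [f"dc={label}" for label in domain.split(".")]
    return ",".join(components)

print(build_dn("Users", "ibm.com"))   # cn=Users,dc=ibm,dc=com
print(build_dn("admin", "ibm.com"))   # cn=admin,dc=ibm,dc=com
```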
Local: The user name and password combination is verified for each user that logs in, but no authorization information is exchanged between the LDAP server and the QRadar server. If you choose Local authorization, you must create each user on the QRadar Console.

User attributes: Choose User Attributes when you want to specify which user role and security profile attributes can be used to determine authorization levels. You must specify both a user role attribute and a security profile attribute. The attributes that you can use are retrieved from the LDAP server, based on your connection settings. User attribute values are case-sensitive.

Group based: Choose Group Based when you want users to inherit role-based access permissions after they authenticate with the LDAP server. The mapping of group names to user roles and security profiles is case-sensitive. Important: If you map an Active Directory group to the Admin user role, you must map the same Active Directory group to the Admin security profile, or the user will not be able to log in.
The Group Based option uses the following settings:

Group base DN: Specifies the start node in the LDAP directory for loading groups. For example, if all of your groups are on the directory server in the Groups folder, and your domain name is ibm.com, the Group Base DN value might be cn=Groups,dc=ibm,dc=com.

Query limit enabled: Sets a limit on the number of groups that are returned.

Query result limit: The maximum number of groups that are returned by the query. By default, the query results are limited to show only the first 1000 query results.

By member: Select By Member to search for groups based on the group members. In the Group Member Field box, specify the LDAP attribute that is used to define the user's group membership. For example, if the group uses the memberUid attribute to determine group membership, type memberUid in the Group Member Field box.

By query: Select By Query to search for groups by running a query. You provide the query information in the Group Member Field and Group Query Field text boxes. For example, to search for all groups that have at least one memberUid attribute and that have a cn value that starts with the letter “s”, type memberUid in Group Member Field and type cn=s* in Group Query Field.
8. If you specified Group Based authorization, click Load Groups and click the plus (+) or
minus (-) icon to add or remove privilege groups.
The user role privilege options control which QRadar components the user has access to.
The security profile privilege options control the QRadar data that each user has access to.
Note: Query limits can be set by selecting the Query Limit Enabled checkbox or the
limits can be set on the LDAP server. If query limits are set on the LDAP server, you
might receive a message that indicates that the query limit is enabled even if you did not
select the Query Limit Enabled checkbox.
9. Click Save.
10. Click Manage synchronization to exchange authentication and authorization information
between the LDAP server and the QRadar console.
a. If you are configuring the LDAP connection for the first time, click Run
Synchronization Now to synchronize the data.
b. Specify the frequency for automatic synchronization.
c. Click Close.
11. Repeat the steps to add more LDAP servers, and click Save Authentication Module when
complete.
Configuring an unencrypted connection to the LDAP server
Configure a Lightweight Directory Access Protocol (LDAP) repository in IBM QRadar.
Configuring an SSL certificate
Configure a Secure Sockets Layer (SSL) certificate to build a chain of trust.
Configuring LDAPS authentication
Configure a Server LDAP (LDAPS) authentication repository for your IBM QRadar system.
Synchronizing data with an LDAP server
You can manually synchronize data between the IBM QRadar server and the LDAP authentication
server.
Configuring SSL or TLS certificates
If you use an LDAP directory server for user authentication and you want to enable SSL
encryption or TLS authentication, you must configure your SSL or TLS certificate. QRadar LDAP
authentication uses TLS 1.2.
Displaying hover text for LDAP information
You create an LDAP properties configuration file to display LDAP user information as hover text.
This configuration file queries the LDAP database for LDAP user information that is associated
with events, offenses, or assets (if available).
Multiple LDAP repositories
You can configure IBM QRadar to map entries from multiple LDAP repositories into a single
virtual repository.
Example: Least privileged access configuration and set up
Grant users only the minimum amount of access that they require to do their day-to-day tasks.
SAML single sign-on authentication
Security Assertion Markup Language (SAML) is a framework for authentication and authorization
between a service provider (SP) and an identity provider (IDP) where authentication is exchanged using
digitally signed XML documents. The service provider agrees to trust the identity provider to authenticate
users. In return, the identity provider generates an authentication assertion, which indicates that a user has
been authenticated.
By using the SAML authentication feature, you can easily integrate QRadar® with your corporate
identity server to provide single sign-on, and eliminate the need to maintain QRadar local users. Users
who are authenticated to your identity server can automatically authenticate to QRadar. They don't need
to remember separate passwords or type in credentials every time they access QRadar.
QRadar is fully compatible with SAML 2.0 web SSO profile as a service provider. It supports both SP
and IDP initiated single sign-on and single logout.
Configuring SAML authentication
You can configure IBM® QRadar to use the Security Assertion Markup Language (SAML) 2.0 single
sign-on framework for user authentication and authorization.
Importing a new certificate for signing and decrypting
The QRadar SAML 2.0 feature has options to use an x509 certificate other than the
provided QRadar_SAML certificate for signing and encryption.
Setting up SAML with Microsoft Active Directory Federation Services
After you configure SAML in QRadar, you can configure your Identity Provider by using the XML
metadata file that you created during that process. This example includes instructions for configuring
Microsoft Active Directory Federation Services (AD FS) to communicate with QRadar using the SAML
2.0 single sign-on framework.
Troubleshooting SAML authentication
Use the following information to troubleshoot errors and issues when using SAML 2.0 with QRadar.
Chart types
When you create a report, you must choose a chart type for each chart you include in your report.
The chart type determines how the data and network objects appear in your report.
You can use any of the following types of charts:
None: Use this option if you need white space in your report. If you select the None option for any container, no further configuration is required for that container.

Asset Vulnerabilities: Use this chart to view vulnerability data for each defined asset in your deployment. You can generate Asset Vulnerability charts when vulnerabilities have been detected by a VA scan. This chart is available after you install IBM® QRadar® Vulnerability Manager.

Connections: This chart option is only displayed if you purchased and licensed IBM QRadar Risk Manager. For more information, see the IBM QRadar Risk Manager User Guide.

Device Rules: This chart option is only displayed if you purchased and licensed IBM QRadar Risk Manager. For more information, see the IBM QRadar Risk Manager User Guide.

Device Unused Objects: This chart option is only displayed if you purchased and licensed IBM QRadar Risk Manager. For more information, see the IBM QRadar Risk Manager User Guide.

Events/Logs: Use this chart to view event information. You can base a chart on data from saved searches on the Log Activity tab. You can configure the chart to plot data over a configurable period of time to detect event trends. For more information about saved searches, see Event and flow searches.

Log Sources: Use this chart to export or report on log sources. Select the log sources and log source groups that you want to appear in the report. Sort log sources by report columns. Include log sources that are not reported for a defined time period. Include log sources that were created in a specified time period.

Flows: Use this chart to view flow information. You can base a chart on data from saved searches on the Network Activity tab. You can configure the chart to plot flow data over a configurable period of time to detect flow trends. For more information about saved searches, see Event and flow searches.

Top Destination IPs: Use this chart to display the top destination IPs in the network locations you select.

Top Offenses: Use this chart to display the top offenses that occur at the present time for the network locations you select.

Offenses Over Time: Use this chart to display all offenses that have a start time within a defined time span for the network locations you select.

Top Source IPs: Use this chart to display and sort the top offense sources (IP addresses) that attack your network or business assets.

Vulnerabilities: This chart option is only displayed if you purchased and licensed IBM QRadar Vulnerability Manager. For more information, see the IBM QRadar Vulnerability Manager User Guide.
The following table identifies and describes the Reports toolbar options.
Manage Groups: Click Manage Groups to manage report groups. Using the Manage Groups feature, you can organize your reports into functional groups. You can share report groups with other users.
report before a full week has elapsed since you created the report, you can generate
the report using this option.
Delete Report: Select this option to delete the selected report. To delete multiple reports, hold the Control key and click the reports that you want to delete.

Delete Generated Content: Select this option to delete all generated content for the selected rows. To delete multiple generated reports, hold the Control key and click the generated reports that you want to delete.
Hide Inactive Reports: Select this check box to hide inactive report templates. The Reports tab automatically refreshes and displays only active reports. Clear the check box to show the hidden inactive reports.
Search Reports: Type your search criteria in the Search Reports field and click the Search Reports icon. A search is run on the following parameters to determine which match your specified criteria:
You must have appropriate network permissions to share a generated report with other users.
For more information about permissions, see the IBM® QRadar® Administration Guide.
About this task
The Report wizard provides a step-by-step guide on how to design, schedule, and generate reports.
The wizard uses the following key elements to help you create a report:
After you create a report that generates weekly or monthly, the scheduled time must elapse before the generated report returns results; you must wait the scheduled time period for the results to build. For example, a weekly search requires seven days to build its data and returns results only after those seven days elapse.
When you specify the output format for the report, consider that the file size of generated reports can be 1 MB to 2 MB, depending on the selected output format. PDF format is smaller and does not use a large quantity of disk storage space.
Procedure
1. Click the Reports tab.
2. From the Actions list box, select Create.
3. On the Welcome to the Report wizard! window, click Next.
4. Select one of the following options:
Manually: By default, the report generates one time. You can generate the report as often as you want.

Hourly: Schedules the report to generate at the end of each hour, using the data from the previous hour. From the list boxes, select a time frame to begin and end the reporting cycle. A report is generated for each hour within this time frame. Time is available in half-hour increments. The default is 1:00 a.m. for both the From and To fields.

Daily: Schedules the report to generate at the end of each day, using the data from the previous day. From the list boxes, select the time and the days of the week that you want the report to run.

Weekly: Schedules the report to generate weekly, using the data from the previous calendar week, from Monday to Sunday. Select the day that you want to generate the report; the default is Monday. From the list box, select a time to begin the reporting cycle. Time is available in half-hour increments. The default is 1:00 a.m.

Monthly: Schedules the report to generate monthly, using the data from the previous calendar month. From the list box, select the date that you want to generate the report; the default is the first day of the month. Select a time to begin the reporting cycle. Time is available in half-hour increments. The default is 1:00 a.m.
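For example, the Weekly option's data window is the previous calendar week, Monday to Sunday. That window can be computed as follows (an illustrative sketch, not QRadar code):

```python
from datetime import date, timedelta

def previous_calendar_week(run_date: date) -> tuple:
    """Return (Monday, Sunday) of the calendar week before run_date."""
    # date.weekday() is 0 for Monday, so this steps back to last week's Monday.
    monday = run_date - timedelta(days=run_date.weekday() + 7)
    return monday, monday + timedelta(days=6)

# A report generated on Wednesday 2024-01-10 covers Mon 2024-01-01 to Sun 2024-01-07.
start, end = previous_calendar_week(date(2024, 1, 10))
print(start, end)
```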
5. In the Allow this report to generate manually pane, select Yes or No.
6. Configure the layout of your report:
a. From the Orientation list box, select Portrait or Landscape for the page orientation.
b. Select one of the six layout options that are displayed on the Report wizard.
c. Click Next.
7. Specify values for the following parameters:
Report Title: The title can be up to 60 characters in length. Do not use special characters.

Pagination Options: From the list box, select a location for page numbers to display on the report. You can choose not to have page numbers display.

Report Classification: Type a classification for this report. You can type up to 75 characters in length. You can use leading spaces, special characters, and double-byte characters. The report classification displays in the header and footer of the report. You might want to classify your report as confidential, highly confidential, sensitive, or internal.

Report Console: Select this check box to send the generated report to the Reports tab. Report Console is the default distribution channel.

Select the users that should be able to view the generated report: This option displays after you select the Report Console check box. From the list of users, select the users that you want to grant permission to view the generated reports.

Select all users: This option is only displayed after you select the Report Console check box. Select this check box if you want to grant permission to all users to view the generated reports.

Include link to Report Console: This option is only displayed after you select the Email check box. Select this check box to include a link to the Report Console in the email.
12. On the Finishing Up page, enter values for the following parameters.
Report Description: Type a description for this report. The description is displayed on the Report Summary page and in the generated report distribution email.

Please select any groups you would like this report to be a member of: Select the groups to which you want to assign this report. For more information about groups, see Report groups.

Would you like to run the report now?: Select this check box if you want to generate the report when the wizard is complete. By default, the check box is selected.
Procedure
1. Click the Reports tab.
2. Select the reports for which you want to delete the generated content.
3. From the Actions list box, click Delete Generated Content.
Procedure
1. Click the Reports tab.
2. Select the report that you want to generate.
3. Click Run Report.
What to do next
After the report generates, you can view the generated report from the Generated Reports column.
Assets
Offenses
Vulninstances
You can add a field without a function as a simple field, or you can add a field with a function as a
complex field to build columns. You can also add conditions to filter your data.
Procedure
1. Click the Admin tab.
2. In the Dynamic Search section, click Dynamic Search.
3. Select a Data Source.
4. Complete the Available Columns and Available Filters sections.
5. To add a name, description, range of the search, retention period, or search type to your query,
enable one or more Extra Search Properties.
6. To copy your JSON script, click Generate JSON.
Your results appear in the JSON generated by your query section. Click Copy to Clipboard to
copy your JSON script.
7. To reset your selections, click Reset.
8. Click Run Query.
Results
The results of your query are listed in plain text or link format. For example, if you chose to query
the ASSET_ID field, you can click the results to view the Asset Summary window for each asset ID.
Procedure
1. Click the Log Activity or Network Activity tab.
2. From the Search list, select New Search or Edit Search.
3. Select a previously saved search.
4. Click Show AQL.
5. From the AQL window, click Copy to Clipboard.
6. In the Search Mode section, click Advanced Search.
7. Paste the AQL string text into the Advanced Search text box.
8. Modify the string to include the data you want to find.
9. Click Search to display the results.
What to do next
Save the search criteria so that the search appears in your list of saved searches and can be reused.
Different user communities can have different threat and usage indicators.
Use reference data to report on several user properties, for example, department, location, or manager.
You can use external reference data.
The following query returns metadata information about the user from their login events.
SELECT
REFERENCETABLE('user_data','FullName',username) as 'Full Name',
REFERENCETABLE('user_data','Location',username) as 'Location',
REFERENCETABLE('user_data','Manager',username) as 'Manager',
DISTINCTCOUNT(username) as 'Userid Count',
DISTINCTCOUNT(sourceip) as 'Source IP Count',
COUNT(*) as 'Event Count'
FROM events
WHERE qidname(qid) ILIKE '%logon%'
GROUP BY 'Full Name', 'Location', 'Manager'
LAST 1 days
Insight across multiple account identifiers
In this example, individual users have multiple accounts across the network. The organization requires a
single view of a user's activity.
The following query returns the user accounts that are used by a global ID on events that are flagged as
suspicious.
SELECT
REFERENCEMAP('GlobalID Mapping',username) as 'Global ID',
REFERENCETABLE('user_data','FullName', 'Global ID') as 'Full Name',
DISTINCTCOUNT(username),
COUNT(*) as 'Event count'
FROM events
WHERE RULENAME(creEventlist) ILIKE '%suspicious%'
GROUP BY 'Global ID'
LAST 1 days
The following query shows the activities that are completed by a global ID.
SELECT
QIDNAME(qid) as 'Event name',
starttime as 'Time',
sourceip as 'Source IP', destinationip as 'Destination IP',
username as 'Event Username',
REFERENCEMAP('GlobalID_Mapping', username) as 'Global User'
FROM events
WHERE 'Global User' = 'John Doe'
LAST 1 days
Identify suspicious long-term beaconing
Many threats use command and control to communicate periodically over days, weeks, and months.
Advanced searches can identify connection patterns over time. For example, you can query for consistent,
short, low-volume connections per day, week, or month between IP addresses, or between an IP address
and a geographical location.
The following query detects daily beaconing between a source IP and a destination IP. The beaconing
times are not at the same time each day. The time lapse between beacons is short.
SELECT
sourceip,
LONG(DATEFORMAT(starttime,'hh')) as hourofday,
(AVG(hourofday*hourofday) - (AVG(hourofday)^2)) as variance,
COUNT(*) as 'total flows'
FROM flows
GROUP BY sourceip, destinationip
HAVING variance < 0.1 and "total flows" < 10
LAST 7 days
The following query detects daily beaconing to a domain by using proxy log events. The beaconing times
are not at the same time each day. The time lapse between beacons is short.
SELECT sourceip,
LONG(DATEFORMAT(starttime,'hh')) as hourofday,
(AVG(hourofday*hourofday) - (AVG(hourofday)^2)) as variance,
COUNT(*) as 'total events'
FROM events
WHERE LOGSOURCEGROUPNAME(devicegrouplist) ILIKE '%proxy%'
GROUP BY url_domain
HAVING variance < 0.1 and "total events" < 10
LAST 7 days
The url_domain property is a custom property from proxy logs.
External threat intelligence
Usage and security data that is correlated with external threat intelligence data can provide important
threat indicators.
Advanced searches can cross-reference external threat intelligence indicators with other security events
and usage data.
This query shows how you can profile external threat data over many days, weeks, or months to identify
and prioritize the risk level of assets and accounts.
SELECT
REFERENCETABLE('ip_threat_data','Category',destinationip) as 'Category',
REFERENCETABLE('ip_threat_data','Rating', destinationip) as 'Threat Rating',
DISTINCTCOUNT(sourceip) as 'Source IP Count',
DISTINCTCOUNT(destinationip) as 'Destination IP Count'
FROM events
GROUP BY 'Category', 'Threat Rating'
LAST 1 days
Asset intelligence and configuration
Threat and usage indicators vary by asset type, operating system, vulnerability posture, server type,
classification, and other parameters.
In this query, advanced searches and the asset model provide operational insight into a location.
The Assetproperty function retrieves property values from assets, which enables you to include asset
data in the results.
SELECT
ASSETPROPERTY('Location',sourceip) as location,
COUNT(*) as 'event count'
FROM events
GROUP BY location
LAST 1 days
The following query shows how you can use advanced searches and user identity tracking in the asset
model.
The AssetUser function retrieves the user name from the asset database.
SELECT
APPLICATIONNAME(applicationid) as App,
ASSETUSER(sourceip, now()) as srcAssetUser,
COUNT(*) as 'Total Flows'
FROM flows
WHERE srcAssetUser IS NOT NULL
GROUP BY App, srcAssetUser
ORDER BY "Total Flows" DESC
LAST 3 HOURS
Network LOOKUP function
You can use the Network LOOKUP function to retrieve the network name that is associated with an IP
address.
SELECT NETWORKNAME(sourceip) as srcnet,
NETWORKNAME(destinationip) as dstnet
FROM events
Rule LOOKUP function
You can use the Rule LOOKUP function to retrieve the name of a rule by its ID.
SELECT RULENAME(123) FROM events
The following query returns events that triggered a specific rule name.
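A sketch of such a query, reusing the RULENAME(creEventlist) pattern shown earlier in this section; the '%scanner%' rule-name fragment is illustrative:

```sql
SELECT QIDNAME(qid) AS 'Event Name', sourceip, username
FROM events
WHERE RULENAME(creEventlist) ILIKE '%scanner%'
LAST 1 DAYS
```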
Offense chaining
IBM® QRadar® chains offenses together to reduce the number of offenses that you need to review,
which shortens the time to investigate and remediate the threat.
Offense chaining helps you find the root cause of a problem by connecting multiple symptoms together
and showing them in a single offense. By understanding how an offense changed over time, you can see
things that might be overlooked during your analysis. Some events that would not be worth investigating
on their own might suddenly be of interest when they are correlated with other events to show a pattern.
Offense chaining is based on the offense index field that is specified on the rule. For example, if your rule
is configured to use the source IP address as the offense index field, there is only one offense that has that
source IP address while the offense is active.
You can identify a chained offense by looking for preceded by in the Description field on the Offense
Summary page. In the following example, QRadar combined all of the events that fired for each of the
three rules into one offense, and appended the rule names to the Description field:
Exploit Followed By Suspicious Host Activity - Chained
preceded by Local UDP Scanner Detected
preceded by XForce Communication to a known Bot Command and Control
Offense indexing
Offense indexing provides the capability to group events or flows from different rules indexed on the
same property together in a single offense.
IBM® QRadar® uses the offense index parameter to determine which offenses to chain together. For
example, an offense that has only one source IP address and multiple destination IP addresses indicates
that the threat has a single attacker and multiple victims. If you index this type of offense by the source IP
address, all events and flows that originate from the same IP address are added to the same offense.
You can configure rules to index an offense based on any piece of information. QRadar includes a set of
predefined, normalized fields that you can use to index your offenses. If the field that you want to index
on is not included in the normalized fields, create a custom event or a custom flow property to extract the
data from the payload and use it as the offense indexing field in your rule. The custom property that you
index on can be based on a regular expression, a calculation, or an AQL-based expression.
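As an illustrative sketch, an AQL-based custom property expression that builds a combined index value from two normalized fields might look like the following; the choice of fields and separator is an assumption, not a required format:

```sql
CONCAT(sourceip, '_', username)
```

A custom property defined with an expression like this can then be selected as the offense index field in the rule.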
Offense indexing considerations
It is important to understand how offense indexing impacts your IBM QRadar deployment.
System performance
Ensure that you optimize and enable all custom properties that are used for offense indexing. Using
properties that are not optimized can have a negative impact on performance.
When you create a rule, you cannot select non-optimized properties in the Index offense based on field.
However, if an existing rule is indexed on a custom property, and then the custom property is de-
optimized, the property is still available in the offense index list. Do not de-optimize custom properties
that are used in rules.
Rule action and response
When the indexed property value is null, an offense is not created, even when you select the Ensure the
detected event is part of an offense check box in the rule action. For example, if a rule is configured to
create an offense that is indexed by host name, but the host name in the event is empty, an offense is not
created even though all of the conditions in the rule tests are met.
When the response limiter uses a custom property and the custom property value is null, the limit is
applied to the null value. For example, if the response is Email and the limiter is set to respond no more
than 1 time per 1 hour per custom property, an email is not sent when the rule fires a second time with a
null property within 1 hour.
When you index using a custom property, the properties that you can use in the rule index and response
limiter fields depend on the type of rule that you are creating. An event rule accepts custom event
properties in the rule index and response limiter fields, while a flow rule accepts only custom flow
properties. A common rule accepts either custom event or custom flow properties in the rule index and
response limiter fields.
You cannot use custom properties to index an offense that is created by a dispatched event.
Payload contents
Offenses that are indexed by the Ariel Query Language (AQL), a regular expression (regex), or by a
calculated property include the same payload as the initial event that generated the offense.
Offenses that are indexed by a normalized event field, such as Source IP or Destination IP, include the
event name and description as the custom rules engine (CRE) payload.
For example, to group offenses that share an MD5 signature:
1. Create a custom property to extract the MD5 signature from the logs. Ensure that the custom
property is optimized and enabled.
2. Create a rule and configure the rule to create an offense that uses the MD5 signature custom
property as the offense index field. When the rule fires, an offense is created. All fired rules that
have the same MD5 signature are grouped into one offense.
3. You can search by offense type to find the offenses that are indexed by the MD5 signature custom
property.
You must have administrative permissions to share a report group with other users.
You cannot use the Content Management Tool (CMT) to share report groups.
To view a generated report, the user must have permission to view the report.
Procedure
1. Click the Reports tab.
2. On the Reports window, click Manage Groups.
3. On the Report Groups window, select the report group that you want to share and click Share.
4. On the Sharing Options window, select one of the following options.
Share with users matching the following criteria...
The report group is shared with specific users.
User Roles: Select from the list of user roles and press the add icon (+).
Security Profiles: Select from the list of security profiles and press the add icon (+).
5. Click Save.
Results
On the Report Groups window, shared users see the report group in the report list. Generated reports display
content based on the user's security profile settings.
Question
How do I create and share a custom Dashboard Item that can be shared with other users?
Answer
To create a Dashboard Item that can be shared with other users, there are three main steps that
need to be taken:
1. Create and share the Search Criteria, that the Dashboard Item will use.
2. Have Users modify the shared Search Criteria for use on their Dashboards.
3. Have Users add the shared Search Criteria as a Dashboard Item.
Create and share the Search Criteria, that the Dashboard Item will use
The following steps can be used to create a search that can be used as a Dashboard Item for all
users:
Note: The user account initiating this process must be in the Admin User Role. Only users in
the Admin User Role have the ability to share saved Search Criteria.
Search Name: Type the unique name that you want to assign to this search criteria. You will provide this
Search Name to other users that might want to use the search for their Dashboard Item.
Assign Search to Group(s): Select the check box for the group to which you want to assign this saved
search. If you do not select a group, this saved search is assigned to the Other group by default.
Manage Groups: If you want to manage search groups, click Manage Groups.
Include in my Dashboard: This check box must be checked to include the data from your saved search on
the Dashboard tab. Note: This parameter is displayed only if the search is grouped.
Set as Default: If you want to set this search as your default search, select this check box.
Share with Everyone: This check box must be checked to share the search criteria with all users.
Have Users modify the shared Search Criteria for use on their Dashboards
Each user that wants to use the previously shared Search Criteria as a Dashboard Item needs to do the
following:
1. Click the Dashboard tab.
2. In the Show Dashboard list, select the Dashboard you want to add the new Dashboard
Item to.
3. Choose one of the following options:
o To add a Log Activity search Dashboard Item, navigate to Add Item > Log
Activity > Event Searches.
o To add a Network Activity search Dashboard Item, navigate to Add Item >
Network Activity > Flow Searches.
4. From the list, select the Search Name of the shared search that was provided by the User in the
Admin Role. The new Dashboard Item is added to the Dashboard that you are currently viewing.
Domain segmentation
Segmenting your network into different domains helps to ensure that relevant information is available only to
those users that need it.
You can create security profiles to limit the information that is available to a group of users within a
domain. Security profiles provide authorized users access to only the information that is required to
complete their daily tasks. You modify only the security profile of the affected users, not each user
individually.
You can also use domains to manage overlapping IP address ranges. This method is helpful when you are
using a shared IBM® QRadar® infrastructure to collect data from multiple networks. By creating
domains that represent a particular address space on the network, multiple devices that are in separate
domains can have the same IP address and still be treated as separate devices.
Figure 1. Domain segmentation
Overlapping IP addresses
An overlapping IP address is an IP address that is assigned to more than one device or logical unit, such
as an event source type, on a network. Overlapping IP address ranges can cause significant problems for
companies that merge networks after corporate acquisitions, or for Managed Security Service Providers
(MSSPs) who are bringing on new clients.
IBM® QRadar® must be able to differentiate events and flows that come from different devices and that
have the same IP address. If the same IP address is assigned to more than one event source, you can
create domains to distinguish them.
For example, let's look at a situation where Company A acquires Company B and wants to use a shared
instance of QRadar to monitor the new company's assets. The two companies have similar network
structures, which results in the same IP address being used for different log sources in each company. Log
sources that have the same IP address cause problems with correlation, reporting, searching, and asset profiling.
To distinguish the origin of the events and flows that come in to QRadar from the log source, you can
create two domains and assign each log source to a different domain. If required, you can also assign each
event collector and flow collector to the same domain as the log source that sends events to them.
To view the incoming events by domain, create a search and include the domain information in the search
results.
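For example, the AQL DOMAINNAME function returns the domain name for a domain ID. A minimal sketch of a search that groups incoming events by domain:

```sql
SELECT DOMAINNAME(domainid) AS 'Domain', COUNT(*) AS 'Event Count'
FROM events
GROUP BY domainid
LAST 1 HOURS
```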
The following diagram shows the precedence order for evaluating domain criteria for events.
Custom properties
To determine which domain specific log messages belong to, the value of the custom property
is looked up against a mapping that is defined in the Domain Management editor.
This option is used for multi-address-range or multi-tenant log sources, such as file servers and
document repositories.
Log sources
You can assign a log source to a specific domain. This method of tagging domains is an option for
deployments in which an Event Collector can receive events from multiple domains.
Log source groups
You can assign log source groups to a specific domain. This option allows broader control over the log
source configuration.
Any new log sources that are added to the log source group automatically get the domain tagging
that is associated with the log source group.
Event collectors
If an event collector is dedicated to a specific network segment, IP address range, tenant, geographic
location, or business unit, you can flag that entire event collector as part of that domain.
All events that arrive at that event collector belong to the domain that the event collector is
assigned to, unless the log source for the event belongs to another domain based on other tagging
methods higher in precedence, such as a custom property.
Important:
If a log source is redirected from one event collector to another in a different domain, you must
add a domain mapping to the log source to ensure that events from that log source are still
assigned to the right domain.
Unless the log source is mapped to the right domain, nonadmin users with domain restrictions
might not see offenses that are associated with the log source.
The following diagram shows the precedence order for evaluating domain criteria for flows.
Flow collectors
All flow sources that arrive at that flow collector belong to the domain; therefore, any new auto-detected
flow sources are automatically added to the domain.
Flow sources
You can designate specific flow sources to a domain.
This option is useful when a single QFlow collector is collecting flows from multiple network
segments or routers that contain overlapping IP address ranges.
Flow VLAN ID
You can designate specific VLANs to a domain.
This option is useful when you collect traffic from multiple network segments, often with
overlapping IP ranges. This VLAN definition is based on the Enterprise and Customer VLAN
IDs.
The following information elements are sent from QFlow when flows that contain VLAN information are
analyzed. These two fields can be assigned in a domain definition:
If the domain definition is based on an event, the incoming event is first checked for any custom
properties that are mapped to the domain definition. If the result of a regular expression that is defined in
a custom property does not match a domain mapping, the event is automatically assigned to the default
domain.
If the event does not match the domain definition for custom properties, the following order of
precedence is applied:
1. DLC
2. Log source
3. Log source group
4. Event Collector
If the domain is defined based on a flow, the following order of precedence is applied:
1. Flow source
2. Flow Collector
If a scanner has an associated domain, all assets that are discovered by the scanner are automatically
assigned to the same domain as the scanner.
Everything that is not assigned to a user-defined domain is automatically assigned to the default
domain. Users who have limited domain access should not have administrative privileges because
this privilege grants unlimited access to all domains.
You can map the same custom property to two different domains; however, the capture result must
be different for each one.
You cannot assign a log source, log source group, or event collector to two different domains.
When a log source group is assigned to a domain, each of the mapped attributes is visible in
the Domain Management window.
Security profiles must be updated with an associated domain. Domain-level restrictions are not applied
until the security profiles are updated, and the changes deployed.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Domain Management.
3. To add a domain, click Add and type a unique name and description for the domain.
Tip: You can check for unique names by typing the name in the Input domain name search box.
4. Depending on the domain criteria to be defined, click the appropriate tab.
o To define the domain based on a custom property, log source group, log source, or event
collector, click the Events tab.
o To define the domain based on a flow source, flow collector, or data gateway, click
the Flows tab.
o To define the domain based on a scanner, including IBM QRadar Vulnerability
Manager scanners, click the Scanners tab.
5. To assign a custom property to a domain, in the Capture Result box, type the text that matches
the result of the regular expression (regex) filter.
Important: You must select the Optimize parsing for rules, reports, and searches check box in
the Custom Event Properties window to parse and store the custom event property. Domain
segmentation will not occur if this option is not checked.
6. From the list, select the domain criteria and click Add.
7. After you add the source items to the domain, click Create.
What to do next
Create security profiles to define which users have access to the domains. After you create the first domain in
your environment, you must update the security profiles for all non-administrative users to specify the domain
assignment. In domain-aware environments, non-administrative users whose security profile does not specify a
domain assignment will not see any log activity or network activity.
Review the hierarchy configuration for your network, and assign existing IP addresses to the proper
domains.
Creating domains for VLAN flows
Use the Domain Management window to create domains based on IBM QRadar VLAN flow sources.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Domain Management.
3. Click Add and type a unique name and description for the domain.
Tip: You can check for unique names by typing the name in the Input domain name search box.
Figure 1. Input domain name
4. Click the Flows tab, and then select Flow VLAN IDs.
5. Select the enterprise VLAN ID and Customer VLAN ID values that match the values on the
incoming flows, and then click Add.
Notes:
Users can see only data within the domain boundaries that are set up for the security profiles that are
assigned to them. Security profiles include domains as one of the first criteria that is evaluated to restrict
access to the system. When a domain is assigned to a security profile, it takes priority over other security
permissions. After domain restrictions are evaluated, individual security profiles are assessed to
determine network and log permissions for that particular profile.
For example, a user is given privileges to Domain_2 and access to network 10.0.0.0/8. That user can see
only events, offenses, assets, and flows that come from Domain_2 and contain an address from the
10.0.0.0/8 network.
As a QRadar administrator, you can see all domains and you can assign domains to non-administrative
users. Do not assign administrative privileges to users whom you want to limit to a particular domain.
Security profiles must be updated with an associated domain. Domain-level restrictions are not applied
until the security profiles are updated, and the changes are deployed.
When you assign domains to a security profile, you can grant access to the following types of domains:
User-defined domains
You can create domains that are based on input sources by using the Domain Management tool. For
more information, see Creating domains.
Default domain
Everything that is not assigned to a user-defined domain is automatically assigned to the default domain.
The default domain contains system-wide events.
Note: Users who have access to the default domain can see system-wide events without restriction. Ensure
that this access is acceptable before you assign default domain access to users. All administrators have
access to the default domain.
Any log source that is auto-discovered on a shared event collector (one that is not explicitly
assigned to a domain) is assigned to the default domain. These log sources require manual
intervention. To identify these log sources, you must periodically run a search in the default
domain that is grouped by log source.
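Such a periodic search might be sketched as follows; the default-domain name used in the filter is an assumption for your deployment:

```sql
SELECT LOGSOURCENAME(logsourceid) AS 'Log Source', COUNT(*) AS 'Event Count'
FROM events
WHERE DOMAINNAME(domainid) = 'Default Domain'
GROUP BY logsourceid
LAST 1 DAYS
```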
All domains
Users who are assigned to a security profile that has access to All Domains can see all active domains
within the system, the default domain, and any domains that were previously deleted across the entire
system. They can also see all domains that are created in the future.
Important: If you need to assign a user to a security profile that has a different domain profile, delete the user
account and recreate it.
If you delete a domain, it cannot be assigned to a security profile. If the user has the All
domains assignment, or if the domain was assigned to the user before it was deleted, the deleted domain
is returned in historical search results for events, flows, assets, and offenses. You can't filter by deleted
domains when you run a search.
Administrative users can see which domains are assigned to the security profiles on the Summary tab in
the Domain Management window.
Rule modifications in domain-aware environments
Rules can be viewed, modified, or disabled by any user who has both the Maintain Custom
Rules and View Custom Rules permissions, regardless of which domain that user belongs to.
Important: When you add the Log Activity capability to a user role, the Maintain Custom Rules and View
Custom Rules permissions are automatically granted. Users who have these permissions have access to all log data
for all domains, and they can edit rules in all domains, even if their security profile settings have domain-level
restrictions. To prevent domain users from being able to access log data and modify rules in other domains, edit the
user role and remove the Maintain Custom Rules and View Custom Rules permissions.
Domain-aware searches
You can use domains as search criteria in custom searches. Your security profile controls which domains
you can search against.
System-wide events and events that are not assigned to a user-defined domain are automatically assigned
to the default domain. Administrators, or users who have a security profile that provides access to the
default domain, can create a custom search to see all events that are not assigned to a user-defined
domain.
The default domain administrator can share a saved search with other domain users. When the domain
user runs that saved search, the results are limited to their domain.
You can restrict a rule so that it is applied only to events that are happening within a specified domain. An
event that has a domain tag that is different from the domain that is set on the rule does not trigger an
event response.
In an IBM® QRadar® system that does not have user-defined domains, a rule creates an offense and
keeps contributing to it each time the rule fires. In a domain-aware environment, a rule creates a new
offense each time the rule is triggered in the context of a different domain.
Rules that work in the context of all domains are referred to as system-wide rules. To create a system-
wide rule that tests conditions across the entire system, select Any Domain in the domain list for the And
Domain Is test. An Any Domain rule creates an Any Domain offense.
Single-domain rule
If the rule is a stateful rule, the states are maintained separately for each domain. The rule is
triggered separately for each domain. When the rule is triggered, offenses are created separately
for each domain that is involved and the offenses are tagged with those domains.
Single-domain offense
The offense is tagged with the corresponding domain name. It can contain only events that are
tagged with that domain.
System-wide rule
If the rule is a stateful rule, a single state is maintained for the whole system and domain tags
are ignored. When the rule runs, it creates or contributes to a single system-wide offense.
System-wide offense
The offense is tagged with Any Domain. It contains only events that are tagged with all
domains.
The following table provides examples of domain-aware rules. The examples use a system that has three
domains that are defined: Domain_A, Domain_B, and Domain_C.
The rule examples in the following table may not be applicable in your QRadar environment. For
example, rules that use flows and offenses are not applicable in IBM QRadar Log Manager.
When you view the offense table, you can sort the offenses by clicking the Domain column. The Default
Domain is not included in the sort function so it does not appear in alphabetical order. However, it
appears at the top or bottom of the Domain list, depending on whether the column is sorted in ascending
or descending order. Any Domain does not appear in the list of offenses.
You assign a custom property to a domain based on the capture result. You can assign the same custom
property to multiple domains, but the capture results must be different.
For example, a custom event property, such as userID, might evaluate to a single user or a list of users.
Each user can belong to only one domain.
In the following diagram, the log sources contain user identification information that is exposed as a
custom property, userID. The event collector returns two user files, and each user is assigned to only one
domain. In this case, one user is assigned to Domain: 9 and the other user is assigned to Domain: 12.
If the capture results return a user that is not assigned to a specific user-defined domain, that user is
automatically assigned to the default domain. Default domain assignments require manual intervention.
Perform periodic searches to ensure that all entities in the default domain are correctly assigned.
Important: Before you use a custom property in a domain definition, ensure that Optimize parsing for
rules, reports, and searches is checked on the Custom Event Properties window. This option ensures
that the custom event property is parsed and stored when IBM® QRadar® receives the event for the first
time. Domain segmentation doesn't occur if this option is not checked.
Guidelines for defining your network hierarchy
Building a network hierarchy in IBM® QRadar® is an essential first step in configuring your
deployment. Without a well configured network hierarchy, QRadar cannot determine flow directions,
build a reliable asset database, or benefit from useful building blocks in rules.
Consider the following guidelines when you define your network hierarchy:
For example, you might organize your network to include groups for mail servers, departmental
users, labs, or development teams. Using this organization, you can differentiate network behavior
and enforce behavior-based network management security policies. However, do not group a
server that has unique behavior with other servers on your network. Placing a unique server alone
gives the server greater visibility in QRadar, and makes it easier to create specific security
policies for the server.
Place servers with high volumes of traffic, such as mail servers, at the top of the group. This
hierarchy provides you with a visual representation when a discrepancy occurs.
Avoid having too many elements at the root level.
Large numbers of root level elements can cause the Network hierarchy page to take a long time to
load.
Large network groups can cause difficulty when you view detailed information for each object. If
your deployment processes more than 600,000 flows, consider creating multiple top-level groups.
Conserve disk space by combining multiple Classless Inter-Domain Routings (CIDRs) or subnets
into a single network group.
For example, add key servers as individual objects, and group other major but related servers into
multi-CIDR objects.
Table 1. Example of multiple CIDRs and subnets in a single network group
10.10.1.5/32
Define an all-encompassing group so that when you define new networks, the appropriate policies
and behavior monitors are applied.
In the following example, if you add an HR department network, such as 10.10.50.0/24, to the
Cleveland group, the traffic displays as Cleveland-based and any rules you apply to the Cleveland
group are applied by default.
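As an illustration only, the longest-prefix matching implied by these grouping rules can be sketched in a few lines of Python. The group names and CIDRs below are invented; QRadar performs this matching internally.

```python
# Hypothetical sketch: resolving an IP address to a network hierarchy group.
# Group names and CIDRs are illustrative, not from a real deployment.
import ipaddress

NETWORK_HIERARCHY = {
    "Cleveland.HR": ["10.10.50.0/24"],
    "Cleveland.KeyServers": ["10.10.1.5/32"],
    "Cleveland.Other": ["10.10.0.0/16"],  # all-encompassing fallback group
}

def resolve_group(ip: str):
    """Return the most specific (longest-prefix) group that contains the IP."""
    addr = ipaddress.ip_address(ip)
    best_group, best_prefix = None, -1
    for group, cidrs in NETWORK_HIERARCHY.items():
        for cidr in cidrs:
            net = ipaddress.ip_network(cidr)
            if addr in net and net.prefixlen > best_prefix:
                best_group, best_prefix = group, net.prefixlen
    return best_group

print(resolve_group("10.10.50.7"))   # most specific match wins: Cleveland.HR
print(resolve_group("10.10.99.1"))   # only the /16 matches: Cleveland.Other
```

Because the /24 HR network nests inside the all-encompassing /16, traffic to an HR address is attributed to the more specific group, while anything else in 10.10.0.0/16 falls back to the catch-all group, mirroring the Cleveland example above.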
When events or flows come into IBM® QRadar®, QRadar evaluates the domain definitions that are
configured, and the events and flows are assigned to a domain. A tenant can have more than one domain.
If no domains are configured, the events and flows are assigned to the default domain.
Domain segmentation
Domains are virtual buckets that you use to segregate data based on the source of the data. They are the building
blocks for multitenant environments. You configure domains from the following input sources:
Flow sources
Custom properties
Scanners
To consolidate the hardware configuration even further, you can use one collector for multiple customers.
If log or flow sources are aggregated by the same collector but belong to different tenants, you can assign
the sources to different domains. When you use domain definitions at the log source level, each log
source name must be unique across the entire QRadar deployment.
If you need to separate data from a single log source and assign it to different domains, you can configure
domains from custom properties. QRadar looks for the custom property in the payload, and assigns it to
the correct domain. For example, if you configured QRadar to integrate with a Check Point Provider-1
device, you can use custom properties to assign the data from that log source to different domains.
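A minimal sketch of that routing logic follows, assuming a hypothetical `origin=` custom property and invented domain names; the real regex parsing and domain assignment happen inside QRadar.

```python
# Hedged sketch of domain assignment from a custom property: a regex is applied
# to the payload and the event is routed by the capture result. The regex,
# payloads, and domain names are invented for illustration.
import re

CAPTURE_TO_DOMAIN = {
    "CMA-A": "Domain_CustomerA",   # capture result -> domain
    "CMA-B": "Domain_CustomerB",
}
DEFAULT_DOMAIN = "Default Domain"
PATTERN = re.compile(r"origin=(\S+)")  # hypothetical custom-property regex

def assign_domain(payload: str) -> str:
    match = PATTERN.search(payload)
    if match:
        return CAPTURE_TO_DOMAIN.get(match.group(1), DEFAULT_DOMAIN)
    # No capture result: the event lands in the default domain.
    return DEFAULT_DOMAIN

print(assign_domain("time=12:00 origin=CMA-A action=drop"))
print(assign_domain("no recognizable property here"))
```

Note how events whose capture result is missing or unmapped fall through to the default domain, which is exactly the behavior the MSSP administrator must then review.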
Automatic log source detection
When domains are defined at the collector level and the dedicated event collector is assigned to a single
domain, new log sources that are automatically detected are assigned to that domain. For example, all log
sources that are detected on Event_Collector_1 are assigned to Domain_A. All log sources that are
automatically collected on Event_Collector_2 are assigned to Domain_B.
When domains are defined at the log source or custom property level, log sources that are automatically
detected and are not already assigned to a domain are automatically assigned to the default domain. The
MSSP administrator must review the log sources in the default domain and allocate them to the correct
client domains. In a multitenant environment, assigning log sources to a specific domain prevents data
leakage and enforces data separation across domains.
Network hierarchy changes require a full configuration deployment to apply the updates in
the QRadar environment. Full configuration deployments restart all QRadar services, and data collection
for events and flows stops until the deployment completes. Tenant administrators must contact the
Managed Security Service Provider (MSSP) administrator to deploy the changes. MSSP administrators
can plan the deployment during a scheduled outage, and notify all tenant administrators in advance.
In a multitenant environment, the network object name must be unique across the entire deployment. You
cannot use network objects that have the same name, even if they are assigned to different domains.
Creating domains
Use the Domain Management window to create domains based on IBM® QRadar® input sources.
About this task
Use the following guidelines when you create domains:
Everything that is not assigned to a user-defined domain is automatically assigned to the default
domain. Users who have limited domain access should not have administrative privileges because
this privilege grants unlimited access to all domains.
You can map the same custom property to two different domains; however, the capture result must
be different for each one.
You cannot assign a log source, log source group, event collector, or data gateway to two different
domains. When a log source group is assigned to a domain, each of the mapped attributes is
visible in the Domain Management window.
Security profiles must be updated with an associated domain. Domain-level restrictions are not applied
until the security profiles are updated, and the changes deployed.
Procedure
1. On the navigation menu, click Admin.
2. In the System Configuration section, click Domain Management.
3. To add a domain, click Add and type a unique name and description for the domain.
Tip: You can check for unique names by typing the name in the Input domain name search box.
4. Depending on the domain criteria to be defined, click the appropriate tab.
o To define the domain based on a custom property, log source group, log source, event
collector, or data gateway, click the Events tab.
o To define the domain based on a flow source, click the Flows tab.
o To define the domain based on a scanner, including IBM QRadar Vulnerability
Manager scanners, click the Scanners tab.
Important: The IBM QRadar Vulnerability Manager scanner is end of life (EOL) in 7.5.0
Update Package 6, and is no longer supported in any version of IBM QRadar. For more
information, see QRadar Vulnerability Manager: End of service product
notification (https://www.ibm.com/support/pages/node/6853425).
5. To assign a custom property to a domain, in the Capture Result box, type the text that matches
the result of the regular expression (regex) filter.
Important: You must select the Optimize parsing for rules, reports, and searches check box in
the Custom Event Properties window to parse and store the custom event property. Domain
segmentation will not occur if this option is not checked.
6. From the list, select the domain criteria and click Add.
7. After you add the source items to the domain, click Create.
What to do next
Create security profiles to define which users have access to the domains. After you create the first domain in
your environment, you must update the security profiles for all non-administrative users to specify the domain
assignment. In domain-aware environments, non-administrative users whose security profile does not specify a
domain assignment will not see any log activity or network activity.
Review the hierarchy configuration for your network, and assign existing IP addresses to the proper
domains. For more information, see Network hierarchy.
Provisioning a new tenant
As a Managed Security Services Provider (MSSP) administrator, you are using a single instance of IBM®
QRadar® to provide multiple customers with a unified architecture for threat detection and prioritization.
In this scenario, you are onboarding a new client. You provision a new tenant and create a tenant
administrator account that does limited administrative duties within their own tenant. You limit the access
of the tenant administrator so that they can't see or edit information in other tenants.
Before you provision a new tenant, you must create the data sources, such as log sources or flow
collectors, for the customer and assign them to a domain.
Complete the following tasks by using the tools on the Admin tab to provision the new tenant in QRadar:
If your QRadar® deployment has more than 10 tenants, you can configure a shared data retention policy
and use the domain filter to create a domain-based retention policy for each of the domains within the
tenant. Adding the domains specifies that the policy applies only to the data for that tenant.
The following diagram shows the workflow for an app and the role who is typically responsible for the
work.
Apps create or add new functionality in QRadar by providing new tabs, API methods, dashboard
items, menus, toolbar buttons, configuration pages, and more within the QRadar user interface.
You download apps from the IBM Security App Exchange. Apps that are created by using the
GUI Application Framework Software Development Kit integrate with the QRadar user interface
to deliver new security intelligence capabilities or extend the current functionality.
Every downloaded file from the IBM Security App Exchange is known as an extension. An
extension can consist of an app or security product enhancement (content extension) that is
packaged as an archive (.zip) file, which you can deploy on QRadar by using the Extensions
Management tool on the Admin tab.
Who can create an app?
You can use the GUI Application Framework Software Development Kit to create apps. For more
information about the GUI Application Framework Software Development Kit, see the IBM
Security QRadar App Framework Guide.
You can download the SDK from IBM developerWorks® (https://exchange.xforce.ibmcloud.com/hub/extension/517ff786d70b6dfa39dde485af6cbc8b).
How do I share my app?
Only certified content is shared on the IBM Security App Exchange, a collaboration platform
where you can quickly address your security and platform enhancement requirements. On the
IBM Security App Exchange, you can find available apps, discover their purpose and what they
look like, and learn what other users say about the apps.
How do I get an app that I downloaded into QRadar?
A QRadar administrator downloads an extension and imports it into QRadar by using the
Extensions Management tool, which uploads the extension from a local source.
Where do I get help for an app?
You can see information about an app in the overview section when you download the app from
the IBM Security App Exchange. For apps developed solely by IBM, you can find information in
the IBM Knowledge Center.
In the Additional Information section in the IBM Security App Exchange, a link to the
documentation for the app provides you with up-to-date help.
How do I contact customer support for an app?
From the X-Force Application Exchange or from the QRadar Assistant app, select the app. On the right
side of the page, a Support section contains information about how to contact customer support for apps
that are created by IBM or by an IBM Business Partner.
How much memory does an app need?
The combined memory requirements of all the apps that are installed on a QRadar Console cannot
exceed 10 per cent of the total available memory. If you install an app that causes the 10 per cent
memory limit to be exceeded, the app does not work.
If your app requires a minimum memory allocation, you must provide information about it in your
app's documentation.
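The 10 per cent rule can be illustrated with a toy calculation; the memory figures below are examples, not measured values from any appliance.

```python
# Back-of-the-envelope check of the app memory rule: the combined memory of all
# installed apps must not exceed 10 per cent of the Console's total memory.
# All figures are illustrative examples.
def apps_fit(total_memory_mb: int, app_memory_mb: list) -> bool:
    budget = total_memory_mb * 0.10
    return sum(app_memory_mb) <= budget

# A hypothetical 128 GB Console has a ~13107 MB app budget:
print(apps_fit(131072, [4096, 6144]))   # 10240 MB used -> fits
print(apps_fit(131072, [8192, 6144]))   # 14336 MB used -> exceeds the budget
```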
Use the following table to help you find more troubleshooting information about apps:
Title: App Troubleshooting
Link: https://www.ibm.com/support/pages/qradar-app-troubleshooting
Symbols:
n - Not Applicable
- - Failure
* - Warning
+ - Success
Checks:
Service:
A - Service exists in the workload file
B - Service is set to started
Container:
C - Container is in ConMan workload file
D - Container environment file exists
E - Container image is in si-registry
G - Container Systemd Units are started
H - Container exists and is running in Docker
Port:
I - Container IP are in firewall main filter rules
J - Container IP and port is in iptables NAT filter rules
K - Container port has routes through Traefik
L - Container port is responsive on debug path
Results
If the results of the recon command show that an app is not started, you must ensure that the app is
set to RUNNING in the API.
You can use the qappmanager support utility. For more information,
see https://www.ibm.com/support/pages/qradar-about-qappmanager-support-utility.
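Assuming a recon result line is simply a string of these symbols in check order (an assumption about the output format, not documented behavior), the legend above could be decoded like this:

```python
# Hypothetical parser for the recon status symbols. The symbol legend and the
# check letters come from the text; the result-line format is an assumption.
LEGEND = {"n": "Not Applicable", "-": "Failure", "*": "Warning", "+": "Success"}
CHECKS = "ABCDEGHIJKL"  # check identifiers listed in the legend (no F)

def decode_recon(symbols: str) -> dict:
    """Map each check letter to the status of the matching symbol."""
    return {check: LEGEND[sym] for check, sym in zip(CHECKS, symbols)}

status = decode_recon("+++++++++*-")
print(status["A"])  # Service exists in the workload file: Success
print(status["L"])  # Container port responsive on debug path: Failure
```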
When you create a tenant, you can set limits for both events per second (EPS) and flows per minute
(FPM). By setting EPS and FPM limits for each tenant, you can better manage license capacities across
multiple clients. If you have a processor that is collecting events or flows for a single customer, you do
not need to assign tenant EPS and FPM limits. If you have a single processor that collects events or flows
for multiple customers, you can set EPS and FPM limits for each tenant.
If you set the EPS and FPM limits to values that exceed the limits of either your software licenses or the
appliance hardware, the system automatically throttles the events and flows for that tenant to ensure that
the limits are not exceeded. If you do not set EPS and FPM limits for tenants, each tenant receives events
and flows until either the license limits or the appliance limits are reached. The licensing limits are
applied to the managed host. If you regularly exceed the license limitations, you can get a different
license that is more suitable for your deployment.
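A toy rate limiter illustrates the idea of per-tenant EPS throttling. It is not how QRadar implements burst handling (which queues excess events rather than dropping them), and the tenant limit is invented.

```python
# Simplified sketch of per-tenant EPS throttling: count events in one-second
# windows and flag anything over the limit. Real QRadar moves excess events to
# a temporary burst queue; this toy version only reports the overage.
class TenantEpsLimiter:
    def __init__(self, eps_limit: int):
        self.eps_limit = eps_limit
        self.window = None      # current one-second window
        self.count = 0

    def allow(self, timestamp: float) -> bool:
        window = int(timestamp)
        if window != self.window:
            self.window, self.count = window, 0
        if self.count < self.eps_limit:
            self.count += 1
            return True
        return False            # over the limit: event would be throttled/queued

limiter = TenantEpsLimiter(eps_limit=2)
print([limiter.allow(t) for t in (0.1, 0.2, 0.3, 1.1)])  # third event exceeds
```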
The EPS and FPM rates that you set for each tenant are not automatically validated against your license
entitlements. To see the individual limits for the software licenses that are applied to the system as
compared to the appliance hardware limits, do these steps:
SUBTASKS:
7.4.1. Understand user roles in a multi-tenant environment
• Service provider
• Tenants
7.4.2. Partition data by use of security profiles
User roles
A user role defines the functions that a user can access in IBM® QRadar®.
During the installation, four default user roles are defined: Admin, All, WinCollect, and Disabled.
Before you add user accounts, you must create the user roles to meet the permission requirements of your
users.
Users who are assigned an administrative user role cannot edit their own account. This restriction applies
to the default Admin user role. Another administrative user must make any account changes.
Procedure
1. Click the Admin tab.
2. In the User Management section, click User Roles and then click New.
3. In the User Role Name field, type a unique name.
4. Select the permissions that you want to assign to the user role.
The permissions that are visible on the User Role Management window depend on
which QRadar components are installed.
Important: If you select a user role that has Admin privileges, you must also grant that user role
the Admin security profile. See Creating a security profile.
Admin
Grants administrative access to the user interface. You can grant specific Admin
permissions. Users with System Administrator permission can access all areas of the user
interface. Users who have this access cannot edit other administrator accounts.
- Administrator Manager: Grants users permission to create and edit other administrative
user accounts.
- Remote Networks and Services Configuration: Grants users access to the Remote Networks
and Services icon on the Admin tab.
- System Administrator: Grants users permission to access all areas of the user interface.
Users with this access are not able to edit other administrator accounts.
- Manage Local Only: Grants permission to assign and manage Local Only authentication. For
more information about Local Only authentication, see Assigning Local Only authentication.
Network Activity
Grants access to all the functions in the Network Activity tab. You can grant specific
access to the following permissions:
- Maintain Custom Rules: Grants permission to create or edit rules that are displayed on
the Network Activity tab.
- Manage Time Series: Grants permission to configure and view time series data charts.
- User Defined Flow Properties: Grants permission to create custom flow properties.
- View Custom Rules: Grants permission to view custom rules. If the user role does not
also have the Maintain Custom Rules permission, the user role cannot create or edit custom
rules.
- View Flow Content: Grants permission to view source payload and destination payload in
the flow data details.
Assets
This permission is displayed only if IBM QRadar Vulnerability Manager is installed on your
system. Grants access to the functions in the Assets tab. You can grant specific
permissions:
- Perform VA Scans: Grants permission to complete vulnerability assessment scans. For more
information about vulnerability assessment, see the Managing Vulnerability Assessment
Guide.
Risk Manager
Grants users permission to access QRadar Risk Manager functions. QRadar Risk Manager must
be activated.
IP Right Click Menu Extensions
Grants permission to options added to the right-click menu.
QRadar Log Source Management
Grants permission to the QRadar Log Source Management app.
Pulse - Dashboard
Grants permission to dashboards in the IBM QRadar Pulse app.
Pulse - Threat Globe
Grants permission to the Threat Globe dashboard in the IBM QRadar Pulse app.
QRadar Assistant
Grants permission to the IBM QRadar Assistant app.
QRadar Use Case Manager
Grants permission to the QRadar Use Case Manager app.
5. In the Dashboards section of the User Role Management page, select the dashboards that you want
the user role to access, and click Add.
Tip: A dashboard displays no information when the user role does not have permission to view
dashboard data. If a user modifies the displayed dashboards, the defined dashboards for the user
role appear at the next login.
6. Click Save and close the User Role Management window.
7. On the Admin tab menu, click Deploy Changes.
Editing a user role
You can edit an existing role to change the permissions that are assigned to the role.
Deleting a user role
If user accounts are assigned to the user role that you want to delete, you must reassign
the user accounts to another user role. The system automatically detects this condition and
prompts you to update the user accounts.
You can quickly locate the user role that you want to delete on the User Role Management window. Type
a role name in the Type to filter text box, which is located above the left pane.
Procedure
1. On the Admin tab, click User Roles.
2. In the left pane of the User Role Management window, select the role that you want to delete.
3. On the toolbar, click Delete.
4. Click OK.
o If user accounts are assigned to this user role, the Users are Assigned to this User
Role window opens. Go to Step 7.
o If no user accounts are assigned to this role, the user role is successfully deleted. Go to
Step 8.
Security profiles
Security profiles define which networks, log sources, and domains that a user can access.
QRadar® includes one default security profile for administrative users. The Admin security profile
includes access to all networks, log sources, and domains.
Before you add user accounts, you must create more security profiles to meet the specific access
requirements of your users.
Domains
Security profiles must be updated with an associated domain. You must define domains on the Domain
Management window before the Domains tab is shown on the Security Profile Management window.
Domain-level restrictions are not applied until the security profiles are updated, and the changes are
deployed.
Domain assignments take precedence over all settings on the Permission Precedence, Networks,
and Log Sources tabs.
If the domain is assigned to a tenant, the tenant name appears in brackets beside the domain name in
the Assigned Domains window.
Permission precedence
Permission precedence determines which security profile components to consider when the system
displays events in the Log Activity tab and flows in the Network Activity tab.
Choose from the following restrictions when you create a security profile:
No Restrictions - This option does not place restrictions on which events are displayed in the Log
Activity tab, and which flows are displayed in the Network Activity tab.
Network Only - This option restricts the user to view only events and flows that are associated
with the networks that are specified in this security profile.
Log Sources Only - This option restricts the user to view only events that are associated with the
log sources that are specified in this security profile.
Networks AND Log Sources - This option allows the user to view only events and flows that are
associated with the log sources and networks that are specified in this security profile.
For example, if the security profile allows access to events from a log source but the destination
network is restricted, the event is not displayed in the Log Activity tab. The event must match
both requirements.
Networks OR Log Sources - This option allows the user to view events and flows that are
associated with either the log sources or networks that are specified in this security profile.
For example, if a security profile allows access to events from a log source but the destination network is
restricted, the event is displayed on the Log Activity tab if the permission precedence is set to Networks
OR Log Sources. If the permission precedence is set to Networks AND Log Sources, the event is not
displayed on the Log Activity tab.
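The precedence options above reduce to simple boolean logic, sketched here for illustration only; the real evaluation happens inside QRadar, and the function signature is an assumption.

```python
# Sketch of the permission-precedence options. network_ok / log_source_ok stand
# in for "the event's network / log source is in the security profile".
def event_visible(precedence: str, network_ok: bool, log_source_ok: bool) -> bool:
    if precedence == "No Restrictions":
        return True
    if precedence == "Network Only":
        return network_ok
    if precedence == "Log Sources Only":
        return log_source_ok
    if precedence == "Networks AND Log Sources":
        return network_ok and log_source_ok
    if precedence == "Networks OR Log Sources":
        return network_ok or log_source_ok
    raise ValueError(precedence)

# The example from the text: log source allowed, destination network restricted.
print(event_visible("Networks OR Log Sources", False, True))   # event is shown
print(event_visible("Networks AND Log Sources", False, True))  # event is hidden
```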
Permission precedence for offense data
Security profiles automatically use the Networks OR Log Sources permission when offense data is
shown. For example, if an offense has a destination IP address that your security profile permits you to
see, but the security profile does not grant permissions to the source IP address, the Offense
Summary window shows both the destination and source IP addresses.
Keep Old Data and Save
Select this option to keep previously accumulated time series data. If you choose this
option, users with this security profile might see previous data that they no longer have
permission to see when they view time series charts.
Hide Old Data and Save
Select this option to hide the time series data. If you choose this option, time series
data accumulation restarts after you deploy your configuration changes.
Section 8: Troubleshooting
This section accounts for 16% of the exam (10 out of 62 questions).
TASK: 8.1 Review and respond to system notifications
SUBTASKS:
8.1.1. Create system notifications
• System Notification Rule (or any rule with 'Notification' Response)
8.1.2. Locate system notifications
• Under the bell icon on GUI
• Custom Rule is System: Notification events in Log Activity
8.1.3. React to system notifications
License expired
38750123 - An allocated license has expired and is no longer valid.
Explanation
When a license expires on the console, a new license must be applied. When a license expires on a
managed host, the appliance continues to process events and flows up to the rate that is allocated from the
shared license pool.
When the license contributes EPS and FPM capacity to the shared license pool, the expiry might force the
shared license pool into a deficit where it does not have enough capacity to meet the requirements of the
deployment. In a deficit situation, functionality on the Network Activity and Log Activity tabs is
blocked.
User response
3. If the expired license is on a managed host, review the shared license pool to ensure that the
system has enough EPS and FPM capacity.
a. If the shared license pool is over-allocated, replace the expired license with a new license
that has enough EPS and FPM to meet the system capacity requirements.
b. If the license pool has enough capacity, delete the expired license. In the License table,
select the row for the expired license (shown nested beneath the managed host summary
row), and select Actions > Delete License.
Accumulator is falling behind
38750099 - The accumulator was unable to aggregate all events/flows for this interval.
Explanation
This message appears when the system is unable to accumulate data aggregations within a 60-second
interval.
Every minute, the system creates data aggregations for each aggregated search. The data aggregations are
used in time-series graphs and reports and must be completed within a 60-second interval. If the count of
searches and unique values in the searches are too large, the time that is required to process the
aggregations might exceed 60 seconds. Time-series graphs and reports might be missing columns for the
time period when the problem occurred.
You do not lose data when this problem occurs. All raw data, events, and flows are still written to disk.
Only the accumulations, which are data sets that are generated from stored data, are incomplete.
User response
The following factors might contribute to the increased workload that is causing the accumulator to fall behind:
Frequency of the incomplete accumulations
If the accumulation fails only once or twice a day, the drops might be caused by increased system load
due to large searches, data compression cycles, or data backup.
Infrequent failures can be ignored. If the failures occur multiple times per day, during all hours,
you might want to investigate further.
For example, if the failed accumulations occur during a large data search that takes a long time to
complete, you might prevent the accumulator drops by reducing the size of the saved search.
The workload of the accumulator is driven by the number of aggregations and the number of
unique objects in those aggregations. The number of unique objects in an aggregation depends on
the group-by parameters and the filters that are applied to the search.
For example, consider a search that aggregates services and filters the data by using a
local network hierarchy item, such as a DMZ area. Grouping by IP address might result in a search that contains
up to 200 unique objects. If you add destination ports to the search, and each server hosts 5 - 10
services on different ports, the new aggregate of destination.ip + destination.port can increase the
number of unique objects to 2000. If you add the source IP address to the aggregate and you have
thousands of remote IP addresses that hit each service, the aggregated view might have hundreds
of thousands of unique values. This search creates a heavy demand on the accumulator.
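The growth described in this example is multiplicative in the worst case, as a quick calculation shows; the cardinalities are the example figures from the text, and real aggregations usually contain fewer combinations than the worst case.

```python
# Rough worst-case cardinality estimate for an aggregated search: each added
# group-by field multiplies the number of unique objects.
def worst_case_unique_objects(*field_cardinalities: int) -> int:
    total = 1
    for c in field_cardinalities:
        total *= c
    return total

print(worst_case_unique_objects(200))        # destination.ip only
print(worst_case_unique_objects(200, 10))    # + destination.port -> 2000
print(worst_case_unique_objects(200, 10, 50))  # + source.ip (worst case)
```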
To review the aggregated views that put the highest demand on the accumulator:
3. Review the business case for each of the largest aggregations to see whether they are still
required.
The system is unable to create offenses or change a dormant offense to an active offense. By
default, the number of active offenses that can be open on your system is limited to 2500. An
active offense is any offense that received updated event counts within the past five days.
User response
Change low security offenses from open or active to closed, or to closed and protected.
Tune your system to reduce the number of events that generate offenses.
To prevent a closed offense from being removed by your data retention policy, protect the closed
offense.
Disk usage system notifications
IBM® QRadar® disk sentry monitors the /, /store, /storetmp, /transient, and /var/log partitions
and generates system notifications as the partitions approach or exceed pre-defined usage
thresholds. The following table displays the host context system notifications that depend on the
disk usage of each monitored partition.
Table 1. Disk usage notifications

Disk Sentry: Disk Usage exceeded warning threshold.
Description: Disk usage is at 90% for a monitored partition. QRadar is not affected when the partition reaches this threshold. Continue to monitor your partition levels.
Suggested action: See Disk usage exceeded warning threshold.

Disk Sentry: Disk Usage exceeded max threshold.
Description: Disk usage is at 95% for a monitored partition. QRadar data collection and search processes are shut down to protect the file system from reaching 100%.
Suggested action: See Disk usage exceeded max threshold.

Disk sentry: System disk usage back to normal levels.
Description: After disk usage reaches a threshold of 95%, it must return to 92% before QRadar automatically restarts data collection and search processes.
Suggested action: To lower the disk usage threshold, manually remove data from the affected partitions. See Disk usage returned to normal.
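The thresholds above can be checked manually from the command line. The following sketch is illustrative only; the classify_usage helper and its messages are not part of QRadar:

```shell
#!/bin/sh
# Sketch: report each monitored partition's usage against the disk sentry
# thresholds described above. classify_usage is an illustrative helper,
# not a QRadar utility.
classify_usage() {
  if [ "$1" -ge 95 ]; then
    echo "max threshold exceeded"
  elif [ "$1" -ge 90 ]; then
    echo "warning threshold exceeded"
  else
    echo "normal"
  fi
}

for part in / /store /storetmp /transient /var/log; do
  # df -P prints the usage percentage in column 5, e.g. '42%'
  pct=$(df -P "$part" 2>/dev/null | awk 'NR==2 {gsub("%","",$5); print $5}')
  if [ -n "$pct" ]; then
    echo "$part: ${pct}% ($(classify_usage "$pct"))"
  fi
done
```

Partitions that do not exist on a given appliance type are simply skipped.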
Free up disk space on your appliance to allow enough space for a backup to complete
in /store/backup.
Configure your existing backups to use a partition with free disk space.
Configure more storage for your appliance. For more information, see the Offboard Storage
Guide.
Question
Is it possible to suppress QRadar system notifications?
Answer
Procedure
1. Click the Offenses tab
2. Click the Rules icon.
3. In the list of rules, select the System: Notification rule.
4. Click Actions > Duplicate.
5. Type a name for the new rule and click OK.
6. Double-click the rule you duplicated to start the Rule Wizard.
7. Click the list of event QIDs in the rule.
8. Edit the rule to include the QIDs that you want to limit responses on. In this example, we
are going to limit 'License Nearing Expiration' notifications.
9. Click Next.
10. In the Response Limiter field, set a time frame for the rule.
11. Click the Enable Rule check box.
12. Click Next to view a summary of the rule.
13. Click Finish.
Now that the new rule is created, we must edit the original System Notification rule to
remove the License Nearing Expiration QID to ensure that the original rule is updated.
14. Double-click the original System: Notification rule.
15. Click the list of event QIDs in the rule.
16. Edit the rule to remove the value (38750124) License Nearing Expiration, which is the
event QID we are limiting the response for in our other rule.
17. Click Finish.
Results
A new rule is enabled with a response limiter for 48 hours. The original system
notification is updated to remove the QID for the license message as it is a repetitive
system notification.
QRadar system notifications
Use the system notifications that are generated by IBM® QRadar® to monitor the status and health of
your system. Software and hardware tools and processes continually monitor the QRadar appliances and
deliver information, warning, and error messages to users and administrators.
System notifications are displayed on the QRadar dashboard or in the notification window when
unexpected system behavior occurs. You can troubleshoot the most common QRadar notifications.
Disk usage system notifications
IBM QRadar disk sentry monitors the /, /store, /storetmp, /transient, and /var/log partitions before the
partitions reach a pre-defined usage threshold.
Asset notifications for QRadar appliances
Asset change discarded
38750106 - Asset Changes Aborted.
Explanation
An asset change exceeded the change threshold and the asset profile manager ignores the asset
change request.
The asset profile manager includes an asset persistence process that updates the profile
information for assets. The process collects new asset data and then queues the information before
the asset model is updated. When a user attempts to add or edit an asset, the data is stored in
temporary storage and added to the end of the change queue. If the change queue is large, the asset
change can time out and the temporary storage is deleted.
User response
The system detected one or more asset profiles in the asset database that show deviating or
abnormal growth. Deviating growth occurs when a single asset accumulates more IP addresses,
DNS host names, NetBIOS names, or MAC addresses than the system thresholds allow. When
growth deviations are detected, the system suspends all subsequent incoming updates to these
asset profiles.
User response
Determine the cause of the asset growth deviations:
Hover your mouse over the notification description to review the notification payload. The
payload shows a list of the top five most frequently deviating assets. It also provides
information about why the system marked each asset as a growth deviation and the number
of times that the asset attempted to grow beyond the asset size threshold.
In the notification description, click Review a report of these assets to see a complete
report of asset growth deviations over the last 24 hours.
Review Updates to asset data (https://www.ibm.com/docs/en/qsip/7.4?topic=management-
updates-asset-data).
Blocklist notification
38750136 - The Asset Reconciliation Exclusion rules added new asset data to the asset
blocklists.
External scan of an unauthorized IP address or range
38750126 - An external scan execution tried to scan an unauthorized IP address or address
range.
Automatic update notifications for QRadar appliances
Auto update installed with errors
38750067 - Automatic updates installed with errors. See the Auto Update Log for details.
Automatic update error
38750066 - Automatic updates could not complete installation. See the Auto Update Log for
details.
Automatic update successful
38750070 - Automatic updates completed successfully.
Automatic updates successfully downloaded
38750068 - Automatic updates successfully downloaded. See the Auto Updates log for
details.
Deployment of an automatic update
38750069 - Automatic updates installed successfully. In the Admin tab, click Deploy
Changes.
When you clean the SIM data model, all existing offenses are closed. Cleaning the SIM data model does
not affect existing events and flows.
False positive offenses might occur before you complete the tuning tasks. Clean the SIM data model to
ensure that each host on the network creates new offenses based on the current configuration.
The SIM data model can be cleaned by either of the following methods:
Soft Clean
Closes all offenses, but does not remove them from the system.
Hard Clean
Closes all offenses, and completely erases them from the system.
Note:
Don't do a Hard Clean of your SIM model unless QRadar Support directs you to.
After a hard clean or a soft clean, the console web server restarts.
Procedure
1. On the navigation menu, click Admin.
2. From the Admin toolbar, click Advanced > Clean SIM Model.
3. Click Soft Clean.
4. Select the Are you sure you want to reset the data model? check box.
5. Click Proceed.
Depending on the volume of data in your system, this process might take several minutes.
If you attempt to go to other areas of the console during the SIM reset process, an error message is
displayed.
https://www.ibm.com/docs/en/SS42VS_7.5/pdf/b_qradar_system_notifications.pdf
Question
QRadar support cases often require logs to investigate and resolve issues. This technical note
explains how users can collect and submit information for IBM support cases for QRadar
software issues depending on the nature of the issue they are facing.
Cause
Quick links
1. What information do I need to submit to QRadar support for software issues?
o 1a. How to collect log files for QRadar support from the user interface
o 1b. How to collect log files for QRadar from the command-line interface
(get_logs.sh)
o 1c. How to collect log files for QRadar on Cloud
2. What information do I submit for event parsing issues?
o 2a. How to verify which DSM version is installed
o 2b. How to export events for review by support
3. What information do I submit to QRadar support for hardware issues?
o 3a. How to determine whether an appliance is IBM xSeries or Dell?
o 3b. IBM xSeries appliances: How to run a Dynamic System Analysis (DSA)
report
o 3c. IBM xSeries appliances: How to run a Dynamic System Analysis (DSA)
report for a nonbooting appliance
o 3d. How to run an IMM log for non-hard disk issues
o 3e. Dell appliances: How to open a Dell Hardware Case and Generate logs by
using the iDRAC.
4. What information do I submit for WinCollect agent issues?
5. What information do I submit for Event Pipeline issues?
Answer
1. What information do I submit to QRadar support for software issues?
The following information can be submitted with customer service requests when you report
software issues in QRadar:
A detailed description of the issue, including the steps taken or changes made before the
issue occurred.
A screen capture showing the issue or on-screen error message.
The steps taken by the user or administrator to try to resolve the problem.
Logs exported from QRadar (see 1a).
Product version and build number. To view your version, from the QRadar Dashboard,
open the hamburger menu and click About.
Related article: In my case, do I need to submit logs from multiple hosts when an
error occurs?
1a. How to collect log files for QRadar support from the user interface
Procedure
1. Log in to your QRadar console as an admin.
2. Click the Admin tab.
3. Click System & License Management.
4. Select the QRadar appliances that you want to collect logs from in the user interface.
Note: You can use Shift + click or Ctrl + click to get logs from multiple appliances. If
you do not select any appliance, the default action is to collect logs from the QRadar
Console. If you are troubleshooting application issues on an App Host appliance, select
both the App Host and Console appliance then collect logs for your case.
5. Select Actions > Collect Log Files.
6. In most cases, unless you are experiencing application or extension issues, the default
options can be used. If you are troubleshooting an issue with an application, such as an
installation issue or apps that fail to start, you must open the Advanced Options and
check the box Include Application Extension Logs.
Advanced Options
o Unless advised by QRadar Support, there is no need to enable the Include Debug
Logs check box.
o If you are having issues with a QRadar extension or installing an application,
select the Include Application Extension Logs check box.
o If you recently upgraded your appliance, installed software updates, or are having
issues with managed hosts, select the Include Setup Logs (Current
Version) check box.
o Most administrators can leave Collect Logs for this Many Days at the default
of 1 day, but if you know the issue occurred before then, adjust the period
accordingly. Be aware that if you have many hosts and select many days, it takes
longer to collect the logs.
o Encryption of log files prompts for a user-defined password. If this option is
selected, the password must be provided to IBM Support to facilitate decryption
of the log files.
7. Click Collect Log Files.
The log collection process starts and the status bar updates when log collection is
complete.
8. Click Download and save the file.
Results
Attach the log to your support ticket. Administrators who experience issues downloading
the file from the user interface can attempt to download the file by using WinSCP or
another secure copy utility to move the backup from the /var/log directory. Root access
to the appliance is required to use the get_logs.sh utility or to secure copy files from an
appliance.
1b. How to collect log files for QRadar from the command-line interface (get_logs.sh)
To collect logs from the command line, root access is required. The get_logs.sh utility is
available on every version of QRadar and can be run on each appliance individually to collect
logs. If you are having user interface issues, use this utility as a backup method to submit
logs for your appliance when the QRadar Console is inaccessible.
Procedure
1. SSH in to the Console appliance as the root user.
2. Type the following command:
/opt/qradar/support/get_logs.sh
The script informs you that the log was created and provides the name and the location,
which is always the /store/LOGS/ directory.
Note: For administrators having application or extension issues, use the -a option to
collect application logs with your Console log information.
3. Copy the tar.bz2 file to a system that has access to an external network to upload your
log file.
4. For DLC appliances, compress the files from /var/log/dlc/* by using the command:
tar -zcvf DLC_logs.tar.gz /var/log/dlc/*
5. Copy the tar.gz file to a system that has access to an external network to upload your log
file.
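The copy steps above can be sketched as follows. This is illustrative only: the newest_bundle helper is not a QRadar utility, and the destination host and path in the scp command are placeholders.

```shell
#!/bin/sh
# Sketch: pick the newest log bundle that get_logs.sh wrote to /store/LOGS
# and copy it to a workstation that has external network access.
# 'admin@workstation.example.com:/tmp/' is a placeholder destination.
newest_bundle() {
  ls -t "$1"/*.tar.bz2 2>/dev/null | head -n 1
}

bundle=$(newest_bundle /store/LOGS)
if [ -n "$bundle" ]; then
  echo "Copying $bundle"
  scp "$bundle" admin@workstation.example.com:/tmp/
fi
```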
Results
Attach the log files and provide an explanation of which events appear to be parsing
incorrectly in your ticket.
Result
Example output:
yum info| grep -i 3Com
Name : DSM-3ComSwitch
From repo : /DSM-3ComSwitch-7.4-20200303185634.noarch
Summary : DSM 3Com 8800 Series Switch Install
Description : This program installs a 3Com 8800 Series Switch DSM plugin.
This version information can be compared to what is posted on IBM Fix Central;
include it in your support request.
Result
Example output:
# dmidecode 2.12
# SMBIOS entry point at 0x7f6be000
SMBIOS 2.5 present.
3b. IBM xSeries appliances: How to run a Dynamic System Analysis (DSA) report
Administrators who experience hardware issues on xSeries appliances can run the DSA utility
and submit a report with the hardware support request.
Procedure
1. Using SSH, log in to the remote QRadar appliance that is experiencing the hardware
error.
Note: You must first SSH to the Console, then open another SSH session to a managed
host in the deployment.
2. To change directory to the support folder, type:
cd /opt/qradar/support
3. To verify the permissions on the DSA utility, type:
ls -l *dsa*
4. If the permissions are rw-r--r--, you must change the permissions to be able to run the
DSA utility. To change permissions, type:
chmod 755 <DSA_build>_x86-64.bin
5. To run a DSA report for your appliance, type:
./<DSA_build>_x86-64.bin
Result
The DSA utility creates a .gz file in /var/log/IBM_Support that is named with the machine
type, serial number, and date. For
example: /var/log/IBM_Support/7944AC1_KQ97NYC_20150927-163515.xml.gz
Copy this file from the remote host and upload it to your support case.
Note: If your system will not boot, follow the instructions in the next section (3c) for
non-booting appliances.
3c. How to run a Dynamic System Analysis (DSA) report for a nonbooting appliance
Administrators who experience hardware issues on xSeries appliances can run the DSA utility
and submit a report with the hardware support request. The following procedure outlines how
an administrator can collect a hardware report for an appliance that does not boot properly. This
hardware report is required and must be submitted with the service request. This procedure can
be followed for appliances that are suspended or frozen due to a hardware or software issue.
Procedure
1. Restart the QRadar Appliance.
2. Select F2 to enter diagnostics.
3. Press Esc to stop the memory test if it starts.
4. After a menu appears, arrow over to Quit, then select Quit to DSA.
5. Choose command-line option CMD.
6. Insert a FAT32-formatted USB flash drive. The output file is typically under 1 MB.
7. Choose to collect DSA with no other options needed. Choose option 1 to collect DSA
diagnostics.
8. After 2 passes complete, exit back to the previous menu.
9. Choose the option copy to local media.
10. If the USB flash drive is not detected, reseat it and try again. If the USB flash drive
is still not detected, try a different USB device.
Note: The DSA can sometimes take a long time to start and run, which might make it appear to
administrators that the DSA program is not functioning. However, do not interrupt this process,
as it can take up to 5 minutes between steps to collect the information and complete the report
before writing data to the USB flash drive.
Results
After the data is collected on the appliance, the files are saved to the USB flash device. The
process of writing the files to the USB drive takes a few seconds.
3d. How to run an IMM log for non-hard disk issues
Administrators who experience non-hard disk hardware issues on xSeries appliances can run the
Download Service Data option from the IMM and submit a report with the hardware
support request in addition to the DSA report. For this procedure, refer to the following Lenovo
link: Download service data option.
3e. Dell appliances: How to open a Dell Hardware Case and Generate logs by using the iDRAC
Administrators who experience hardware issues on Dell appliances use the integrated Dell
Remote Access Controller (iDRAC) card to generate a system report. The administrator can
submit the system report to QRadar Support for review. The following content is required in
your case for QRadar Support to review a Dell hardware issue:
1. A description of the hardware issue.
2. A screen capture or the text of the error message.
If we require logs or additional information, your ticket is updated to include further details and
the status is set to Awaiting your Feedback. If you have questions about this procedure, you
can always ask in our forums: ibm.biz/qradarforums.
To generate logs by using the integrated Dell Remote Access Controller (iDRAC) refer to these
Dell links:
PowerEdge - How to generate servers logs with the iDRAC?
How to export a SupportAssist Collection and the RAID Controller Log from iDRAC 7
and 8?
5. Click Start > All Programs > Accessories > Windows Explorer.
6. Navigate to the WinCollect installation directory. The default path is C:\Program Files\IBM\WinCollect.
7. To select multiple folders, press Ctrl and select the config and logs folders.
8. Right-click on one of the selected folders and select Send to > Compressed (zipped)
folder.
Results
Attach the log files and provide an explanation of your issue.
Results
Attach the log files and provide an explanation of your issue.
Result
The output of the tools is generated in the directory where you ran the tool. A file is
output as Custom(Properties|Rules)-{date}-(..).tar.gz. Upload the results of the find
expensive tools to your support case.
Procedure
1. On the Admin tab, click QRadar Use Case Manager > Configuration.
2. To sync with the data in QRadar®, click Sync QID Records. This process might take
approximately 30 minutes to complete. You can still use the app while the records are syncing, but
the data you work with might not be accurate.
3. To manually refresh event mappings, click Sync DSM event mappings.
When you install the app for the first time, it automatically syncs after installation. If you upgrade
to QRadar Use Case Manager 2.0.0 or later, you don't need to sync.
4. To back up your MITRE mappings (custom and IBM default), click Export MITRE mappings.
You can then import this backup file later on the Use Case Explorer page.
Only the custom mappings are imported from the file.
5. If you're upgrading to QRadar Use Case Manager 3.1.0 or later, you might see a section that is
called Report on migration from MITRE v6.3 to vx.x. This report appears if there were MITRE
mappings in the previous version of the app that are now deprecated with the support for MITRE
v8.1 or later. All custom mappings that were created in previous versions of the app are
automatically migrated to the new version. Mappings to techniques that are now deprecated or
don't exist under a particular tactic are deleted and included in the migration report. Consider
creating new mappings to these rules.
When you've noted the mappings that are affected, you can click Clear migration report to
permanently remove the report notification. Nonadministrative users can see the report migration
notification on the Use Case Explorer page.
6. To configure a proxy server, expand the Proxy configuration section and enter the following
information for your proxy server:
o Protocol
o Address or hostname
o Port
o Username
o Password
7. To configure offense contributions, expand the Performance configuration section, modify the
following options, and then click Submit.
Offense contribution
The default is enabled. When this option is disabled, the following features are disabled:
o Inactive reports in Use Case Explorer are disabled, including the access point to the 'Tune
inactive rules' card on the Tuning home page.
o Rule offense contribution trend on the Active Rules page is not visible.
o Event count column on the Active Rules page report table is removed.
Offense contribution in days
The default is 90 days. This setting takes effect only when the Offense contribution option is
enabled. The rule offense contribution trend and event count trend charts on the Active Rules page
are visible, but you can see only the data for the past (number of) days. Data that is already stored
in the database is not cleared when you change the number of days. Only new data is affected by
an update to this setting.
8. If you experience unexpected performance problems that are caused by generating the tuning
findings, such as a long time to load QRadar Use Case Manager or the app stops responding,
complete the following steps:
o Disable individual tuning findings in the Tuning findings configuration section.
o Set the number of reference set elements that trigger a finding if the number is exceeded.
The default is 5000.
o Click Submit and then refresh the Tuning Finding report or restart the app to implement
the updated changes.
9. Click Submit and then close the Settings page.
https://www.ibm.com/support/pages/system/files/inline-files/$FILE/OpenMic_MaintainingQRadar101_updated2.pdf
https://www.securitylearningacademy.com/course/view.php?id=4159
Problem
How to determine what QRadar processes are using the most resources.
Symptom
The system is running a little slower than usual, and you need to determine which process is
taking up the most resources.
Results
Processes that are over 1700 milliseconds for more than a few intervals might be an
indicator of an issue.
3. If you need more specific results, try watching the output for a single service on a
specific port.
4. For example, if you are interested in the ecs-ep service, you can use the following
syntax:
/opt/qradar/support/threadTop.sh -p 7799
Where -p is the port on which a particular service is running.
5. You can retrieve the list of ports and services available on the appliance (which
depends on the components in the deployment) by using this command:
grep JMXPORT /opt/qradar/systemd/env/*
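The grep output above can be turned into a plain service/port list with a short filter. This is a sketch; it assumes each env file is named after its service, so the grep output looks like '/opt/qradar/systemd/env/ecs-ep:JMXPORT=7799'.

```shell
#!/bin/sh
# Sketch: convert 'grep JMXPORT /opt/qradar/systemd/env/*' output into
# 'service port' pairs. The file-name-per-service layout is an assumption.
list_jmx_ports() {
  sed -n 's|^.*/\([^/:]*\):JMXPORT=\([0-9]*\)$|\1 \2|p'
}

grep JMXPORT /opt/qradar/systemd/env/* 2>/dev/null | list_jmx_ports
```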
WinCollect Agent error message: 'configuration file
fingerprints don't match'
Troubleshooting
Problem
The error message: 'WinCollect Agent mismatch. RetrieveConfigurationUpdate succeeded,
but the configuration file fingerprints don't match' is generated when a version mismatch
exists between the QRadar Console and a managed WinCollect agent. Administrators who
experience this error message can confirm software versions are identical between their QRadar
appliance and managed WinCollect agents.
Cause
The WinCollect version on the QRadar Console displays a different version than what is
installed on the Windows host and generates an error. In most cases, this is due to the Windows
hosts being updated using the EXE. There are two methods to confirm the error message:
1. To view error messages from an individual agent, administrators can select the
QRadar Admin tab and click the WinCollect icon. Select your agent and click Show
Events to display a log activity search of all status messages from the WinCollect
agent, sorted by newest event. Status messages from the WinCollect agent include
information, warning, and error messages from the selected agent.
2. Administrators who have access to the remote Windows host should verify the following
message in C:\Program Files\IBM\WinCollect\WinCollect.log
INFO SRV.System.WinCollectSvc.Service : Config change (or patch) detected on
configuration server. Attempting to download and extract...
INFO SRV.Code.ConfigurationPatchStrategy : Retrieving Configuration Update
ERROR SRV.Code.ConfigurationPatchStrategy : RetrieveConfigurationUpdate succeeded, but
the configuration file fingerprints don't match, exp:[FINGERPRINT INFORMATION] act:
[FINGERPRINT INFORMATION]
WARN SRV.System.WinCollectSvc.Service : Config change (or patch) download
failed validation. Not applying.
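After copying WinCollect.log off the Windows host, the mismatch errors quoted above can be counted with a quick grep. A sketch; './WinCollect.log' is a placeholder for wherever you copied the file:

```shell
#!/bin/sh
# Sketch: count the fingerprint mismatch errors in a copy of WinCollect.log.
# './WinCollect.log' is a placeholder path, not a QRadar convention.
count_fingerprint_errors() {
  grep -c "fingerprints don't match" "$1" 2>/dev/null || true
}

count_fingerprint_errors ./WinCollect.log
```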
Figure 2: Review the Version column to determine the software version for a WinCollect
agent to determine if it differs from the AgentCore version listed.
Results
Administrators should note the version difference between the Console install (7.2.8-145)
and the version on the Windows hosts (7.2.9-72). If there are version differences, fingerprint
error messages can be displayed in logs and status events. The administrator needs to
ensure that the software versions match to prevent future errors.
1. Download the latest WinCollect SFS file from IBM Fix Central.
2. Using SSH, log in to your Console as the root user. The SFS file is only installed on the
QRadar Console. There is no need to install the WinCollect SFS on non-Console
appliances.
3. Copy the WinCollect SFS file to the /tmp directory on the QRadar Console. If space in
the /tmp directory is limited, copy the SFS to another location that has sufficient space,
such as the /storetmp directory on QRadar 7.3.x Consoles.
4. To verify that the mount point /media/updates exists, type: mkdir -p /media/updates
5. To mount the SFS file, type the command for your QRadar version:
o QRadar 7.2.x: mount -o loop -t squashfs 720_QRadar_wincollectupdate-<version>.sfs
/media/updates
o QRadar 7.3.x: mount -o loop -t squashfs 730_QRadar_wincollectupdate-<version>.sfs
/media/updates
6. Install the WinCollect SFS file: /media/updates/installer
NOTE: To proceed with the WinCollect agent update, services need to be restarted on
QRadar to apply protocol updates. The following message is displayed:
WARNING: Services need to be shutdown in order to apply patches. This will
cause an interruption to data collection and correlation.
Do you wish to continue (Y/N)?
7. To continue with the update, type Y.
8. When the update completes, remove the mounted SFS file with the following
command: umount /media/updates
Results
Administrators should verify whether any agents have automatic updates disabled. For
WinCollect agents that show 'False' in the Automatic Updates Enabled column, click
the Enable/Disable Automatic Updates button in the WinCollect user interface to set
the Automatic Update Enabled status to 'True'. QRadar sends software updates to
remote Windows hosts only when Automatic Updates Enabled displays 'True'.
https://www.ibm.com/support/pages/node/6590417
Problem
QRadar user interface (UI) is inaccessible because of httpd service failure.
Cause
httpd service failure due to duplicate SSL certificate directives in the httpd configuration
file. To resolve the problem, open the file in a vim editor and remove the duplicate lines
for cert.cert and cert.key:
Before:
SSLCertificateFile /etc/httpd/conf/certs/cert.cert
SSLCertificateFile /etc/httpd/conf/certs/cert.cert
SSLCertificateKeyFile /etc/httpd/conf/certs/cert.key
SSLCertificateKeyFile /etc/httpd/conf/certs/cert.key
After:
SSLCertificateFile /etc/httpd/conf/certs/cert.cert
SSLCertificateKeyFile /etc/httpd/conf/certs/cert.key
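Duplicate directives like the ones above can be located before editing. A sketch, assuming the directives live in a single configuration file; the path below is a placeholder and your SSL configuration may be in a different file:

```shell
#!/bin/sh
# Sketch: list SSLCertificateFile/SSLCertificateKeyFile directives that
# appear more than once in an httpd configuration file, so the duplicates
# can be removed. The path below is a placeholder.
find_duplicate_directives() {
  [ -f "$1" ] || return 0
  grep -E '^[[:space:]]*SSLCertificate(File|KeyFile)' "$1" | sort | uniq -d
}

find_duplicate_directives /etc/httpd/conf/httpd.conf
```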
Restart the tomcat service:
systemctl restart tomcat
Note: Restarting Tomcat on the QRadar Console logs out users and halts event exports in
progress. Also, scheduled reports won't run until the service is running. Administrators with
change control might need a maintenance window before restarting Tomcat.
Validate tomcat and httpd service status:
systemctl status tomcat
systemctl status httpd
After the connection test completes successfully, you can log back in to QRadar. Administrators
might need to manually run reports that were scheduled to start during the tomcat outage. Users
can export events, run searches, and export the results from the user interface.
Question
How can I verify that my deployment is healthy?
Answer
To assess the general health of your deployment, it is helpful to have standard checks to follow in
order to verify core functionality on your QRadar Console and Managed Hosts.
If you cannot access the UI and cannot access the system via SSH:
o Use an IMM or hypervisor session to log in to the console to confirm that the
QRadar Console host is running and responsive. If you can connect here and the
console is running, there might be a network infrastructure problem blocking your
access. Contact your network administrator.
If you cannot access the UI but can access the Console system via SSH:
o Check systemctl status tomcat. If status is active or activating, the service is still
initializing. In some cases, you may need to wait for 10 - 15 minutes after starting
tomcat for the GUI to become accessible.
If any of the Managed Hosts are not in Active status, check the following
o From the Console command-line interface (CLI), try establishing an SSH
connection to each non-Active Managed Host (MH). If the connection fails or
times out:
Establish a local console connection to the MH via IMM Remote Console
or some other local Console connection such as hypervisor if the system is
a VM.
Verify whether the system is responsive and accepts login as 'root'.
If the system is unresponsive but you can access the system's local
console, this may indicate a network issue. For more information,
see Technote 1997106- Using Linux Networking Tools to
troubleshoot Interfaces .
If the MH is responsive via the local console connection, more
troubleshooting will need to be done to check network connectivity
between the Console and MH. For more information,
see Technote 1981436 - QRadar: Troubleshooting tunnels and SSH
issues in QRadar 7.2.5 and later .
o Does a Full Deploy task complete successfully for all Managed Hosts?
After the Deploy task completes and returns status for all hosts, review any
Managed Hosts that report Timed Out or otherwise fail to deploy:
Check for the file /opt/qradar/conf/hostcontext.NODOWNLOAD on the MH.
For more information on how to resolve this issue, see Technote
10744001 - Deploy Changes Does Not complete or Times out.
o Check that all partitions have sufficient free disk space:
From the Console CLI, run /opt/qradar/support/all_servers.sh -C 'df -h' to view the
current disk space usage on all Managed Hosts' partitions
Connect to the MH via SSH as 'root' and type: df -h. If the disk space
usage ('Use%') for any partition (other than /recovery) is above
85%, free up space on the relevant partitions and check again. For
more information on how to resolve disk space issues, please see our
Support 101 page on Troubleshooting disk space issues.
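The 85% check above can be automated with a small awk filter over df output. A sketch; the filter reads 'df -P' output on stdin, so it can also be applied to output captured from all_servers.sh:

```shell
#!/bin/sh
# Sketch: print any partition (other than /recovery) whose 'Use%' exceeds
# the given limit. Reads 'df -P' output on stdin; 85 matches the manual
# check described above.
filter_high_usage() {
  awk -v limit="${1:-85}" 'NR>1 && $6 != "/recovery" {
    pct=$5; sub(/%/, "", pct)
    if (pct+0 > limit) print $6, $5
  }'
}

df -P | filter_high_usage 85
```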
Dashboards largely rely on ariel queries against accumulated data (also referred to as Global
Views (GV) or Aggregated Data Views (ADV)).
If some Dashboards are failing to populate, but others are working, we're likely seeing a
problem with the individual Global Views. To troubleshoot corrupted Global Views,
please note which Dashboards are affected.
If all Dashboards are failing to populate, check the statuses of these services:
o On all hosts the accumulator service should be Active. From the Console CLI, run
/opt/qradar/support/all_servers.sh -C 'systemctl is-active accumulator'
For any hosts where accumulator is not Active, try manually restarting the
service with the command systemctl restart accumulator
o The Console and all EP, FP, EP/FP, and Data node hosts should also have a
running ariel service. On the Console ariel runs as ariel_proxy_server. All other
relevant hosts will have ariel as ariel_query_server.
On the Console run:
systemctl is-active ariel_proxy_server
To check ariel on the remaining Managed Hosts, run this command from
the Console CLI:
/opt/qradar/support/all_servers.sh -a '16%,17%,18%,21%' 'systemctl is-active ariel_query_server'
If the ariel* service is stopped, try starting the relevant service manually
with systemctl restart <service> on the affected systems.
If ariel fails on any system, you may not be able to retrieve event or
flow data from that Managed Host, accumulated or otherwise.
In the 'Last Event/Flow Seen' for the most recently updated Offenses, is the date/time
fairly recent?
o Confirm whether ecs-ep is running on all 31xx, 18xx, 17xx, and 16xx hosts:
On 7.3.0 and above: /opt/qradar/support/all_servers.sh -C -a '31%,18%,17%,16%,software'
"systemctl is-active ecs-ep"
If the service state is 'failed', try restarting the service:
on 7.3.0 and above: /opt/qradar/support/all_servers.sh "systemctl restart ecs-ep"
If ecs-ep fails to start on any of the systems, collect the logs for the Console
system and any affected Managed Host.
Getting Help: What information should be submitted with a QRadar
service request?
o If ecs-ep is running on all Managed Hosts (where expected):
Rule out the possibility that the system is working correctly and there are
simply no events or flows triggering Rules for Offense creation or updates.
Create a test rule to verify Offense generation:
In the QRadar UI, click the Offenses tab, then select Rules.
Once the Rules display loads select Actions > New Event Rule.
Identify a Source IP (or IP range) in your environment that is
consistently generating events.
Configure the new event rule with this criteria, using the address or
range identified above as the value for:
Apply Dummy Test Rule on events which are detected by the
local system and when the source IP is one of the following
IP addresses
Click Next.
Set the Responses page with the following options:
1. Check: Ensure the detected event is part of an offense.
2. Check: Respond no more than 1 per 1 minute per
source IP.
3. Click Finish.
4. Make sure the Rule is enabled.
5. In the Log Activity tab, add a filter for the Source IP or
range you configured in the Rule, and watch for
incoming events.
When events show up for this IP or range, you should see that a new
Offense is created and the events for the source IP or range are
associated with that Offense. Disable the Rule once testing is
complete.
Do you see new events and flows (if present in your environment) in Log Activity and
Network Activity tabs while the View is set to Real Time?
Are your Console, Event Processors, and Event and Flow Processors receiving events?
o In Log Activity, select Quick Searches and choose Event Processor Distribution.
Make sure all of your Console, Event Processors, and Event and Flow
Processors are represented on the resulting search.
Can I run normal searches?
o Try running a simple 5-minute search filtering on 'Source IP is 127.0.0.1'. Does it
complete without errors?
If you receive IO errors, try using this information to troubleshoot:
QRadar: Understanding IO Errors while searching
Is your Assets tab populated as expected?
Additional checks
The checks described above should cover most core functionality. You can also check to
make sure all expected QRadar processes are running on any QRadar system:
o Run /opt/qradar/upgrade/util/setup/upgrades/wait_for_start.sh. Wait for at least 3 iterations (in
case the system is still initializing) to see if all listed processes show up as
Running. If any show as 'stopped' after a few iterations, break out of the utility
(using ctrl-c) and note which processes were not running.
If using HA, check the state of your HA pairs:
o Troubleshooting QRadar® HA deployments
Check Notifications in the UI for any messages about Predictive Disk Failure:
o QRadar: Troubleshooting Disk Failure or Predictive Disk Failure Notifications
If SSH or HTTPS connectivity is lost to your QRadar Console, local console access
might be needed to further troubleshoot system health. As a best practice, make sure that
this access is available in case of future problems:
o If running on IBM hardware, is IMM accessible?
Can you connect with the IMM Remote Console?
o If running as a virtual machine, can you reach local console through your
hypervisor?
o If running on non-IBM hardware, do you have provision for local console access?
You've experienced an issue with one of the troubleshooting steps. What should you do?
An error response message is returned in JSON format, even for endpoints that support other MIME types.
The error response message includes the error message itself, a description of the error, a unique error
code for the endpoint, an HTTP response message, and an HTTP response code.
For example, the following API request attempts to get information about a non-existent reference set that
is called "test_set":
https://<host_ip>/api/reference_data/sets/test_set
An HTTP 404 response code and the following JSON error response message are returned:
{
"message": "test_set does not exist",
"details": {},
"description": "The reference set does not exist.",
"code": 1002,
"http_response": {
"message": "We could not find the resource you requested.",
"code": 404
}
}
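Because every error response uses this same JSON shape, a client can pull out the fields it needs with ordinary JSON parsing. This sketch uses only the Python standard library and the sample payload shown above; no live QRadar console is involved.

```python
import json

# Sketch: extract the useful fields from a QRadar REST API error response.
# The payload is the sample error shown above for a missing reference set.

error_json = """
{
  "message": "test_set does not exist",
  "details": {},
  "description": "The reference set does not exist.",
  "code": 1002,
  "http_response": {
    "message": "We could not find the resource you requested.",
    "code": 404
  }
}
"""

error = json.loads(error_json)
http_code = error["http_response"]["code"]  # standard HTTP status, 404 here
api_code = error["code"]                    # endpoint-specific error code, 1002 here

print(f"HTTP {http_code} (API error {api_code}): {error['message']}")
```

Note that the top-level "code" is the endpoint's unique error code, while "http_response.code" is the HTTP status; a client should branch on whichever level of detail it needs.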
The following table provides more information about the HTTP response error categories returned by
the IBM® QRadar® REST API:
HTTP error category (HTTP response code): HTTP response message

MULTIPLE CHOICES (300): The requested resource corresponds to any one of a set of representations, each with its own specific location.
MOVED PERMANENTLY (301): The resource has moved permanently. Please refer to the documentation.
FOUND (302): The resource has moved temporarily. Please refer to the documentation.
SEE OTHER (303): The resource can be found under a different URI.
NOT MODIFIED (304): The resource is available and not modified.
USE PROXY (305): The requested resource must be accessed through the proxy given by the Location field.
TEMPORARY REDIRECT (307): The resource resides temporarily under a different URI.
BAD REQUEST (400): Invalid syntax for this request was provided.
UNAUTHORIZED (401): You are unauthorized to access the requested resource. Please log in.
FORBIDDEN (403): Your account is not authorized to access the requested resource.
NOT FOUND (404): We could not find the resource you requested. Please refer to the documentation for the list of resources.
METHOD NOT ALLOWED (405): This method type is not currently supported.
NOT ACCEPTABLE (406): Acceptance header is invalid for this endpoint resource.
PROXY AUTHENTICATION REQUIRED (407): Authentication with proxy is required.
REQUEST TIMEOUT (408): Client did not produce a request within the time that the server was prepared to wait.
CONFLICT (409): The request could not be completed due to a conflict with the current state of the resource.
GONE (410): The requested resource is no longer available and has been permanently removed.
LENGTH REQUIRED (411): Length of the content is required, please include it with the request.
PRECONDITION FAILED (412): The request did not match the pre-conditions of the requested resource.
REQUEST ENTITY TOO LARGE (413): The request entity is larger than the server is willing or able to process.
REQUEST-URI TOO LONG (414): The request URI is longer than the server is willing to interpret.
UNSUPPORTED MEDIA TYPE (415): The requested resource does not support the media type provided.
REQUESTED RANGE NOT SATISFIABLE (416): The requested range for the resource is not available.
EXPECTATION FAILED (417): Unable to meet the expectation given in the Expect request header.
MISSING ARGUMENTS (419): The requested resource is missing required arguments.
INVALID ARGUMENTS (420): The requested resource does not support one or more of the given parameters.
UNPROCESSABLE ENTITY (422): The request was well-formed but was unable to be followed due to semantic errors.
INTERNAL SERVER ERROR (500): Unexpected internal server error.
NOT IMPLEMENTED (501): The requested resource is recognized but not implemented.
BAD GATEWAY (502): Invalid response received when acting as a proxy or gateway.
SERVICE UNAVAILABLE (503): The server is currently unavailable.
GATEWAY TIMEOUT (504): Did not receive a timely response from upstream server while acting as a gateway or proxy.
HTTP VERSION NOT SUPPORTED (505): The HTTP protocol version used in the request message is not supported.
INITIALIZATION FAILURE (550): A failure occurred during initialization of services. API will be unavailable.
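Most of these categories are standard HTTP statuses, but 419, 420, and 550 are QRadar-specific. A client can branch on the numeric code; the sketch below is an illustrative policy (retrying on timeouts, unavailability, and initialization failure), not an official recommendation.

```python
# Sketch: one way a client might react to a QRadar API response code.
# The retry set (408, 503, 504, and the QRadar-specific 550 initialization
# failure) is an assumption made for illustration.

QRADAR_SPECIFIC = {
    419: "MISSING ARGUMENTS",
    420: "INVALID ARGUMENTS",
    550: "INITIALIZATION FAILURE",
}

RETRYABLE = {408, 503, 504, 550}

def classify(code: int) -> str:
    """Map a response code to a coarse client action."""
    if code in RETRYABLE:
        return "retry later"
    if code in QRADAR_SPECIFIC:
        return "fix request arguments"  # 419/420 are client-side problems
    if 400 <= code < 500:
        return "fix request"
    if 500 <= code < 600:
        return "server error"
    return "ok"

print(classify(420))  # fix request arguments
print(classify(550))  # retry later
```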
1. To access the interactive API documentation interface, enter the following URL in your web
browser: https://ConsoleIPaddress/api_doc/.
2. Select the API version that you want to use from the list.
3. Go to the endpoint that you want to access.
4. Read the endpoint documentation and complete the request parameters.
5. Click Try it out to send the API request to your console and receive a properly formatted HTTP
response.
Note: When you click Try it out, the action is performed on the QRadar system. Not all actions
can be reversed, for example, you cannot reopen an offense after you close it.
6. Review and gather the information that you need to integrate with QRadar.
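Outside the interactive interface, the same endpoints can be called programmatically with an authorized service token passed in the SEC header. This sketch only builds the request object, without sending it; the console IP, token value, and "12.0" Version header are placeholders you would replace for your deployment.

```python
from urllib.request import Request

# Sketch: build (but do not send) a QRadar REST API request equivalent to
# what the interactive API docs page issues. The console IP and SEC token
# below are placeholders; the Version header pins the API version used.

console_ip = "203.0.113.10"                   # placeholder console address
sec_token = "YOUR-AUTHORIZED-SERVICE-TOKEN"   # placeholder token

req = Request(
    f"https://{console_ip}/api/reference_data/sets/test_set",
    headers={
        "SEC": sec_token,           # authorized service token
        "Version": "12.0",          # example API version
        "Accept": "application/json",
    },
)

print(req.get_method(), req.full_url)
```

Sending the request (for example with `urllib.request.urlopen`) would return either the reference set or the JSON error response described earlier.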
https://www.youtube.com/watch?v=swGI5QWB29g
QRadar Event and Flow Processor 1805: 5,000 EPS, 200,000 FPM
QRadar Event and Flow Processor 1829: 15,000 EPS, 300,000 FPM
QRadar Event and Flow Processor 1848: 30,000 EPS, 1,200,000 FPM
QRadar 3105 (All-in-One): 5,000 EPS, 200,000 FPM
QRadar 3129 (All-in-One): 15,000 EPS, 300,000 FPM
QRadar 3148 (All-in-One): 30,000 EPS, 1,200,000 FPM