Contents (excerpt)

Using Quick Search for Enterprise
Using the Investigation Dashboard
Outliers Detection
Enabling and disabling outliers detection
Interpreting outliers
Grouping users and objects for outlier detection
Excluding events from outlier detection

Chapter 6. Reports
Report parameters
Creating dashboards
Configuring your dashboard
Using your dashboard
Viewing a report
Refreshing reports
Exporting a report
Viewing Drill-Down Reports
Creating a report
Data Mart
Audit and Report
Queries
Using the Query Builder
Query Conditions
Domains, Entities, and Attributes
Domains
Custom Domains
Entities and Attributes
Database Entitlement Reports
How to take advantage of over 600 predefined reports
Predefined Reports
Predefined admin Reports
Predefined user Reports
Predefined Reports Common
How to build a report and customize parameters
How to ask questions of the data
How to create custom reports from stored data
How to report on dormant tables and columns
How to Generate API Call from Reports
How to use Constants within API Calls
How to use API Calls from Custom Reports
Optional External Feed
Mapping an External Feed
Distributed Report Builder
How to create a Distributed Report

Chapter 7. Assess and harden
Introducing Guardium Vulnerability Assessment
Deploying VA for DB2 for i
Vulnerability Assessment tests
Defining a query-based test
Defining a CAS-based test
Assessments
Creating an assessment
Creating a VA Test Exception
How to create a security assessment
Running an assessment
Viewing assessment results
VA summary
Required schema change
Assessing RACF vulnerabilities
Configuration Auditing System
CAS Start-up and Failover
CAS Templates
Working with CAS Templates
CAS Hosts
CAS Reporting
CAS Status
Amazon RDS Discovery

Index
IBM Guardium

August 18, 2015

IBM Guardium prevents leaks from databases, data warehouses and Big Data environments such as Hadoop, ensures the integrity of information and automates compliance controls across heterogeneous environments.
The IBM Guardium products provide a simple, robust solution for preventing data
leaks from databases and files, helping to ensure the integrity of information in the
data center and automating compliance controls.
With Guardium, you can:
v Track activities of end users who access data indirectly through enterprise
applications;
v Monitor and enforce a wide range of policies, including sensitive data access,
database change control, and privileged user actions;
v Create a single, secure centralized audit repository for large numbers of
heterogeneous systems and databases; and
v Automate the entire compliance auditing process, including creating and
distributing reports as well as capturing comments and signatures.
The Guardium solution is designed for ease of use and scalability. It can be
configured for a single database or thousands of heterogeneous databases located
across the enterprise.
Release Notes
IBM Guardium offers the most complete database protection solution for reducing
risk, simplifying compliance and lowering audit cost.
Description
Guardium version 10.0 contains many new and enhanced features touching every
aspect of functionality of the IBM Guardium application.
Announcement
See the IBM Guardium version 10.0 announcement for the following information:
v Detailed product description, including a description of new functionality
v Product-positioning statement
v Packaging and ordering information
v International compatibility information
System Requirements
For information about hardware and software compatibility, see the version 10.0 System Requirements document at http://www-01.ibm.com/support/docview.wss?uid=swg27045976.
Known Issues
Known issues are documented in the form of individual Technotes in the Support knowledge base at the Guardium Support portal, http://www.ibm.com/support/entry/portal/Overview/Software/Information_Management/InfoSphere_Guardium.
As problems are discovered and resolved, the Support knowledge base is updated.
By searching the knowledge base, you can quickly find workarounds or solutions
to problems as well as other documents such as downloads and detailed system
requirements.
Support lifecycle
If you are using an older version of Guardium software, plan ahead to give
yourself time to upgrade. You can find information about end-of-support
dates for IBM products at http://www.ibm.com/software/support/lifecycle/.
Datasources
A Guardium datasource identifies a specific database instance. Access to
datasources may be restricted based on the roles assigned to the datasource and to
the applications that use it. For example, the Value Change Auditing application
requires a high level of administrative access that would not be appropriate for
other less privileged applications.
Collectors and Aggregators
Guardium collectors gather database activity, analyze it in real time, and log it for
further analysis and use in alerting. Guardium aggregators collect and merge
information from multiple Guardium collectors, as well as from other aggregators,
and produce holistic views of an entire environment. Collection and aggregation
processes allow Guardium to easily generate enterprise-level reports.
Central Management
A central management system controls and monitors an entire Guardium
environment, including all collectors and aggregators, from a single console. In this
configuration, one Guardium system is designated as a central manager that
monitors and controls other Guardium units referred to as managed units. While
some applications (Audit Processes, Queries, Portlets, etc.) can be run from either a
managed unit or from the central manager, application definitions are stored on
the central manager while data is provided by the local machine.
A higher-level aggregator can merge the contents of all aggregators into a single global view spanning all geographies.
Navigation
When you first log in to the Guardium user interface, there are two main menus -
the banner and the navigation menu.
You can expand and collapse the navigation menu by clicking the chevron icon, or you can hide the navigation menu completely by clicking the show/hide icon.
The initial layout of your screen is determined by the license applied, the access
allowed based on roles, the machine type and a visibility factor. Examples of roles
are user, admin, access manager, and CLI. Roles are assigned to users and
applications to grant users specific access privileges.
Guardium supports Internet Explorer 9 (IE9) and above on Windows 7. Make sure that your company website is not listed in the Compatibility View settings of Internet Explorer.
Banner Menu
v System time clock: The universal time on your Guardium system.
v To-Do List: Contains the Audit Process To-Do List, which can be filtered by user, and the Processes With No Pending Results.
v Help: Open the product help by clicking Help > Guardium Help.
The banner menu also contains important startup messages such as Low RAM
memory, Quick Search memory and CPU 4-cores minimum requirement, Certificate
expiration, Central Management failure, SSLv3 enabled or disabled, and No
License.
Navigation Menu
Each icon in the navigation menu represents one phase of the Guardium security
lifecycle. Click any icon to expand it and see the components within the phase. The
lifecycle-centric navigation menu is one way to navigate the user interface and is
consistent across roles. Menu items may be customized and may or may not
appear based on your role.
v Setup: Configure your network settings, check the status of your services, and set up datasource definitions, groups, aliases, and alerts.
v Manage: Manage your environment's overall health, S-TAPs, data, modules, maintenance, and reports.
v Discover: Automatically discover new databases that are introduced to your environment, and find and classify sensitive data.
v Harden: Assess your environment's current weaknesses with Vulnerability Assessment and monitor changes made to your environment with Configuration Auditing System (CAS).
v Investigate: Monitor database activities and investigate suspicious activity in any part of your environment.
Many of the finder and builder applications in Guardium share the following set of icons:
v New: Create a new item, such as a group or datasource definition.
System View
The System View is the default initial view for many users. It enables you to see
key elements of system status.
Three tabs under the System View display different types of status information:
v The S-TAP Status Monitor displays summary data about S-TAPs that are
deployed in your environment. Icons represent the high-level status, and you
can drill down to view information about inspection engines.
v The Unit Utilization tab displays information about the usage of each Guardium
system.
v The System Monitor tab displays up-to-date details about incoming data, CPU
usage, and other information.
Policies
Each rule in a policy defines a conditional action. The condition can be a simple
test, for example a check for any access from a client IP address not found in an
Authorized Client IPs group, or the condition can be a complex test that evaluates
multiple message and session attributes such as database user, source program,
command type, time of day, etc. Rules can also be sensitive to the number of times
a condition is met within a specified timeframe.
The action triggered by the rule can be a notification action (e-mail to one or more
recipients, for example), a blocking action (the client session might be
disconnected), or the event might simply be logged as a policy violation. Custom
actions can be developed to perform any tasks necessary for conditions that may
be unique to a given environment or application.
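The rule structure described above (a condition, an action, and an optional match-count threshold) can be sketched as a simple data model. The names and classes below are purely illustrative, not Guardium's internal API.

```python
from dataclasses import dataclass

# Illustrative sketch of a policy rule: a condition over message/session
# attributes, an action to trigger, and a threshold (fire only after N
# matches). Names are hypothetical; Guardium's internal model differs.

@dataclass
class Rule:
    name: str
    condition: callable          # evaluates event attributes, returns bool
    action: callable             # e.g., notify, block, or log a violation
    threshold: int = 1           # number of matches required to trigger
    matches: int = 0

    def evaluate(self, event: dict) -> bool:
        """Apply the condition; trigger the action once the threshold is met."""
        if self.condition(event):
            self.matches += 1
            if self.matches >= self.threshold:
                self.action(event)
                return True
        return False

# Example: flag access from a client IP not in the authorized group.
authorized_ips = {"10.0.0.5", "10.0.0.6"}
violations = []

rule = Rule(
    name="Unauthorized client IP",
    condition=lambda e: e["client_ip"] not in authorized_ips,
    action=lambda e: violations.append(e["client_ip"]),
)

rule.evaluate({"client_ip": "192.168.1.99", "db_user": "joe"})
print(violations)  # the unauthorized IP is recorded as a policy violation
```

A real policy would evaluate many such rules per logged event, which is why the threshold and time-window behavior mentioned above matters for noisy conditions.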
Workflows
Workflows consolidate several database activity monitoring tasks, including asset
discovery, vulnerability assessment and hardening, database activity monitoring
and audit reporting, report distribution, sign-off by key stakeholders, and
escalations.
Auditing
Guardium provides value change auditing features for tracking changes to values
in database tables.
For each table in which changes are to be tracked, you can select which SQL
value-change commands to monitor (insert, update, delete). Before and after values
are captured each time a value-change command is executed against a monitored
table. This change activity is uploaded to Guardium on a scheduled basis, after
which all of Guardium's reporting and alerting functions can be used.
You can view value-change data from the default Values Changed report, or you
can create custom reports using the Value Change Tracking domain.
Classification
A classification policy is a set of rules designed to discover and tag sensitive data
elements. Actions can be defined for each rule in a classification policy, for
example to generate an email alert or to add a member to a Guardium group, and
classification policies can be scheduled to run against specified datasources or as
tasks in a workflow.
Queries
Guardium queries describe a set of information obtained from the collected data.
Queries are composed of three elements: entities, fields, and conditions. Entities
define the scope of a query, fields list the columns of data to be returned by the
query, and conditions define tests to match against the data (greater than, less than,
contains, etc.).
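The relationship between entities, fields, and conditions is loosely analogous to the parts of an SQL statement. The sketch below is purely illustrative; it is not Guardium's actual query engine or API.

```python
# Hypothetical sketch: a Guardium-style query as entity (scope),
# fields (columns), and conditions (tests), rendered as SQL-like text.

def render_query(entity, fields, conditions):
    """Render the three query elements as an SQL-like string."""
    where = " AND ".join(f"{attr} {op} {val!r}" for attr, op, val in conditions)
    sql = f"SELECT {', '.join(fields)} FROM {entity}"
    return f"{sql} WHERE {where}" if where else sql

q = render_query(
    entity="Session",
    fields=["db_user", "client_ip"],
    conditions=[("server_type", "=", "ORACLE")],
)
print(q)  # SELECT db_user, client_ip FROM Session WHERE server_type = 'ORACLE'
```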
Reports
A report defines how the data collected by a query is presented. The default report
is a tabular report that reflects the structure of the query, with each attribute
displayed in a separate column. All runtime parameters and presentation
components of a tabular report can be customized.
Access Control
Guardium provides access maps as a way to conveniently show data access
between database clients and database servers.
User Roles
A role defines a group of Guardium users who share the same access privileges.
Groups
Guardium supports the grouping of elements to simplify creating and managing
policies and to clarify the presentation of reports.
Grouping can simplify the process of creating policy and query definitions. It is
often useful to group elements of the same type, and grouping can make the
presentation of information on reports more straightforward. Groups are used by
all subsystems, and all users share a single set of groups.
For an example of grouping, assume that your company has 25 separate data
objects containing sensitive employee information, and you need to report on all
access to these items. You could formulate a very long query testing for each of the
25 items. Alternatively, you could define a single group called sensitive employee
info containing those 25 objects. That way, in queries or policy rule definitions,
you only need to test if an object is a member of that group.
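The trade-off described above can be sketched in a few lines of Python; the group name and object names here are illustrative placeholders.

```python
# Without a group, a query needs a long chain of OR tests, one per
# sensitive object. With a group, one membership test covers all of them.

sensitive_employee_info = {"EMP_SALARY", "EMP_SSN", "EMP_BANK"}  # ... up to 25 objects

def is_sensitive(object_name: str) -> bool:
    """One group-membership condition replaces 25 individual tests."""
    return object_name in sensitive_employee_info

print(is_sensitive("EMP_SSN"))   # True
print(is_sensitive("ORDERS"))    # False
```

Because all subsystems share the same groups, updating the group's membership updates every query and policy rule that references it.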
There are two archive operations available from the Guardium Administration
Console: Data Archive and Results Archive. The path to these archive operations is
Setup > Tools and Views > Data Management.
Data Archive
With Data Archive, data is typically archived at the end of the day on
which it is captured, which ensures that in the event of a catastrophe, only
the data of that day is lost. The purging of data depends on the application
In an aggregation environment, data can be archived from the collector, from the
aggregator, or from both locations. Most commonly, the data is archived only once,
and the location from where it is archived varies depending on the customer's
requirements.
Guardium Installation Manager (GIM)
The GIM component includes a GIM server, which is installed as part of the
Guardium system, and a GIM client, which must be installed on the servers that
host databases you want to monitor. After installation, the GIM client works with
the GIM server to perform the following tasks:
v Check for updates to installed software
v Transfer and install new software
v Uninstall software
v Update software parameters
A common scenario involves the discovery of sensitive data. Sensitive data refers
to regulated information like credit card numbers, personal financial data, social
security numbers, and other information that requires special handling. Guardium
supports two different approaches for discovering sensitive data: by using the
Discover Sensitive Data workflow builder, or by using the Policy Builder with
other Guardium tools. The Discover Sensitive Data workflow builder is intended
as an all-inclusive tool for establishing discovery and classification processes for
sensitive data. Use it to specify rules for discovery, define actions to take on
discovered data, specify which data sources to scan, distribute reports, and run the
workflow on an automated schedule. For more advanced users, the Policy Builder
supports more granular discovery and classification rules that can be easily
incorporated into existing processes and Guardium applications.
Datasources
Datasources store information about your database or repository such as the type
of database, the location of the repository, or credentials that might be associated
with it. You must define a datasource in order to use it with Guardium
applications.
Creating a datasource
Procedure
v Open the Datasource Builder by navigating to Setup > Datasource Definitions.
v The first screen in the Datasource Builder is the Application Selection menu,
which lists all applications with which you can use the datasource definition.
Choose an application, and click Next.
v The Datasource Finder shows existing datasource definitions created for the
application you selected. Click New to add a datasource definition for the
selected application.
v Use the Datasource Definition dialog to provide information about the
datasource to be stored for future use. Depending on the application that you
select, and the type of datasource you use, the dialog varies slightly.
1. Enter a unique name for the datasource. Include both the database type and
server name in the datasource name to prevent future confusion between
datasources.
2. From the Database Type menu, select the database or type of file. For some
applications, the datasource must be a database, and cannot be a text file.
Depending on the type of database you select, some fields on the panel are
disabled, or the labels change.
3. Select a Severity Classification (or impact level) for the datasource. Severity
classification can be used to sort, filter, or focus datasources while you are
viewing reports and results.
4. Select Share Datasource to share the datasource definition across all
applications. If you do not share the datasource, the definition you create
can be used only with the application you chose.
5. Select Save Password to save and encrypt your authentication credentials
on the Guardium appliance. Save Password is required if you are defining a
datasource with an application that runs as a scheduled task (as opposed to
on demand). When Save Password is selected, login name and password are
required.
6. Enter your credentials for Login Name and Password.
7. For the Host Name/IP field, enter the host name or IP address for the
datasource.
8. Use the following table to complete the Port field based on your datasource type.
Datasource type and port number:
v Aster Data: 2046
v DB2: 50000
v DB2 for i: 446
v DB2 for z/OS: 446
v Hadoop: 21000-21050
v Informix: 1526
v MS SQL Server (Dynamic ports) and MS SQL Server (DataDirect - Dynamic ports): The port number is grayed out. Use of this datasource allows a client without a defined port value, or where the dynamic function is enabled on the MS SQL Server database server, to connect dynamically to an MS SQL Server database. To define a dynamic port, go to the database server for MS SQL Server, define 0 for the Dynamic port type, and remove TCP/IP, which by default is port 1433. Setting the Dynamic port value to 0 and restarting the services sets a dynamic port.
v MS SQL Server (DataDirect): 1433
v MongoDB: 27017
9. Depending on the datasource type, the dialog varies slightly for the fields
after port.
– If DB2, enter the database name.
Chapter 3. Discover 15
– If DB2 iSeries or Oracle, enter the service name.
– If Informix, enter the Informix server name.
– For a non-text Database Type, in the Database box, enter the database
name (Informix, Sybase, MS SQL Server, PostgreSQL, or Teradata only).
If the box is left blank for Sybase or MS SQL Server, the default is
master. This works for Entitlement Reports and Classifier; for VA, use
the database instance name.
– For DB2, DB2 iSeries, or Oracle, enter a valid schema name in the Schema
box.
– For a text file Database Type, in the File Name box, enter the file name.
10. Use the Connection Property box only if additional connection properties
must be included on the JDBC URL to establish a JDBC connection with
this datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.
– For a Sybase database with a default character set of Roman8, enter the
following property: charSet=utf8.
– For an Oracle Encrypted Connection, you need to define a Connection
Property such as:
oracle.net.encryption_client=REQUIRED;oracle.net.encryption_types_client=RC4_40
(replace RC4_40 with the encryption algorithm required by the monitored
instance, regardless of its type).
– Note that 3DES168 encryption is problematic. A datasource defined to
use 3DES168 encryption incorrectly throws an ORA-17401 protocol
error or ORA-17002 checksum error when it encounters any SQL error.
Thereafter, the connection does not work until it is closed and
reopened.
– For a DB2 Encrypted Connection you need to define a Connection
Property as: securityMechanism=13
– For a DB2 iSeries Connection, define a Connection Property as:
property1=com.ibm.as400.access.AS400JDBCDriver;translate binary=true
– In Oracle, sys is a default user, the owner of the database instance, with
super user privileges, much like root on Unix. SYSDBA is a role with
administrative privileges that are required to perform many high-level
administrative operations, such as starting and stopping the database, as
well as backup and recovery. The SYSDBA role can also be granted to
other users. The phrase sys as SYSDBA refers to the connection method
required to connect as the sys user.
– To monitor values for Oracle 10 (sys as SYSDBA) with the Oracle open
source driver, enter the following: internal_logon=sysdba
– For DataDirect (Oracle driver), enter the following: SysLoginRole=sysdba
– In addition, if using CRYPTO_CHECKSUM_TYPES in your sqlnet.ora,
use the following examples:
oracle.net.encryption_client=aes256;oracle.net.crypto_checksum_types_client=SHA1
oracle.net.encryption_client=rc4_256;oracle.net.crypto_checksum_types_client=MD5
oracle.net.encryption_client=aes256;oracle.net.crypto_checksum_types_client=MD5
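The Connection Property format described above (property=value pairs separated by commas) can be illustrated with a short parser. This is only a sketch of the stated syntax, not Guardium code; the example property string is taken from the text above.

```python
# Parse a Connection Property string in the documented format:
# "property=value" pairs, with each pair separated by a comma.

def parse_connection_properties(s: str) -> dict:
    props = {}
    for pair in s.split(","):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        props[key.strip()] = value.strip()
    return props

example = "charSet=utf8,oracle.net.encryption_client=REQUIRED"
print(parse_connection_properties(example))
# {'charSet': 'utf8', 'oracle.net.encryption_client': 'REQUIRED'}
```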
Note: To search multiple directories, you can define multiple file paths
for Database Instance Directory. Refer to the MongoDB row for an
example.
Table 1. Database Instances

v Db2: Database instance account is often db2inst1. Database instance directory: the home directory of db2inst1, or C:\Program Files\IBM\SQLLIB on Windows.
v Informix: Database instance account is often informix. Database instance directory: something like /opt/IBM/informix on Unix, or C:\Program Files\IBM\Informix on Windows. An environment variable INFORMIXDIR may be defined.
v MongoDB: Database instance account is often mongodb or mongos. With MongoDB, you must specify multiple paths for the database instance directory. Indicate a separate path by using a pipe "|" with spaces. You do not need to define all the listed paths; whichever paths are not defined are not analyzed.
v Oracle: Database instance account is often oracle, or version specific such as oracle9 or oracle10. Database instance directory: for example, /home/oracle9 on Unix, or C:\oracle\product\10.2.0\db_1 on Windows. An environment variable ORACLE_HOME may be defined.
v MSSQL2005, MSSQL2008: Database instance directory is the Microsoft SQL Server directory under Program Files.
c. Optionally click Roles to assign roles for the datasource.
d. Optionally click Add Comments to add comments to the definition.
e. Optionally click Test Connection to test connectivity of the defined
datasource.
f. Click Done when you are finished with the definition.
Procedure
v Open the Datasource Builder by navigating to Setup > Datasource Definitions.
v The Application Selection menu lists all applications with which you can use a
datasource definition. Choose the application for which the datasource you want
to modify was created, and click Next, bringing you to the Datasource Finder.
Cloning a datasource
Procedure
v Select the datasource that you want to clone from the Datasource Finder, and
click Clone.
v The information that you entered when the datasource definition was created
appears in the Datasource Definition dialog, with "copy Of" appearing before the
original name of the datasource. Change whatever fields you like.
v Click Apply to save the cloned datasource.
Modifying a datasource
Procedure
v Select the datasource that you want to modify from the Datasource Finder, and
click Modify.
v The information that you entered when the datasource definition was created
appears in the Datasource Definition dialog. Change whatever fields you like.
v Click Apply to save the changes that you made to the datasource.
Removing a datasource
Procedure
Select the datasource that you want to remove from the Datasource Finder, and
click Delete.
Reporting on datasources
Guardium provides reports on the datasources that are in your environment and
any changes made to them.
Procedure
v Open the Datasources report by navigating to Reports > Report Configuration
Tools > Datasources. The table that appears lists all datasources, and the
information that is stored in each datasource definition.
v Right-click any cell in the table and you are given two options: Datasource
Version History, and Invoke.
Note: You can customize the run time and presentation parameters of the
Datasources report by clicking the pencil icon.
Related concepts:
GuardAPI datasource functions (CLI/API reference)
You must enter the hostname, port, and service name as well as the custom URL.
Procedure
1. Determine the Oracle service name. You can use SQL*Plus commands like these:
SQL> set linesize 5000;
SQL> select host_name, instance_name from v$instance;
SQL> select name from v$database;
SQL> show parameter service
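For reference, a custom URL for the standard Oracle thin JDBC driver combines the host, port, and service name gathered above. The host and service name in this sketch are placeholders, not values from this document.

```python
# Build a standard Oracle JDBC thin-driver URL from host, port, and
# service name. The values shown are hypothetical placeholders.

def oracle_jdbc_url(host: str, port: int, service_name: str) -> str:
    return f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"

print(oracle_jdbc_url("dbhost.example.com", 1521, "ORCLPDB1"))
# jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1
```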
Database Auto-discovery
The Auto-Discovery application scans and probes your servers for open ports to
prevent unknown or unwanted connections to your network. You can run
auto-discovery processes on demand, or schedule the processes on a periodic basis.
Auto-discovery uses scan and probe jobs to ensure that no database goes
undetected in your environment.
v A scan job scans each specified host (or hosts in a specified subnet), and
compiles a list of open ports that are specified for that host.
v A probe job uses the results of the scan to determine whether there are database
services that are running on the open ports. A probe job cannot be completed
without first running a scan. View the results of this job in the Databases
Discovered predefined report.
Before you begin, you must download and install the patch for the Auto-discovery
application. The patch is available at IBM Fix Central.
Auto-discovery has its own processes that are independent of audit processes, but
they work in exactly the same way as audit processes.
When defining a scan, you can only enter IP addresses, not host names, but
Guardium does detect host names and includes them in the report. Guardium does
not truncate host names; however, it may be necessary to configure the report to
have wider columns.
Note: Discovery only finds running databases. Databases will need to be started if
discovery is to be used during the installation. Due to how the AIX KTAP
interception works, the databases need to be restarted after the first time S-TAP
runs. If the databases are not restarted, some interception will not work.
Note:
v Wildcard characters are enabled. For example: to select all addresses
beginning with 192.168.2, use 192.168.2.*.
v Specify a range of ports by putting a dash between the first and last port
numbers in the range. For example: 4100-4102.
v After you add a scan, modify the host or port by typing over it. Click Apply
to save the modification.
v If you have a dual stack configuration, you will need to set up a scan for
both the IPV4 and the IPV6 addresses.
v To remove a scan, click the Delete this task icon for the scan. If a task has
scan results dependent upon it, the scan cannot be deleted.
6. When finished adding scans, click Apply, and run the job or schedule the job
in the future.
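The wildcard and port-range syntax described in the notes above can be sketched as a small expansion helper. This only illustrates the documented syntax; it is not the product's scan implementation.

```python
# Expand a wildcard address like "192.168.2.*" and a port range like
# "4100-4102" into concrete scan targets (illustrative sketch only).

def expand_hosts(pattern: str):
    if pattern.endswith(".*"):
        prefix = pattern[:-2]
        return [f"{prefix}.{i}" for i in range(1, 255)]
    return [pattern]

def expand_ports(spec: str):
    if "-" in spec:
        lo, hi = spec.split("-")
        return list(range(int(lo), int(hi) + 1))
    return [int(spec)]

print(expand_ports("4100-4102"))        # [4100, 4101, 4102]
print(expand_hosts("192.168.2.*")[:2])  # ['192.168.2.1', '192.168.2.2']
```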
Run or schedule scan and probe jobs as part of the Auto-discovery process.
1. Click Discover > Database Discovery > Auto-discovery Configuration.
2. Select the process to be run from the Auto-discover Process Selector list and do
one of the following:
v To run a job immediately, click Run Once Now.
v To schedule a job to run in the future, click Modify Schedule.
Note: A probe job cannot run without the results of the scan job. You can
schedule the two jobs to run individually, or you can configure the probe job to
run after the scan job by modifying a process and checking the Run probe
after scan check box.
3. After you start or schedule a job, you can click Progress Summary to display
the status of this process.
Auto-discovery Reports
Open the Auto-discovery reports by clicking Discover > Reports and selecting
from the available reports.
You can create custom reports with the Auto-discovery Query Builder. Open the
Auto-discovery Query Builder by clicking Discover > Database Discovery >
Auto-discovery Query Builder.
Databases Discovered Report
Open the Databases Discovered report by clicking Discover > Reports > Databases
Discovered.
The main entity for this report is the Discovered Port. Each individual port that is
discovered has its own row in the report. The columns that are listed are: Time
Probed, Server IP address, Server Host Name, DB Type, Port, Port Type (usually
TCP), and a count of occurrences.
There are no special runtime parameters for this report, but it excludes any
discovered ports with a database type of Unknown.
When an auto-discovery process definition changes, the statistics for that process
are reset.
Classification
Classification policies and processes define how Guardium discovers and treats
sensitive data such as credit card numbers, social security numbers, and personal
financial data.
Classification rules use regular expressions, Luhn algorithms, and other criteria to
define rules for matching content when applying a classification policy.
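For example, the Luhn check used to validate candidate credit card numbers works as follows. This is the standard public algorithm, not Guardium-specific code, and the sample numbers are synthetic test values.

```python
# Luhn checksum: double every second digit from the right, subtract 9
# from any doubled value greater than 9, and require the total to be
# divisible by 10.

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539578763621486"))  # True for a valid test number
print(luhn_valid("4539578763621487"))  # False
```

A classification rule would typically combine a regular expression (to find 16-digit candidates) with a check like this (to discard strings that merely look like card numbers).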
When the classifier runs, you have the option of specifying how it samples records.
The default behavior takes a random sampling of rows using an appropriate
statement for the database platform in question. For example, the classifier samples
using a rand() statement for SQL databases. The alternative behavior is sequential
sampling, which reads rows, in order, up to the specified sample size. Random
sampling is the default behavior and is generally recommended because it
provides more representative results. However, random sampling may incur a
slight performance penalty when compared to sequential sampling. For both
random and sequential sampling, the default sample size is 2000 rows or the total
number of available rows, whichever is fewer. Larger or smaller sample sizes may
be specified.
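The sampling behavior described above can be sketched as follows. The rows and sample size here are illustrative; the real classifier samples inside the database (for example with rand()) rather than in application code.

```python
import random

DEFAULT_SAMPLE_SIZE = 2000

def sample_rows(rows, sample_size=DEFAULT_SAMPLE_SIZE, sequential=False):
    """Return sample_size rows, or all rows if fewer are available.

    Sequential sampling reads rows in order; random sampling (the
    default) draws a more representative subset, as the text above notes.
    """
    n = min(sample_size, len(rows))
    if sequential:
        return rows[:n]
    return random.sample(rows, n)

rows = list(range(10))
print(sample_rows(rows, sample_size=3, sequential=True))   # [0, 1, 2]
print(len(sample_rows(rows, sample_size=3)))               # 3
```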
The classifier periodically throttles itself to idle so it does not overwhelm the
database server with requests. If many classification rules are sampling data, the
load on the database server should remain constant but the process may take
additional time to run.
The classifier handles false positives by using excluded groups for schema, table
and table columns. Previously, it could be a complex process to set up Guardium
to ignore false positive results for future classification scans. Now, when you
review classifier results, you can easily add false positive results to an exclusion
group, and add that group to the classification policy to ensure those results are
ignored in future scans.
When rules are grouped with the Fire only with Marker option, all rules in the
group must match before any of them are logged or have their actions invoked.
Being able to have multiple rules fire together becomes important when you care
about sensitive data appearing together within the same table. For example, you
may want to know when a table has both a social security number and a
Massachusetts driver's license number.
The Fire only with Marker is a constant value. It can be given any name, but it
must have exactly the same value across all rules that you want to group. This
means that if one rule has a marker of ABC, then any other rule that you want
to group with it must also have a marker named ABC. With any other marker
value, the rules are no longer grouped.
Grouping requires at least two rules, each searching for data within the same
table.
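The grouping semantics above can be sketched as follows. This is a hypothetical model, not the product's implementation: rules sharing a marker value form a group, and the group fires only if it contains at least two rules and every one of them matched.

```python
from collections import defaultdict

# Hypothetical sketch of "Fire only with Marker" grouping for one table.
# rule_results: list of (marker, matched) tuples; rules without a marker
# are not grouped.

def fire_marker_groups(rule_results):
    """Return the set of marker values whose groups fire."""
    groups = defaultdict(list)
    for marker, matched in rule_results:
        if marker:
            groups[marker].append(matched)
    # A group fires only if it has at least two rules and all matched.
    return {m for m, results in groups.items()
            if len(results) >= 2 and all(results)}

# Two rules share marker "ABC" and both matched, so the group fires;
# "XYZ" has only one rule, so it cannot form a group.
print(fire_marker_groups([("ABC", True), ("ABC", True), ("XYZ", True)]))
# {'ABC'}
```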
Continue on Match
The Fire only with Marker also depends on Continue on Match. For example, if
rules were defined such that Rule 3 does not have Continue on match selected,
then no results are returned even if all three marker rules matched. Rule 4
never runs, so the grouping cannot fire, because every Fire only with Marker
rule must execute and return positive results.
Use this option to reduce the granularity of data results. Some organizations
may want to survey their data to discover which tables and columns have
sensitive data without necessarily needing to find every type of sensitive data
in each column. The With Unmatched Columns only option for Continue on match
means that as soon as the classifier finds a match for a column, it ignores
that column as it continues its processing.
Table 2. Summary of available classifier processing options

Continue on   With Unmatched
match         Columns only     Granularity of Result
No            N/A              Table. Classifier will stop processing rules
                               after the first hit in the table.
Yes           Yes              Table and column. Classifier will record the
                               first hit for any given column and ignore it
                               thereafter for subsequent rules.
Yes           No               Detailed. Classifier will record hits for all
                               columns for all rules.
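The three modes in Table 2 can be sketched as a small function. The rule and column structures here are hypothetical illustrations of the documented behavior, not the product's implementation.

```python
# Hypothetical sketch of the three processing modes in Table 2. Rules are
# (name, predicate) pairs; rows_by_column maps column names to sampled values.

def classify(rows_by_column, rules, continue_on_match, unmatched_columns_only):
    """Return (rule, column) hits according to the two mode flags."""
    hits = []
    matched_columns = set()
    for rule_name, predicate in rules:
        for column, values in rows_by_column.items():
            if unmatched_columns_only and column in matched_columns:
                continue  # Yes/Yes: first hit per column wins
            if any(predicate(v) for v in values):
                hits.append((rule_name, column))
                matched_columns.add(column)
                if not continue_on_match:
                    return hits  # No: stop after first hit in the table
    return hits

rules = [("contains-3", lambda v: "3" in v),
         ("any-digit", lambda v: any(c.isdigit() for c in v))]
data = {"ssn": ["123"]}

print(classify(data, rules, False, False))  # [('contains-3', 'ssn')]
print(classify(data, rules, True, True))    # [('contains-3', 'ssn')]
print(classify(data, rules, True, False))   # [('contains-3', 'ssn'), ('any-digit', 'ssn')]
```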
v As a task within a Compliance Workflow Automation Process, described elsewhere.
v As part of a Discover Sensitive Data Workflow, described elsewhere.
Procedure
1. From the Classification Process Builder, select the process to run, and click
Modify to open the Classification Process Builder.
2. Click the Run Once Now button to submit the job. This places the process on
the Guardium Job Queue, from which the Guardium system runs a single job
at a time. You can view the job status using the Guardium Job Queue.
3. Click the Done button when you are finished.
The Guardium Job Queue is available from the administrator portal only.
Procedure
To view the report, open the Guardium Job Queue by navigating to Discover >
Classifications > Guardium Job Queue.
v Define a Search for Unstructured Data Rule - Match specific values or
patterns in an unstructured data file (CSV, Text, HTTP, HTTPS, Samba)
6. Click the New Action button to add an action to be taken when this rule is
matched. See Add a Classification Rule Action.
7. Click Accept to add the rule to the policy.
A catalog search rule searches the database catalog for table and/or column names
matching specified patterns. Wildcards are allowed: % for zero to any number of
characters, or _ (underscore) for a single character.
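These are standard SQL LIKE wildcards. As a sketch, a pattern such as CREDIT% can be converted to an anchored regular expression for local testing; the helper below is illustrative and not part of Guardium.

```python
import re

# Convert a SQL LIKE-style wildcard pattern (as used in catalog search
# rules) to an anchored, case-insensitive regular expression:
# % matches any run of characters, _ matches exactly one.

def like_to_regex(pattern):
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

matcher = like_to_regex("CREDIT%")
print(bool(matcher.match("CREDIT_CARD")))  # True
print(bool(matcher.match("DEBIT_CARD")))   # False
```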
Procedure
1. In the Table Type row, mark at least one type of table to be searched: Synonym,
Table, or View. (Table is selected by default.)
2. Optionally enter a specific name or a wildcard based pattern in the Table Name
Like box. If omitted, all table names will be selected.
3. Optionally enter a specific name or a wildcard based pattern in the Column
Name Like box. If omitted, all column names will be selected.
4. Click the Accept button when you are done.
A search for data rule searches one or more columns for specific data values.
Wildcards are allowed: % for zero to any number of characters, or _ (underscore)
for a single character. For example, the Rule Type is Search for Data, the Table
Type is Table, and the Table Name Like is CREDIT%.
Procedure
1. In the Table Type row, mark at least one type of table to be searched:
Synonym, Table, or View. (Table is selected by default.)
2. In the Table Name Like row, optionally enter a specific name or a wildcard
based pattern. If omitted, all table names will be selected.
3. In the Data Type row, select one or more data types to search.
4. In the Column Name Like row, optionally enter a specific name or wildcard
pattern. If omitted, all column names will be selected.
5. Optionally enter a Minimum Length. If omitted, no limit.
6. Optionally enter a Maximum Length. If omitted, no limit.
7. In the Search Like field, optionally enter a specific value or a wildcard based
pattern. If omitted, all values will be selected.
8. In the Search Expression field, optionally enter a regular expression to define
a pattern to be matched. To test a regular expression, click the (Regex) button
to open the Build Regular Expression panel in a separate window.
9. In the Evaluation Name field, optionally enter a fully qualified Java™ class name
that has been created and uploaded. The Java class will then be used to fire
and evaluate the string. There is no validation that the class name entered was
loaded and conforms to the interface. See Custom Evaluation and Manage
Custom Classes for more information on creation and uploading of Java class
files.
10. Optionally enter a Fire only with Marker name. See Fire only with Marker.
Procedure
1. In the Search Like box, optionally enter a specific value or a wildcard based
pattern. If omitted, all values will be selected.
2. In the Search Expression box, optionally enter a regular expression to define a
pattern to be matched. To test a regular expression, click the icon to open the
Build Regular Expression panel in a separate window.
3. Optionally enter a marker name.
4. Optionally enter a Description.
5. Select an Action Type from the list. Depending on the action selected, a
different set of fields will appear on the panel.
v For the Ignore and Log Result actions, no additional information is needed.
– Ignore - Do not log the match, and take no additional actions.
– Log Result - Log the match, and take no additional actions.
v For all other actions, additional fields will appear on the panel, and you will
have to enter additional information.
– Add To Group Of Object-Fields Action
– Add To Group Of Objects Action
– Create Access Rule Action
– Create Privacy Set Action
– Log Policy Violation Action
– Send Alert Action
6. After actions have been added to the Classification Rule panel, the controls in
the table can be used to modify the actions defined.
7. Click Accept when you are done working with the rule definition.
Each time the classification rule is matched, a member will be added to the
selected Object-Field group on the Guardium system. You have the option of
replacing all members, or adding new members.
For a database file, the object component of the member will be the database table
name, and the field component will be the column name.
For an unstructured data file, the object component of the member will be the file
name (in quotes), and the field component will be the column name, but if column
names cannot be determined, the columns will be named column1, column2, etc.
Procedure
1. Do one of the following:
v Select an Object-Field Group from the list, or
v Click the Groups button, define a new group using the Group Builder, and
then select that group from the list.
2. Optionally mark the Replace Group Content box to completely replace the
membership of the selected group with members returned by this rule. By
default, this box is not marked, which means that new members will be added
to the group, but no members will be deleted. For a job that is run on demand,
this box is ignored, and you are given the opportunity to add or replace
members on the view results panel.
3. Click the Save button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.
Each time the classification rule is matched, a member will be added to the
selected Object group on the Guardium system.
You have the option of replacing all entries, or only adding new entries.
Procedure
1. Do one of the following:
v Select an Object Group from the list, or
v Click the Groups button, define a new group using the Group Builder, and
then select that group from the list.
Note: To use aliases with groups generated from Classifier, open the Group
Builder, select the Object group generated by Classifier, and then click
Modify. Click the Aliases button to change the name of the Object Group.
2. Optionally mark the Replace Group Content box to completely replace the
membership of the selected group with members returned by this rule. By
default, this box is not marked, which means that new members will be added
to the group, but no members will be deleted. For a job that is run on demand,
this box is ignored, and you are given the opportunity to add or replace
members on the view results panel.
3. From the Actual Member Content list, select the naming convention to use
when adding the member to the group, where 'Full' is the schema.tablename
and 'Name' is the tablename.
4. Click Save to add the action to the rule definition, close the Action panel, and
return to the rule definition panel.
Each time the classification rule is matched, an access rule will be inserted into an
existing security policy definition. The updated security policy will not be installed
(that task is performed separately, usually by a Guardium administrator).
Procedure
1. Select an Access Policy from the list. You must be authorized to access that
policy.
2. Enter a rule name in the Rule Description box.
3. Select an action from the Access Rule Action list.
4. Optionally select a Commands Group, or click the Groups button, define a new
Commands group using the Group Builder, and then select that Commands
group from the list.
5. To log field values separately, mark the Include Field checkbox. Otherwise, only
the table will be recorded (the default).
6. To include the server IP address, check the Include Server IP checkbox.
7. If you have selected an alerting action, a Receiver row appears on the panel,
and you must add at least one receiver for the alert. Click Modify Receivers to
add one or more receivers.
8. Click Accept to add the action to the rule definition, close the Action panel,
and return to the rule definition panel.
Create Privacy Set Action
About this task
Each time the classification rule is matched, the selected privacy set's object-field
list will be replaced.
For a database file, the object component of the privacy set will be the database
table name, and the field component will be the column name.
For an unstructured data file, the object component of the privacy set will be the
file name (in quotes), and the field component will be the column name, but if
column names cannot be determined, the columns will be named column1,
column2, etc.
Procedure
1. Select the previously defined Privacy Set whose contents you want to replace.
2. Click the Accept button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.
Each time the classification rule is matched, a policy violation will be logged. This
means that classification policy violations will be logged (and can be reported)
together with access policy violations (and optionally correlation alerts) that may
have been produced.
Procedure
1. Select a Severity code from the list.
2. Click the Accept button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.
Procedure
1. Select a Notification Type code from the list.
2. Click the Modify Receivers button to add one or more receivers. The specified
receiver gets one email per datasource, per rule, per action. So, if a
datasource has three rules and each rule has two actions (that have at least
one match), then the user gets 2 * 3 = 6 emails.
3. Click the Accept button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.
Sensitive data discovery scenarios span three critical aspects of enterprise security:
v Discovery: locating the sensitive data that exists anywhere in your environment
v Protection: monitoring and alerting when sensitive data is accessed
v Compliance: creating audit trails for reviewing the results of sensitive data
discovery processes
The Discover Sensitive Data end-to-end scenario builder streamlines the processes
of discovery, protection, and compliance by integrating several Guardium tools
into a single user-friendly interface.
Table 3. Discover sensitive data tools map

Value     Scenario Task         Description                              Result
Discover  Name and Description  Provide a name and description for       Creates a classification process
                                the scenario and its related             and classification policy.
                                processes and policies.                  Optionally creates new
          What to discover      Create rules and rule actions for        datasource definitions.
                                discovering and classifying data.
          Where to search       Identify datasources to scan.
          Run discovery         Run the scenario, review the results,
                                and define ad hoc grouping and
                                alerting actions.
Protect   Review report                                                  Creates an access policy.
Comply    Audit                 Define recipients, a distribution        Creates an audit process.
                                sequence, and review options.
          Schedule              Create a schedule to run at defined
                                intervals.
This sequence of tasks guides you through the processes of creating a new
discovery scenario. This includes creating classification policies consisting of rules
and rule actions for discovering sensitive data, creating classification processes by
identifying datasources to scan for sensitive data, defining ad hoc policies (for
grouping and alerting, for example), and creating audit processes that distribute
results to different stakeholders at scheduled intervals.
While a discover sensitive data scenario creates underlying policies and processes
that can be accessed using other Guardium tools (for example the Classification
Policy Builder or through GuardAPI commands), there are no GuardAPI
commands for creating or modifying a discovery scenario.
What to do next
Continue to the next section and provide a Name and description for your
discovery and classification scenario.
The name provided for the discovery scenario will also be used to name
underlying policies and processes.
During this step, you may also specify security roles that can access the discovery
scenario.
What to do next
What to discover
Create policies consisting of rules and rule actions for discovering and classifying
sensitive data.
This task guides you through the processes of creating and editing classification
rules and rule actions for use in your discovery scenario.
Procedure
1. Open the What to discover section to define rules for discovering data.
2. Add rules to your discovery scenario by doing one of the following:
v Click the icon to create a new rule.
v Select rules from the Classification Rule Templates table and click the icon
to add predefined rules.
3. Define a new rule, or edit a rule template by selecting the template and clicking
the icon.
a. Provide a name and description while optionally specifying a special pattern
test at the beginning of the Name field. The rule name will also be used to
name the rule associated with the classification policy in the Classification
Policy Builder. If you require a special pattern test, it is recommended that
you work with its corresponding template (for example, use Bank Card -
Credit Card Number for credit card numbers).
b. Open the Rule Criteria section to define a regular expression and other
search criteria for the rule. If you are working with a rule template, an
appropriate regular expression is provided by default.
Attention: For rules created in the discover sensitive data scenario, the
default Data type includes both Number and Text.
c. Open the Actions section and define any rule actions that should be taken
when rule criteria match.
d. When defining multiple rule actions, you can optionally click the icon and
use the up and down icons to change the order in which the actions are executed.
e. Click Save when you are finished adding or editing rule definitions to
return to the What to discover section of the discovery scenario.
4. Optionally click the icon and use the up and down icons to change the order in which
rules are applied. Rule order is important as the default behavior stops rule
execution after the first match unless Continue on match is selected under
Rule criteria.
5. When you are finished working with rules, click Next to begin working on the
next section of the discovery scenario.
What to do next
Continue to the next section of the discovery scenario, Where to search.
Related concepts:
“Regular Expressions” on page 44
Regular expressions can be used to search traffic for complex patterns in the data.
Related reference:
“Actual Member Content” on page 40
Use the Actual Member Content field to define how objects are labeled by the
Add to Group of Objects rule action.
“Rule Criteria” on page 38
“Special pattern tests” on page 61
You can use these special pattern tests to identify sensitive data that is contained in
the traffic that flows between the database server and the client.
Rule Criteria
Table 4. Rule Criteria

Attribute              Description
Table type             Select one or more table types to search: Synonym,
                       Table, or View. Table is selected by default.
Data type              Select one or more data types to search: Number, Text,
                       or Date. Number and Text are selected by default.
Search expression      Optionally enter a regular expression to define a search
                       pattern to match. To test a regular expression, click
                       the RE button to open the regular expression editor.
Table name like        Optionally enter a specific name or wildcard pattern. If
                       omitted, all table names are selected.
Column name like       Optionally enter a specific name or wildcard pattern. If
                       omitted, all column names are selected.
Continue on match      If the next rule in the classification policy should be
                       evaluated after this rule is matched, mark the Continue
                       on Match checkbox. The default is to stop evaluating
                       rules once a rule is matched.
Search wildcard        Optionally enter a specific value or a wildcard pattern.
                       If omitted, all values are selected.
Minimum length         Optionally enter a minimum length. If omitted, there is
                       no limit.
Maximum length         Optionally enter a maximum length. If omitted, there is
                       no limit.
Evaluation name        Optionally enter a fully qualified Java class name that
                       has been created and uploaded. The Java class will then
                       be used to fire and evaluate the string.
                       Note: There is no validation that the class name entered
                       was loaded and conforms to the interface.
Fire only with Marker  The Fire only with Marker is a constant value. It can be
                       given any name, but it must have exactly the same value
                       across the rules you want grouped. This means that if
                       one rule has a marker of ABC, then any other rule that
                       you want to group with it must also have a marker named
                       ABC.
                       The Fire only with Marker also interacts with the
                       Continue on Match flag. For example, if rules were
                       defined such that Rule 3 does not have Continue on match
                       selected, then no results are returned even if all three
                       marker rules matched. Rule 4 never runs, so the grouping
                       cannot fire, because all Fire only with Marker rules
                       must execute with positive results.
Table 4. (continued)

Attribute                   Description
Compare to values in group  Optionally select a group. The selected group is
                            used as a set of values to search against the
                            selected tables and columns. As long as one of the
                            values within the group (either a public or a
                            classifier group) matches, the value rule returns
                            data.
Show unique values          Mark the Show Unique Values checkbox to add details
                            on what values matched the classification policy
                            rules to the comments field of the resulting
                            report.
Unique values mask          Use regular expressions in the Unique values mask
                            field to redact the unique values. For example,
                            mark the Show unique values checkbox and use
                            ([0-9]{2}-[0-9]{3})-[0-9]{4} in the Unique values
                            mask field to log the last four digits and redact
                            the prefix digits.
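As a sketch of the masking behavior, assuming the capture group marks the prefix to redact; the 12-345-6789 value format is illustrative, and this is not the product's implementation.

```python
import re

# Hypothetical sketch of the Unique values mask: the capture group marks
# the prefix digits to redact, leaving the last four digits logged.

MASK = re.compile(r"([0-9]{2}-[0-9]{3})-[0-9]{4}")

def redact(value):
    m = MASK.search(value)
    if not m:
        return value  # no match: value is left unchanged
    start, end = m.span(1)  # span of the captured prefix
    return value[:start] + "*" * (end - start) + value[end:]

print(redact("12-345-6789"))  # ******-6789
```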
If your rules return the table name JJ_CREDIT_CARD from the schema DB2INST1, and
you have specified an Add to Group of Objects action, the Actual Member
Content selections behave as follows:
v Selecting Fully Qualified Name adds DB2INST1.JJ_CREDIT_CARD to the selected
group.
v Selecting Object Name Only adds JJ_CREDIT_CARD to the selected group.
v Selecting Change/Full adds Change/DB2INST1.JJ_CREDIT_CARD to the selected
group.
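These selections can be sketched as a small helper, using the example names from the text. The function itself is illustrative, not a Guardium API.

```python
# Hypothetical sketch of the Actual Member Content naming conventions,
# using the DB2INST1.JJ_CREDIT_CARD example from the text.

def member_name(schema, table, convention):
    if convention == "Fully Qualified Name":
        return f"{schema}.{table}"
    if convention == "Object Name Only":
        return table
    if convention == "Change/Full":
        return f"Change/{schema}.{table}"
    raise ValueError(f"unknown convention: {convention}")

print(member_name("DB2INST1", "JJ_CREDIT_CARD", "Fully Qualified Name"))
# DB2INST1.JJ_CREDIT_CARD
```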
Where to search
Identify datasources to scan for sensitive data.
In this task, identify the datasources you would like to search for sensitive data.
Procedure
1. Open the Where to search section to identify the datasources you would like to
search for sensitive data.
2. Add datasources to your discovery scenario by doing one of the following:
v Click the icon to open the Create Datasource dialog and add a new
datasource definition.
v Select datasources from the Available Datasources table and click the icon
to add existing datasources.
3. Define a new datasource, or edit an existing datasource by selecting the
datasource and clicking the icon. New datasources defined through the
discovery scenario can also be viewed or edited through the Datasource
Definitions tool.
a. Provide or edit the name of the datasource.
b. Select the appropriate database type from the Database type menu and
provide the requested information to complete the datasource definition.
The available fields differ depending on the selected database type.
c. When you are finished editing the datasource definition, click Save to save
your work and optionally click Test Connection to verify the datasource
connection.
d. When you are finished working with the datasource definition, click Close
to close the dialog.
4. When you are finished adding datasources, click Next to begin working on the
next section of the discovery workflow.
Results
A classification process is created after adding datasources to your discovery scenario
and saving the scenario. To view or edit this process directly, use the Classification
Process Builder.
What to do next
After defining policies for discovering sensitive data and identifying datasources to
search, you can run the classification process and review the results. Running the
process and reviewing the results allows you to refine your policies, for example
specifying additional search criteria if you find the results too broad. It may be
necessary to go through several iterations of refining policies, running the process,
and assessing the results before achieving the desired results.
Procedure
1. Open the Run discovery section to test your discovery scenario.
2. Click Run Now to begin.
Attention:
v Depending on the policies you have specified and the number of datasources
you have selected to search, it may take several minutes or more to complete
the process of identifying sensitive data. The process status is indicated next
to the Run Now button, or you can monitor the process using the Guardium
Job Queue.
v You can also run the classification process by visiting the Classification
Process Builder, selecting your classification process, and clicking Run Once
Now.
3. When the discovery scenario has finished running, open the Review report
section to see the results.
4. While reviewing the results, you can define additional rules and actions based
on the results. Use the Filter to refine results (filtering is not supported with
more than 10,000 results).
a. Select the row(s) containing data you want to define actions against.
b. Click Add to Group to define a grouping action, or click Advanced Actions
to define an alerting action.
c. After completing the dialog to define a grouping or alerting action, click OK
to return to the results report.
Attention:
v Grouping and alerting actions added from the results table are considered
ad hoc actions that run only as invoked from the results table. These
actions will not appear in the What to discover > Edit rule > Actions
section of your discovery scenario, and they will not run automatically as
part of the discovery scenario or related classification processes.
v Use the Policy Builder to review, edit, and install alerting actions.
v Use the Group Builder to review and edit grouping actions.
5. When you are finished reviewing the results report, click Next to begin
working on the next section of the discovery scenario.
Results
After running the search for sensitive data, monitor its status next to the Run Now
button or using the Guardium Job Queue. You can use the Group Builder to
review any grouping actions, or the Policy Builder to review and install any
alerting actions.
What to do next
Audit
Optionally create an audit process by defining receivers, a distribution sequence,
and review options for the discovery and classification report.
You can define any number of receivers for the results of a discovery workflow,
and you can control the order in which they receive results. In addition, you can
specify process control options, such as whether a receiver needs to sign off on the
results before they are sent to the next receiver.
The audit process created by adding receivers to a discovery scenario inherits the
name of the scenario. For example, adding receivers to a discovery scenario named
"Find PCI" creates an audit process named "Find PCI Audit process" followed by a
date and time stamp.
Procedure
1. Open the Audit section to define receivers for discovery reports.
2. Add receivers to your discovery scenario by clicking the icon and defining
options for how the reports are delivered.
v If sending the report to Guardium users, roles, or groups, you will need to
define process control options.
v If sending the report to email recipients, provide their email address and
filter the report by a Guardium username that is appropriate for the email
recipient.
3. Click OK to add the receiver to the discovery workflow. Continue adding
additional receivers to the scenario if needed.
4. Optionally click the icon and use the up and down icons to change the order in
which reports are distributed to recipients. This is important when using
sequential distribution as it determines which receivers must review or sign the
report before it is sent to subsequent receivers.
5. When you are finished adding, editing, and ordering receivers, click Next to
begin working on the next section of the discovery workflow.
Results
An audit process is created after defining receivers and saving the discovery
scenario. To view, edit, or run this process directly, use the Audit Process Builder.
The audit process remains inactive until it is scheduled using the Schedule section
of the discovery scenario or using the Audit Process Builder. You can also run the
audit process by visiting the Audit Process Builder, selecting the audit process, and
clicking Run Once Now.
What to do next
Optionally, continue to the next section of the discovery workflow, Schedule.
Related concepts:
“Building audit processes” on page 195
Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and, escalations.
Scheduling
Optionally activate the audit process by scheduling it to run at defined intervals.
A schedule becomes part of an audit process along with any receivers specified in
the Audit section of the discovery scenario. Defining a schedule runs the audit
process at specified intervals and ensures that results from the associated
classification process are regularly distributed and reviewed.
Procedure
1. Open the Schedule section to define a schedule for discovering data.
2. Use the Schedule by menu to set daily or monthly intervals for the audit
process.
3. Use the Start schedule every and Repeat every check boxes to define how
many times per day and how many times within each hour to run the audit
process.
4. Use the Start date and time controls to define an explicit date and time for the
schedule to begin.
5. Clear the Activate schedule check box to deactivate the audit process while
retaining scheduling information for later use. The Activate schedule box is
checked by default, meaning that the audit process becomes active after saving
the schedule.
6. When you have defined a schedule, click Save to finish editing and close the
workflow editor.
Results
An audit process is created after defining a schedule and saving the discovery
scenario. To view or edit this audit process directly, use the Audit Process Builder.
Review the Scheduled Jobs report to see the status, start time, and next fire time
for scheduled audit tasks.
Related concepts:
“Building audit processes” on page 195
Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and, escalations.
Regular Expressions
Regular expressions can be used to search traffic for complex patterns in the data.
This help topic provides instructions for using the Build Regular Expression Tool,
and several tables of commonly used special characters and constructs. It does not
provide a comprehensive description of how regular expressions are constructed or
used. See the Open Group web site for more detailed information.
The important point to keep in mind about pattern matching or XML matching
using regular expressions is that the search for a match starts at the
beginning of a string and stops when the first sequence matching the expression
is found. The same or different regular expressions can be used for pattern
matching and XML matching at the same time.
Note: IBM Guardium does not support regular expressions for non-English
languages.
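The first-match behavior described above can be demonstrated with any general-purpose regular expression engine; the Python example below is purely illustrative and does not use the product's engine.

```python
import re

# The scan starts at the beginning of the string and stops at the first
# matching sequence: "Can" at index 2 wins, not the longer "Caaan" later.

pattern = re.compile(r"Ca+n")
m = pattern.search("xxCanCaaan")
print(m.group(), m.start())  # Can 2
```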
When an input field requires a regular expression, you can use the Build Regular
Expression tool to code and test a regular expression.
To open the Build Regular Expression tool, click the icon next to the field that
will contain the regular expression. If you have already entered anything in the
field, it will be copied to the Regular Expression box in the Build Regular
Expression panel.
1. Select a category of regular expressions from the drop-down list.
2. Select a pattern from the drop-down list.
3. Enter or modify the expression in the Regular Expression box.
4. To test the expression, enter text in the Text To Match Against box, and then
click the Test button:
v If the expression contains an error (a missing closing brace, for example), you
will be informed with a Syntax Error message.
v The Match Found message indicates that your regular expression has found
a match in the text that you have entered.
v If no match is found, the No Match Found message is displayed.
5. We suggest that you repeat this step several times to verify that your
regular expression both matches and fails to match, as expected for your
purpose.
6. To enter a special character at the end of your expression, you can select it from
the Select element list. To enter a special character anywhere else, you must
type it or copy it there.
7. When you are done making changes and testing, click Accept to close the Build
Regular Expression panel and copy the regular expression to the definition
panel.
Table 6. Special Characters and Constructs

literal
    Match an exact sequence of characters (case sensitive), except for the
    special characters described below.
    Example: can  |  Matches: can  |  No match: Can cab caN

. (dot)
    Match any character, including carriage return or newline (\n) characters.
    Example: ca.  |  Matches: can cab  |  No match: c cb

*
    Match zero or more instances of the preceding character(s).
    Example: Ca*n  |  Matches: Cn Can Caan  |  No match: Cb Cabn

^
    Match a string beginning with the following character(s).
    Example: ^C.  |  Matches: Ca  |  No match: ca a

$
    Match a string ending with the preceding character(s).
    Example: C.n$  |  Matches: Can  |  No match: Cn Cab

+
    Match one or more instances of the preceding character(s).
    Example: ^Ca+n  |  Matches: Can Caan  |  No match: Cn

?
    Match zero or one instance of the preceding character(s).
    Example: Ca?n  |  Matches: Cn Can  |  No match: Caan

|
    Match either the preceding or the following pattern.
    Example: Can|cab  |  Matches: Can cab  |  No match: Cab

(x ...)
    Match the sequence enclosed in parentheses.
    Example: (Ca)*n  |  Matches: Can XaCan  |  No match: Cn CCnn

{n}
    Match exactly n instances of the preceding character(s).
    Example: Ca{3}n  |  Matches: Caaan  |  No match: Caan Caaaan

{n,}
    Match n or more instances of the preceding character(s).
    Example: Ca{2,}n  |  Matches: Caan Caaaan  |  No match: Can Cn

{n,m}
    Match from n to m instances of the preceding character(s).
    Example: Ca{2,3}n  |  Matches: Caan Caaan  |  No match: Can Caaaan

[a-ce]
    Match a single character in the set, where the dash indicates a contiguous
    sequence; for example, [0-9] matches any digit.
    Example: [C-FL]an  |  Matches: Can Dan Lan  |  No match: Ban

[^a-ce]
    Match any character that is NOT in the specified set.
    Example: [^C-FL]an  |  Matches: aan Ban  |  No match: Can Dan

[[.char.]]
    Match the enclosed character or the named character from the Named
    Characters Table.
    Example: [[.~.]]an or [[.tilde.]]an  |  Matches: ~an  |  No match: @an

[[:class:]]
    Match any character in the specified character class, from the Character
    Classes Table.
    Example: [[:alpha:]]+  |  Matches: abc  |  No match: ab3
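Most of the constructs in Table 6 behave the same way in any modern regex engine. The following checks, a sketch using Python's re module, cover the portable subset (the POSIX-only [[.char.]] and [[:class:]] constructs are not supported by Python's re):

```python
import re

# ? -- zero or one instance of the preceding character
assert re.fullmatch(r"Ca?n", "Cn")
assert re.fullmatch(r"Ca?n", "Can")
assert not re.fullmatch(r"Ca?n", "Caan")

# ^ -- anchor the match to the start of the string
assert re.search(r"^Ca+n", "Caan")
assert not re.search(r"^Ca+n", "XCaan")

# {n,m} -- bounded repetition
assert re.fullmatch(r"Ca{2,3}n", "Caan")
assert not re.fullmatch(r"Ca{2,3}n", "Caaaan")

# [C-FL] -- a set containing the range C-F plus the single character L
assert re.search(r"[C-FL]an", "Dan")
assert not re.search(r"[C-FL]an", "Ban")
```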
The following named characters can be used within the [[.char.]] construct:
v ampersand &
v apostrophe '
v left-parenthesis (
v right-parenthesis )
v asterisk *
v plus-sign +
v comma ,
v hyphen -
v period .
v full-stop .
v slash /
v solidus /
v zero 0
v one 1
v two 2
v three 3
v four 4
v five 5
v six 6
v seven 7
v eight 8
v nine 9
v colon :
v semicolon ;
v less-than-sign <
v equals-sign =
v greater-than-sign >
v question-mark ?
v commercial-at @
v left-square-bracket [
v right-square-bracket ]
v backslash \
v reverse-solidus \
v circumflex ^
v circumflex-accent ^
v underscore _
v low-line _
v grave-accent `
v left-brace {
v left-curly-bracket {
v right-brace }
v right-curly-bracket }
v vertical-line |
v tilde ~
v DEL (octal 177)
The following list describes the standard character classes that you can
reference within regular expression bracket pairs ([[:class:]]). Note that
character classes are locale specific, so non-English versions of Guardium may
use a different set of character names.
v alnum - Alphanumeric (a-z, A-Z, 0-9)
v alpha - Alphabetic (a-z, A-Z)
v blank - Whitespace (blank, line feed, carriage return)
v cntrl - Control characters
v digit - Digits (0-9)
v graph - Graphic (printable) characters, excluding space
v lower - Lowercase alphabetic (a-z)
v print - Printable characters, including space
v punct - Punctuation characters
v space - Space, tab, newline, and carriage return
v upper - Uppercase alphabetic (A-Z)
v xdigit - Hexadecimal digits (0-9, a-f, A-F)
You can copy and paste any of the expressions into a field requiring a regular
expression. When using any of these examples, we strongly suggest that you
experiment by using it in the Build Regular Expression tool, entering a variety of
matching and non-matching values, so that you understand exactly what is being
matched by the expression.
Zip Code (US) (5 digits required, hyphen followed by four digits optional)
[0-9]{5}(?:-[0-9]{4})?
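As a quick check of this expression (a sketch using Python's re; Guardium's engine is POSIX-based, but this particular pattern is portable):

```python
import re

zip_re = re.compile(r"[0-9]{5}(?:-[0-9]{4})?")

assert zip_re.fullmatch("12345")          # five digits alone
assert zip_re.fullmatch("12345-6789")     # optional ZIP+4 suffix
assert not zip_re.fullmatch("1234")       # too few digits
assert not zip_re.fullmatch("12345-678")  # the suffix needs four digits
```

Note that as an unanchored search (the way the Build Regular Expression tool tests), a longer digit run also contains a five-digit match, so anchor the expression when an exact match is required.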
50 IBM Guardium 10.0
Chapter 4. Protect
After you identify databases and file systems that contain sensitive data, you can
take several steps to protect that data. Protection options include masking data,
alerting personnel based on data access, and establishing policies that enforce
access restrictions.
Baselines
A baseline is a profile of access commands executed in the past. It helps to
identify normal activity and to flag anomalous behavior that is inconsistent
with, or deviates from, what is usual or expected.
When included in a security policy, the baseline becomes a baseline rule, which
allows all database access that has been included in the baseline.
The Policy Builder can generate suggested policy rules from the baseline. The
suggested rules can be edited and included in the policy ahead of the baseline rule,
so that alternative actions (alerts, for example) can be taken for some commands
that were seen in the baseline period. In addition, an examination of the suggested
rules provides valuable insight into the actual traffic patterns observed (types of
commands and frequency).
The Baseline Builder provides the ability to control what gets included in the
baseline, in several ways:
v By specifying a threshold to control how many occurrences of a command must
be seen before the command will be included in the rule. A threshold of one
includes every command observed, while a threshold of 1,000 includes only
those commands occurring 1,000 times or more.
v By controlling sensitivity to one or more attributes. For example, if the baseline
is sensitive to the database user, it will include commands for specific users only.
Users who did not execute the command during the baseline period would not
be allowed by the baseline rule.
v By limiting the connections included to subsets of server and client IP addresses.
The baseline always specifies a single client network mask and a single server
network mask. Each mask can be as inclusive or as exclusive as required.
v By merging data from different time periods. There may be traffic that occurs
during non-contiguous time periods that should be included in the baseline. You
can merge the data from any number of time periods into a single baseline. In
addition, the data can be filtered for specific client and server addresses.
With no sensitivity selected, each command that exceeds the threshold will be
included in the baseline.
If multiple types of sensitivity are selected, separate counts of each command are
maintained for each combination of values for each selected type (for each
combination of database user and source program, for example). Thus for each
type of sensitivity included, the number of combinations can increase dramatically.
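The interaction between the threshold and the selected sensitivity types can be sketched as follows (illustrative Python only; the event-record fields and the function name are hypothetical, not Guardium internals):

```python
from collections import Counter

def build_baseline(events, sensitive_attrs, threshold):
    """Count each command once per combination of the selected sensitive
    attributes; keep combinations seen at least `threshold` times."""
    counts = Counter()
    for ev in events:
        key = (ev["command"],) + tuple(ev[a] for a in sensitive_attrs)
        counts[key] += 1
    return {key for key, n in counts.items() if n >= threshold}

events = [
    {"command": "SELECT t1", "db_user": "joe", "src_prog": "app1"},
    {"command": "SELECT t1", "db_user": "joe", "src_prog": "app1"},
    {"command": "SELECT t1", "db_user": "ann", "src_prog": "app1"},
]

# Sensitive to db_user: joe's two executions pass a threshold of 2,
# ann's single execution does not.
baseline = build_baseline(events, ["db_user"], threshold=2)
assert baseline == {("SELECT t1", "joe")}
```

With no sensitivity selected, the counter is keyed by command alone, so the same three events would count as three occurrences of one command.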
When the baseline is sensitive to the time period, separate counts are maintained
for each time period defined. If overlapping time periods are defined (which is a
normal situation), a command will be counted only once, in the most restrictive
time period.
To illustrate how the Baseline Builder assigns requests to time periods, assume that
Saturday is included in three time periods:
v 24x7 (24 hours, 7 days a week)
v Saturday (24 hours only)
v Week End (48 hours - Saturday + Sunday)
Since the time period named Saturday is the most restrictive (24 hours only), all
requests time-stamped on Saturday will be counted in that time period, and not in
the more inclusive Week End or 24x7 time periods.
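The most-restrictive-period assignment described above can be sketched as follows (illustrative Python; the period table with per-week hour counts is a simplified assumption):

```python
# Hypothetical period definitions: (name, hours covered per week, days active)
periods = [
    ("24x7", 168, {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"}),
    ("Week End", 48, {"Sat", "Sun"}),
    ("Saturday", 24, {"Sat"}),
]

def assign_period(day):
    """Count a request in the most restrictive (fewest covered hours)
    time period that includes its timestamp."""
    covering = [p for p in periods if day in p[2]]
    return min(covering, key=lambda p: p[1])[0]

assert assign_period("Sat") == "Saturday"   # not Week End or 24x7
assert assign_period("Sun") == "Week End"
assert assign_period("Mon") == "24x7"
```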
Baselines are generated using only the data currently available on the appliance
that is generating the baseline.
You may want to modify the suggested rules if you discover an activity that
occurred during the baseline period that you would like to monitor or alert upon
in the future. You simply tailor the appropriate rule suggested from the baseline,
and assign the desired action. By default, the suggested rules will be positioned
before the baseline rule, so that the action specified will be taken before the
baseline rule executes to allow that command with no further testing of rules.
Note: The Policy Builder can also generate rules from the database ACL. See
“Policies” on page 57 for more information.
When generating suggested rules from either the baseline or the database ACL
(access control), the Policy Builder minimizes the number of suggested rules by
creating suggested object groups. For example, assume the baseline includes a
particular command that references only three objects: AAA, BBB, and CCC, and
that there is not already an object group defined consisting of only those three
objects. The Policy Builder will create a suggested object group for those objects,
and will generate a single rule for the command, which references the suggested
object group.
You can display the membership of a suggested object group, and you have the
option of accepting or rejecting each group. In the example just given, if you reject
the suggested object group, the single rule that references it will be replaced by
three suggested rules (one each for AAA, BBB, and CCC).
Creating a Baseline
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. Click New to open the Baseline Builder.
3. Enter a unique baseline name in the Baseline Description box. Do not include
apostrophe characters in the baseline description.
4. In the Baseline Sensitivity pane, mark each element to which the baseline will
be sensitive. The more sensitive the baseline, the more complex the testing
that will be done both when creating the baseline and, more importantly, when
inspecting traffic. See the Overview for more information about baseline
sensitivity.
5. In the Baseline Threshold pane, enter the minimum number of occurrences for
a command during the baseline period for that command to be included in
the baseline. If one or more sensitivity boxes have been marked, this count
applies to the combination of sensitive values.
If the approach you are taking in building your security policy is to always
allow the most commonly issued commands from the past, then set this
number upwards to the appropriate level. If, on the other hand, you want to
ensure that the baseline is comprehensive, then leave this value set to 1. In
either case, you can have the Policy Builder suggest rules from the baseline.
The suggested rules are sorted in descending order by frequency in the
baseline period, so you can decide at that time whether to include or modify
rules for each unique command issued.
6. Use the Baseline Network Information pane to identify the servers and clients
to be included in the baseline. The method used to select which IP addresses
to use to construct the baseline is the same for servers and clients.
For each address encountered in the baseline data, membership in an optional
tagged group is considered first. A tagged group is a specific list of IP
addresses for which baseline constructs will be generated. If a tagged group is
selected, and if an IP address encountered in the baseline data is included in
Note: After you successfully generate the baseline for the first time, additional
fields are displayed in the Baseline Generation panel. These fields allow you to
merge data from additional time periods into the baseline, and to restrict the client
and server IP addresses used during each additional time period.
Merge Baseline Information
To merge baseline information (to include information from additional time
periods and/or from different groups of clients and servers, for example):
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline into which additional
baseline information is to be merged.
3. Click Modify to open the Edit Baseline panel.
4. Do not modify the Baseline Sensitivity selections. If you modify the baseline
sensitivity, you are prompted to generate a completely new baseline to replace
the existing one.
5. Optional. Set the Minimum number of occurrences for addition to Baseline
value in the Baseline Threshold pane. The value entered here has no impact
on information previously included in the baseline. Once something is added
to the baseline, it is not removed during a merge operation.
6. Optional. Enter alternative network information in the Baseline Network
Information pane. The displayed values are from the last generate or merge
operation. If the merged information comes from the same set of servers
and/or clients, leave these fields unchanged. Otherwise, make the appropriate
changes in this pane to select the traffic to be included in the baseline.
7. Click anywhere on the Baseline Generation pane title to expand the pane.
8. Supply both From and To dates to define the time period from which the
baseline is to be generated. Regardless of how you enter dates, any minutes or
seconds specified will be ignored.
9. Select the Merge radio button.
10. Optional. In the Filter Selection pane, limit the baseline generation to specific
client and/or server IP addresses by entering an IP address followed by a
network mask. For example, to select all client IP addresses from the
192.168.9.x subnet, enter 192.168.9.1 in the first Client IP box, and 255.255.255.0
in the second box. To include additional addresses, click the Add button, then
enter the additional address information.
11. Click Generate to generate the baseline. If you have modified the baseline
definition, you will be prompted to save the definition before generating the
baseline.
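The Client IP and mask pair in step 10 selects a subnet by the usual bitwise mask, which can be sketched with Python's ipaddress module (an illustration of the addressing arithmetic, not Guardium's code):

```python
import ipaddress

def in_subnet(address, base_ip, netmask):
    """True if address falls in the subnet defined by base_ip/netmask;
    for example, 192.168.9.1 with 255.255.255.0 selects 192.168.9.x."""
    net = ipaddress.ip_network(f"{base_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(address) in net

assert in_subnet("192.168.9.77", "192.168.9.1", "255.255.255.0")
assert not in_subnet("192.168.10.5", "192.168.9.1", "255.255.255.0")
```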
Modify a Baseline
Caution: Before modifying a baseline definition, be sure that you understand the
implications of modifying it, particularly if the baseline whose definition you want
to modify and re-generate is used in an installed policy. If you modify and
re-generate a baseline contained in an installed policy, when you re-install that
policy it will use the new baseline. To provide a fall-back option for baselines used
by installed policies, consider instead cloning these baselines and policies, and
modifying and generating the cloned definitions. See Clone a Baseline for more
information.
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline to be modified. Click the
Modify button to open the Edit Baseline panel. Apart from the panel title, this
panel is identical to the Add Baseline panel. See Create a Baseline for
instructions on using this panel.
Remove a Baseline
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline to be removed.
3. Click Delete. You are prompted to confirm the action.
Policies
A security policy contains an ordered set of rules to be applied to the observed
traffic between database clients and servers. Each rule can apply to a request from
a client, or to a response from a server. Multiple policies can be defined and
multiple policies can be installed on a Guardium appliance at the same time.
Each rule in a policy defines a conditional action. The condition tested can be a
simple test - for example it might check for any access from a client IP address that
does not belong to an Authorized Client IPs group. Or the condition tested can be
a complex test that considers multiple message and session attributes (database
user, source program, command type, time of day, etc.), and it can be sensitive to
the number of times the condition is met within a specified timeframe.
The action triggered by the rule can be a notification action (e-mail to one or more
recipients, for example), a blocking action (the client session might be
disconnected), or the event might simply be logged as a policy violation. Custom
actions can be developed to perform any tasks necessary for conditions that may
be unique to a given environment or application. For a complete list of actions, see
Rule Actions Overview.
A policy violation is logged each time that a rule is triggered (except when the rule
explicitly requests no logging). Optionally, the SQL that triggered the rule
(including data values) can be recorded with the policy violation. Policy violations
can be assigned to incidents, either automatically by a process, or manually by
authorized users (see the Incident Management tab in the Guardium GUI). For
further information, see “Incident Management” on page 179.
Note: Correlation alerts can also be written to the policy violations domain (see
“Correlation Alerts” on page 131).
In addition to logging violations, policy rules can affect the logging of client traffic,
which is logged as constructs and construct instances.
v Constructs are basically prototypes of requests that Guardium detects in the
traffic. The combinations of commands, objects and fields included in a construct
can be very complex, but each construct basically represents a very specific type
of access request. The detection and logging of new constructs begins when the
inspection engine starts, and by default continues (except as described)
regardless of any security policy rules.
v Each instance of a construct detected in the traffic is also logged, and each
instance is related to a specific client-server session. No SQL is stored for a
construct instance, except when a policy rule requests the logging of SQL for
that instance, or for a particular client/server session of instances (with or
without values).
To completely control the client traffic that is logged, a policy can be defined as a
selective audit trail policy. In that type of policy, audit-only rules and an optional
pattern identify all of the client traffic to be logged. See Use Selective Audit Trail
discussed later in this topic.
For information on Guardium for Applications (which also uses the Policy
Builder), see “Configure data masking policy” on page 166.
For each rule, an optional Category and/or Classification can be assigned. These
are used to group policy violations for both reporting and incident management.
Some activities are normal and acceptable when they occur less than a certain rate.
But those same activities may require attention when the rate exceeds a tolerable
threshold. For example, if interactive database access is allowed, a consistent but
relatively low rate of login failures might be expected, whereas a sharply higher
rate might indicate an attack is in progress.
To deal with thresholds, a minimum count and a reset interval can be specified for
each policy rule. This can be used, for example, to trigger the rule action after the
count of login failures exceeds 100 (the minimum count) within one minute (the
reset interval). If omitted, the default is to execute the rule action each time the
rule is satisfied.
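A minimum count with a reset interval behaves like a counting window. A minimal sketch (illustrative Python, not Guardium's implementation; a threshold of 3 matches in 60 seconds stands in for the 100-per-minute example):

```python
import time

class RuleCounter:
    """Trigger the rule action once the count of matches reaches
    min_count within reset_interval seconds."""
    def __init__(self, min_count, reset_interval):
        self.min_count = min_count
        self.reset_interval = reset_interval
        self.count = 0
        self.window_start = None

    def record_match(self, now=None):
        now = time.monotonic() if now is None else now
        if self.window_start is None or now - self.window_start >= self.reset_interval:
            self.window_start = now   # start a fresh counting window
            self.count = 0
        self.count += 1
        return self.count >= self.min_count   # True -> fire the action

c = RuleCounter(min_count=3, reset_interval=60)
assert c.record_match(now=0.0) is False
assert c.record_match(now=10.0) is False
assert c.record_match(now=20.0) is True    # third match inside one minute
assert c.record_match(now=100.0) is False  # window reset after 60 seconds
```

With min_count=1 (the default behavior), the action fires on every match.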
By default, the evaluation of access and exception rules for a unit of traffic ends
when a rule is triggered, provided that the rule does not specify multiple actions.
In cases where it is necessary to take multiple actions for the same or similar
conditions, mark the Continue to Next Rule box for that rule.
Note: Continue to Next Rule applies to access rules following access rules and to
exception rules following exception rules, but not to an exception rule following an
access rule or an access rule following an exception rule.
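The default stop-at-first-trigger evaluation, and the effect of Continue to Next Rule, can be sketched as follows (illustrative Python; the rule structure is hypothetical):

```python
def evaluate(rules, event):
    """Evaluate rules in order; stop at the first triggered rule
    unless it is marked 'continue to next rule'."""
    fired = []
    for rule in rules:
        if rule["condition"](event):
            fired.append(rule["name"])
            if not rule.get("continue_to_next", False):
                break
    return fired

rules = [
    {"name": "log it", "condition": lambda e: e["user"] == "joe",
     "continue_to_next": True},
    {"name": "alert", "condition": lambda e: e["user"] == "joe"},
    {"name": "catch-all", "condition": lambda e: True},
]

# Both of joe's rules fire; evaluation stops before the catch-all.
assert evaluate(rules, {"user": "joe"}) == ["log it", "alert"]
assert evaluate(rules, {"user": "ann"}) == ["catch-all"]
```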
Because baselines are relevant only to access rules, using baselines with
exception or extrusion rules cannot limit or stop the continuation to the next rule.
When marked, the actual construct causing the rule to be satisfied will be logged
in the SQL String attribute and is available in reports. If not marked, no SQL
statement will be logged. To include the full values in the policy violation, mark
the Rec. Vals box for that rule.
Note: The full SQL with values will be available only in the policy violation
record, within the policy violations reporting domain. It will not be available in the
client traffic log, or on reports from the data access domain. To include full SQL
(with or without data values) in the client traffic log, use the Log Full SQL rule
actions.
Chapter 4. Protect 59
For more information about working with rules, see the following topics:
v View the Policy Rules for the Installed Policy
v Specify Values and/or Groups of Values in Rules
v Filter Rules to Display only a Subset
v Copy Rules
v Using Rules Suggested from the Baseline
v Using Rules Suggested from the Database ACL
v Add or Edit Rules
v Using the Policy Simulator
For many rule attributes, you can specify a single value and/or a group value,
using controls like those illustrated for the App User.
Be aware that a group member may contain wildcard (%) characters, so each
member of a group may match multiple actual values.
v Negative Rule: Mark the Not box to create a negative rule; for example, not the
specified App User, or not any member of the selected group, or neither the
specified App User nor any member of the selected group.
v Empty Value: Enter the special value guardium://empty to test for an empty
value in the traffic. This is allowed only in the following fields: DB Name, DB
User, App User, OS User, Src App, Event Type, Event User Name, and App
Event Text.
v To define a new group to be tested: Click the Groups button to define a new
group, and then select that group from the Group list.
v To match any value: Leave the value box blank, and select nothing from the
Group list (be sure that the line of dashes is selected, as in the example).
v To match a specific value only: Enter that value in the value box, and select
nothing from the Group list.
v To match any member of a group: Leave the value box blank, and select the
group from the list. If the minimum count is greater than 1, there will be a
single counter, and it will be incremented each time any member of the group is
matched.
v To match an individual value or any member of a group: Enter a specific value
in the value box, and select a group from the list. If the minimum count is
greater than 1, there will be a single counter, and it will be incremented each
time the individual value or any member of the group is matched.
v To count each individual value separately when the minimum count is greater
than 1: Enter a dot (.) in the value box, and select nothing from the group list.
Note that the dot option cannot be used for the Service Name or Net Protocol
boxes.
v To count each member of a group separately when the minimum count is
greater than 1: Enter a dot (.) in the value box, and select a group from the list.
The dot option cannot be used for the Service Name or Net Protocol boxes here
either.
Note: You can also use regular expressions in the following fields (DB user, App
User, SRC App, Field name, Object, App Event Values Text) by typing the special
value guardium://regexp/(regular expression) in the text box that corresponds to
the field.
Note: IBM Security Guardium does not support regular expressions for
non-English languages.
Each policy rule can include a single special pattern test. To use one of these tests,
begin the rule name with one of the special pattern test names, followed by a
space and one or more additional characters to make the rule name unique. For
example, if you are searching for Social Security numbers of your employees, you
could name the rule guardium://SSEC_NUMBER employee. You can still specify all
other components of the rule, such as specific client and server IP addresses.
These tests match a character pattern, and that match does not guarantee that the
suspected item, such as a Social Security number, has been encountered. There can
be false positives under a variety of circumstances, especially if longer sequences
of numeric values are concatenated in the data.
guardium://CREDIT_CARD
Detects credit card number patterns. It tests for a string of 16 digits or for
four sets of four digits, with each set separated by a blank. This special
pattern test also works with American Express 15-digit credit card number
patterns (first digit 3 and second digit either 4 or 7). For example:
1111222233334444 or 1111 2222 3333 4444
When a rule name begins with "guardium://CREDIT_CARD", and there is
a valid credit card number pattern in the Data pattern field, the policy uses
the Luhn algorithm, a widely used algorithm for validating identification
numbers such as credit card numbers, in addition to standard pattern
matching. The Luhn algorithm is an additional check and does not replace
the pattern check. A valid credit card number is a string of 16 digits or
four sets of four digits, with each set separated by a blank. Both the
guardium://CREDIT_CARD rule name and a valid [0-9]{16} number in the
Search Expression box are required for the Luhn algorithm to be involved
in this pattern matching.
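The two-stage validation described above, a digit-pattern check followed by the Luhn check, can be sketched in Python (an illustration only, not Guardium's code):

```python
import re

# 16 digits, or four groups of four digits separated by blanks
CARD_RE = re.compile(r"(?:\d{16}|\d{4}(?: \d{4}){3})")

def luhn_ok(number):
    """Luhn check: double every second digit from the right, subtract 9
    from any result above 9, and require the sum to end in 0."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card(text):
    m = CARD_RE.search(text)
    return bool(m) and luhn_ok(m.group())

assert looks_like_card("4111 1111 1111 1111")   # pattern and Luhn both pass
assert not looks_like_card("4111111111111112")  # pattern passes, Luhn fails
```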
guardium://PCI_TRACK_DATA
Detects two patterns of magnetic stripe data. The first pattern consists of a
semi-colon (;), 16 digits, an equal sign (=), 20 digits, and a question mark
(?), such as:
;1111222233334444=11112222333344445555?
The second pattern consists of a percent sign (%), the character B, 16 digits,
a caret (^), a variable-length character string terminated by a forward slash
(/), a second variable-length character string terminated by a caret (^), 31
digits, and a question mark (?), such as:
%B1111222233334444^xxx/xxxx x^1111222233334444555566667777888?
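The two magnetic stripe patterns can be expressed as regular expressions like the following (a sketch of one possible reading of the description, using Python's re; Guardium's internal patterns may differ):

```python
import re

# Track 2: semicolon, 16 digits, equal sign, 20 digits, question mark
TRACK2_RE = re.compile(r";\d{16}=\d{20}\?")

# Track 1: percent sign, B, 16 digits, caret, a field terminated by a
# slash, a second field terminated by a caret, 31 digits, question mark
TRACK1_RE = re.compile(r"%B\d{16}\^[^/]*/[^^]*\^\d{31}\?")

assert TRACK2_RE.search(";1111222233334444=11112222333344445555?")
assert TRACK1_RE.search(
    "%B1111222233334444^xxx/xxxx x^1111222233334444555566667777888?")
assert not TRACK2_RE.search(";1111222233334444=1111?")  # too few digits
```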
guardium://SSEC_NUMBER
Detects numbers in Social Security number format: three digits, dash (-),
two digits, dash (-), four digits, such as 123-45-6789. The dashes are
required.
guardium://CPF
The Cadastro de Pessoas Físicas (CPF), a Brazilian personal identifier. It
contains 11 digits of the format nnn.nnn.nnn-nn, where the last two digits
are check digits. Check digits are computed from the original nine digits to
provide verification that the number is valid. The formatting characters
within the expression are optional. If there is a match on the expression,
the check digits are validated.
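The CPF check-digit computation is the standard modulo-11 scheme, which can be sketched as follows (illustrative Python, not Guardium's code):

```python
import re

CPF_RE = re.compile(r"\d{3}\.?\d{3}\.?\d{3}-?\d{2}")

def cpf_check_digits(first9):
    """Compute the two CPF check digits from the first nine digits:
    weighted sum, times 10, modulo 11 (a result of 10 becomes 0)."""
    digits = [int(d) for d in first9]
    for length in (9, 10):
        weights = range(length + 1, 1, -1)
        r = sum(d * w for d, w in zip(digits, weights)) * 10 % 11
        digits.append(0 if r == 10 else r)
    return digits[9], digits[10]

def cpf_valid(text):
    m = CPF_RE.search(text)
    if not m:
        return False
    n = re.sub(r"\D", "", m.group())
    return cpf_check_digits(n[:9]) == (int(n[9]), int(n[10]))

assert cpf_valid("111.444.777-35")
assert not cpf_valid("111.444.777-00")   # pattern matches, check digits fail
```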
guardium://CNPJ
Cadastro Nacional de Pessoas Jurídicas (CNPJ), an identification number
used for Brazilian companies. It contains 14 digits of the format
00.000.000/0001-00 where:
v The first eight numbers show the registration.
v The next four numbers identify the entity branch; 0001 is the default
value for headquarters.
v The last two numbers are the check digits.
The formatting characters within the expression are optional. If there is a
match on the expression, the check digits are validated.
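The CNPJ check digits use a similar modulo-11 scheme, with weights cycling from 2 to 9 starting at the rightmost digit (illustrative Python sketch, not Guardium's code):

```python
import re

def cnpj_check_digits(first12):
    """Compute the two CNPJ check digits: weighted sum modulo 11;
    remainders below 2 give 0, otherwise 11 minus the remainder."""
    digits = [int(d) for d in first12]
    for _ in range(2):
        total = sum(d * ((len(digits) - 1 - i) % 8 + 2)
                    for i, d in enumerate(digits))
        r = total % 11
        digits.append(0 if r < 2 else 11 - r)
    return digits[12], digits[13]

def cnpj_valid(text):
    n = re.sub(r"\D", "", text)
    return len(n) == 14 and cnpj_check_digits(n[:12]) == (int(n[12]), int(n[13]))

assert cnpj_valid("11.222.333/0001-81")
assert not cnpj_valid("11.222.333/0001-00")
```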
Rule actions
There are a number of factors to consider when selecting the action to be taken
when a rule is satisfied.
Note: With S-TAP TERMINATE, the triggering request usually will not be blocked,
but additional requests from that session will be blocked (at high traffic rates,
more than one request may sometimes go through before the session is terminated).
S-GATE Actions
S-GATE provides database protection via S-TAP for both network and local
connections.
When S-GATE is available, all database connections (sessions) are evaluated and
tagged to be monitored in one of the following S-GATE modes:
v Attached (S-GATE is "on") – S-TAP is in firewalling mode for that session: it
holds the database requests and waits for a verdict on each request before
releasing its responses. In this mode, latency is expected; however, it ensures
that rogue requests are blocked.
Note:
v S-GATE/ S-TAP termination does not work on a client IP group whose members
have wild-card characters. S-GATE/S-TAP termination only works with a single
IP address.
v For version 8.0 and higher, S-GATE actions do not support Oracle ASO
encrypted traffic, or shared memory sessions for DB2® or Informix®, under
Linux.
v For MySQL databases, note that MySQL's default command line
connection is 'mysql -u<user> -p<pass> <dbname>'.
In this mode, MySQL first maps all the objects and fields in the database to
support auto-completion (with TAB). If a terminate rule fires on any object or
field involved in this mapping, it immediately disables the connection
session. To avoid this, connect to MySQL with the "-A" flag, which disables
the auto-completion feature and does not trigger the terminate rule. Another
option is to fine-tune the rule so that it does not terminate on ANY access to
these objects and fields, and instead uses a narrower criterion that does not
trigger the rule during the login sequence.
Alerting Actions
For each alert action, multiple notifications can be sent, and the notifications can be
a combination of one or more of the following notification types:
v Email messages, which must be addressed to Guardium users, and will be sent
via the SMTP server configured for Guardium. Additional receivers for real-time
email notification are Invoker (the user that initiated the actual SQL command
that caused the trigger of the policy) and Owner (the owner/s of the database).
The Invoker and Owner are identified by retrieving user IDs (IP-based)
configured via Guardium APIs. The choice Data Security User - Database
Associations (available from accessmgr) displays the mapping (this is similar to
what is displayed if running the Guardium API command
"list_db_user_mapping").
v SNMP traps, which will be sent to the trap community configured for the
Guardium appliance.
v Syslog messages, which will be written to syslog.
v Custom notifications, which are user-written notification handlers, implemented
as Java classes.
Note: Alert definitions and notifications are not subject to data level security.
Reasons for this include: alerts are not evaluated in the context of a user; an
alert may relate to databases associated with multiple users; and this avoids
situations where no one receives the alert notification.
Message templates are used to generate alerts. Multiple Named Message Templates
are created and modified from Global Profile. There are several types of alert
actions, each of which may be appropriate for a different type of situation.
v Alert Daily sends notifications only the first time the rule is matched each day.
v Alert Once Per Session sends notifications only once for each session in which
the rule is matched. This action might be appropriate in situations where you
want to know that a certain event has occurred, but not for every instance of
that event during a single session. For example, you may want a notification
sent when a certain sensitive object is updated, but if a program updates
thousands of instances of that object in a single session, you almost certainly
would not want thousands of notifications sent to the receivers of the alert.
v Alert Only writes to the message and message_text tables. This action permits
all policy violation notifications to be sent to a remote destination, and is
designed to improve Guardium integration with other database security
solutions. This alerting action is similar to Alert Per Match.
v Alert Per Match sends notifications each time the rule is satisfied. This would be
appropriate for a condition requiring attention each and every time it occurs.
v Alert Per Time Granularity sends notifications once per logging granularity
period. For example, if the logging granularity is set to one hour, notifications
will be sent for only the first match for the rule during each hour. (The
Guardium administrator sets the logging granularity on the Inspection Engine
Configuration panel.)
The Log and Ignore actions are generally available, but the Audit Only
action is available only for a Selective Audit Trail policy. Access rules, exception
rules, and extrusion rules differ in which actions are permitted. Click the Add
Action button to see the available actions.
v Audit Only: Available for a Selective Audit Trail policy only. Log the construct
that triggered the rule. For a Selective Audit Trail policy, no constructs are
logged by default, so use this selection to indicate what does get logged. When
using the Application Events API, you must use this action to force the logging
of database user names, if you want that information available for reporting
(otherwise, in this case, the user name will be blank).
v Allow: When matched, do not log a policy violation. If "Allow" action is
selected, no other actions can be added to the rule. Constructs are logged.
v FAM Alert and Audit: two rule actions. Alert triggers an alert (using a receiver
and template) on a matching event; Audit logs the construct that triggered
the rule.
Note: For ignored responses per session, because the sniffer either does not
receive a response for the query or ignores it, the COUNT_FAILED and
SUCCESS values take the table defaults; in this case
COUNT_FAILED=0 and SUCCESS=1.
v Ignore session: The current request and the remainder of the session will be
ignored. This action does not log a policy violation, but it stops the logging of
constructs and will not test for policy violations of any type for the remainder of
the session. This action might be useful if, for example, the database includes a
test region, and there is no need to apply policy rules against that region of the
database. Ignore Session rules provide the most effective method of filtering
traffic. An ignore session rule will cause activity from individual sessions to be
dropped by the S-TAP or completely ignored by the sniffer. Note: connection
(login/logout) information is always logged, even if the session is ignored.
Chapter 4. Protect 65
v Ignore S-TAP session: The current request and the remainder of the S-TAP
session will be ignored. Use this action, together with policy builder criteria that
identify systems, users, or applications producing a high volume of network
traffic, in cases where you know the database responses from the S-TAP session
will be of no interest. There are two options for Ignore S-TAP session:
IGNORE_ENTIRE_STAP_SESSION - a "hard" ignore that cannot be revoked; and
IGNORE_STAP_SESSION (REVOCABLE) - a "soft" ignore, which allows the
session traffic to be sent again without requiring a new connection to the
database. Sessions that were ignored by IGNORE S-TAP SESSION
(REVOCABLE) resume after the S-TAP receives the REVOKE Ignore command,
meaning the traffic is again sent to the Guardium system. (This command can be
sent from S-TAP control-->send command.)
v Ignore SQL per session: No SQL will be logged for the remainder of the session.
Exceptions will continue to be logged, but the system may not capture the SQL
strings that correspond to the exceptions.
v Log Extrusion Counter: Available only for extrusion rules, this action updates
the counter, but does not log any of the returned data. This action saves disk
space when the counter value is most important and returned values are the
least important.
v Log Masked Extrusion Counter: Available only for extrusion rules, this action
updates the counter; logs the SQL request, replacing values with question marks;
does not log the returned data (response).
v Quarantine: Available for access, exception, and extrusion rules, this action
prevents the same user from logging into the same server for a certain period of
time. One validation applies: a rule with a QUARANTINE action must include a
value for the amount of time that the user is quarantined. See Quarantine for
(minutes) to set this quarantine time. If the session is watched (S-GATE
scenario), a drop verdict is sent; if the session is not watched (S-TAP
TERMINATE scenario), the S-TAP stops the session. The quarantine expiry is
computed by adding the number of minutes in the reset interval field to the
current time, producing a new timestamp. Quarantined sessions are kept in a
list sorted by this timestamp; in addition to the timestamp, each element records
a server IP, server type, DB user name, service name, and a flag indicating
whether the session was watched.
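The quarantine bookkeeping just described can be sketched as follows; this is an illustrative model only, not the product's internal code:

```python
import bisect
import time

class QuarantineList:
    """Sessions quarantined until a timestamp; entries kept sorted by expiry."""

    def __init__(self):
        # each entry: (expiry_ts, server_ip, server_type, db_user, service, watched)
        self.entries = []

    def add(self, minutes, server_ip, server_type, db_user, service, watched, now=None):
        now = time.time() if now is None else now
        entry = (now + minutes * 60, server_ip, server_type, db_user, service, watched)
        bisect.insort(self.entries, entry)  # keep the list sorted by expiry

    def is_quarantined(self, server_ip, db_user, now=None):
        now = time.time() if now is None else now
        # expired entries sit at the front of the sorted list; drop them
        while self.entries and self.entries[0][0] <= now:
            self.entries.pop(0)
        return any(e[1] == server_ip and e[3] == db_user for e in self.entries)
```

Because the list is sorted by expiry timestamp, expired quarantines can be discarded from the front without scanning every entry.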
v No Parse - Do not parse the SQL statement.
v Quick Parse No Fields - Do not parse fields in the SQL statement.
v Quick Parse Native - This is used only for Guardium S-TAP for DB2 on z/OS.
Use this rule action in an environment where heavy traffic is overloading the
Sniffer. Use of this rule action should improve performance in the S-TAP for DB2
on z/OS.
v Quick Parse: For access rules only, do not parse the SQL statement for the
remainder of the session. This reduces parsing time. In this mode all objects
accessed can be determined (since objects appear before the WHERE clause), but
the exact object instances affected will be unknown, since those are determined
by the WHERE clause.
v Redact: For extrusion rules only, this feature allows a customer to mask portions
of database query output (for example, credit card numbers) in reports for
certain users. The selection Replacement Character in the Data Pattern/SQL
Pattern section of the extrusion rule menu choices defines the masking character.
Should the output produced by the extrusion rule match the regular expression
of the Data Pattern, the portions that match sub-expressions between
parentheses are replaced with the masking character.
Note:
Redaction (Scrub) on Linux is supported as of version 9.1. For all UNIX platforms,
Scrub is supported only with ANSI character sets.
Redaction (Scrub) rules should be set on the session level (that is, trigger rules
on session attributes like IPs, Users, etc.), not on SQL-level attributes (like
OBJECT_NAME or VERB). If you set the scrub rule on the SQL that needs to be
scrubbed, it will probably take a few milliseconds for the scrub instructions to
reach the S-TAP, and some results may go through unmasked in the meantime.
To guarantee all SQL is scrubbed, set the S-TAP (S-GATE) default mode to "attach"
for all sessions (in guard_tap.ini). This will guarantee that no command goes
through without being inspected by the rules engine and holding each request and
waiting for the policy's verdict on the request. This deployment will introduce
some latency but this is the way to ensure 100% scrubbed data.
Note:
For HTTP support, there are Policy action limitations. The following policy actions
are not supported for HTTP: S-TAP terminate and Skip logging.
For policy conditions - these conditions are not supported for HTTP:
Client MAC; DB Name; DB User; App User; OS User; Src App; Masking Pattern;
Replacement Character; Quarantine for minutes; Records Affected Threshold; XML
Pattern; Event Type; Event User Name; App Event Values Text; App Event Values
Text Group; App Event Values Text and Group; Numeric; Date.
Further discussion and examples
Log Full Details
By default the Guardium collector masks all values when logging an SQL
string. For example,
insert into tableA (name,ssn,ccn) values ('Bob Jones', '429-29-2921','29249449494949494')
is logged with each value replaced by a question mark. Use the Log Full Details
action to log the actual values.
You can use an action under a policy extrusion rule in order to attach alternative
character sets to the session.
As a result an extrusion rule is attached to the session and Analyzer will use
EUC-JP in the session, if there is no other character set.
As a result an extrusion rule is attached to the session and the Analyzer will use
the EUC-JP character set in the session in any case. Any character set used before
will be substituted by EUC-JP.
Keep in mind that extrusion rules usually attach to the session with some delay.
Therefore short sessions, or the beginning of a session, are not immediately
affected by a character set change. The schema works for: Oracle, Sybase,
MySQL, and MS SQL.
Analyzer rules
Certain rules can be applied at the analyzer level. Examples of analyzer rules are:
user-defined character sets, source program changes, and issuing watch verdicts for
firewall mode. In previous releases, policies and rules were applied at the end of
request processing on the logging state. In some cases, this meant a delay in
decisions based on these rules. Rules applied at the analyzer level means decisions
can be made at an earlier stage.
Log Flat
The Log Flat option listed in Policy Definition of Policy Builder allows the
Guardium appliance to log information without immediately parsing it.
This saves processing resources, so that a heavier traffic volume can be handled.
The parsing and merging of that data to Guardium's internal database can be done
later, either on a collector or an aggregator unit.
Rules on Flat
This section describes the differences in how Rules on Flat are used.
Note: Rules on flat does not work with policy rules involving a field, an object,
SQL verb (command), Object/Command Group, and Object/Field Group. In the
Flat Log process, "flat" means that a syntax tree is not built. If there is no syntax
tree, then the fields, objects and SQL verbs cannot be determined.
Without a selective audit trail policy, the Guardium appliance logs all traffic that is
accepted by the inspection engines. Each inspection engine on the appliance or on
an S-TAP is configured to monitor a specific database protocol (Oracle, for
example) on one or more ports. In addition, the inspection engine can be
configured to accept traffic from subsets of client/server connections. This tends to
capture more information than a selective audit trail policy, but it may cause the
Guardium appliance to process and store much more information than is needed
to satisfy your security and regulatory requirements.
When a selective audit trail policy is installed, only the traffic requested by the
policy will be logged, and there are two ways to identify that traffic:
v By specifying a string that can be used to identify the traffic of interest, in the
Audit Pattern box of the Policy Definition panel. This might identify a database
or a group of database tables, for example. Note that an audit pattern is a
pattern that is applied (via regular expression matching) to EACH SQL that the
logger processes to see if it matches. This pattern match is strictly a string
match. It does NOT match against the session variables (DB name, etc) the way
the policy rules do.
v Or by specifying Audit Only or any of the Log actions (Log Only, Log Full
Details, etc.) for one or more policy rules in a Rule Definition panel. With policy
rules you can be extremely precise, specifying exact values, groups or patterns to
match for every conceivable type of attribute (DB Type, DB Name, User Name,
etc.).
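Because the audit pattern is a plain string/regex match applied to each SQL the logger processes, it can be sketched like this (the pattern itself is hypothetical):

```python
import re

# Hypothetical audit pattern; a real one might name a database or table group.
AUDIT_PATTERN = re.compile(r"patients|phi_", re.IGNORECASE)

def selective_audit_logs(sql):
    """String-only match: session variables (DB name, user, etc.) are not consulted."""
    return bool(AUDIT_PATTERN.search(sql))
```

Only the SQL text is examined; a statement touching the same data under a name that does not match the pattern would not be logged.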
If the Guardium security policy has Selective Audit Trail enabled, and a rule has
been created on a group of objects, the string on each element in the group is
checked. If there is a match, a decision is made to log the information and
continue. If the Guardium security policy has Selective Audit Trail enabled, and a
rule has been created on a group of objects using a NOT designation on the object
group, there is still a need to check the string on each element in the group, and
decide to log and continue only if none of the elements match. NOT designated
rules behave the same as normal rules when used with Selective Audit Trail.
This includes:
v OR situations such as rules based on multiple objects or commands;
v Situations with two NOT conditions (for example, NOT part of a group of
objects and NOT part of a group of commands); and,
v Situations with one NOT condition and one YES condition (for example, a NOT
part of a group of objects and a YES part of a group of commands).
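In sketch form, with group membership reduced to substring checks for illustration, a NOT-designated object-group rule logs only when none of the members match:

```python
def rule_matches(sql, object_group, negated=False):
    """Check each group member against the SQL string; with a NOT
    designation, require that none of the members match before
    deciding to log and continue."""
    hit = any(member in sql for member in object_group)
    return not hit if negated else hit
```

The same element-by-element check runs either way; NOT only inverts the final decision, which is why NOT-designated rules behave like normal rules under Selective Audit Trail.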
When a selective audit trail policy is used, and application users or events are
being set via the Application Events API, the policy must include an Audit Only
rule that fires whenever a set/clear application event, or set/clear application user
command is encountered.
Selective Audit Trail and Application User Translation
When a selective audit trail policy is used and Application User Translation is
also in use:
v The policy will ignore all of the traffic that does not fit the application user
translation rule (for example, not from the application server).
v Only the SQL that matches the pattern for that policy will be available for the
special application user translation reports.
You might use a selective audit policy and specify an empty group, with the idea
that anything that does not match one of the group members in the specified
group needs to be filtered out. However, this results in an attempt to match ANY
rather than NONE. Since there are no group members, nothing gets filtered out
and everything is logged.
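The pitfall follows directly from match-ANY semantics: with no members, nothing ever matches, so nothing is identified for filtering. A minimal illustration:

```python
group_members = []  # an empty group

def matches_group(sql):
    # Match-ANY semantics: True only if some member appears in the SQL.
    return any(member in sql for member in group_members)

# With an empty group this is always False, so a selective audit policy
# built on it never identifies traffic to filter, and everything is logged.
print(matches_group("select * from any_table"))  # False
```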
Creating policies
In addition to creating policies, you can modify, clone, or remove a policy.
Create a policy
Use this section to create a policy. The steps follow the menu fields on the Policy
Builder screen.
Note: Selective Audit Trail does not work with Exception rules.
10. Click Save to save the policy definition.
11. Optionally click Roles to assign roles for the policy.
12. Optionally click Comments to add comments to the definition.
Modify/Clone/Remove a Policy
Use this section for the steps on how to modify, clone or remove a policy.
Modify a policy
Use caution before modifying a policy definition: be sure that you understand the
implications of modifying a policy that is in use. If the existing policy has to be
re-installed before all revisions have been completed, the policy may not install, or
it may not produce the desired results when installed. For this reason, it is
preferable to clone the policy, so that the original is always available to reinstall.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be modified.
3. Do one of the following:
v To edit overall policy settings (Category, Log Flat option, etc.) click Modify.
To change any of these settings, see Create a Policy.
v To edit the rules only, click Edit Rules. To modify any components of the
rule definitions, see Add or Edit Rules.
Clone a policy
There are a number of situations where you may want to define a new policy
based on an existing one, without modifying the original definition.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be cloned.
3. Click Clone to open the Clone Policy panel.
4. Enter a unique name for the new policy in the New Name box. Do not include
apostrophe characters in the name.
5. To clone the baseline constructs (the commands, basically) that have been
generated for the baseline being cloned, mark the Clone Constructs checkbox.
6. Click Save to save the new policy. You can then open and edit the new policy
via the Policy Finder. See Modify a Policy.
Remove a policy
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be removed.
3. Click the Delete button. You will be prompted to confirm the action.
11. When the Rec. Vals box is marked, the actual construct causing the rule to be
satisfied will be logged in the SQL String attribute and is available in reports.
If not marked, no SQL statement will be logged.
12. Message templates are used to generate alerts. Multiple Named Message
Templates are created and modified from Global Profile.
13. Select the action to take when the rule is satisfied.
14. If an alert action is specified, the Notification pane opens, and at least one
notification type must be defined.
15. Click Save to save the rule. This closes the Rule Definition panel and returns
to the Policy Rules panel.
When a policy contains many rules, it can be useful to view a subset of the rules
having common attributes.
The Filter box in the Rules Definition panel can be used for this purpose. The
process of defining a filter is similar to the process of defining a rule.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be viewed or modified.
3. Click Edit Rules.
4. In the Filter box, do one of the following:
v Select a filter from the Filter list.
v Click Edit to modify a filter definition.
v Click New to define a new filter.
Once the filtered set of rules is displayed, you can perform any of the actions
described in this section on the displayed rules.
Copy Rules
Use this procedure to copy selected rules from one policy to another, or to a
different location in the same policy.
All of the rules copied will be copied to a single location - after rule 3, for
example. To copy rules to different locations in the receiving policy, either perform
multiple copy operations, or copy all of the rules in one operation, and then edit
the receiving policy to move the rules as necessary.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy from which you want to copy
one or more rules.
3. Click Edit Rules.
4. Mark the checkbox for each rule to be copied.
5. Click Copy Rules.
6. From the Copy selected rules to policy list, select the policy to receive the
copied rules.
7. From the Insert after rule list, select the rule after which the copied rules
should be inserted, or select Top to insert the copied rules at the beginning of
the list.
8. Click Copy. You will be informed of the success of the operation.
The Policy Builder can suggest rules from both the baseline included in the policy
and the database security policy (internal to the DBMS) defined for a server.
Before accepting a suggested object group, you can edit the generated Group
Description field (Suggested Object Group603-25 11:54, for example) to provide a
more meaningful name. After accepting a suggested object group, you can view its
membership. You can reject the use of that group within any suggested rule, but
you cannot edit the membership of that group.
If you reject a suggested object group, the suggested rule for that group is replaced
with a separate suggested rule for each member of the rejected group. You can
accept or reject each of those suggested rules separately. After accepting a
suggested rule, you can edit that rule.
Viewing Suggested Object Groups
Suggested object groups display in the Object column of the Suggested
Rules panel as hypertext links beginning with the words Suggested Object
Group.
To view a suggested object group's membership, click the hypertext link
for that group. If the group has not yet been accepted, the group
membership displays in the Edit Group panel. If the group has already
been accepted, it displays in the View Group panel.
Accepting Suggested Object Groups
To accept a suggested object group:
1. Enter a meaningful name in the Group Description field in the Edit
Group panel. (Not required, but strongly recommended). Do not
include apostrophe characters in the name. This is the only opportunity
you have to name this group. Otherwise, the group gets a name
beginning with Suggested Object Group and followed by a number, as
described previously.
2. Click Save to accept the edited group for the suggested rule, or click
Save for All to accept the edited group for all suggested rules in which
it appears. The new object name will replace the old one in the rule.
Rejecting Suggested Object Groups
When you reject a suggested object group, the use of that group is replaced
by one or more suggested rules. To reject a suggested object group, do one
of the following:
v To reject the group for this suggested rule only: Click the Reject button.
v To reject the group for all suggested rules: Click the Reject for All
button.
Note: If you accept a suggested object group in one rule, open that same
suggested object group again from another rule, and then click the Reject for All
button, that group will be retained in any rule where it was explicitly accepted, but
rejected in the remaining rules in which it was used.
The Policy Builder does this by examining the permissions granted to user groups
and database objects (tables, procedures, and views) within the DBMS, then
grouping the database objects into suggested object groups so that the total
number of suggested rules can be minimized. You can accept or reject any
suggested object group (see Using Suggested Object Groups). You can also accept
or reject any suggested rule.
To have the Policy Builder suggest rules from the database ACL:
Note: If adding an Oracle, DB2 or DB2 for z/OS® datasource to access the DB
ACL, the Query Parameters section, in the Database Definition pop-up window,
will be disabled.
3. Click Suggest Rules to generate the rules. The Suggested Rules panel opens in
a separate window (as described previously, for the Rules Suggested from
Baseline). If you select one or more of the suggested rules and click Save, they
will be inserted in the same order into the list of rules in the Policy Rules
panel, just before the BASELINE rule. If there is no BASELINE rule, they will
be inserted at the beginning of the list. Once the suggested rules have been
inserted into the Policy Rules panel, you can change the order of the rules or
edit them, as necessary.
4. Check the membership of the suggested object groups. In the Object column,
any suggested object groups that have been created begin with the name
Suggested Object Group and display as hypertext links (in blue and
underlined). For information about how to view, edit, accept, or reject
suggested object groups, see Using Suggested Object Groups).
5. Mark the Select box for each suggested rule you want included in the policy.
Click Save to accept the selected rules.
Use the Policy Simulator to test access rules without installing the policy.
It does not test exception rules or extrusion rules. The simulator replays logged
network traffic and applies all access rules in the policy. It produces a special
report in a separate window, listing the SQL that triggered alert or log only
actions. The report includes the following columns: Timestamp, Category Name,
Access Rule Description, Client IP, Server IP, DB User Name, Full SQL String,
Severity Description, and Count of Policy Rule Violations. Use the CLI command,
store allow_simulation, to make the Policy Simulation button active in the GUI.
The Policy Simulator can be used to test only the following types of access rule
actions:
v Log Only
v Any Alert action: Alert Daily, Alert Once Per Session, Alert Per Match, Alert Per
Time Granularity
The Policy Simulator will not produce any results if the policy includes logging
actions other than Log Only. To use the simulator for such a policy, temporarily
change all logging actions to Log Only.
3. Click Edit Rules.
4. Click the Policy Simulator button to open the Policy Simulator panel.
5. Supply both From and To dates to define the time period to use for the
simulation.
Note: Historical data can be archived and purged from your Guardium
appliance on a schedule defined by your Guardium administrator. Be sure that
data from the time period you specify is available (and has not been purged).
6. Click Test. When the test starts and while it is running, the message * is
running is displayed in the Policy Simulator panel. When the test completes, a
special report opens in a separate window listing all rule matches that were
logged. If no alert or log only rules were triggered, you will receive a No Drill
Down Report Available message. In the latter case, you may not have included
enough data in the test period.
Installing Policies
Use this topic to install the policy and modify the installation schedule.
Multi-policy support
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Installation to open the Policy Installer.
2. Select the policy to be installed from the Policy Description box.
3. Do one of the following:
v Click Install to install the policy immediately.
v Click Modify Schedule to open the general-purpose scheduling utility, to
schedule the policy installation.
Note: Policies defined as baseline policies can be mixed with policies not defined
as baseline policies.
The order of appearance can be controlled during policy installation (first, last,
or anywhere in between), but the order of appearance cannot be edited at a later
date.
The first installed policy has a special meaning, as it sets the value of the global
policy parameters. These parameters are: Global pattern; Is it a selective audit;
Client and Server net mask; Tagged Client and Server group ID.
This multi-policy support is available through the GUI (Setup > Tools and Views >
Policy Installation) and through GuardAPI.
The Guardium collector has many tasks, such as Policy Installation, Audit
Processes, and Group updates, that are scheduled to run periodically. The Job
dependencies feature finds all jobs that have a direct relationship with, and
impact on, the success of the task you are trying to schedule. Unless you find the
jobs that are defined as prerequisites for the job you are trying to schedule, there
is a chance the task will rely on inaccurate data, which might lead to false or
inaccurate results.
Feature Highlights
v User marks a scheduled job to find and run dependencies at run time.
v When the scheduler runs the job, it automatically finds all the
subordinate jobs and runs them in order.
v There is a retry sequence in case of a failure.
Find dependencies
v Identify scenarios that require dependencies.
v Identify Runnable vs. Non-Runnable jobs.
v Calculate pre-defined job dependencies.
Job: Audit Process
Suggested Prerequisite Job: Groups that are defined in a condition of an audit
task of type Report are either scheduled or not scheduled to be populated by the
Populate From Query mechanism.
Reason: Groups that are referred to by a query condition must be populated with
up-to-date data before an audit task of type Report is run.

Job: Populate From Query
Suggested Prerequisite Job: Custom upload tables that contain any of the entities
of the query that is used to populate a group.

Job: Audit Process
Suggested Prerequisite Job: Import
Reason: Relevant for an aggregator only. This prerequisite guarantees that
information is imported from all aggregated units before any audit process can
run.
Scheduler enhancements
v Find job dependencies when a scheduled job is run.
v Run job dependencies in order.
Runnable jobs can be scheduled; Non-Runnable jobs cannot.
A Group is a Non-Runnable job.
Populate From Query on a Group is Runnable.
Direct dependencies are objects that are tied together by definition, for
example, Policy depends on Rule and Rule depends on Groups.
Indirect dependencies are objects that are logically tied, for example, run
Audit processes before installing policies.
GUI support
1. Mark the Auto run dependent jobs checkbox after selecting Create Schedule
from Policy Installation.
2. Click Save to schedule the process. This notifies the user of the dependencies
status.
GuardAPI support
GuardAPI job dependency commands:
CLI> grdapi add_job_dependency
function parameters :
dependOnJobExecutedWithin - String
dependOnTrigger - String - required
intervalBetweenRetries - Integer
jobRetries - Integer
jobTrigger - String - required
runIfDependOnJobReturns - String
api_target_host - String
CLI> grdapi show_job_dependency_execution_profile
function parameters :
dependOnTrigger - String - required
jobTrigger - String - required
api_target_host - String
Run Scheduler
Scheduler will check for job dependencies when it is time to run a job.
Dependencies are executed in reverse order.
Example: Given a dependency tree:
Policy Install (Runnable)
  Audit Process (Runnable / indirect dependency)
    Audit Task
      Classification Process
        Classification Policy
          Classification Policy Action
            Group (Runnable / direct - Populate from Query)
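The execution order can be sketched as resolving each job's prerequisites before the job itself, with a simple retry loop; the job names and retry policy here are illustrative, not GuardAPI values:

```python
def run_with_dependencies(job, depends_on, run, retries=2):
    """Run all prerequisite jobs first (deepest first), then the job itself.
    depends_on maps a job name to the list of jobs it depends on;
    run(job) returns True on success and is retried on failure."""
    for prereq in depends_on.get(job, []):
        run_with_dependencies(prereq, depends_on, run, retries)
    for attempt in range(retries + 1):
        if run(job):
            return True
    return False

# Illustrative dependency chain: a group populated from a query feeds the
# audit process, which must run before the policy install.
deps = {
    "policy_install": ["audit_process"],
    "audit_process": ["populate_group_from_query"],
}
order = []
run_with_dependencies("policy_install", deps, lambda j: order.append(j) or True)
print(order)  # ['populate_group_from_query', 'audit_process', 'policy_install']
```

The deepest dependency runs first, which is what "dependencies are executed in reverse order" describes: the scheduled job itself runs last.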
There is a predefined report that shows what policies are installed. For the admin
user, select the Administration Console tab and then choose Policy Installation.
Click the “View Details Report” button to bring up a default report that shows all
the rules in every field.
This report can be added to any portal page, if you have the privileges to the
report.
There are two things you may still be lacking when an auditor asks to show which
policy you have installed:
1. What policy was installed at a certain point in time?
2. Group members - if you look at the screen capture, you can see that some of
the rules refer to a group called PHI Objects. Unless the group members are
included, it is unclear exactly what the policy refers to.
In the policy editor you can always drill down to the group members, but you
may want this in the report as well.
Policy editor drill down
The way most users deal with (1) when was the policy installed, and (2)
list the members of the associated group, is to use a naming convention
and then use an audit process with predefined reports.
Auditing Process Task 2
The filter is important here.
This is an example of the PDF that will be produced (Only one rule is
used, but both the policy and members can be seen).
Run this audit process daily (or whatever frequency) and produce a PDF
report to show the auditor what was installed every day:
The type of information that can be placed in this field is USER=x; WKSTN=y;
APPL=z.
Table 8. Reference Table of Rule Definition Fields (continued)
Field Description
Client IP Clear the Not box to include, or mark the Not box to exclude:
v Any client: Leave all client fields blank. The count will be incremented every
time any client satisfies the rule. (You cannot leave all fields blank if the Not
box is marked.)
v All clients selected by an IP address and mask: Enter a client IP address in the
first box and network mask in the second box. The count will be incremented
each time that any of the specified clients satisfies the rule. For example, to
select all clients in subnet 192.168.9.x, enter 192.168.9.1 in the first box and
255.255.255.0 in the second box. For more information selecting IP addresses,
see Selecting IP Addresses Using a Mask.
v A group of clients: Select a group of client IP addresses from the Group
drop-down list, or click the Groups button to define a new group and then
select that group. The count will be incremented each time that any member of
the selected group satisfies the rule.
v All clients selected by an IP address and mask AND a group of clients: Use
both the Client IP and Group fields. The count will be incremented each time
that any client specified using either method satisfies the rule.
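The address-and-mask selection can be checked with Python's standard ipaddress module; this is a sketch of the matching rule, not the product's code:

```python
import ipaddress

def client_selected(client_ip, rule_ip, rule_mask):
    """True when client_ip falls in the subnet given by rule_ip/rule_mask."""
    network = ipaddress.ip_network(f"{rule_ip}/{rule_mask}", strict=False)
    return ipaddress.ip_address(client_ip) in network

# All clients in subnet 192.168.9.x, as in the example above:
print(client_selected("192.168.9.77", "192.168.9.1", "255.255.255.0"))   # True
print(client_selected("192.168.10.4", "192.168.9.1", "255.255.255.0"))   # False
```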
Tuple supports the use of one slash and a wildcard character (%). It does not
support the use of a double slash.
Click on the Every box to select all the commands shown in Groups.
Continue to Next Rule If marked, rule testing will continue with the next rule, regardless of whether or
not this rule is satisfied. This means that multiple rules may be satisfied (and
multiple actions taken) by a single SQL statement or exception. If not marked (the
default), no additional rules will be tested for the current transaction when this
rule is satisfied.
Additional regular expressions (Regex) for use only in Data Patterns with an
action of Redact (Scrub):
(Table with columns: Use this regular expression | Turn this result | Into this.
One such expression is SCRUB_SSN_ANSI.)
Regex with Redact - Regular expressions (regex) in the IBM Security Guardium
solution (including masking in the policy) are executed on the appliance and
allow advanced regex capabilities. However, the regex library used with
Redaction is executed in the kernel of the database server and is limited to the
most basic regex; only basic regex patterns can be used with Redaction.
Access rule, data pattern and replacement character - Using a data pattern, for
example, [a-z,2]{3}([_][0-9]{1,2}) with a replacement character of * will change the
values between the parentheses in the data pattern to ***. Use this function to
mask values.
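The masking behavior of the documented example can be sketched in Python, replacing every character captured by the parenthesized sub-expression with the replacement character (this mirrors the example only, not Guardium's kernel-side regex library):

```python
import re

def redact(text, pattern, repl_char="*"):
    """Mask only the characters matched by capture group 1 of the pattern."""
    def mask(match):
        start, end = match.span(1)        # absolute span of group 1
        whole = match.group(0)
        g0 = start - match.start()        # group offsets within the full match
        g1 = end - match.start()
        return whole[:g0] + repl_char * (g1 - g0) + whole[g1:]
    return re.sub(pattern, mask, text)

# The documented data pattern with replacement character '*': the characters
# between the parentheses ("_42", "_7") become '*'s, the rest is untouched.
print(redact("abc_42 xyz_7", r"[a-z,2]{3}([_][0-9]{1,2})"))  # abc*** xyz**
```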
User Defined Character Sets
Available for Oracle, Sybase, MySQL, and MS SQL Server, and for
extrusion rules only: users may influence the character set used by
defining special extrusion rules. These character set policy rules are used
only to set the character set to which traffic should be converted; setting
an action is irrelevant. In order to have an action for that traffic, the user
needs to define additional rules after the character set rule. A character
set rule can be set in two ways (hint or force), as defined in the following
examples:
Example of extrusion rule (with hint)
Will convert the traffic by character set as defined in the extrusion rule of
the installed policy ONLY if the regular conversion failed.
Character set EUC-JP (code 274).
Extrusion rule pattern: guardium://char_set?hint=274
Example of extrusion rule (with force)
Will convert the traffic by character set as defined in the extrusion rule of
the installed policy for ALL data.
Character set EUC-JP (code 274).
Extrusion rule pattern: guardium://char_set?force=274
Chapter 4. Protect 91
See List of possible character set codes at end of this topic.
Note: Keep in mind that extrusion rules are usually attached to the session with
a delay. Therefore, short sessions, or the beginning of a session, may not be
converted immediately.
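The hint and force rule patterns above share the guardium://char_set?... form. As an illustration (Python, not part of the product), such a pattern can be parsed into its mode and character-set code:

```python
from urllib.parse import urlsplit, parse_qs

def parse_charset_rule(pattern):
    """Parse a guardium://char_set?hint=NNN or ?force=NNN rule pattern."""
    parts = urlsplit(pattern)
    if parts.scheme != "guardium" or parts.netloc != "char_set":
        raise ValueError("not a character set rule pattern")
    query = parse_qs(parts.query)
    for mode in ("hint", "force"):
        if mode in query:
            return mode, int(query[mode][0])   # e.g. ("hint", 274) for EUC-JP
    raise ValueError("expected a hint or force parameter")

print(parse_charset_rule("guardium://char_set?hint=274"))   # ('hint', 274)
print(parse_charset_rule("guardium://char_set?force=274"))  # ('force', 274)
```

The distinction matters because hint converts only when the regular conversion failed, while force converts all data.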
Table 8. Reference Table of Rule Definition Fields (continued)
Field Description
DB Name The database name. See Specify Values and/or Groups of Values in Rules.
DB Type
Supported DB Types
For access rule: Cassandra, CIFS, CouchDB, DB2, DB2 COLLECTION PROFILE*
(only for use with z/OS), FTP, GreenPlumDB, Hadoop, HTTP, IBM INFORMIX
(DRDA®), IBM iSeries, IMS™, IMS COLLECTION PROFILE (only for use with
z/OS), Informix, MongoDB, MS SQL SERVER, MYSQL, NETEZZA, Oracle,
PostgreSQL, Sybase, TERADATA, VSAM or VSAM COLLECTION PROFILE*
(only for use with z/OS).
For exception and extrusion rules: Cassandra, CIFS, CouchDB, DB2, FTP,
GreenPlumDB, Hadoop, IBM INFORMIX (DRDA), IBM iSeries, Informix,
MongoDB, MS SQL SERVER, MYSQL, NETEZZA, Oracle, PostgreSQL, Sybase, or
TERADATA. Note: Informix supports two protocols: SQLEXEC (the native Informix
protocol) and DRDA (the IBM protocol). These protocols are automatically identified for
Informix traffic with no additional settings. The Server Type attribute will show
INFORMIX (for SQLEXEC protocol) and IBM INFORMIX (DRDA) (for DRDA
protocol).
Note: TERADATA has a silent login and allows clients to auto-reconnect. To block
Teradata statements in a policy, use the S-TAP firewall function with default state
ON and un-watch safe users.
DB User The database user. See Specify Values and/or Groups of Values in Rules.
Error Code The error code (for an exception). See Specify Values and/or Groups of Values in
Rules.
Exception Type
The type of exception (selected from the list).
Note: A session closed by GUI timeout, in an Exception rule, will not produce a
Session Error (Session_Error).
Field Name
The field name. See Specify Values and/or Groups of Values in Rules.
Click on the Every box to select all the fields shown in Groups.
Min. Ct. The minimum number of times the condition contained in the rule must be
matched before the rule will be satisfied (subject to the Reset interval).
Net. Protocol The network protocol. See Specify Values and/or Groups of Values in Rules.
Object
The object name. See Specify Values and/or Groups of Values in Rules.
Click on the Every box to select all the objects shown in Groups.
Object/Command Group Match a member of the selected Object/Command group.
Object/Field Group Match a member of the selected Object/Field group.
OS User Operating system user. See Specify Values and/or Groups of Values in Rules.
Pattern A regular expression to be matched, in the Pattern box. You can enter a regular
expression manually, or click the (Regex) button to open the Build Regular
Expression tool, which allows you to enter and test regular expressions.
This field affects the output of the rule rather than the definition of the rule
(that is, what happens when the rule is triggered, rather than when it should
trigger). Should the output produced by the extrusion rule match the regular
expression, the portions that match sub-expressions between parentheses '(' and
')' will be replaced by the Masking character.
Reset Interval Used only if the Min. Ct. field is greater than zero. This value is the number of
minutes after which the condition met counter will be reset to zero.
Revoke This checkbox appears on extrusion rules only. It allows you to exclude from
logging a response that has already been selected for logging by a previous rule
in the policy. In most cases you can accomplish the same result more simply by
defining a single rule with one or more NOT conditions to exclude the responses
you do not want, while logging the remaining ones that satisfy the rule. (The
Revoke checkbox pre-dates NOT conditions, and is provided mainly for backward
compatibility to support existing policies.)
Rule Description
The name of the rule. To use a special pattern test in the rule, enter the special
pattern test name followed by a space and one or more additional characters to
make the rule name unique, for example: guardium://SSEC_NUMBER employee.
(See Special Pattern Tests for more information.)
When displayed, the name will be prefaced with the rule number and the label
Access Rule, Exception Rule, or Extrusion Rule, to identify the rule type. If the
rule was generated using the Suggest Rules (from a baseline) function or the
Suggest From DB function, the generated name is in the form: Suggested Rule
<n>_mm-dd hh:mm, consisting of the following components
Server IP
Clear the Not box to include, or mark the Not box to exclude:
v Any server: Leave all server fields blank. The count will be incremented every
time any server satisfies the rule. (You cannot leave all fields blank if the Not
box is marked.)
v All servers selected by an IP address and mask: Enter a server IP address in the
first box, and network mask in the second box. The count will be incremented
each time that any of the specified servers satisfies the rule. For example, to
select all servers in subnet 192.168.3.x, enter 192.168.3.1 in the first box, and
255.255.255.0 in the second box.
v A group of servers: Select a group of server IP addresses from the Group
drop-down list or click the Groups button to define a new group and then
select that group. The count will be incremented each time that any member of
the specified group satisfies the rule.
v All servers selected by an IP address and mask AND a group of servers: Use
both the Server IP and Group fields. The count will be incremented each time
that any server specified using either method satisfies the rule.
Service Name The service name. See Specify Values and/or Groups of Values in Rules.
Severity Select a severity code from the list: INFO, LOW, NONE, MED or HIGH. If HIGH
is selected and email alerts are sent by this rule, the email will be flagged Urgent.
SQL Pattern A regular expression to be matched, in the Pattern box. You can enter a regular
expression manually, or click Regex to open the Build Regular Expression tool,
which allows you to enter and test regular expressions.
Src app Application source program. See Specify Values and/or Groups of Values in
Rules.
Trigger Once Per Session
Do not analyze session for same rule after first match. Especially effective for
“Selective Audit” policies.
XML Pattern
A regular expression to be matched, in the Pattern box. You can enter a regular
expression manually, or click Regex to open the Build Regular Expression tool,
which allows you to enter and test regular expressions.
Sp_cursoropen holds the original statement, while the FULL_SQL return value in
an Extrusion rule will appear as sp_cursorfetch instead of Select * from
___________.
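The Min. Ct. and Reset Interval fields in Table 8 together act as a windowed counter. A minimal Python sketch of that interaction (an illustrative model, not the appliance's implementation):

```python
class ThresholdCounter:
    """Rule is satisfied once min_ct matches occur within the reset interval."""
    def __init__(self, min_ct, reset_minutes):
        self.min_ct = min_ct
        self.reset_minutes = reset_minutes
        self.window_start = None
        self.count = 0

    def record_match(self, minute):
        # Reset the condition-met counter once the reset interval has elapsed.
        if self.window_start is None or minute - self.window_start >= self.reset_minutes:
            self.window_start = minute
            self.count = 0
        self.count += 1
        return self.count >= self.min_ct   # True once the rule is satisfied

c = ThresholdCounter(min_ct=3, reset_minutes=10)
print([c.record_match(t) for t in (0, 1, 2)])    # [False, False, True]

c2 = ThresholdCounter(min_ct=3, reset_minutes=10)
print([c2.record_match(t) for t in (0, 1, 12)])  # [False, False, False]
```

In the second sequence the third match arrives after the 10-minute reset interval, so the counter starts over and the rule is never satisfied.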
Steps:
1. Define a rule template
Because many actions are permitted for a given policy rule, it is very difficult
to define the complex hierarchical structure of a rule using the GuardAPI
directly. However, in most cases rules differ only in their conditions, and the
action/receiver structures usually fall into a small set of options. Therefore,
the APIs are based on cloning an existing rule that acts as a rule template: the
template defines the action/receiver structure, and the conditions are then
changed using APIs.
Here we create a rule template (HowToTemplate), which includes rule action
definition and will then be cloned and updated each time a new rule of that
kind has to be added to a policy.
Click Protect > Security Policies > Policy Builder to open the Policy Finder
and create a template policy.
Click New to create the template policy: enter a Policy description, check the
Selective audit trail check-box, and click the Save button.
Click the Edit Rules button to add a template rule to this policy.
Click on the Add Access Rule button to display the Access Rule Definition
panel and add a rule.
2. Create the Oracle script that will generate a file with GuardAPI commands.
Key items to know before writing the script:
v GuardAPI is a set of CLI commands, all of which begin with the keyword
grdapi. To list all GuardAPI commands available, enter the command
'grdapi' with no arguments. To display the parameters for a particular
command, enter the command followed by '--help=yes'.
For example
CLI>grdapi copy_rule --help=yes
ID=0
function parameters :
fromPolicy - required
ruleDesc - required
toPolicy - required
ok
v Both the keyword and value components of parameters are case sensitive.
v If a parameter value contains one or more spaces, it must be enclosed in
double quote characters. For example:
grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Template" ...
v There is no need to use all available parameters that a function supports. In
addition to the required parameters, use the parameters that you want to
change.
v Scripts, which invoke GuardAPI, may contain sensitive information, such as
passwords for datasources. To ensure that sensitive information is kept
encrypted at all times, the grdapi command supports passing of one
encrypted parameter to an API Function. This encryption is done using the
system's shared secret, which is set by the administrator and can be shared by
many systems, including all units of a central management and/or aggregation
cluster. This allows scripts with encrypted parameters to run on machines that
have the same shared secret. For more details, see the Guardium help.
v If multiple policies are installed, then the install policy command
(policy_install) must include the descriptions of all installed policies,
delimited by the pipe character. This must be done even if only one policy has
changes. The policy descriptions should be in the order in which you want the
policies to be installed.
Example of the command for installation of policies HowTo 1 and HowTo 2:
grdapi policy_install policy="HowTo 1|HowTo 2"
The logic behind the script is to change the currently installed policy
HowTo in the following way:
a. For each record in the CUSTOM_ENTITLEMENT table with IS_NEW_FLAG
equals ‘1’, a new access rule with description saved in RULE_DESC column
will be added to the “HowTo” policy. The rule logs full details for all DML
Commands from OS user (OS_USER field value), client IP (CLIENT_IP),
server IP (SERVER_IP) with service name (SERVICE_NAME).
b. If IS_NEW_FLAG value is ‘0’, the rule with description equals to the value
of RULE_DESC column will be changed based on the relevant data from
this record of the table.
c. Rule3 will be set as the first rule – to show how to use change_rule_order
function.
d. In order to apply all of the changes, the policy will be reinstalled.
Data in custom_entitlement table
Table 9. Custom entitlement
os_user client_ip server_ip rule_desc service_name is_new_rule seq
User1 192.168.7.101 192.168.7.201 Rule1 PROD1 1 1
User2 192.168.7.102 192.168.7.202 Rule2 PROD2 1 2
User3 192.168.7.103 192.168.7.203 Rule3 PROD3 1 3
User4 192.168.7.104 192.168.7.204 Rule2 PROD4 0 4
Note: The last grdapi command re-installs the policy to apply the rules
to the system.
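The command-rendering part of steps a through d can also be prototyped outside Oracle. The sketch below is illustrative only: of the GuardAPI calls, only copy_rule and policy_install (with the parameters shown earlier in this section) are taken from the text, and the per-row condition updates are deliberately left as a comment rather than guessed:

```python
def grdapi(command, **params):
    """Render one grdapi line, quoting values that contain spaces."""
    def fmt(value):
        value = str(value)
        return f'"{value}"' if " " in value else value
    args = " ".join(f"{key}={fmt(value)}" for key, value in params.items())
    return f"grdapi {command} {args}"

# Rows shaped like Table 9 (is_new_rule == 1 means a new rule must be cloned).
rows = [
    {"os_user": "User1", "rule_desc": "Rule1", "is_new_rule": 1},
    {"os_user": "User4", "rule_desc": "Rule2", "is_new_rule": 0},
]

lines = []
for row in rows:
    if row["is_new_rule"] == 1:
        # Clone the template rule into the HowTo policy.
        lines.append(grdapi("copy_rule",
                            fromPolicy="HowToTemplate",
                            ruleDesc="DMLCommand - Log Full Details Template",
                            toPolicy="HowTo"))
    # Condition updates for the new or existing rule would follow here, using
    # the appropriate GuardAPI functions for each column of the row.

# Re-install the policy so that the changes take effect (must be last).
lines.append(grdapi("policy_install", policy="HowTo"))
print("\n".join(lines))
```

Note how the helper quotes only values containing spaces, matching the GuardAPI quoting rule described earlier.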
3. Run the generated script.
To run this script use the following command structure:
ssh cli@[Guardium appliance name] < [script name]
For example, to run the update_policy.txt script on host 192.168.12.5 (you will
be prompted for the password):
ssh cli@192.168.12.5 <update_policy.txt
Sample output:
192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20015
192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20016
Value-added: This section makes clearer what happens when certain choices are
made in Policy Rules for log or ignore actions, which control the level of
logging based on observed traffic.
Ignore session
The current request and the remainder of the session will be ignored. This action
does log a policy violation, but it stops the logging of constructs and will not
test for policy violations of any type for the remainder of the session.
Ignore S-TAP session
The current request and the remainder of the S-TAP session will be ignored. This
action is used in combination with specifying, in the policy builder, certain
machines, users, or applications that produce a high volume of network traffic.
This action is useful in cases where you know the database response from the
S-TAP session will be of no interest.
Table 11. Ignore S-TAP session
Data logged or ignored between client and DB Server/S-TAP:
Ignore - SQL commands, SQL errors, Result Sets
Data sent from DB Server/S-TAP to Collector:
Log in/Log out. Sniffer to S-TAP: one signal to S-TAP to stop sending activity
for this session; additional signals to S-TAP to stop sending activity for this
session.
Data from Span Port/Network TAP to Collector:
Not Applicable. If there is a need to ignore traffic from a Span Port/Network
TAP, use Ignore session instead.
Ignore responses per session
Responses for the remainder of the session will be ignored. This action logs a
policy violation, but it stops analyzing responses for the remainder of the
session. This action is useful in cases where you know the database response
will be of no interest.
Note: For ignore responses per session, because the sniffer either does not
receive a response for the query or ignores it, the values for COUNT_FAILED and
SUCCESS are the table defaults: COUNT_FAILED=0 and SUCCESS=1.
Ignore SQL per session
No SQL will be logged for the remainder of the session. Exceptions will continue
to be logged, but the system may not capture the SQL strings that correspond to
the exceptions.
Table 13. Ignore SQL per session
Data logged or ignored between client and DB Server/S-TAP:
Ignore - SQL commands. Log - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector:
Log in/Log out. Sniffer to S-TAP: one signal to S-TAP to stop sending activity
for this session. If additional activity is sent by S-TAP, it is ignored at the
sniffer level only.
Data from Span Port/Network TAP to Collector:
Ignore - SQL commands. Log - SQL errors, Result Sets. SQL commands are filtered
at the Sniffer.
Use a Selective Audit Trail policy to limit the amount of logging on the appliance.
This is appropriate when the traffic of interest is a relatively small percentage of
the traffic being accepted by the inspection engines, or when all of the traffic you
might ever want to report upon can be completely identified.
It is important to note that Ignore Session rules are still very important to include
in the policy even if using a Selective Audit Trail. Ignore Session rules decrease the
load on a collector considerably because by filtering the information at the S-TAP
level, the collector never receives it and does not have to consume resources
analyzing traffic that will not ultimately be logged. A Selective Audit Trail policy
with no Ignore Session rules would mean that all traffic would be sent from the
database server to the collector, causing the collector to analyze every command
and result set generated by the database server.
Data logged or ignored between client and DB Server/S-TAP:
Log - SQL errors, Result Sets
Data sent from DB Server/S-TAP to Collector:
Ignore SQL commands, except for those defined by Audit-Only or Log Full Details
rules.
Data from Span Port/Network TAP to Collector:
Log - SQL errors, Result Sets. SQL commands are filtered at the Sniffer.
Log SQL errors
Character sets
You can use character set codes in extrusion rules.
Correlation Alerts
An alert is a message indicating that an exception or policy rule violation was
detected.
Regardless of how they are triggered, Guardium logs all alerts the same way: the
alert information is logged in the Guardium internal database. The amount and
type of information logged depends on the specific alert type.
Note: For SNMP or SYSLOG, the maximum message length is 3000 characters.
Any messages longer than that will be truncated.
v Custom – A user-written Java class to handle alerts. The Alerter passes an alert
message and timestamp to the custom alerting class. There can be multiple
custom alerting classes, and one custom alerting class can be an extension of
another custom alerting class.
Note: Alerts definition and notification are not subject to Data Level Security.
Reasons for this include alerts are not evaluated in the context of user, the alert
may be related to databases associated to multiple users and to avoid situations
where no one gets the alert notification.
Note: If an alert uses a query that contains 30 or more fields (including
counters), anomaly detection fails with an "Array out of bound exception" error
message. Queries with 30 or more columns cannot be used for alerts; such
queries do not appear in the list of available queries for threshold alerts.
Note: If relative period is used, each time the alert is checked it will
execute the query twice, once for the current period and once for the
relative period.
19. Indicate in the Notification Frequency box how often (in minutes) the Alert
Receivers should be notified when the alert condition has been satisfied.
20. Click Save to save the alert definition.
Note: You cannot assign receivers or roles, or enter comments until the
definition has been saved.
21. In the Alert Receivers panel, optionally designate one or more persons or
groups to be notified when this alert condition is satisfied. To add a receiver,
click the Add Receiver button to open the Add Receiver Selection panel.
Note: If the receiver of an alert is the admin user then admin needs to be
assigned an email for the alert to fire.
Use correlation alerts to provide notification of events accumulated over time.
Applications do not normally produce SQL errors, so an increase in SQL errors in
an application is a warning sign that an SQL injection attempt may be in
progress. See the online help topics Correlation Alerts and Queries for further
information.
Prerequisites
v Configure email (SMTP) server (Setup > Tools and Views >Alerter)
v After fully configuring the correlation alert, make sure it is active and running
(Setup > Tools and Views> Anomaly Detection)
A correlation alert is triggered by a query that looks back over a specified time
period to determine if an alert threshold has been met.
Procedure
1. Exceptions Tracking - Open the Query Finder
v Users with the admin role: Select Tools > Report Building, and then select
the Exceptions Tracking domain only.
v All Others: Select Monitor/Audit > Build Reports, and select Exceptions
Tracking Builder.
2. Open the drop-down choices for Query. Select SQL Errors. This opens a
configuration screen with SQL Errors as the main title.
3. Clone this selection, typing in a unique name in the text box for the query. Do
not include apostrophe characters in the query name.
4. In your custom query, under Query fields, add a date field (timestamp) and
change the database error text field to count field mode. Under Query
conditions, change the run time parameters of exception types to attribute and
choose Exception.App. User Name.
5. Click Save. This custom query for SQL Errors from any application user is
now available for use in the Alert Builder.
Note: You cannot assign receivers or roles, or enter comments until the
definition has been saved.
25. In the Alert Receivers panel, optionally designate one or more persons or
groups to be notified when this alert condition is satisfied. To add a receiver,
click the Add Receiver button to open the Add Receiver Selection panel. For
information about adding receivers, see notifications.
26. Optionally click the Roles button to assign roles for the alert. See Security
Roles.
27. Optionally click the Comments button to add comments to the definition.
28. Click the Apply button and then the Done button when you have finished.
Move from a purely passive alerting system to an active prevention system, even
in cases where complex conditions need to be expressed with real-time rules.
Summary of procedure
The scenario that you will implement involves data leak prevention. You will
terminate a user connection if that connection extracted a defined number of Social
Security records, for example, more than 100. Most monitoring systems can tell
when a user extracts many records in a request – as can policies in Guardium
systems.
To prevent data leaks, three distinct capabilities of the Guardium system
are used:
1. The ability to define threshold-based alerts.
2. The ability to invoke GuardAPI functions automatically from query result
lines.
3. The ability to quarantine a user and terminate a connection using a GuardAPI
command.
The query lists server/instance/user information for sessions that have a sum
of returned counters larger than some number, say 100.
Add a condition with the HAVING qualifier to count the number of returned
matches, and set the condition to be larger than 100. If you want to use a
HAVING clause, you must add a count so that the correct GROUP BY clause is
added.
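The effect of that GROUP BY/HAVING condition can be sketched in Python (an illustrative model of the query, not the Guardium query engine): group the returned-record counters by server/instance/user and keep only the sessions whose sum exceeds the threshold:

```python
from collections import defaultdict

def sessions_over_threshold(events, threshold=100):
    """events: (server_ip, instance, db_user, records_returned) tuples."""
    totals = defaultdict(int)
    for server_ip, instance, db_user, returned in events:
        totals[(server_ip, instance, db_user)] += returned   # GROUP BY
    # HAVING SUM(records_returned) > threshold
    return sorted(key for key, total in totals.items() if total > threshold)

events = [
    ("192.168.7.201", "ORA1", "scott", 60),
    ("192.168.7.201", "ORA1", "scott", 70),   # scott's total: 130 > 100
    ("192.168.7.202", "ORA2", "adams", 50),
]
print(sessions_over_threshold(events))  # [('192.168.7.201', 'ORA1', 'scott')]
```

Only the grouped session whose summed counters exceed the threshold is returned, which is exactly the set of lines the report (and the quarantine GuardAPI invoked from it) acts on.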
As an example, the output of the report once a user extracts more than 100
records looks like this:
This report is almost good enough but not quite. This is because the GuardAPI for
quarantining users requires a time stamp for how long to quarantine the user. In
this case, it is a constant of 2099-12-31 23:59:59.
Click the Query Entities & Attributes menu item and open the Customizer.
Then, double-click one of the lines, click “invoke” and then click
create_constant_attribute.
Enter the constant value and the name of the attribute (in this example, the name
is Forever):
Generate the report and add it to a pane (you can click Add to Pane; this performs
all operations on your behalf).
Now you can see why you had to add a new attribute. The quarantine GuardAPI
requires a time stamp that defines how long the user on the instance is
quarantined. However, since this is a new attribute that the system does not yet
know about, this attribute needs to be mapped to the quarantine GuardAPI. The
system must know that it can use this new attribute as the time stamp for the
quarantine.
Go back to the Query Entities and Attributes menu screen (on the Guardium
Monitor tab), double-click the line for the Forever attribute, and click invoke.
Add the quarantine GuardAPI name and the parameter to which to map:
Click API assignment and move the GuardAPI command from one list box to the
other.
Click Apply.
From each line on the report, you can now double-click and invoke a quarantine
GuardAPI command:
Click on create_quarantine_until:
You can invoke a GuardAPI command on this report to remove a user from
quarantine.
The last part of this how-to topic involves automating this procedure. Running a
report and invoking a GuardAPI command is useful only if a user monitors such
reports. Instead, you might want to run this report periodically (for example,
every 10 minutes) and invoke the GuardAPI for each line. This can be done
automatically using an audit process.
Create a new audit process (Tools > Config and Control > Audit Process Builder)
and name it.
Add a task by choosing the report that you created. Since you run the report
every 10 minutes, pick an appropriate from/to period. Pick the GuardAPI from the
pull-down menu:
Test it using Run once now – you should see that a quarantine record is added to
the report under the Daily Monitor tab.
Once everything works you can schedule this audit process for continuous
running.
File activity monitoring (FAM) uses a discovery agent called a file crawler to
inventory the files on each server and identify sensitive data within the files. The
discovery agent/ file crawler gathers the list of folders and files, their owner,
access permissions, size, and the date and time of the last update.
FAM uses decision plans to identify sensitive data within files. Each decision plan
contains rules for recognizing a certain type of data. By default, FAM uses decision
plans that identify data for SOX, PCI, HIPAA, and source code. You can create
your own decision plans, and you can activate and deactivate decision plans to
focus on the types of sensitive data about which you are concerned. Think of this
as analogous to the classification process used with databases. Decision plans are
analogous to classification policies.
The discovery agent/ file crawler sends file metadata and data from its
classification process to the Guardium system. You can view that data in reports or
in the File version of the enterprise search function.
Note: FAM discovery and classification cannot be installed if there is no S-TAP
installed on the Guardium system.
File Activity Monitoring Value Proposition
Ensure integrity and protection of structured and unstructured sensitive
data
v Discover where your sensitive data resides (through metadata collection
and classification).
v Prevent unauthorized access to your files and documents (Continuous,
policy-based, real-time monitoring of all file access activities, including
actions by privileged users.)
Meet regulatory compliance in a cost effective way
v Automate and centralize controls, provide audit trail.
v Achieve compliance with diverse regulations such as HIPAA, PCI DSS,
various state-level and national privacy regulations.
Scale with growing data volumes and expanding enterprise requirements
v Extensive heterogeneous support across all popular systems
Use case 1
Critical application files can be accessed, modified, or even destroyed
through back-end access to the application or database server
This section describes using GIM to install the FAM components. After the GIM
client is installed on the file server, you can easily use GIM to install the modules
you need on the file server:
v FAM discovery agent (also known as FAM bundle or FAM agent): Required for
file discovery and optional classification.
v S-TAP: Required for file monitoring and policy enforcement
Procedure
1. On the collector, enable the population of discovery and classification
results into enterprise search by running the following GuardAPI command at
the CLI prompt: grdapi enable_fam_crawler [extraction_start] [schedule_start]
[activity_schedule_interval] [activity_schedule_units]
[entitlement_schedule_interval] [entitlement_schedule_units]
Results
Discovery and Classification results - when the installation of the FAM discovery
agent/ file crawler is complete, a basic run of the file crawler begins, using the
initial path that you specified during the installation. This process gathers the list
of folders and files, their owner, access permissions, size, and the date and time of
the last update.
To view file access data, choose File in the dropdown list in the banner. This action
opens the enterprise search function and displays file data. Enterprise search must
be enabled on your Guardium system in order to view this data. The FAM
component on the Guardium system must also be enabled.
To view data that is sent by the File Access Monitoring (FAM) discovery agent,
open the Entitlement tab. This tab displays entries that are based on the decision
plans that are being used by the FAM classifier to identify sensitive data. The
Classification Entities column shows the decision plan that caused this file to be
identified as sensitive. From this view, you can choose an entry and add it to a
group, which you will use in one or more policies. You can also create a policy that
uses an entry to form its first access rule. You can create a new policy, or create a
rule and add it to an existing policy.
To view data that is generated based on file access policies that you have created,
after following the step procedure in the Creating a FAM policy rule section, open
the Activity tab. This tab displays entries that show the file name and path and the
type of access that was detected.
Viewing discovery and classification results
To obtain file metadata and optional classification results, the FAM
Discovery agent must be installed and configured on the file server and
actively sending data to the Collector. On the collector, you can see the
results in two ways:
v From the enterprise search UI. To populate enterprise search, you must
use the following command on the Guardium collector: grdapi
enable_fam_crawler. The enterprise search UI is ideal for ad-hoc
browsing and searching. You can also use enterprise search results to
automatically create file activity monitoring policy rules.
v From the FAM – Entitlement report. Reports are ideal for creating
auditable records. Using the audit process builder (Under the Comply
tab), you can schedule reports to run periodically and send report results
to reviewers.
File activity monitoring
Rule order
The ordering of rules in the security policy is very important. The rules are
sent to the S-TAP as a set and are processed strictly in order. Any given
user access is checked against each rule in the policy in order. The first rule
that meets the criteria of this file access is applied and subsequent rules are
ignored. Let us say you have two rules:
v Rule A: audit only all access to /data/*
v Rule B: block, log violation and audit user 'joe' from accessing
/data/salaries
If you put Rule A first, and Joe tries to read /data/salaries, there is no
need to go to the next rule, and Joe will be audited. If you put Rule B first,
Joe is blocked from accessing /data/salaries and there is no need to go to
the next rule.
In most cases, put the most specific rule first and the most general rule
last.
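The first-match behavior described above can be sketched as follows (illustrative Python, not the S-TAP implementation; file paths are matched here with shell-style wildcards):

```python
from fnmatch import fnmatch

def first_match(rules, user, path):
    """Return the action of the first rule whose criteria match this access."""
    for rule in rules:
        if rule["user"] is not None and rule["user"] != user:
            continue                       # a blank user means "all users"
        if fnmatch(path, rule["path"]):
            return rule["action"]          # first match wins; later rules are ignored
    return None

rule_a = {"user": None,  "path": "/data/*",        "action": "audit"}
rule_b = {"user": "joe", "path": "/data/salaries", "action": "block"}

print(first_match([rule_a, rule_b], "joe", "/data/salaries"))  # audit
print(first_match([rule_b, rule_a], "joe", "/data/salaries"))  # block
```

The two orderings reproduce the scenario in the text: with Rule A first, Joe's access to /data/salaries is merely audited; with Rule B first, it is blocked.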
Rule criteria
For any given file access, rule criteria are used to evaluate whether a
particular action should be taken. For any datasource or group of
datasources (file servers), the rule criteria that you can specify include:
User: This is the OS user who is accessing files. This can also be a group of
users, as defined in a Guardium group. If this is left blank then the rule
applies to all users (except root).
File Path: This can be a Windows or UNIX file path or even an individual
file or group of files. This cannot be blank (except when removable media
is selected).
You can use wild cards in the name specification:
To create rules from the quick search results, you must install FAM on one or more
file servers. FAM must send information to your Guardium server, so that it
appears in the quick search results.
FAM applies policy rules to data that is sent from your file servers. You can use
values, such as datasource names, user names, actions, and file paths, from your
quick search data to create policy rules.
Procedure
1. Choose File from the dropdown list in the product banner and click the search
icon to open the Quick Search results page for file data.
2. Open the Entitlement tab. Click Details to see individual entries.
3. Choose one or more entries in the results that you want to use to populate a
rule. You can use the Select all check box to include all the entries that are
currently displayed (not all the entries in the database).
4. Right-click and choose Add Policy Rule. The Build Rule dialog is displayed.
Fields in this dialog are pre-filled with values from the entry that you selected.
If you selected multiple entries, a group is created that contains the values from
those entries. You can create a rule that is to be added to an existing policy, or
create a new policy that includes your new rule.
Note: An overly broad rule (a rule that monitors too many files) will overload
the system and increase processing and response time.
Note: A FAM rule can have more than one pattern in it. To protect both a
directory and its contents, define a rule with two patterns /FAMtest/* and
/FAMtest.
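Why both patterns are needed can be sketched with a simple glob matcher. This is an illustration only: it assumes FAM wildcards behave like shell-style globs, and globMatch is a hypothetical helper, not part of FAM.

```javascript
// Hypothetical glob matcher, assuming shell-style "*" semantics
// (illustration only; this is not FAM's actual matching code).
function globMatch(pattern, path) {
  // escape regex metacharacters, then turn "*" into ".*"
  var re = new RegExp('^' + pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*') + '$');
  return re.test(path);
}

globMatch('/FAMtest/*', '/FAMtest/a.txt'); // true  - matches the directory contents
globMatch('/FAMtest/*', '/FAMtest');       // false - does not match the directory itself
globMatch('/FAMtest', '/FAMtest');         // true  - hence the second pattern
```

Under this assumption, a single pattern cannot cover both a directory and its contents, which is why the rule needs two patterns.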
During File Activity Monitoring, the GIM installation user must configure the ICM
Decision Plan setting on the File Activity Monitoring GIM configuration page.
Configure the list of Decision Plans (categories), with the entities (NVP fields)
for each Decision Plan delimited by colons.
All possible entities can be configured for each of the Decision Plan templates
that are available during the File Activity Monitoring installation.
After File Activity Monitoring installation, there are four Decision Plan templates
available: HIPAA, PCI, SOX, and Source.
v HIPAA: a Decision Plan used for finding medical information
v PCI: for finding credit card numbers
v SOX: for financial documents
The "Source" decision plan refers to two knowledge bases (CodeKB and
DocumentTypeKB) which are loaded by default once the Source decision plan is
configured.
The following lists the possible entities for each Decision Plan that is supplied
out of the box with File Activity Monitoring; these entities can be configured via GIM.
HIPAA
PCI
SOX
Source
A decision plan is a collection of rules that you configure to determine how IBM
Classification Module classifies content items. Rules consist of triggers and actions.
A trigger determines the conditions that must be met to initiate an action. An
action determines how the document is to be classified. A decision plan can also
refer to one or more knowledge bases to combine rule, keyword-based
classification with statistical, text-based classification.
A Knowledge base is a set of collected data that is used to analyze and categorize
content items. The knowledge base reflects the kinds of data that the system is
expected to handle. Before the knowledge base can analyze text, it must be trained
with a sufficient number of sample content items that are properly classified into
categories. A trained knowledge base can compute a numerical measure of an
item's relevancy to each category.
Note: ICM cannot work with Decision Plans that have Chinese names. Content
documents in Chinese and Decision Plan rules in Chinese are supported, but
Decision Plan names in Chinese are not.
Note: Distribution of decision plans from the Central Manager to managed units is
unsupported.
For the purpose of this description, we’ll assume that your company has a
confidential project named "ProjectA." You want to identify and monitor all files
that contain this string.
Procedure
1. Use the Windows Start menu to open the IBM Content Classification 8.8
Classification Workbench.
2. In the Open Project dialog, click New....
3. In the New Project dialog, choose Decision Plan for the project type. Enter a
name for this decision plan, such as ProjectA_DP. Enter a description if you
want one.
4. In the New Project Options dialog, select Create an empty project.
5. In Project Explorer click Word and string list files. In the Word and string list
files dialog, click New... to create a new file. In the New File dialog, choose
Word list for the file type and choose a name for the file. In this example we
call the file Names. Wordlist_Names.txt appears in the list of files.
6. Double-click the file name to edit the file. Insert a single line with the string
~ProjectA~ and save the file.
7. In Project Explorer click DecisionPlan > New Group > New Rule. Change the
name of the rule to ProjectA.
8. In the New Rule dialog, open the Trigger tab. Click condition.
9. Choose Trigger when fields contains specific words or phrases. Choose
Word list file. Click OK.
10. Open the Action tab. Click Add new rule.
11. Select Advanced Actions from the Action Type list. Choose the Set content
field action. This content field is created when the specified trigger fires. The
content field can be viewed in FAM reports.
DecisionPlanName1{Entity1.1,Entity1.2,..}:DecisionPlanName2{Entity2.1,Entity2.2,..}
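As a hypothetical illustration of this format (the entity names below are placeholders, not values shipped with the product), a setting with two Decision Plans might look like:

```
HIPAA{PatientName,MedicalRecordNumber}:PCI{CreditCardNumber}
```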
FAM_ICM_CLASS_THREAD_COUNT
Number of threads for the classifier to use. The default is 5 and
is the recommended value.
FAM_ICM_URL The URL of the IBM Content Classification Workbench. The
default is http://localhost:18087.
FAM_INSTALLER Windows only.
FAM_INSTALL_DIR Windows only.
Format:
dd-MM-yyyy HH:mm
For example, if you enter 01-02-2015 18:00, the scan will start at 6
PM on February 1st. If the time interval is 12 hours, the process
will run every day at 6 PM and 6 AM.
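The interaction of the start time and the interval can be checked with a short sketch. The JavaScript Date arithmetic below only illustrates the schedule; it is not how FAM schedules scans.

```javascript
// 01-02-2015 18:00 in dd-MM-yyyy HH:mm is 6 PM on February 1st
// (JavaScript months are 0-based, so 1 = February).
var start = new Date(2015, 1, 1, 18, 0);
var intervalMs = 12 * 60 * 60 * 1000;            // 12-hour interval
var next = new Date(start.getTime() + intervalMs);
// next is February 2nd at 06:00, so the scan alternates between 6 PM and 6 AM
```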
FAM_SERVER_PORT The Guardium collector port, 16022.
FAM_SOURCE_DIRECTORIES
The directory or directories to start scanning from. Wild cards
are not supported. Example: /home/soonnee. There is also an
option to use FILE_SYSTEM_ROOT.
STAP_SQLGUARD_IP The IP or host name of the Guardium collector. Do not edit this
value.
Policy
A policy is a set of rules and actions that are required to be performed when certain
events or status conditions occur in an environment.
A policy specifies how data is to be masked. After you create and install a policy,
application data is masked according to the rules that are specified in the active
policy.
Rule
A rule is a list of conditions and actions that are triggered when certain conditions
are met.
Screen masking rules mask data from the application before the application is
displayed on the client computer.
You can use conditions to limit the cases in which a rule is used. For example, if
you specify a set of client IP addresses for a screen masking rule, the screen
masking rule applies only to clients with the specified IP addresses.
For best performance, apply the strictest possible filters to the rules that you create.
Action
Note: An element must not be masked more than once; for example, active rules
should apply to mutually exclusive sets of elements.
When you identify content that is to be masked, you can optionally use a regular
expression to specify which part of the identified content is to be masked. For
example, you can specify that only the prefixes of email addresses are to be masked, or you
can specify that all digits but the last four digits of a Social Security number are to
be masked.
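The two examples above can be expressed as regular expressions. The patterns and the masking characters below are illustrative assumptions, not predefined Guardium classifiers:

```javascript
// mask only the prefix of an email address (the part before "@")
var emailPrefix = /^[^@]+/;
'jane.doe@example.com'.replace(emailPrefix, '*****'); // '*****@example.com'

// mask all digits of a formatted SSN except the last four
// (the lookahead keeps the trailing four digits out of the match)
var ssnPrefix = /^\d{3}-\d{2}-(?=\d{4}$)/;
'123-45-6789'.replace(ssnPrefix, '***-**-');          // '***-**-6789'
```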
Masking methods
When you create a data masking action, you must specify the masking method
that is to be used to mask the data.
Encrypt and tokenize masking methods are reversible. Redact masking methods
are not reversible.
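The difference can be illustrated with a minimal sketch. This is not Guardium's implementation; the token format and in-memory vault are invented for the example:

```javascript
// tokenize keeps a mapping back to the original value, so it is reversible
var vault = new Map();
function tokenize(value) {
  var token = 'TOK' + vault.size;   // hypothetical token format
  vault.set(token, value);
  return token;
}
function detokenize(token) { return vault.get(token); }

// redact discards the original value, so it is not reversible
function redact(value) { return '*'.repeat(value.length); }

var token = tokenize('4111111111111111');
detokenize(token);          // recovers '4111111111111111'
redact('4111111111111111'); // '****************' - the original cannot be recovered
```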
Group
A group is a set of same-type elements that are used to simplify the processing and
reporting of those elements. For example, you can define a group of city names or
a group of IP addresses.
A group defines objects that are to be acted upon by a rule or action. For example,
to mask city names, first create a group of city names, and then create an action
that is set to mask the city names in the group.
You can create whitelist or blacklist group access from context rules (masking script).
Data classifier
A data classifier is a script, pattern, text entity, or group that defines the data to be
processed by an action. For example, to mask city names, you can create a group
of city names. You can then create a data classifier to mask city names within the
group.
Data classifiers for common classes of data elements are predefined in the system.
For example, a data classifier for email addresses is predefined. You can create
more data classifiers to meet your needs. You can define data classifiers by using
the following methods:
v Regular expression
v Guardium masking script
v Text entity
v Group of regular expressions or text entities
Access Guardium
Create a policy
A policy is a set of rules and actions that are required to be performed when certain
events or status conditions occur in an environment. A policy specifies how data
within an application is to be masked. After you install the policy, data is masked
according to the policy specifications.
Before you begin, access Guardium from a browser. If you use the selection tool,
you must have a user account and the URL for the application whose data you are
to mask. The user account must have access to the fields and columns that are to
be masked within the application.
To access the policy builder, click Protect > Security Policies > Policy Builder.
Enter a description to identify the policy and click Apply. Click Edit Rules and
wait for Add Rules to display in the last button row.
You can use groups in addition to or in place of a single value. You can also select
Not for an attribute to specify that the rule applies to all information but
information that is associated with the specified attribute values. Add at least one
action to the rule.
Click Save.
When done, test the policy with the policy simulator to ensure that the data is
masked correctly.
Use case
Call Center outsourcing - a health insurance company outsources its call
center. Customer Service Representatives (CSRs) access company
applications remotely. Guardium for Applications is installed in the middle
to guarantee that application screens undergo the masking process. CSRs
use the application as usual. Sensitive information that is not essential
for CSR operations is masked out.
If you use the selection tool, you must have a user account and the URL for the
application whose data you are to mask. The user account must have access to the
fields and columns that are to be masked within the application.
You can use the selection tool to define contexts in the following cases:
v To define a context for a new mask in context action.
v To define a context that limits the scope of a mask by content action.
The selection tool takes the following attributes into account when you use the
selection tool to define a context:
v The position of the selected column, field, or labeled field within the page.
v The label that is associated with a labeled field. For a more accurate context
definition, select a labeled field instead of a field where possible.
v The URL suffix, and the application URL parameters that you specify as context
significant. The selected field or column is masked only when the application
URL parameters are equal to the values that are defined in the context. The
masking engine considers only the values of the parameters and not the order of
the parameters within the URL.
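The value-based, order-insensitive comparison can be sketched as follows. The URL and parameter names are hypothetical, and this is not the masking engine's code:

```javascript
// returns true when every expected parameter value matches, regardless of
// the order in which the parameters appear in the URL
function paramsMatch(url, expected) {
  var qs = new URL(url).searchParams;
  return Object.entries(expected).every(function (entry) {
    return qs.get(entry[0]) === entry[1];
  });
}

paramsMatch('http://app.example.com/page?b=2&a=1', { a: '1', b: '2' }); // true  - order ignored
paramsMatch('http://app.example.com/page?a=9&b=2', { a: '1', b: '2' }); // false - value differs
```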
After you add rules to a policy or edit rules in a policy, test the policy with the
policy simulator to ensure that the data is masked correctly.
Before you begin, access Guardium from a browser. You must also have a user
account for the application whose data you are to mask. The user account must
have access to the fields and columns that are to be masked within the application.
Note: The rules in installed policies are always active, even in the policy simulator.
To limit masking in the policy simulator only to the policy that is to be simulated,
uninstall all active policies before testing.
When you are done testing the policy, you can add rules to the policy or edit rules
in the policy as necessary. If you are done adding and editing rules, you can install
or reinstall the policy.
Install a policy
When you are done adding rules to a policy or editing rules in a policy, install or
reinstall the policy to enable data masking.
Before you begin, access Guardium from a browser. If you use the selection tool,
you must have a user account and the URL for the application whose data you are
to mask. The user account must have access to the fields and columns that are to
be masked within the application.
If you discover through testing that the rules in your data masking policy do not
mask data correctly, edit the rules as necessary.
Before you begin, access Guardium from a browser. If you use the selection tool,
you must have a user account for the application whose data you are to mask. The
user account must have access to the fields and columns that are to be masked
within the application.
Limitations
Review the limitations of IBM Guardium for Applications if you encounter issues
with your data masking policies.
For better performance, ensure that policies use only one masking method for each
value to be masked. The use of only one masking method for each value also
prevents encrypted or tokenized data from being overwritten and lost by
subsequent masking. For example, if you mask a value with both format
preserving and redaction, the original value is not recoverable. If you use regular
expressions to define which data is to be masked, ensure that the regular
expressions do not overlap each other.
If a field is validated for a specific format, mask the field with the format
preserving masking method so that the masked data does not fail validation. For
example, an email address field is validated for valid email addresses. If you mask
the email address with the redact masking method, the redacted data will fail
validation, which can result in unexpected behavior or results.
In cases where masked data may be used in subsequent requests to the server,
Guardium for Applications forces masking in a reversible and format preserving
manner in order to verify referential integrity.
Guardium for Applications does not support applications that are presented in
languages other than English.
If your Guardium system seems to be stuck, you can restart the masking engines.
The Restart Masking Engines button is used only for troubleshooting situations. If
your Guardium system is not responding but has not completely failed, click this
button to restart certain processes. This restart might clear the problem.
When you use the selection tool to define masking actions, it creates scripts that
are run when rule conditions are met. These scripts modify the HTTP messages
that occur with the use of the application. If this process does not give you the
results that you require, you can create your own scripts to manipulate the
contents and properties of the HTTP messages. Designing these scripts requires
that you understand the messages that are exchanged when users interact with the
applications that you want to mask.
To use your custom scripts, identify the conditions for running the scripts, then
create a mask in context action, and add one or more action items that invoke your
custom scripts. In these scripts, you can use the objects and classes that are
described here.
In addition to the objects and classes, the API provides a function that can be used
for debug purposes:
dbgm(...); //prints the supplied arguments to stdout.
For example,
dbgm('this ' + 'is' + ' a debug output'); //prints "this is a debug output"
You can insert values from the current class or object into the output string. For an
example, see the json global object.
The Guardium for Applications JavaScript API defines objects and classes.
html
A global object representing a parsed HTML message.
Note: The only way to get to a specific node in an HTML document is to use an
XPath expression.
Example:
var ns = html.xpath('some xpath expression returning text nodes');
// "ns" is an object of JS class XmlNodeSet (see the classes sections for more details)
// providing the node set is not empty we can now mask text node contents according to the
// information stored in the current action
// the following lines mask contents of the first node in the set
if (ns.size > 0)
html.mask(ns[0]);
// the following code masks the ’a1’ attribute of the second node in the set
if (ns.size > 1)
html.mask(ns[1], ’a1’);
xml
A global object representing a parsed XML message.
Properties
none
Methods
v xpath(expression: String) : XmlNodeSet - run XPath query on the XML
tree
v mask(n: XmlNode[, attribute: String]) - mask the node or its specified
attribute according to the method stored in the current action
Note: The only way to get to a specific node in an XML document is to use an
XPath expression.
Chapter 4. Protect 175
Example: similar to the example for the html object.
json
Example:
json.data = {"p1": "v1", "p2": "v2"}; // this would entirely replace JSON in the message
json.data.p1 = {};
json.data.p2 = null;
json.data.a1 = [1, 2, "aasdf"];
json.data.a1[0] = false; // 1 -> false
json.mask(json.data.a1, 2); // "aasdf" will be masked with "*****" if the parent action
// defines "replace" masking method
dbgm(JSON.stringify(json.data)); // should print:
// {"p1": {}, "p2": null, "a1": [false, 2, "*****"]}
form
Example:
// set value in form field "p1"
form.data["p1"] = "v1";
// mask form field "p2"
form.mask("p2");
// mask all fields in the form
for (var f in form.data)
form.mask(f);
query
A global object representing a parsed URL query part, as appears in the browser.
Properties
data: QueryData - provides access to the actual URL query data (parsed
name/value list)
Methods
mask(n: String) - mask query value with name "n".
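A usage sketch, in the style of the examples for the other global objects. The query global exists only inside the masking engine, so a minimal stand-in is mocked here to make the snippet self-contained:

```javascript
// mock of the "query" global, for illustration only
var query = {
  data: { ssn: '123-45-6789', user: 'alice' },   // parsed name/value list
  mask: function (n) { this.data[n] = '*****'; } // stand-in for the real masking method
};

// mask the URL query value with name "ssn"; other values are untouched
query.mask('ssn');
// query.data.ssn is now '*****', query.data.user is still 'alice'
```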
text
A property of the global object of type String. Assignments to this property directly
modify message body. However, if during the message processing both HTML tree
structure and plain message text are modified, only modifications that are applied
to the HTML tree hold, as the modified tree is serialized back to the message
buffer and replaces its content.
Properties
none
Methods
none
Example:
text = 'this string will replace content in the message buffer';
XmlNodeSet
Instances of this class are created by xpath() methods of html and xml global
objects. These are actually the standard JavaScript Array objects containing
XmlNode objects as their elements. Access to these elements is provided through
the [] operator as it would normally be for JS arrays.
Properties
none
Methods
none
Example:
var ns = html.xpath('some xpath expression'); // ns: Array of XmlNode objects
dbgm('number of nodes in set: ' + ns.length); // print number of nodes in the set "ns"
var node = ns[0]; // node: XmlNode
XmlNode
Instances of this class are also created by xpath() methods of html and xml global
objects.
Properties
v name: String [r] - get node name
v text: String [rw] - get/set inner text for text nodes only
v attributes: XmlAttributeSet [r] - access node attributes
Methods
none
XmlAttributeSet
Instances of this class are used to access the XmlNode attributes through the
attributes property of XmlNode objects. The class behaves as a regular JS Array. All
the array elements are of type String.
Properties
any property [rw] - get/set the respective attribute value for given
XmlNode object
Methods
none
JsonNode
FormData
Provides read/write access to the parsed form data represented as a name/value
list.
Properties
any property [rw] - get/set property value, which directly affects
associated native NameValueList object
Methods
none
QueryData
Incident Management
The Integrated Incident Management (IIM) application provides a business-user
interface with workflow automation for tracking and resolving database security
incidents.
Incident generation processes can be defined and scheduled to read the policy
violations log and generate new incidents. From an incident generation process,
each selected incident is:
v Assigned a unique incident number
v Assigned to a user
v Assigned a severity code
v Assigned to a category
Once an incident has been generated, administrators and other users work with
incidents from the Incident Management tab, which is included on both the admin
and user portals. From there, all other tasks can be performed (assign incidents,
send notifications, assign status, and so forth).
The Incident Management functions can be accessed from the drill-down menus of
the Incident Management reports. Each user may only have a subset of reports or
functions available, depending on the security roles assigned to the user account.
You can create your own copies of the Incident Management reports, but those
copies will not have all of the capabilities available from the pre-configured reports
on the Incident Management tab. To assign incidents, severity codes, and so forth,
use the reports on the Incident Management tab.
Assign/Reassign to Incident
1. Double-click the policy violation to be assigned or reassigned, in one of the
Incident Management reports.
2. Select Assign/Reassign to incident from the drill-down menu. When selected,
this menu will be replaced by a new menu containing a list of open incidents
(for example, Assign to Incident #123), and one additional option: Assign to a
new incident.
3. Select an incident to assign this violation to, or select Assign to a new incident
to assign this Policy Violation to the next incident number available (they are
numbered in sequence).
A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed. If a new incident has been created, it will
be listed first in the Open Incidents report.
Assign to User
1. Double-click the incident to be assigned to another user, in one of the Incident
Management reports.
2. Select Assign to user from the drill-down menu. When selected, this menu will
be replaced by a new menu containing a list of users, and one additional
option: Unassign.
3. Select a user, or select Unassign to remove the current user assigned. When a
user is assigned, the Status Description will be Assigned, and when unassigned
the Status Description will be Open.
A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed.
Change Severity
1. Double-click the incident on which the severity is to be changed, in one of the
Incident Management reports.
2. Select Change Severity from the drill-down menu. When selected, this menu
will be replaced by a new menu containing a list of severity codes: Info, Low,
Med, and High.
3. Select the new severity code.
Notify
1. Double-click the incident a user is to be notified about, in one of the Incident
Management reports.
2. Select Notify from the drill-down menu. When selected, this menu will be
replaced by a new menu containing a list of users.
3. Select a user.
A message is displayed when the user has been sent a notification.
Change Status
1. Double-click the incident on which the status is to be changed, in one of the
Incident Management reports.
2. Select Change Status from the drill-down menu. When selected, this menu will
be replaced by a new menu containing a list of status codes:
v ASSIGNED - Once an incident has this status, it cannot have additional
policy violations added to it. To add policy violations, change the incident
status back to Open, add the violations, and then change the status back to
Assigned.
v CLOSED - Once an incident is marked Closed it cannot be modified, and is
no longer listed.
v OPEN - This is the initial status for a new incident.
3. Select the new status code.
A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed.
Add Comments
1. Double-click the incident to which comments are to be added, in one of the
Incident Management reports.
2. Select Comments from the drill-down menu, to open the User Comment
window. For instructions on how to add comments, see Commenting in the
Common Tools book.
Prerequisites
v Create a Policy (See Policies).
v Start inspection engines (See Inspection Engine Configuration).
Summary of Steps
1. Click Comply > Tools and Views > Incident Generation to open Incident
Generation Processes.
2. Edit Incident Generation Process (Query, Severity, Threshold, Scheduling).
3. Go to Incident Management tab for reports.
Incident Management
Incident generation processes can be defined and scheduled to read the policy
violations log and generate new incidents. From an incident generation process,
each selected incident is:
v Assigned a unique incident number.
v Assigned to a user.
v Assigned a severity code.
v Assigned to a category.
Once an incident has been generated, administrators and other users work with
incidents from the Incident Management tab, which is included on both the admin
and user portals. From there, all other tasks can be performed (assign incidents,
send notifications, assign status, and so forth).
The Incident Management functions can be accessed from the drill-down menus of
the Incident Management reports. Each user may only have a subset of reports or
functions available, depending on the security roles assigned to the user account.
An incident generation process executes a query against the policy violations log,
and generates incidents based on that query. By default, the definition and
scheduling of incident generation processes is restricted to users with the admin
role.
Procedure
1. Click Comply > Tools and Views > Incident Generation to open Incident
Generation Processes.
2. Click the Add Process button to open the Edit Incident Generation Process
panel.
3. Select a query from the Query list. There are several restrictions that apply to
queries used in an incident generation process. Open the query in the Query
Builder to verify that it satisfies the following criteria:
v The query must be from the Policy Violations domain.
Query rewrite
Query rewrite functionality provides fine-grained access control for databases by
intercepting database queries and rewriting them based on criteria defined in
security policies.
The modification of queries happens transparently and on-the-fly, such that a user
issuing queries seamlessly receives results based on rewritten SQL statements.
Please review the following sections to learn more about how query rewrite works
and how to configure it for use within your Guardium environment.
Overview
Once query rewrite has been enabled on the S-TAP for supported database servers
(see “Enabling query rewrite” on page 187), query rewrite functionality is
implemented through three policy rule actions:
v QUERY REWRITE: ATTACH
v QUERY REWRITE: APPLY DEFINITION
v QUERY REWRITE: DETACH
These rule actions are installed as access policy rules. The access policy rules
specify both query rewrite definitions that indicate how queries should be
rewritten and a run time context that indicates when those definitions should be
applied.
See the Guardium release notes to learn more about any version limitations or
other restrictions.
Query rewrite functionality is only enabled when both of the following conditions
are met:
v Query rewrite is enabled in the guard_tap.ini file
v Query rewrite policy rules exist and are triggered by session traffic
This task guides you through the changes you need to make in your
guard_tap.ini file.
Results
Upon completion of this task, query rewrite functionality is enabled and will
respond to policy rules that contain query rewrite actions.
Procedure
1. Open Protect > Security Policies > Query Rewrite Builder.
2. Provide a unique and meaningful name for the query rewrite definition in the
Name field.
3. Create and parse a model query.
a. Provide a model query in the Enter a model query field.
For example, to create a rewrite definition preventing the use of SELECT *
from statements, enter SELECT * from EMPLOYEE as a model.
b. Click the DB Type menu and select a SQL parser to use with the model
query.
c. Click Parse to process the model query.
Your model query will be broken down into individual components with
each actionable component highlighted with underlined text.
4. Define how to rewrite specific components of the model query.
a. Click on an underlined component of the parsed query that you would like
to rewrite. A dialog will open to help create your query rewrite definition.
Options:
v Select and modify an individual verb, field, or object from the parsed
query
v Add a component to the query (shown as gray underlined text next to
the parsed query)
v Rewrite the entire query by clicking the gray underlined [R] next to the
parsed query
In the example SELECT * from EMPLOYEE where we want to prevent the use
of SELECT * from statements, click the * to provide rewrite content.
b. The Change from field indicates what will be rewritten.
c. The To field defines the rewritten component.
For example, to prevent the use of SELECT * from statements, replace the *
component with a list of specific objects: EMPNO, FIRSTNME, MIDINIT,
LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX.
Important:
Rewrite definitions are based on syntax, so any statement with the form
SELECT * from [OBJECT] will match the example. For instance, both SELECT
* from DEPARTMENT and SELECT * from EMPLOYEE statements match our
example.
What to do next
When you are finished working with query rewrite definitions, continue to the
next step in this sequence to test and implement your definitions.
Related tasks:
“Defining a security policy to activate query rewrite” on page 191
Learn how to create access policy rules using your query rewrite definitions with
live queries.
To complete this task, you need to have created one or more query rewrite
definitions.
Procedure
1. Open Protect > Security Policies > Query Rewrite Builder.
2. Click Set Up Test to open a dialog and select query rewrite definitions for
testing.
a. Drag and drop items from the Available query rewrite definitions field to
the Test query rewrite definitions field.
b. Drag and drop items with the Test query rewrite definitions field to order
multiple definitions as you would within an access policy.
c. Click Save to close the dialog when you are finished.
3. Type or paste test queries into the test field.
For example, to test a rewrite definition preventing the use of SELECT * from
statements (see “Creating query rewrite definitions” on page 188), enter sample
queries such as:
SELECT * from DEPARTMENT
SELECT * from EMPLOYEE
SELECT FIRSTNME, case
when SALARY > 150000 then 'high'
when SALARY > 100000 then 'medium'
when SALARY > 80000 then 'fair'
else 'poor'
end from EMPLOYEE
DELETE from EMPLOYEE where EMPNO=100
INSERT into TEMP_EMP SELECT * from EMPLOYEE
4. Click Run Test to process the sample queries and review the results.
For example, the sample queries provided in the previous step return the
following results:
Table 18. Query rewrite test results

Original SQL: SELECT * from DEPARTMENT
Rewritten SQL: SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX from DEPARTMENT
Changed: YES

Original SQL: SELECT * from EMPLOYEE
Rewritten SQL: SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX from EMPLOYEE
Changed: YES

Original SQL: SELECT FIRSTNME, case when SALARY > 150000 then 'high' when SALARY > 100000 then 'medium' when SALARY > 80000 then 'fair' else 'poor' end from EMPLOYEE
Rewritten SQL: SELECT FIRSTNME, case when SALARY > 150000 then 'high' when SALARY > 100000 then 'medium' when SALARY > 80000 then 'fair' else 'poor' end from EMPLOYEE
Changed: NO

Original SQL: DELETE from EMPLOYEE where EMPNO=100
Rewritten SQL: DELETE from EMPLOYEE where EMPNO=100
Changed: NO

Original SQL: INSERT into TEMP_EMP SELECT * from EMPLOYEE
Rewritten SQL: INSERT into TEMP_EMP SELECT * from EMPLOYEE
Changed: NO
Important:
Rewrite definitions are based on syntax, so any statement with the form SELECT
* from [OBJECT] will match the example. For instance, both SELECT * from
DEPARTMENT and SELECT * from EMPLOYEE statements match our example.
Query rewrite definitions can be restricted to specific objects using access policy
rules. See “Defining a security policy to activate query rewrite” on page 191 for
instructions.
What to do next
When you are satisfied with the test results, create a security policy to begin using
your query rewrite definitions with live queries.
Related tasks:
“Defining a security policy to activate query rewrite”
Learn how to create access policy rules using your query rewrite definitions with
live queries.
“Creating query rewrite definitions” on page 188
Learn how to create query rewrite definitions for data masking and access control
scenarios.
To complete this task, you need to have created and tested one or more query
rewrite definitions, and you need to be familiar with creating security policies.
Procedure
1. Open Protect > Security Policies > Policy Builder
2. Create a new policy or modify an existing policy to use your query rewrite
definitions.
Tip: Consider creating a new policy for testing query rewrite definitions. Add
your rewrite rules to existing security policies once you are satisfied with the
behavior of the test policy.
3. Click Edit Rules to begin adding rewrite rules to the selected policy, then select
Add Rules > Add Access Rule.
To complete this task, you need to have created and installed access policy rules
that apply query rewrite definitions, and you need to be familiar with creating
reports.
A query rewrite tracking report helps validate query rewrite actions in both test
and production environments.
Procedure
1. Open Reports > Report Configuration Tools > Query Builder
2. Select Query Rewrite from the Domain menu.
3. Click the icon to define a new query.
4. Provide a meaningful and unique name for the query in the Query Name
field.
For example, My query rewrite report
5. Select one of the available options from the Main Entity menu.
The following options are available:
v Query Rewrite Log
v Client/Server
v Session
v Access Period
6. Click Next to open the report builder.
7. Expand sections within the Entity List and select items to build your report.
v Click an item and select Add Field to add the item as a column in the
report.
v Click an item and select Add Condition to add a conditional filter to the
report.
Automate and integrate the following audit activities into a compliance workflow:
v Group multiple audit tasks (reports, vulnerability assessments, etc.) into one
process.
v Schedule these processes to run on a regular basis.
v Run these tasks in the background.
v Write the task results to a comma-separated value (CSV) file or ArcSight
Common Event Format (CEF) file, and/or forward the results to other systems
using Syslog.
v Add comments and notations.
v Assign the process to its originator for viewing (the originator gets a new item
in their To-Do list once the result is ready).
v Assign the process to other users, a group of users, or a role.
v Require that these assignees sign off on the result.
v Allow escalation of the result (assignment to someone outside of the original
audit trail).
The Audit Process Log report shows a detailed activity log for all tasks, including
start and end times. This report is available to admin users via the Guardium
Monitor tab. Audit tasks show start and end times; however, for Security
Assessments and Classifications (which go to a queue), the start and end times are
the same.
The results of each workflow process, including the review, sign-off trails, and
comments can be archived and later restored and reviewed through the
Investigation Center.
v What is the schedule for delivery?
Workflow Automation (audit processing) for the Aggregator server now includes
the capability to create ad-hoc databases for each Aggregator task and specify only
the relevant days for that task.
When defining reports in Audit Process, the number of days of the report (defined
by the FROM-TO fields) should not exceed a certain threshold (one month by
default). If this threshold is exceeded, a run-time error will result when trying to
run the audit task on the Aggregator.
It is permissible to create an audit task with a FROM-TO range that is wider than
the max_audit_reporting value (set in the CLI), because audit processes defined on
the Aggregator may also run on managed collectors (when the aggregator is a
manager). Audit tasks that run on a collector unit do not have a
max_audit_reporting limitation. So it is valid to save tasks beyond the allowed
range, but a Run Time Exception occurs when the task is executed on the
Aggregator.
The audit report threshold can be viewed or configured with the CLI commands
show max_audit_reporting and store max_audit_reporting. There is no warning
message when a report is created with an invalid FROM-TO range. Instead, a fixed
message appears in the Task Parameters panel of the Audit Process setup screen
(Tools > Audit Process Builder; open Audit Tasks to display Task Parameters).
The fixed message is:
On aggregators, only reports not exceeding the allowed time range (CLI: max_audit_reporting) will
Note: When running a patch install, all audit processes are stopped.
An audit process can be stopped only if its audit tasks have not yet run or are still
running. Stopping an audit process prevents any tasks that have not started from
executing; it does not deliver partial results. The audit process stops, and a
stopped error message is the result. However, if the tasks are already complete,
stopping the audit process does not stop the sending of results.
Stop an audit process by invoking GuardAPI from the Audit Process Log report
(on the Guardium Monitor tab); place the cursor on any line and double-click for a
drill-down. When stopping an audit process, a non-admin user sees only the lines
belonging to that user (just the tasks, not all the details) and can stop only their
own audit processes. An admin user can see all the details and can stop anyone's
audit processes.
Note:
Queries using a remote source cannot be stopped, and online reports using a
remote source cannot be stopped. Stopping an audit process does not apply to
Privacy Set or External Feed audit tasks: if these tasks have started, they finish
even if the process is stopped.
The Audit Process Finder screen includes an Audit Process Status Summary. This
section contains information on scheduled audit processes, as well as results,
outstanding receivers, and errors. The summary is a consolidation of data from
multiple audit process reports.
The screen also provides a button to delete audit process results: look for the
Results button, next to the Run Once Now button (with choices of View or
Delete).
You can delete audit process results, but the deletion is tracked and logged. The
audit-delete role is used to track when an audit process result has been deleted.
Users with the audit-delete role, as well as admin users, can delete reports.
Tracking is done through the User Activity Audit Trail report.
Note: Audit process results from remote sources are limited to 100,000 results. To
go beyond that limit, use the CLI commands store save_result_fetch_size and
show save_result_fetch_size.
Process Receivers
You can define any number of receivers for a workflow automation process, and
you control the order in which they receive results. In addition, receivers can notify
additional receivers, using the Escalate function. It is also possible to run an audit
process with no defined receivers; for example, one that writes to syslog and
requires no review (or signing) of the results.
On the Process Definition panel, the drop-down list of receivers includes all
Guardium users, user groups, and roles (groups and roles are labeled as such).
When a group or role is selected, all users belonging to the group or having that
role will receive the results.
If a group receiver is selected, and any workflow automation task uses the special
run-time parameter ./LoggedUser in a query condition, the query will be executed
separately for each user in the group, and each user will receive only their results.
For example, assume that your company has three DBAs, and each DBA is in
charge of a different set of servers. Using the Custom Data Upload facility, upload
the areas of responsibilities of each DBA (with server IPs) to the Guardium system,
and correlate that to the database activity domain, and then use a report in this
custom domain as an audit task. If a user group that contains the three DBAs is
designated as the receiver, each DBA will receive the report relevant for his or her
collection of servers only.
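The per-receiver filtering in this example could be sketched as follows; the mapping, row shapes, and function are hypothetical illustrations of what ./LoggedUser achieves, not a Guardium API:

```python
# Hypothetical mapping of DBAs to the server IPs they are responsible for,
# as uploaded through the Custom Data Upload facility.
RESPONSIBILITY = {
    "dba1": {"10.0.0.11", "10.0.0.12"},
    "dba2": {"10.0.0.21"},
    "dba3": {"10.0.0.31", "10.0.0.32"},
}

def rows_for_receiver(user, report_rows):
    """Keep only the rows for servers the receiver is responsible for,
    mirroring the per-user execution that ./LoggedUser produces when the
    receiver is a group."""
    allowed = RESPONSIBILITY.get(user, set())
    return [row for row in report_rows if row["server_ip"] in allowed]

report = [{"server_ip": "10.0.0.11", "verb": "SELECT"},
          {"server_ip": "10.0.0.21", "verb": "DELETE"}]
print(rows_for_receiver("dba1", report))
# [{'server_ip': '10.0.0.11', 'verb': 'SELECT'}]
```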
A receiver can also be just an email address, and results will be sent to that
address. When entering an email address, you must specify a user that will be
used to filter the data; this must be the logged-in user or a user under the
logged-in user in the data hierarchy.
If a role receiver is selected, only one user with that role will need to sign the
results, and other users with that role will be notified when the results have been
signed.
Note:
When a workflow event is created, every status used by that event can be assigned
a role (meaning that events can only be seen by this role when in this status).
When an event is assigned to an audit process, it is important that every role that
is assigned to a status of this event have a receiver on this audit process.
Otherwise, it is possible that an audit result row can be put into a status where
none of its receivers are able to see this row or change its status.
If this occurs, the admin user (who can see all events, regardless of their roles) is
able to see the row and change its status. However, if data level security is on, the
admin user may not be able to see the row; they would need to either turn data
level security off (from Global Profile) or have the dataset_exempt role. It is
therefore important to configure the audit process so that all roles that must act on
an event associated with this audit process are receivers of it.
Email Notification
Optionally, receivers can be notified of new process results via email, and there are
two options for distributing results via email:
v Link Only - The email notification will contain a hypertext link to the results
stored on the Guardium system. For the link to work, you must access your mail
from a system that has access to the Guardium system. See the following section
for more information about email links.
v Full Results - A PDF file or generated CSV file containing the results will be
attached to the email, except for an Escalation that specifies a receiver not
included in the original distribution list, in which case no PDF or CSV file will
be attached. When the Full Results option is selected, care must be taken, since
sensitive and private data may be included in the PDF or CSV file. When
running an audit process with a receiver that has Full Results with CSV checked,
CSV files are not generated for tasks of type Assessment, Classifier, or External
Feed; these task types also cannot generate CSV/CEF/PDF files for export. CSV
files are generated only for tasks of type Report, Privacy Set, or Entity Audit
Trails, and only if there is a receiver with Full Results via CSV checked.
Note: When viewing audit results, if a generated PDF already exists, a Recreate
PDF button will appear for the user to recreate and download the regenerated
PDF.
Once a process has been run, the existing receiver list is frozen, which means:
v You cannot delete receivers from the list.
v You cannot move existing receivers up or down in the list.
v You can add receivers to the end of the list at any time, and reposition the new
receivers at that time.
v If the Guardium user account for a receiver on the list is deleted, the admin user
account (which is never deleted) is substituted for that receiver. Thus the admin
user receives any email notifications that would have been sent to a deleted
receiver, and the admin user must act upon any results released to that receiver.
v If you need to create a totally different set of receivers for an existing process,
deactivate the original process, make a clone of it, and then make the
modifications to the receivers list in the cloned version before saving it.
Results are released to the Guardium users listed on the receivers list, subject to
the Continuous check box, as follows:
v If the Continuous check box is marked, distribution continues to the next
receiver on the list without interruption.
v If the Continuous check box is cleared, distribution to the next receiver is held
until the current receiver performs the required action (review or sign).
Note: The results distribute to the next receiver only when the current receiver's
Continuous check box is marked. This is completely separate from the review/sign
functionality and does not depend on it at all.
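The release logic described above can be sketched like this (the data shapes and names are assumptions for illustration, not a Guardium API):

```python
def furthest_receiver(receivers, completed):
    """Return the index of the last receiver the results have reached.

    Distribution advances past a receiver immediately when its Continuous
    check box is marked; otherwise it waits until that receiver has
    performed the required action (review or sign).
    """
    reached = 0
    for idx, receiver in enumerate(receivers):
        reached = idx
        if not receiver["continuous"] and receiver["name"] not in completed:
            break  # held here until this receiver acts
    return reached

chain = [{"name": "infosec", "continuous": True},
         {"name": "dba", "continuous": False},
         {"name": "auditor", "continuous": True}]
print(furthest_receiver(chain, completed=set()))    # 1: held at dba
print(furthest_receiver(chain, completed={"dba"}))  # 2: released to auditor
```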
Note: Process results that are exported to CSV or CEF files are sent to another
network location by the Guardium archiving and exporting mechanism. These
results are not subject to the receivers list or to any signing actions. They are
subject to the Guardium CSV/CEF export schedule (if any is defined), and they
are subject to the access permissions that have been granted for the directory in
which they are ultimately stored.
In addition, CEF and CSV file output can be written to syslog. If the remote syslog
capability is used, this will result in the immediate forwarding of the output
CEF/CSV file to the remote syslog locations. The remote syslog function provides
the ability to direct messages from each facility and severity combination to a
specific remote system. See the remotelog (syslog) CLI command description for
more information.
Each record in the CSV or CEF file represents a row on the report.
The exported file is created in addition to the standard task output; it does not
replace it. These files are useful when you need to:
v Integrate with an existing SIEM (Security Information and Event Management
system) in your infrastructure (QRadar, ArcSight, Network Intelligence, LogLogic,
TSIEM, etc.).
v Review and analyze very large compliance task result sets. (Task result sets
that are intended for Web presentation are limited to 5,000 rows of output,
whereas there is no limit to the number of rows that will be written to an
exported CSV or CEF file.)
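Since each record in an exported file represents one report row, writing report rows to CSV can be sketched as follows (a generic illustration using the Python standard library, not the Guardium export code):

```python
import csv
import io

def rows_to_csv(header, rows):
    """Write report rows to CSV text, one record per report row,
    with no row limit (unlike the 5,000-row Web presentation cap)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)   # column names from the report
    writer.writerows(rows)    # one CSV record per report row
    return buf.getvalue()

text = rows_to_csv(["DB User", "SQL Verb", "Object"],
                   [["APPUSER", "SELECT", "EMPLOYEE"],
                    ["DBA1", "DELETE", "EMPLOYEE"]])
print(text)
```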
Exported CSV and CEF files are stored on the Guardium system, and are named in
the format:
process_task_YYYY_MMM_DD-HHMMSS.<csv | cef>
Where process is a label you define on the audit process definition, task is a
second-level label that you can define for each task within the process, and
YYYY_MMM_DD-HHMMSS is a date-time stamp created when the task runs.
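A sketch of how such a name could be assembled (the helper is hypothetical, and the %b month abbreviation is an assumption; the exact form of the MMM stamp, such as its letter case, may differ):

```python
from datetime import datetime

def export_file_name(process: str, task: str, ext: str, when: datetime) -> str:
    """Build a name in the documented format:
    process_task_YYYY_MMM_DD-HHMMSS.<csv | cef>
    """
    assert ext in ("csv", "cef")
    stamp = when.strftime("%Y_%b_%d-%H%M%S")  # e.g. 2024_Jan_05-143000
    return f"{process}_{task}_{stamp}.{ext}"

print(export_file_name("pci", "dml", "csv", datetime(2024, 1, 5, 14, 30, 0)))
# pci_dml_2024_Jan_05-143000.csv
```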
You cannot access the exported CSV or CEF files directly on the Guardium system.
Your Guardium administrator must use the CSV/CEF Export function to move
these files from the Guardium system to another location on the network. To access
those files, check with your Guardium administrator to determine the location to
which they have been copied.
Note: If observed data level security has been enabled, then audit process output
(including files) will be filtered so users will see only the information of their
assigned databases. Files sent to an email receiver as an attachment will be filtered.
However, files downloaded locally on the machine and then moved elsewhere
using the Results Export function from Administration Console are not subject to
data level security filtering. See CSV/CEF Export later in this topic for further
information on CSV/CEF Export.
The following table summarizes what happens when exporting an Audit Process
file to CSV/CEF/PDF.
Table 19. Exporting Audit Task Output to CSV, CEF, or PDF Files
Function         Level     CSV                                    CEF   PDF
Attach to email  Receiver  Full Details radio --> PDF check box   N/A   Full Details radio --> PDF check box
How Zip for Email and Compress work for Audit Task Output
Zip for Email is the highest level of control for Audit Task Export; it produces a
zipped set of CSV or CEF files. PDF files are never zipped or compressed.
In the Audit Process Definition, in the section on Add New Task, when choosing a
Task Type of Security Assessment, a number of choices will appear: Export AXIS
xml and Export SCAP xml. Choose one of these selections in order to save the
Audit Process results and to transfer the XML file to the destination set up for
Results Export (Manage > Data Management > Results Export (Files)). Further
choices are for configuring the PDF format: Report, Difference, Report and
Difference.
Note: Results are shown only if there are receivers for the results. Add
receivers and re-run the process, and the run will then show up in the
dropdown list.
10. If one or more tasks create CSV or CEF files, you can optionally enter a label
to be included in all file names, in the CSV/CEF File Label box. These files
can also be compressed, or Zipped, by clicking on the Zip for mail box to add
a checkmark.
Note: Export of CSV/CEF files is limited to file sizes of 10240 MB
(10.24 GB). As a best practice, check the Zip for mail box.
11. The Email Subject field in the Audit Process definition is used in the emails
for all receivers for that audit process. The subject may contain one (or more)
of the following variables that will be replaced at run time for the subject:
v %%ProcessName will be replaced with the audit process description
v %%ExecutionStart will be replaced with the start date and time of the first
task.
v %%ExecutionEnd will be replaced with the end date and time of the last
task.
When you enter a subject, the system checks whether any variables (starting
with %%) are present and ensures that all of them are valid.
12. Optionally assign security roles.
13. Optionally add comments.
14. Click the appropriate buttons to Schedule or Run an Audit Workflow Process.
15. Click Save. Do not leave this screen to perform another configuration before
saving your work; work in progress is not saved or held in a half-created
state if you leave this section to create something else needed for the audit
task.
For example, to define an assessment task in Audit Process Builder, you must
first go to Security Assessment Builder to create assessment tests, and then to
Datasource Definitions to identify the database(s) to be assessed.
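The %%-variable validation and substitution described in step 11 can be sketched as follows (a hypothetical illustration, not the Guardium implementation):

```python
import re

# The three variables documented for the Email Subject field.
VALID_VARS = {"%%ProcessName", "%%ExecutionStart", "%%ExecutionEnd"}

def validate_subject(subject: str) -> list:
    """Return any %%-variables in the subject that are not valid."""
    found = re.findall(r"%%\w+", subject)
    return [v for v in found if v not in VALID_VARS]

def render_subject(subject: str, process_name: str, start: str, end: str) -> str:
    """Replace the documented variables at run time."""
    return (subject.replace("%%ProcessName", process_name)
                   .replace("%%ExecutionStart", start)
                   .replace("%%ExecutionEnd", end))

subject = "Audit %%ProcessName finished at %%ExecutionEnd"
print(validate_subject(subject))  # []: all variables are valid
print(render_subject(subject, "Weekly DBA review", "08:00", "08:45"))
# Audit Weekly DBA review finished at 08:45
```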
Add Receivers
1. In the Receiver column, select a receiver from the drop-down list of Guardium
individual users, groups, or roles. If you select a group or a role, all members
of the group or users with that role will receive the results; and if signing is
required, only one member or user will need to sign the results.
2. In the Action Required column, select one option:
v Review (the default) - Indicates that this receiver does not need to sign the
results.
v Review and Sign - Indicates that this receiver must sign the results
(electronically, by clicking the Sign Results button when viewing the results
online).
3. In the To-Do List column, either mark or clear the Add check box to indicate
whether this receiver should be notified of pending results in their Audit
Process To-Do List.
Note: To send files to an external server without sending email and without
adding results to the to-do list, define an audit process without receivers:
clear the to-do list check box in the Add Receiver section and do not add any
receivers in the receiver section.
4. In the Email Notification column, select one option:
v No - email will not be sent to the receiver.
v Link Only - email will contain a hypertext link to the results (on the
Guardium system).
v Results - email will contain a copy of the results in PDF or CSV format. Be
aware that the results from Classification or Assessment tasks may return
sensitive information.
5. The check box in the Continuous column controls whether distribution of
results continues to the next receiver (the default) or stops until this receiver
has taken the appropriate action. If the Continuous box is cleared and this
receiver is a group or a role, the results are released to the next receiver on the
list when any user who is a member of that group or role performs the
selected action.
Note: The results distribute to the next receiver only when the current
receiver's Continuous check box is marked. This is completely separate from
the review/sign functionality and does not depend on it at all.
6. Click Add to add the receiver to the end of the list, and repeat these steps for
each receiver. One receiver is required.
7. Receivers who are not Guardium users are permitted. Choose Email and then
enter an email address, and the results will be sent to that address. When
entering a non-user email address, you must also enter a user name that will
be used to filter the data; this must be the logged-in user or a user under the
logged-in user in the hierarchy. This user is saved in a new column in the
Receivers section of the screen.
8. Approve if Empty - When this check box is checked and all the reports of the
task are empty, the system automatically signs the result (and/or marks it as
viewed), automatically clicks Continue (if relevant), does NOT send the
notification email, does NOT add the task to the user's To-Do list, and does
NOT generate any PDF/CSV/CEF files.
Note: CEF file output is appropriate for data access domain reports only
(Access, Exceptions, or Policy Violations, for example). Other domains like the
Guardium self-monitoring domains (Aggregation/Archive, Audit Process,
Guardium Logins, etc.) do not map to CEF extensions.
4. If Export CEF file was selected, optionally mark the Write CEF to Syslog box to
write the CEF records to syslog. If the remote syslog facility is enabled, the CEF
file records will thus be written to the remote syslog.
5. If the Compress box is checked, then the CSV/CEF files to be exported will be
compressed.
6. If the Export PDF file box is checked, then a PDF file (with similar name as
CSV Export file) for this Audit Task is created and exported together with the
CSV/CEF files.
Note: The Export PDF file will not be compressed, even if the Compress box in
the previous step is checked.
Note: The selection of PDF Options applies to both PDF attachments and PDF
export files. The Diff result applies only AFTER the first time this task is run;
there is no Diff with a previous result if there is no previous result. The
maximum number of rows that can be compared at one time is 5000. If the
number of result rows exceeds the maximum, the message
(compare first 5000 rows only) appears.
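The comparison limit can be illustrated with a sketch (the names and row shapes are assumptions; Guardium's actual diff logic is not shown in this document):

```python
MAX_COMPARE_ROWS = 5000  # documented comparison limit

def diff_results(previous, current, limit=MAX_COMPARE_ROWS):
    """Compare only the first `limit` rows of two result sets, returning
    (added, removed) rows among those compared."""
    prev = previous[:limit]
    curr = current[:limit]
    prev_set, curr_set = set(prev), set(curr)
    added = [r for r in curr if r not in prev_set]
    removed = [r for r in prev if r not in curr_set]
    return added, removed

added, removed = diff_results([("u1", "SELECT")],
                              [("u1", "SELECT"), ("u2", "DELETE")])
print(added)    # [('u2', 'DELETE')]
print(removed)  # []
```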
By default, the Guardium application comes with setup data that links many of the
API functions to reports, providing users, through the GUI, with prepared calls to
APIs from reporting data. Use API Assignment in Reports to link additional API
functions to predefined Guardium reports or custom reports. The menu choice API
for automatic execution appears in Add Audit Task: Report when you select a
predefined Guardium report or custom report that has fields linked to API
parameters. Examples of predefined reports where the API for automatic
execution menu choice appears are Access Policy Violations, Databases
Discovered, and Guardium Group Details.
Workflow Builder
The Event and Additional Column button appears in all audit tasks. When you
place the cursor over this button, an information balloon appears, telling you
whether the audit task has an Event or a Sign-off column linked to it.
Note:
If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.
Under the Report choices within Add an Audit Task are two procedural reports,
Outstanding Events and Event Status Transition. Add these two reports to two
new audit tasks to show details of all workflow events and transitions. These two
reports will not be filtered (observed data level security filtering will not be
applied). These two reports are available by default in the list of reports only to
admin user and users with the admin role.
Clone an Audit Task - If you are cloning a process and make changes to a cloned
task before the cloned process is saved, the workflow associated with the original
task is not cloned.
Deletion of an event status is permitted only if the status is not the first status of
any event and is not used by any action. The validation provides a list of the
events and actions that prevent the status from being deleted.
The owner/creator of a workflow event can always see all statuses of this event,
regardless of what roles have been assigned to these statuses.
If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure. If the assessment to
be used has not yet been defined, do that first.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Security Assessment button.
3. Select a security assessment from the Security Assessment list.
4. The selections for PDF Content are Report (the current results), Diff (the
difference between an earlier report and a new report), and Report and Diff
(both).
5. Click Apply.
Note:
If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.
If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure. If the classification
process to be used has not yet been defined, do that first.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Classification Process button.
Note: You will be alerted that classification processes may return sensitive data,
and those results will be appended to PDF or CSV files.
3. Select a classification process from the Classification Process list. Click Apply.
Note: If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.
Note: If this feature is used in a Central Manager environment, the External Feed
Patch must be installed on the Central Manager, and on all managed units on
which the task will run.
If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click External Feed.
3. Select a feed type from the Feed Type list.
4. The controls that appear next depend on the feed type selected. See Optional
External Feed for additional information on specific External Feed Types.
5. Select an event type from the Event Type list.
6. Select a report from the Report list. Depending on the report selected, a
variable number of parameters will appear in the Task Parameters pane.
7. In the Extract Lag box, enter the number of hours by which the feed is to lag,
or mark the Continuous box to include data up to the time that the audit task
runs.
8. In the Datasources pane, identify one or more datasources for the external
feed.
9. Enter all parameter values in the Task Parameters pane. The parameters will
vary depending on the report selected.
10. Click Apply.
Note: If there are outstanding events, then the results cannot be signed either
from the audit viewer or from the To-do list. If there are outstanding events and an
attempt is made to sign the results, the following message appears:
Audit process cannot be signed - has pending events.
Note: When viewing audit process results, if a result has events associated with it,
the Sign Results button is not available on this result until all events are in a Final
state or cannot be seen by this user (due to data-level security).
Note: This report also contains a Last Action Time date, located in a column
between Receiver and Status. The report shows not only that a result was signed
by a given user, but also when that user signed it.
Note: These are the comments that were attached to the results when the
report page was retrieved from the Guardium system. If you add comments of
your own, or if other receivers are adding comments simultaneously, you will
not see those comments until you refresh your page (using your browser
Refresh function).
3. Click Close this window link.
A receiver of process results can forward the results notification for review and/or
sign-off to additional receivers. If you escalate the results to a receiver outside of
the original audit and sign-off trail, and the results include a CSV file, that file will
not be included with the notification.
Also, depending on event permissions (for example, if the infosec user can only
see events in status1 and the dba user can only see events in status2), the dba user
may receive a different result than the one the infosec user saw when clicking
Escalate. It is possible that infosec escalates to dba, and dba receives an audit
result with 0 rows in it.
1. If the compliance workflow automation results you want to forward are not
open, open them now.
2. Click Escalate.
3. Select the receiver from the Receiver list.
4. In the Action Required column, select Review (the default) or Review and Sign.
5. Click the Escalation button to complete the operation.
Note:
When escalating to a user who already has the result in their to-do list, a popup
message asks whether an additional email should be sent. If yes, an additional
email is sent to the user, but the to-do list is not incremented.
Note: After a schedule has been defined for a process, the process runs
according to that schedule only when it is marked active. To activate or
deactivate an audit process, see the next section.
After a schedule has been defined for an audit process, it runs according to that
schedule only when it is marked active.
Note: If you are activating the process but there is no schedule, click Modify
Schedule to define a schedule for running the process.
5. Click Save.
See the Compliance Workflow Automation topic for additional information on this
subject.
Procedure
1. Open the Audit Process Finder by navigating to Comply > Tools and Views >
Audit Process Builder.
2. Click the New button to open the Audit Process Definition panel.
The Audit Process Definition panel is divided into three sections: General,
Receiver Table, and Audit Tasks.
Note: After a schedule has been defined for a process, the process runs
according to that schedule only when it is marked active.
Distribution Status
The Audit Process Log report shows a detailed activity log for all tasks
including start and end times. This report is available by navigating to
Reports > Guardium Operational Reports > Audit Process Log. Audit tasks
show start and end times.
Note: When you register a new managed unit to a central manager, you might be
unable to view audit results. The application does not show results that have a
timestamp before the managed unit was registered to the central manager. The
timestamp of the registration uses the central manager time, and the timestamp of
the audit result uses the managed node time. So, if the central manager time is
ahead of the managed unit time, results generated on the managed unit are not
visible until the managed unit time passes the time of registration. This should
happen in no more than 24 hours, possibly less depending on the locations of the
two machines. You should be able to view the results of audit processes on the
managed unit within 24 hours of registration.
Value-added: Set up a single audit process and distribute the appropriate results to
the appropriate manager. This avoids having to create separate audit processes for
separate receivers.
For example, consider a large organization that has fifteen DBA managers who
need to review the activities of the DBAs they manage without viewing the
activities of the other managers' DBAs. One solution would be to set up fifteen
separate audit processes, one for each manager. This would take a lot of time to
configure and is difficult to manage: each audit process must be scheduled
separately, and any global change would need to be made individually in all
fifteen audit processes.
The user group distribution method, on the other hand, permits the setup of a
single audit process and distributes the appropriate results to each manager based
on a manager/DBA mapping. This approach requires more upfront configuration
but reduces maintenance time: only one audit process needs to be scheduled, and
changes need to be applied in only one location.
User mapping
The first step in the process is to map the users to the data elements within Guardium that will be the basis for report distribution. The example in this document is based on objects, but you can apply these concepts to any data element within Guardium.
Example: Three users have responsibility over three different sets of tables, based on audit requirements (PCI, HIPAA, and CCI) within a database server, as follows:
User01 db2inst1.cc_numbers
User01 db2inst1.ccn
User02 db2inst1.ADDRESSES
User02 db2inst1.SSN_NUMBERS
User02 db2inst1.G_CUSTOMERS
User02 db2inst1.G_EMPLOYEES
User02 db2inst1.G_FUNDS
User03 db2inst1.doctor
User03 db2inst1.medicare
User03 db2inst1.med_history
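Conceptually, the mapping above behaves like a lookup table keyed by receiver. The following minimal sketch (hypothetical class and a subset of the data; this is an illustration of the distribution concept, not a Guardium API) shows how a single result set could be split per receiver using such a mapping:

```java
import java.util.*;

public class ResultDistribution {
    // Receiver -> objects that receiver is entitled to see (mirrors the custom table above)
    static final Map<String, Set<String>> MAPPING = Map.of(
        "User01", Set.of("db2inst1.cc_numbers", "db2inst1.ccn"),
        "User02", Set.of("db2inst1.ADDRESSES", "db2inst1.SSN_NUMBERS"),
        "User03", Set.of("db2inst1.doctor", "db2inst1.medicare"));

    // Keep only the result rows whose object the receiver is mapped to
    static List<String> resultsFor(String receiver, List<String> accessedObjects) {
        Set<String> allowed = MAPPING.getOrDefault(receiver, Set.of());
        List<String> out = new ArrayList<>();
        for (String obj : accessedObjects) {
            if (allowed.contains(obj)) out.add(obj);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> audit = List.of("db2inst1.ccn", "db2inst1.doctor", "db2inst1.ADDRESSES");
        // User01 sees only the objects mapped to User01
        System.out.println(resultsFor("User01", audit));
    }
}
```

The single audit process described below performs this kind of filtering internally, which is why only one process and one schedule are needed.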
This table must be added as a custom table within Guardium, either manually or
through a data upload. The following steps demonstrate how to create a custom
table manually. The screenshots are from the “admin” user interface, but they can
also be accessed from within the “user” user interface.
1. Navigate to Reports > Report Configuration Tools > Custom Table Builder and press the Manually Define button.
2. Press Edit Data to manually add the records. Note: if you have a large amount of data, choose Upload Data to import from an external data source.
3. Press Insert.
4. When complete, press the Query button to review the data.
2. In the Custom Domain Builder:
   a. Highlight the new table created under Available entities.
   b. Highlight the table under Domain entities to which you would like to join the custom table.
   c. Under Join condition, choose the fields on each table on which to create the join and press Add Pair.
3. Press the arrows (>>) button to move the custom table from Available entities to Domain entities.
5. Confirm that the joins are correct and press Close.
6. Press Apply to save the new custom domain.
2. Press New.
3. Enter a Query Name and Main Entity and press Next.
User Group
Audit Process
1. Create a new Audit Process.
2. Choose the group created in User Group as the Receiver.
3. Choose the custom report created in step 4 as the task.
4. In the run-time parameter, enter the special tag "./LoggedUser". This causes the results to be distributed based on the custom mapping.
5. Press Run Once Now to run the Audit Process.
When the audit process completes, each receiver should see a different result set based on the mapping.
There are several ways to open the Audit Process To-Do List, including:
v Click the icon in the page banner.
v Navigate to Comply > Tools and Views > Audit Process To-Do List.
v If you received an email notification, click the To-Do List link to open your
To-Do List. Alternatively, click the report link to open the results. In either case,
you must be accessing your email from a location where the Guardium system
can be accessed.
The following steps describe how to use the Audit Process To-Do List:
1. Select the user whose To-Do list you want to open, either by opening up the
drop-down menu or clicking Search Users. You will be informed if the list is
empty.
2. As an administrator, you can perform any actions on any to-do list entry. Any
actions you perform are logged, indicating that the action was performed on
behalf of the user by the administrator.
3. The choices available per to-do list entry are View, Download as PDF and Sign
viewed results.
The PDF Content options are: Report (the current results), Diff (the difference between an earlier report and a new report), and Reports and Diff (both).
Note: The selection of PDF Content applies to both PDF attachments and PDF export files. The Diff result applies only after the first time this task is run; there is no Diff with a previous result if there is no previous result. The maximum number of rows that can be compared at one time is 5000. If the number of result rows exceeds the maximum, the message "compare first 5000 rows only" appears in the diff result.
4. Click on the icon of arrows circling to Refresh the set.
Note: To send files to an external server without sending email and without adding results to the To-Do List, define an audit process without receivers: clear the to-do list checkbox in the Add Receiver section and do not add any receivers in the receiver section.
When a user accesses another user's results, the data presented in the report is
filtered according to the Data Level Security and the role of the user selected (for
example, in the case of a custom workflow, the data is filtered according to the role
of the user selected and the status defined for that role).
If a user with the admin role accesses a result of a user who is below them in the hierarchy, the behavior is as explained in the previous paragraph. If the administrator accesses a result of a user who is not below them in the hierarchy, the result is shown using the Data Level Security of the administrator, for all roles.
If a user goes to some other user's to-do list, a message will indicate which user is
determining the DLS filtering.
All domains and their contents are described in the Domains, Entities, and
Attributes appendix.
There is a separate query builder for each domain, and access to each query
builder is controlled by security roles. Regardless of the domain, the same
general-purpose query-builder tool is used to create all queries. For detailed
instructions on how to build queries, see Queries.
In addition to the standard set of domains, users can define custom domains to
contain information that can be uploaded to the Guardium appliance. For example,
your company might have a table relating generic database user names (hr23455 or
qa4872, for example) to real persons (Paula Smith, John Doe). Once that table has
been uploaded, the real names can be displayed on Guardium reports, from the
custom domain. For more detailed information on how to define and use custom
domains, see External Data Correlation.
For example, perhaps a table exists on a database server containing all employees, their database user names, and the department to which they belong (for example, Development, Financial, Marketing, HR). If you upload this table and all its data, you could cross-reference it with Guardium's internal tables to see, for example, which employees from Marketing are accessing the financial database (which may constitute suspicious activity).
Custom Tables
A custom table contains one or more attributes that you want to have available on
the Guardium appliance. For example, you may have an existing database table
relating encoded user names to real names. In the network traffic, only the
encoded names will be seen. By defining a custom table on the Guardium
appliance, and uploading data for that table from the existing table, you will be
able to relate the encoded and real names.
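As a mental model (not Guardium internals), the upload enables a simple join between the encoded names observed in traffic and the uploaded lookup table. A sketch, reusing the hypothetical names from the custom-domain example above:

```java
import java.util.Map;

public class UserNameLookup {
    // Uploaded custom table: encoded DB user name -> real person (illustrative values)
    static final Map<String, String> REAL_NAMES = Map.of(
        "hr23455", "Paula Smith",
        "qa4872", "John Doe");

    // Resolve an encoded name seen in network traffic; fall back to the raw value
    static String resolve(String encoded) {
        return REAL_NAMES.getOrDefault(encoded, encoded);
    }

    public static void main(String[] args) {
        System.out.println(resolve("hr23455")); // mapped: Paula Smith
        System.out.println(resolve("sys999"));  // no mapping uploaded; shown as-is
    }
}
```

In Guardium this correlation is configured through custom tables and domains rather than code, but the join semantics are the same: rows without a match keep their encoded name.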
Before defining a custom table, first verify that the data you need from the existing database is of a supported data type. A data type is supported if it is mapped by the underlying JDBC driver to one of the following SQL types: INTEGER, BIGINT, SMALLINT, TINYINT, BIT, BOOLEAN, DECIMAL, DOUBLE, FLOAT, NUMERIC, REAL, CHAR, VARCHAR, DATE, TIME, TIMESTAMP. The following table summarizes some of the supported and unsupported data types for uploading to a custom table.
Use this table to see what supported and unsupported data types exist for certain
databases.
Table 21. Supported and Unsupported Data Types for Custom Tables

Database   Supported Data Types                                Unsupported Data Types
Oracle     float, number, char, varchar2, date                 nchar, nvarchar2, long, clob, raw, nclob,
                                                               longraw, bfile, rowid, urowid, blob
DB2        char, varchar, bigint, integer, smallint, real,     blob, clob, longvarchar, datalink
           double, decimal, date, time, timestamp
Sybase     char, nchar, varchar, nvarchar, int, smallint,      text, binary, varbinary, image, timestamp
           tinyint, datetime, smalldatetime
MS SQL     bigint, bit, char, datetime, decimal, float, int,   money, text, uniqueidentifier
           nchar, numeric, nvarchar, real, smalldatetime,
           smallint, tinyint, smallmoney, varchar
Informix   char, nchar, integer, smallint, decimal,            text
           smallfloat, float, serial, date, money, varchar,
           nvarchar, datetime
Note: A blob value (even 1 KB) in dynamic SQL can be captured, but a blob value of the same size in static SQL cannot be captured.
The Custom Table Data Purge screen has a checkbox for Archive. Checking this
box results in the data of the custom table being included in the normal data
archive.
The data of the custom table can be archived from a collector or an aggregator.
The data of the custom table archived from a collector can be restored to any
collector or aggregator managed by the same Central Manager as the source
collector (the metadata must be present).
The data of the custom table archived from an aggregator can be restored to any aggregator managed by the same Central Manager as the source aggregator.
If the archive file to be restored to a Guardium system does not have the metadata,
then the data of the custom table is not restored.
If the structure of the custom table has changed between the time of archive and
the time of restore in a way that results in an SQL error (for example, columns
removed or type changed), then a warning message appears on the
aggregation/archive activity report and the data is not restored.
If a custom table is set to be purged by the default purge, then the restored data
will be kept for the number of days specified on the restore screen.
If the custom table is set to overwrite data when it uploads, then restored data will
be deleted at the time an upload is performed.
Custom Domains
A custom domain contains one or more custom tables. If it contains multiple
tables, you define the relationships between tables when defining the custom
domain.
Custom Queries
A custom query accesses data from a custom domain. You use the Custom Query
Builder to create queries against custom domains. Custom queries can then be
Note:
Do not include any newline characters in the SQL statement. All columns must
be explicitly named; making use of a column alias if necessary.
6. Click Add Datasource to open the Datasource Finder in a separate window. This allows you to define where the external database is located and the credentials needed to retrieve the table definition and content later in the process.
7. Use the Datasource Finder to identify the database from which the table definition will be uploaded.
8. Click Retrieve to upload the table definition. This will execute the SQL
Statement and retrieve the table structure. The SQL request will come from the
Guardium Appliance to the external database. Remember that only the
definition is being uploaded and you can upload data later.
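The SQL-statement constraints from the note above (no newline characters, explicitly named columns rather than SELECT *) can be sanity-checked before retrieving the definition. A rough, hypothetical validator, not a SQL parser:

```java
public class UploadSqlCheck {
    // Rough pre-checks for a custom-table upload statement; not a full SQL parser
    static boolean looksValid(String sql) {
        // Newline characters are not allowed in the statement
        if (sql.contains("\n") || sql.contains("\r")) return false;
        // Columns must be explicitly named, so reject "select *"
        String lower = sql.toLowerCase();
        if (lower.matches(".*select\\s+\\*.*")) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(looksValid("select id_pin as PIN, name from mk_sched")); // ok
        System.out.println(looksValid("select * from mk_sched"));                   // rejected
    }
}
```

The table and column names here are illustrative only; substitute the statement you intend to paste into the Custom Table Builder.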
If you modify the definition of a custom table, you may invalidate existing reports based on queries that use that table. For example, an existing query might reference an attribute that has been deleted, or whose data type has changed. When applying changes to a custom table, if any queries have been built using attributes from that table, those queries are displayed in the Query List panel. Note: You can also use Modify to view and validate the table structures that were imported.
1. Open the Custom Table Builder.
2. Choose a custom table by clicking on the entity label and highlighting it.
3. Click Modify to open the Modify Entity panel.
4. See Defining a Table Manually for assistance.
5. When applying changes to a custom table, if any queries could be invalidated by modifications to attributes from that table, the queries are displayed in the Query List panel. Use the Query List panel to choose and change queries. You do not have to make all changes immediately; you can always come back and use the Check for Invalid Queries option.
Note: Run Once Now purge checks the RESTORED_DATA table for retention. Purge ALL purges all records without checking the retention.
5. In the Configuration panel, enter the age of the data to be purged, as a number
of days, weeks or months prior to the purge operation date.
6. Click Run Once Now to run the purge operation once.
7. Click Modify Schedule to open the standard Schedule Definition panel and
schedule a purge operation.
8. Click Done to close the panel.
Note: Changing the engine type is disallowed (and the selection grayed out) if the number of rows in the table is greater than 1M.
Once a custom table definition is in place, data can be uploaded to custom tables
on the Guardium appliance on a scheduled basis.
Note: New installations do not automatically start Enterprise reports. There is one
upload schedule for each custom table. The total amount of disk space reserved on
the Guardium appliance for custom tables is 4GB.
1. Open the Custom Table Builder.
2. Choose a custom table by clicking on the entity label and highlighting it.
3. Click Upload Data to open the Import Data panel.
4. Mark the Use Default Schedule check box to upload this table using the
default schedule. Otherwise, this custom table uses its own upload data
schedule.
5. Click Modify Schedule to open the standard Schedule Definition panel and
modify the schedule.
6. Click Done when you are finished.
Enterprise report custom uploads are like other jobs. There are two ways to enable them:
v In the Custom Table Upload GUI (requires a license for custom upload).
v Use GuardAPI from the CLI:
grdapi add_schedule jobName=CustomTablePurgeJob_CM_SNIFFER_BUFFER_USAGE jobGroup=customTableJobGr
After defining one or more custom tables, define a custom domain so that you can perform query and reporting tasks using the custom data. The information collected is organized into domains, each of which contains a different type of information relating to a specific area of concern: data access, exceptions, policy violations, and so on. There is a separate query builder tool for each domain. Custom domains are user-defined domains that can contain any tables of data uploaded to the Guardium appliance. Custom entitlement (privileges) domains are used for entitlement reports, which are available when logged in as a user. To see these reports, go to the user tab, DB Entitlements.
Note: When data level security is on, internal entities added to the custom
domain cannot belong to different domains with filtering policies.
8. Select the Timestamp attribute for the custom domain entity.
Note: At least one entity with a timestamp must be used, since a timestamp is
required to save a custom domain.
9. Click Apply.
The goal is to create a linkage between external data and the internal data.
1. Open the Custom Domain Builder.
2. Choose the Custom Table that has your external data.
3. Click Domains to open the Domain Finder panel.
4. Click Modify to open the Custom Tables Domain panel.
5. Click the Filter icon next to the Available Entities.
6. Un-check the Custom box for the filter and optionally fill in a Like condition
to filter entity names and click Accept.
7. Select an entity from the Available Entities that you would like to link with
your external data.
8. Select the field that will be used to join data with your external data.
9. Highlight the table from the Domain Entities that contains your external data.
10. Select the field that will be used to join data with the internal data.
11. Click the Add Field Pair to add the relationship.
12. Click the double arrow >> to add the internal table to the Domain Entities
list.
13. Click Apply to save the changes.
This section describes how to open the Custom Query Builder. See Building
Queries and Building Reports for assistance in defining a query and building a
report. Use the Custom Query Builder to build queries against data from custom
domains, which contain one or more custom tables.
1. Open the Custom Query Builder by navigating to Comply > Custom
Reporting > Custom Query Builder.
2. Select a custom domain from the list.
3. Click Search to open the Query Finder.
4. To view, modify or clone an existing query, select it from the Query Name list,
or select a report using that query from the Report Title list.
5. To view all of the queries defined for a specific custom table, select that custom
table from the Main Entity list and click the Search button (only the custom
tables included in the selected custom domain will be listed).
Import to Guardium
Note: Alternatively, you can load the data directly from the Discovery database if you know how to access the Discovery database and Classification results data.
4. After defining the CSV as Datasource, click Add in Datasource list screen.
5. In Upload data screen click Verify Datasource and then Apply.
6. Click Run Once Now to load the data from the CSV.
7. Go to Report Builder, select the Classification Data Import report, Click Add to
Pane to add it to your Portal and then navigate to the report.
The report result has the classification data imported from InfoSphere Discovery.
Double click to invoke APIs assigned to this report. The data imported from
Discovery can be used for the following:
v Add new Datasource based on the result set.
v Add/Update Sensitive Data Group.
v Add policy rules based on datasource and sensitive data details.
v Add Privacy Set.
Use the table for examples of CSV interface signatures used in the bidirectional
transfer between IBM Guardium and InfoSphere Discovery.
Table 22. CSV Interface signature.
Interface signature Example
Type DB2
Host 9.148.99.99
Port 50001
Datasource URL
TableName MK_SCHED
ColumnName ID_PIN
ClassificationName SSN
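Assuming the interface file carries the fields in the order shown in Table 22 (Type, Host, Port, Datasource URL, TableName, ColumnName, ClassificationName; this column order is an assumption, so check your export), one row could be split like this:

```java
public class DiscoveryCsvRow {
    // Field order assumed from Table 22:
    // Type,Host,Port,DatasourceURL,TableName,ColumnName,ClassificationName
    static String[] parse(String line) {
        String[] f = line.split(",", -1); // -1 keeps trailing/empty fields (e.g., blank URL)
        if (f.length != 7) {
            throw new IllegalArgumentException("expected 7 fields, got " + f.length);
        }
        return f;
    }

    public static void main(String[] args) {
        String[] row = parse("DB2,9.148.99.99,50001,,MK_SCHED,ID_PIN,SSN");
        // Qualified column and its classification
        System.out.println(row[4] + "." + row[5] + " -> " + row[6]);
    }
}
```

A naive comma split like this breaks on quoted fields containing commas; for real exports, a CSV library would be the safer choice.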
Privacy Sets
A privacy set is a collection of elements that can be used to do special monitoring.
It consists of one or more object-field pairs - for example, the salary field of the
employee table, or all fields of the salary history table. All access to these elements
within a given timeframe can be reported.
If an audit process is running, you cannot remove a privacy set. Stop the audit process, then follow the steps to remove the privacy set.
1. Select the privacy set to be removed, in the Identify Privacy Set panel. See
Open the Privacy Set Builder.
2. Click Delete and confirm the action.
3. Click Done.
This procedure describes how to run a privacy set report on demand. To schedule
a privacy set report, include it in a compliance workflow (see Compliance
Workflow Automation).
1. Open the privacy set for the report, in the Privacy Set Builder. See Open the
Privacy Set Builder.
2. Click Run.
3. In the Task Parameters, enter the starting and ending times for the task.
4. Select Report by Access Details, or Report by Application User, to specify
how the results should be displayed. The first option is the default, in which
case a count of accesses is shown for each combination of client IP, server IP,
server (name), server type, database protocol, source program name, and
database user name. If Application User is selected, the report will contain a
separate column with that name (following DB User Name) and the output will
be additionally qualified by the application user.
5. Click Run Once Now. After the report has been executed, it will be displayed
in a separate window.
6. Click Done.
Custom Alerting
Alert messages can be distributed via e-mail, SNMP, syslog, or user-written Java
classes. The last option is referred to as custom alerting.
When an alert is triggered, a custom alerting class can take any action appropriate
for the situation; for example, it might update a Web page or send a text message
to a telephone number.
To create a custom alerting class, first contact Technical Support to obtain the
necessary interface file. The following topic describes how to implement the
interface. See Use the Custom Alerting Interface, and also the following topic
which contains an example: Sample Custom Alerting Class.
Once the class has been compiled, it must be uploaded to the Guardium appliance
from the Administration Console. See Manage Custom Classes.
For guidelines on testing a custom alerting class, see the Test a Custom Alerting
Class section later in this topic.
Note: Do not write a custom class that gets data from an untrusted source.
The custom alerting class must be in the com.guardium.custom package and must implement the com.guardium.custom.alerts.CustomerDefinedAlertingIfc interface:
package com.guardium.custom;
public class YourClassNameHere implements CustomerDefinedAlertingIfc {
}
The following sample program implements the five methods described in the
previous section. For the processAlert method, this program simply writes the alert
message and timestamp to the system console.
/*
* Sample Custom Alerting Class
*
*/
package com.guardium.custom;
import java.text.DateFormat;
import java.util.Date;
public class HandleAlerts implements CustomerDefinedAlertingIfc {
private String message = "";
private Date timeStamp = null;
public void processAlert(String message, Date timeStamp){
setMessage(message);
setTimeStamp(timeStamp);
System.out.println(getMessage() + " on " +
DateFormat.getDateInstance().format(getTimeStamp()));
}
public void setMessage(String inMessage){
message = inMessage;
}
public String getMessage(){
return message;
}
public void setTimeStamp(Date inDate){
timeStamp = inDate;
}
public Date getTimeStamp(){
return timeStamp;
}
}
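Before uploading, the alerting logic can be exercised offline. The real CustomerDefinedAlertingIfc comes from Technical Support; the stand-in interface below is a hypothetical substitute that assumes only the processAlert signature shown above:

```java
import java.util.Date;

public class AlertHarness {
    // Stand-in for the interface obtained from Technical Support (signature assumed)
    interface CustomerDefinedAlertingIfc {
        void processAlert(String message, Date timeStamp);
    }

    // Minimal implementation mirroring the sample class above
    static class HandleAlerts implements CustomerDefinedAlertingIfc {
        String lastMessage;

        public void processAlert(String message, Date timeStamp) {
            lastMessage = message;
            System.out.println(message + " on " + timeStamp);
        }
    }

    public static void main(String[] args) {
        HandleAlerts handler = new HandleAlerts();
        handler.processAlert("Test alert", new Date());
    }
}
```

Once the logic behaves as expected against the stand-in, rebuild the class against the real interface file and upload it as described below.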
After compiling a custom alerting class, follow the procedure to test it.
1. Upload the custom class to the appliance. This is an administration function
that is performed from the Administrator Console. See Manage Custom Classes.
2. Define a correlation or real-time alert to use the custom alerting class.
Regardless of which alert type generates the alert, testing is easier if you assign
a second notification type (email, for example) against which you can compare
the custom alerting results.
3. Check the environment by doing one of the following:
v For a correlation alert:
– Check that the Anomaly Detection polling interval is suitable for testing
purposes and that Anomaly Detection has been started. If the polling
interval is too long (it may be 30 minutes or more), you may have a long
wait before the query runs.
– Check that the Alerter polling interval is suitable for testing purposes and
that the Alerter has been started.
– Check that the alert to be tested has been marked Active.
v For a real-time alert:
– Check that the policy containing the rule with the custom alert action is the installed policy.
This saves processing resources, so that a heavier traffic volume can be handled. The parsing and merging of that data into Guardium's internal database can be done later, either on a collector or an aggregator unit.
Note: Rules on flat logs do not work with policy rules involving a field, an object, an SQL verb (command), an Object/Command Group, or an Object/Field Group. In the Flat Log process, "flat" means that a syntax tree is not built. If there is no syntax tree, the fields, objects, and SQL verbs cannot be determined.
The following actions do not work with rules on flat policies: LOG FULL
DETAILS; LOG FULL DETAILS PER SESSION; LOG FULL DETAILS VALUES;
LOG FULL DETAILS VALUES PER SESSION; LOG MASKED DETAILS.
Selection of this feature involves the Policy Builder menu in Setup > Tools and Views and the Flat Log Process menu in Manage > Activity Monitoring.
When the Log Flat (Flat Log) checkbox option in the Policy Definition screen of the Policy Builder is checked:
v Data will not be parsed in real time.
v The flat logs can be seen on a designated Flat Log List report.
v The offline process to parse the data and merge to the standard access domains
is configured through the Administration Console.
1. Navigate to Manage > Activity Monitoring > Flat Log Process.
2. Select the activity to perform:
v Process - Merge the flat log information to the internal database.
v Archive/Aggregation/Purge - Archive or aggregate, and optionally purge, the
flat log.
v Purge Only - Purge the flat log data.
3. Click Apply to save the configuration.
4. For a Process activity, optionally do one of the following:
v Click Run Once Now to merge the flat log information to the internal
database immediately.
v Click Modify Schedule to define a schedule for this activity. You can select
the start time, restart frequency, and repeat frequency. For the Schedule by..
field, you must select either Day/Week or Month.
Use this feature when you need to add a condition that is based not on the entire
content of the attribute as is, but on part of the attribute, a function of the
attribute, or a function that combines more than one attribute.
Along with authenticating users and restricting role-based access privileges to data,
even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have
the privileges required to perform their duties. This is also known as database user
rights attestation reporting.
Custom database entitlement reports have been created to save configuration time and facilitate the uploading and reporting of data from the following databases: Oracle, MySQL, DB2, Sybase, Sybase IQ, Informix, MS SQL 2000/2005/2008, Netezza®, Teradata, and PostgreSQL.
The predefined entitlement reports are listed in the Predefined Content section of
the online help.
How to use Access Maps to show paths between clients and servers
You can use access maps to easily understand all access paths between database
clients and database servers.
Access maps provide a convenient way to understand data access between clients
and servers. The access map shows all access paths derived from a set of criteria
that you define.
Criteria can be set based on any combination including server type or location on
the network (IPs and subnets). In addition, you can group access patterns together,
since one of the main problems in reviewing access data is the detailed granularity.
By grouping similar access paths, you are able to get a visual map, which can be
meaningful in understanding your access environment. Using this visual depiction,
you can then drill down and get further information on any one access path in the
map.
Note:
To use the Access Map Builder/Viewer, your Guardium user account must be
assigned a security role that is also assigned to the Access Map Builder/Viewer
application.
Procedure
1. Open the Access Map Builder/Viewer by clicking Reports > Report
Configuration Tools > Access Map Builder/Viewer.
2. Create a new access map or select an existing access map.
v Create a new access map by entering a unique name for the new map in the
Enter a map name field.
v Select an existing map by selecting an item from the select an existing map
name menu.
The appearance of the remaining sections changes depending on your selection.
6. You can aggregate paths to all servers and clients by selecting an option for
Server IP aggregation granularity and Client IP aggregation granularity. The
prefixes are based on the segments or bits in IPV4 and IPV6 addresses, and
indicate how the addresses will be grouped together. You can choose whether
you want to aggregate based on segments or bits.
For example, if you choose a CIDR prefix of 16, all addresses starting with 1.1
are grouped into one node, all addresses starting with 1.2 are grouped into
another node, and so on.
In an IPV4 address, there are 4 segments, each segment comprising 8 bits.
In an IPV6 address, there are 8 segments, each segment comprising 16 bits.
X.*.*.*: For mapping purposes, treat each server or client IP address beginning
with the same first octet as a single endpoint.
X.Y.*.*: For mapping purposes, treat each server or client IP address beginning
with the same first and second octets as a single endpoint.
X.Y.Z.*: For mapping purposes, treat each server or client IP address beginning
with the same first, second, and third octets as a single endpoint.
Full IPs: For mapping purposes, treat each complete server or client IP address
as a single endpoint. Be aware that this option aggregates multiple databases at
the same IP address.
None: (Default) No path aggregation by server IP address.
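The octet-based options above amount to masking each IP address to its first N segments so that addresses sharing those segments collapse into one node. A sketch of that grouping for IPv4 (hypothetical helper, not a Guardium API):

```java
public class IpAggregation {
    // Collapse an IPv4 address to its first `keep` octets,
    // e.g. keep=2 gives the X.Y.*.* aggregation described above
    static String aggregate(String ip, int keep) {
        String[] octets = ip.split("\\.");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            if (i > 0) sb.append('.');
            sb.append(i < keep ? octets[i] : "*");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 1.1.2.3 and 1.1.9.9 land in the same node under X.Y.*.* aggregation
        System.out.println(aggregate("1.1.2.3", 2));
        System.out.println(aggregate("1.1.9.9", 2));
    }
}
```

A bit-based CIDR prefix works the same way on bit boundaries instead of whole octets, which is why a prefix of 16 is equivalent to the X.Y.*.* option here.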
7. To group together the aggregated addresses, essentially creating a group of
groups, choose an option for Grouping of aggregated addresses. The default is
set to None. For example, if you want to group together two groups of
addresses that begin with 1.2 and 1.1, choose the option one additional
segment.
9. Click Save & View when finished. Following a short delay, the map displays in
the output type you selected. The legend that displays on your map will vary
depending on its contents.
Guardium provides several methods to identify application users, when the actual
database user is not apparent from the database traffic:
v Identify Users via Application User Translation - For some of the most popular
commercial applications (Oracle EBS, PeopleSoft, SAP, etc.), Guardium can
identify users automatically.
v Identify Users via API - The Application Events API allows you to signal
Guardium when an application user takes or relinquishes control of a
connection, or when any other event of interest occurs. (This can be used for
more than just identifying users.)
v Identify Users via Stored Procedures - Many applications use database stored
procedures to identify the application user. In these cases, user information can
usually be extracted from the stored procedure parameters.
For some widely used applications, Guardium has built-in support for identifying
the end-user information from the application, and thus can relate database activity
to the application end-users.
Note: Under Central Management, you must use different application codes
on different managed machines. This prevents aliases generated for the users
from conflicting with each other. (Under Central Management, there is one set
of aliases that is shared by all managed units.)
4. From the Application Type list, select the application type:
v BO-WI - Business Objects / Web Intelligence
v EBS - Oracle E-Business Suite
v PeopleSoft
v SAP Observed
v SAP DB
v SIEBEL Observed
v SIEBEL DB
5. In the Application Version box, enter the application version number (11, for
example).
6. From the Database Type list, select the database type. Only the types that are
available for the selected Application Type and Version will be displayed.
7. In the Server IP box, enter the IP address the application uses to connect to
the database.
8. In the Port box, enter the port number the application uses to connect to the
database.
9. In the Instance Name box, enter the instance name the application uses to
connect to the database.
10. In the DB Name box, enter the database name for the application. (Required
for some applications, not used for others.)
11. Mark the Active box to enable user translation. Nothing is translated until
after the first import of user definitions.
12. Enter a User Name for Guardium to use when accessing the database. Enter a
password for Guardium to use when accessing the database.
13. Mark the Responsibility box if you want to associate responsibilities
(Administration, for example) with user names. Or clear the Responsibility
box to just record user names. When the box is cleared, all activities
performed by a user will be grouped together, regardless of the responsibility
at the time the activity occurred.
Note: The first time Run Once Now is clicked after installing the Application User Translation settings, it retrieves the last update date for the tables it examines. After that, it imports only new data. Otherwise, decades' worth of data could be needlessly imported, filling many tables and databases.
When Application User Translation has been configured, you must populate at
least two pre-defined groups with information that will be specific to your
environment. This table identifies the groups that must be populated for each
application type.
For some application types, one or more special report portlets must be
regenerated. For example, there are two pre-defined EBS reports, and two
pre-defined PeopleSoft reports. These reports cannot be modified. After populating
the pre-defined application groups, follow the procedure to regenerate the
predefined application portlets and place them on a page.
The examples in this section are for the EBS portlets, but the procedure is identical
for other application types.
1. Do one of the following to open the Report Finder: Users with the admin role:
Select Tools - Report Building - Report Builder. All Others: Select
Monitor/Audit - Build Reports - Report builder.
2. Click Search to open the Report Search Results panel.
3. Select a report portlet for the application type (EBS Application Access, for
example), and click Regenerate Portlet. You will be informed that the portlet
has been regenerated.
4. Repeat the previous step for each application report (EBS Processes Database
Access, or the PSFT Processes Database Access report, for example). Now add
a tab to your layout, and include the two regenerated portlets on that tab.
5. Click Customize to open the Customize pane.
6. Click Add Pane to define a new tab.
7. Enter a name for the tab - EBS Reports, for example - and click Apply. The
new tab appears as the last tab in the list.
8. Click on the new tab name to edit that pane.
9. Click Add Portlet, and click Next until you locate the reports you want (the
EBS reports, for example), and mark the checkbox next to each desired report.
10. Click Apply, and then click Save and Apply and then click Save to save the
new pane layout. The new tab will appear at the end of the first row of tabs.
11. Click on the new tab name to open the tab.
12. Click Customize to set the runtime parameters (date range and Show Aliases,
for example).
In some cases, customers do not want to use the Oracle EBS DB_USER for
translating EBS traffic. In this scenario, there are two ways to configure
Application User Translation:
v Supply the username and password that EBS uses to talk to Oracle (often
APPS/$passwd).
v If the customer does not want to supply the password for the DB_USER that
EBS uses to access Oracle, Application User Translation is still possible;
however, the process is more complicated.
Translation:
1. Grant select on the following tables to Custom DB User:
APPLSYS.FND_USER
APPLSYS.FND_RESPONSIBILITY
APPLSYS.FND_RESPONSIBILITY_TL
2. Create a private synonym FND_USER on APPLSYS.FND_USER for Custom DB
User.
3. Create a view called FND_RESPONSIBILITY_VL for Custom DB User. You can
find this view under the APPS user to use as your template.
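As a sketch, the three steps above might look like the following SQL, run by a DBA. The custom DB user name GUARD_USER is a hypothetical placeholder, and the view body is illustrative only; copy the real definition from the FND_RESPONSIBILITY_VL view under the APPS user, as noted above.

```sql
-- Step 1: grants to the custom DB user (GUARD_USER is a hypothetical name)
GRANT SELECT ON APPLSYS.FND_USER TO GUARD_USER;
GRANT SELECT ON APPLSYS.FND_RESPONSIBILITY TO GUARD_USER;
GRANT SELECT ON APPLSYS.FND_RESPONSIBILITY_TL TO GUARD_USER;

-- Step 2: private synonym so GUARD_USER resolves FND_USER
CREATE SYNONYM GUARD_USER.FND_USER FOR APPLSYS.FND_USER;

-- Step 3: FND_RESPONSIBILITY_VL view; this SELECT is a sketch only.
-- Use the view of the same name under the APPS user as your template.
CREATE VIEW GUARD_USER.FND_RESPONSIBILITY_VL AS
SELECT b.responsibility_id, b.application_id, t.responsibility_name
  FROM APPLSYS.FND_RESPONSIBILITY b
  JOIN APPLSYS.FND_RESPONSIBILITY_TL t
    ON b.responsibility_id = t.responsibility_id
   AND b.application_id = t.application_id;
```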
Note:
ABAP Stack and Java Stack systems will have different tables.
ABAP Stack
Traditional ECC (ERP Central Component) SAP systems are written in ABAP
code and are predominantly accessed via the SAP GUI, although web access is
possible.
The following screen will appear when you enter the SAP GUI (ABAP Stack):
To validate the ABAP Stack SAP Kernel module for Application User Translation,
follow these steps:
1. Log in to SAP.
2. Go to System > Status.
The client data is placed into the application user field and the application event string.
Java Stack
SAP Portal systems are written in Java and are front-end web applications that
use pre-canned queries to display SAP-related web pages.
Portal systems can be accessed only via a web browser. Portal system databases
are much smaller, with only a few tablespaces.
The following screen will appear when you enter SAP Portal System (Java Stack).
To validate the Java Stack SAP Kernel module for Application User Translation,
follow these steps:
1. Click System Information.
SAP sets similar client properties in the Java stack as it did for ABAP Stack.
The Application Events API provides simple no-op calls that can be issued from
within the application to signal Guardium when a user acquires or releases a
connection, or when any other event of interest occurs.
Note: If your Guardium security policy has Selective Audit Trail enabled, the
Application Events API commands that are used to set and clear the application
user and/or application events will be ignored by default, and the application user
names and/or application events will not be logged. To log these items so that
they will be available for reports or exceptions, include a policy rule to identify the
appropriate commands, specifying the Audit Only rule action.
Use this call to indicate that a new application user has taken control of the
connection. The supplied application user name will be available in the
Application User attribute of the Access Period entity. For this session, from this
point on, Guardium will attribute all activity on the connection to this application
user, until Guardium receives either another GuardAppUser call or a
GuardAppUserReleased call, which clears the application user name.
To signal when other events occur (you can define event types as needed), use the
GuardAppEvent call, described in the following section.
user_name is a string containing the application user name. This string will be
available as the Application User attribute value in the Access Period entity.
FROM location is used only for Oracle, DB2, or Informix. (Omit for other database
types.) It must be entered exactly as follows:
v Oracle: FROM DUAL
v DB2: FROM SYSIBM.SYSDUMMY1
v Informix: FROM SYSTABLES
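As a sketch, assuming the call is issued as a no-op SELECT in the form described above, an Oracle application might set and later clear the application user like this (JohnDoe is an illustrative user name):

```sql
-- After user JohnDoe acquires the pooled connection:
SELECT 'GuardAppUser:JohnDoe' FROM DUAL;

-- ... activity on this connection is now attributed to JohnDoe ...

-- When JohnDoe releases the connection:
SELECT 'GuardAppUserReleased' FROM DUAL;
```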
This call provides a more generic method of signaling the occurrence of application
events. You can define your own event types and provide text, numeric, or date
values to be stored with the event, both when the event starts and when it ends.
You can use this call together with the GuardAppUser call. Guardium will
attribute all activity on the connection to this application event, until it receives
either another GuardAppEvent:Start command or a GuardAppEvent:Released
command.
Syntax:
SELECT 'GuardAppEvent:Start|Released',
'GuardAppEventType:type',
'GuardAppEventUserName:name',
'GuardAppEventStrValue:string',
'GuardAppEventNumValue:number',
'GuardAppEventDateValue:date'
FROM location
Start | Released - Use the keyword Start to indicate that the event is taking control
of the connection or Released to indicate that the event has relinquished control of
the connection.
type identifies the event type. It can be any string value, for example: Login,
Logout, Credit, Debit, etc. In the Application Events entity, this value is stored in
the Event Type attribute for a Start call, or the Event Release Type attribute for a
Released call.
name is a user name value to be set for this event. In the Application Events entity,
this value is stored in the Event User Name attribute for a Start call, or the Event
Release User Name attribute for a Released call.
string is any string value to be set for this event. For example, for a Login event
you might provide an account name. In the Application Events entity, this value is
stored in the Event Value Str attribute for a Start call, or the Event Release Value
Str attribute for a Released call.
number is any numeric value to be set for this event. For example, for a Credit
event you might supply the transaction amount. In the Application Events entity,
this value is stored in the Event Value Num attribute for a Start call, or the Event
Release Value Num attribute for a Released call.
date is a user-supplied date and optional time for this event. It must be in the
format yyyy-mm-dd hh:mm:ss, where the time portion (hh:mm:ss) is optional. In
the Application Events entity, this value is stored in the Event Value Date
attribute for a Start call, or the Event Release Value Date attribute for a
Released call.
FROM location is used only for Oracle, DB2, or Informix. (Omit for other database
types.) See the following example. However, any dummy table name is acceptable
for the dummy SQL.
v Oracle: FROM DUAL
v DB2: FROM SYSIBM.SYSDUMMY1
v Informix: FROM SYSTABLES
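Putting the parameters together, a hypothetical Credit event on DB2 might be signaled as follows. Every literal value here is illustrative, and parameters that are not needed can be left out (unset attributes remain empty, as noted below):

```sql
-- Start of the event: Guardium attributes subsequent connection
-- activity to this event until it is released
SELECT 'GuardAppEvent:Start',
       'GuardAppEventType:Credit',
       'GuardAppEventUserName:JohnDoe',
       'GuardAppEventStrValue:Account-1004',
       'GuardAppEventNumValue:250',
       'GuardAppEventDateValue:2014-06-01 14:30:00'
FROM SYSIBM.SYSDUMMY1;

-- End of the event:
SELECT 'GuardAppEvent:Released',
       'GuardAppEventType:Credit'
FROM SYSIBM.SYSDUMMY1;
```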
If any Application Events entity attributes have not been set using the
GuardAppEvent call, those values will be empty.
In the simplest case, an application might have a single stored procedure that sets
a number of property values, one of which is the user name. A call to set the user
name might look like this:
set_application_property('user_name', 'JohnDoe');
In a custom procedure mapping (described later), you can tell Guardium to:
v Watch for a stored procedure named set_application_property, with a first
parameter value of user_name.
v Set the application user to the value of the second parameter in the call
(JohnDoe, in the example).
Since each of your applications may have a different way of identifying users, you
may have to define separate custom identification procedure mappings for each
application. To do that, follow the procedure outlined.
Note: If two conditions are used, the user name or any other information
being extracted must be in the same parameter position for both types of
calls.
8. For a Clear action:
v Use only the Event Type Position and Application Username Position fields.
v Do one of the following:
– To clear the application event: set the Event Type Position to 1, and set
the Application Username Position to 0.
– To clear the application user: set the Event Type Position to 0, and set the
Application Username Position to 1.
9. For a Set action, use the Parameter Position pane to indicate which stored
procedure parameters map to which Guardium application event attributes.
The first procedure parameter is numbered 1. Use 0 (zero – the default) for all
The Value Change Auditing feature tracks changes to values in database tables. For
each table in which changes are to be tracked, you select which SQL value-change
commands to monitor (insert, update, delete). Each time a value-change command
is run against a monitored table, before and after values are captured. On a
scheduled basis, the change activity is uploaded to a Guardium system, where all
the reporting and alerting functions can be used. The basic steps to perform to use
the Value Change Auditing feature are:
1. From the Administration Console, create an audit database on the database
server. This database is where value-change data is stored until it is uploaded
to the Guardium system. See “Create an Audit Database” on page 266.
2. Identify the tables to be monitored, and for each table select the value-change
commands (insert, delete, update) for which changes will be recorded. To
record the changes, a trigger is created for each table to be monitored, and that
trigger writes the value-change data to the audit database. To allow updates to
the audit database (by the trigger), all users with update privileges for the
monitored table are given appropriate privileges for the audit database. This
has implications for users who are given update privileges for that table later
(see step 4). For detailed instructions on how to define the monitoring
activities, see Define Monitoring Activities.
3. Schedule uploads to transfer value-change data from the database server to the
Guardium system. See Schedule Value-Change Uploads.
4. Maintain audit database access privileges. After a trigger has been created, a
new user may be given access to the table on which the trigger is based. If that
user issues a monitored value-change command, it will fail because that user
will not have appropriate privileges to update the audit database. See Maintain
Privileged Users Lists.
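Guardium generates the monitoring triggers for you in step 2; conceptually, though, a value-change trigger resembles the following Oracle sketch, in which all schema, table, and column names are hypothetical:

```sql
-- Conceptual sketch only: the real triggers are created by Guardium.
-- Captures before/after values for UPDATEs on a monitored column and
-- writes them to a table in the audit database.
CREATE OR REPLACE TRIGGER trg_track_salary
AFTER UPDATE OF salary ON hr.employees
FOR EACH ROW
BEGIN
  INSERT INTO guard_audit.value_changes
    (table_name, column_name, old_value, new_value, changed_at)
  VALUES
    ('EMPLOYEES', 'SALARY', :OLD.salary, :NEW.salary, SYSDATE);
END;
/
```

This sketch also illustrates why, as noted in step 2, users with update privileges on the monitored table must be granted privileges on the audit database: the trigger's insert into the audit table runs as part of their value-change command.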
In addition to the native facilities within the Guardium product used for showing
before and after values of DML, getting before/after values for Oracle can be
accomplished by using Oracle Streams and Guardium’s External Data Correlation
(upload) facility. Streams are used to create change records for any change that
affects a sensitive column, and the upload job is used to bring the data into the
Guardium repository, where you can issue reports, combine the data with other
details, and add these reports into the sign-off process.
Note: Oracle Streams requires that the Oracle database that is being monitored is
in ARCHIVELOG mode.
1. Define a datasource. Click Value Change Auditing Builder and complete the
blocks under Datasource Definition: Name; Database Type (Oracle); Share
Datasource (check mark); Save Password (check mark); Login Name (use sys);
Password; Connection Property field with value SysLoginRole=SYSDBA; Host
Name/IP; Port, 1521 (for Oracle); Service Name. Get Host Name/IP, Port, and
Service Name from the Oracle database.
2. Test Connection. If successful, click Save and click Done.
3. Configure the audit database. Click Value Change Auditing Builder. Attach
the datasource that you built in step 1, by clicking Add Datasource.
4. Click Choose Tables to Monitor. A pop-up screen appears where a choice
between two monitoring methods is presented. Choose Stream, and then click
Apply. Go to the next section.
After you define an audit database, use the Value Change Auditing Builder to
identify the tables to be monitored, and to select the types of changes (inserts,
updates, deletes) to be recorded.
1. Open the Value Change Auditing Builder by navigating to Harden >
Configure Change Control (CAS Application) > Value Change Auditing
Builder.
2. Click Add Datasource to open the Datasource Finder panel.
3. Select a datasource on which an audit database is defined. If an audit
database is not yet defined, see “Create an Audit Database” on page 266.
4. Click Add to close the Finder and add the selected datasource to the Value
Change Audit panel.
5. Optionally enter a Schema Owner and/or Object Name to limit the number of
tables that are displayed when choosing the tables to be monitored. You can
use the % (percent) wildcard character. For example, to limit the display to all
tables that begin with the letter a, enter a% in the Object Name box.
6. Click Choose Tables To Monitor to open the Define Data Audit panel.
7. Mark the Select box for each table to be monitored.
Note: You cannot define a trigger for a table that contains one or more
user-defined data types.
Note: The Cancel button does not back out any changes that you have made
to triggers using the Add or Remove Selections buttons.
To update the audit database privileged users list, the database user ID that is used
to log in to the monitored database must be the creator of any role to which new
users have been added. Otherwise, the members of that role will not be available.
1. Open the Value Change Auditing Builder by navigating to Harden > Configure
Change Control (CAS Application) > Value Change Auditing Builder.
Value-Change Reporting
You can view value-change data from the default Values Changed report, or you
can create custom reports using the Value Change Tracking domain. By default, the
Value Change Tracking domain is restricted to users having the admin role.
The main entity for the Values Changed report is the Changed Columns entity. In
most cases, there is a separate row of the report for every column change that is
detected for every audit action (Insert, Update, Delete). However, for MS SQL
Server and Sybase, if the monitored table does not have a primary key, there are
two rows per change, with the old and new values displayed on separate rows.
You can use any other database space that has been defined. To create a new
database space, perform one of the following procedures (depending on the
operating system).
This procedure is performed outside of the Guardium GUI, and applies for
Informix version 9.4 or later.
1. Verify that the database server is online and listening.
2. Create a zero-byte file named guardium_dbs_dat.000 in the
C:\IFMXDATA\server-name directory (server-name is the name of the Informix
server or the service name). You can do this by saving an empty text file, and
then renaming the file, replacing the txt suffix with 000.
3. Make the following directory the working directory:
C:\Program Files\Informix\bin
4. Execute the following command:
C:\Program Files\Informix\bin>onspaces -c -d guardium_dbs -p C:\IFMXDATA\server-name\guardium_d
If the file is created successfully, you see the following messages:
Verifying physical disk space, please wait ...
Space successfully added.
** WARNING ** A level 0 archive of Root DBSpace will need to be done.
5. Restart the Informix server, and use a suitable tool (Aqua Data Studio remote
client, for example) to connect and verify that the space named guardium_dbs
has been created. Your first connection attempt may fail with a message about
the server running in Quiescent Mode. If this happens, attempt to re-connect
at least two more times, and it should work.
6. To verify that the guardium_dbs database space has been created, use Aqua
Data Studio, and look under Storage.
This procedure is performed outside of the Guardium GUI, and applies for
Informix version 9.4 or later.
1. From a command-line window, enter the following commands:
su - informix
cd demo/server
vi guardium_dbs
2. Without adding any text, save the empty guardium_dbs file.
3. Enter the following commands:
chmod 660 guardium_dbs
cd ../../bin
onspaces -c -d guardium_dbs -p /home/informix10/demo/server/guardium_dbs -o 0 -s 100000
This topic applies for Sybase servers only (except for Sybase IQ, which does not
support triggers). Depending on the operating system of the database server,
perform one of the following procedures to initialize disks.
Note: To share a datasource with other users, assign security roles to that
datasource.
6. For any database type other than DB2, there will be additional fields in the
Audit Configuration pane. All fields are required. Referring to the following
table, enter the appropriate values.
Data Device Name: Enter the same data device name used when
initializing the disk for the audit database (guardium_auditdev in the
disk initialization procedure described earlier).
Log Device Name: Enter the same log device name used when
initializing the disk for the audit database (guardium_auditlog in the
disk initialization procedure described earlier).
Action Description
Delete: Click to remove the datasource from the Datasources pane.
Modify: Click to edit this datasource definition in the Datasource Definition
panel.
Schedule Upload: Click to schedule the upload of this audit datasource.
After an audit database has been created on a database server, it will be available
for use by the Value Change Auditing Builder, which is the tool that is used to
build triggers. See “Value Change Auditing” on page 263.
This feature uses Guardium’s External Feed that is preconfigured with the data (a
predefined External Feed map), and an audit process to run it.
Note: The resulting table shows only the last run. The receiver count is the total
number of receivers, not the number of run results from the last run.
Guardium collects and maintains a list of tables with the date of last reference. The
list is built using policies in Guardium that dictate the interval of last reference and
the frequency to be used for updating the list content. The information captured by
Guardium is referred to as the “last reference” list and supplies the following
information: What tables are no longer referenced? What table access trends exist
for retirement candidates?
Having the ability to accurately plan for the retirement of applications will help to:
v Plan for hardware retirement or redeployment
v Reduce cost of ownership by moving or retiring those resources supporting the
applications (for example, hardware, DBA(s), Application owners, IT operations
such as backups).
v Know what tables are rarely or never accessed
This functionality of IBM Guardium has been added directly to the Optim
Designer user interface.
Field Comment
DataSourceDesc Description
Server IP
Host Name
User Name for example, for Oracle it mostly defines the schema
Database Name
Schema
Table
Last_referenced_datasource
datasource_desc varchar(100),
server_ip char(39),
host_name varchar(200),
db_vendor char(40),
);
Last_referenced_table
user_name char(32),
);
Requirement: You must use the Accelerator patch that comes with v9.0 GPU patch 50.
Example
1. A user downloads a v9.0 Accelerator, installs it on v9.0 Guardium system A,
and saves the Accelerator patch.
2. Guardium system A is upgraded to v9.0 GPU patch 50. (At this point, the
Accelerator on Guardium system A is fine.)
3. The user installs a different Guardium system B with v9.0 GPU patch 50, and
uses the Accelerator patch saved in step 1 to install the Accelerator on
Guardium system B. This will not work. The correct action in this step is to
install the Accelerator that comes with v9.0 GPU patch 50.
2. In the user role form, check PCI, and then save the assignment.
Logging on as "user1" displays the following PCI accelerator information:
Overview
1. Click PCI Data Security Standard to open the Introduction page.
2. Click PCI Accelerator for Compliance for a detailed introduction to the
PCI Accelerator.
Plan and Organize
Click Overview for an introduction to how the predefined reports in this
section support compliance.
Each tab has predefined reports:
1. Cardholder Server IPs List: the list of database servers that hold cardholder
information. Based on your company's actual situation, populate the "PCI
Authorized Server IPs" group, which specifies the database servers that
store cardholder information.
2. Cardholders Databases: cardholder information databases. Populate the
"PCI Cardholder DB" group, which identifies the databases that store
cardholder information.
Navigate to Setup > Tools and Views > Group Builder, and in the
Modify Existing Groups selection, select the group name.
3. Enter a name, and select to add a new data source.
6. In the Datasource Finder, select the data source to assess, and click Add.
7. After a success message is returned, choose the new assessment, and click
Configure tests...
9. Click Run Once Now to run the assessment immediately; you can then
view the results (depending on the tests selected, this may take a long
time).
Workflow Builder
The Workflow Builder is used to define customized workflows (steps, transitions
and actions) to be used in the Audit Process.
For additional information, see “Building audit processes” on page 195. Follow
these steps to:
v Define the workflow steps (Event Status)
v Define the transitions from one step to another (Actions)
Note: If the task type in Audit Process Builder is Classification Process, then
Workflow Builder cannot create customized workflows.
Warning Note: When a workflow event is created, every status used by that event
can be assigned a role (meaning that events can only be seen by this role when in
this status). When an event is assigned to an audit process, it is important that
every role that is assigned to a status of this event have a receiver on this audit
process. Otherwise, it is possible that an audit result row can be put into a status
where none of its receivers are able to see this row or change its status.
If an audit row becomes inaccessible, the admin user (who is able to see all events,
regardless of their roles) would be able to see the row and change its status.
However, if data level security is on, the admin user may not be able to see this
row. The admin user would need to either turn data level security off (from Global
Profile) or have the dataset_exempt role. It is important to configure the audit
process so that all roles who must act on an event associated with this audit
process are receivers of this audit process.
Note: Deletion of an event status is permitted only if the status is not the first or
final status of any event, and if it is not used by any action. The validation
provides a list of events/actions that prevent the status from being deleted.
When running an Audit Process report task, the results of this process task are
saved in the table, REPORT_RESULT_DATA_ROW. This table will have a row for
every row of the report. If this report task also has a default event assigned to it, a
row is added to the table, TASK_RESULT_ADDITIONAL_INFO, for every row of
the report. This may lead to a disk space issue only if default events are used for
large results. Create events only on task results with a limited number of records;
otherwise, users will not be able to manage the large number of records. If
default events are used in the intended limited manner, there will not be any
disk space or usability issues, since it is not easy to close thousands of events.
Prerequisites
v See How to create an Audit Workflow. For additional information, see
Compliance Workflow Automation.
v After creating this customized workflow, see How to combine Customized
Workflow with Audit Workflow.
4. Click on the Event Type button and then click on the Add button of Add
Event Type Definition to define a new Event Type.
5. Fill in the description and designate the first task in the workflow.
Prerequisites
v See How to create Customized Workflows. For additional information, see
Workflow Builder.
v See How to create an Audit Workflow. For additional information, see
Compliance Workflow Automation.
Procedure
1. Configure these workflow activities when Adding An Audit Task.
2. Create and save an Audit Task. After saving, an additional button, Events and
Additional Columns, will appear.
3. Click this additional button.
4. At the next screen, place a checkmark in the box for Event & Sign-off. The
workflow created in Workflow Builder will appear as a choice in Event &
Sign-off.
5. Highlight this choice. Save your selection.
6. If additional information (such as company codes, business unit labels, etc.) is
needed as part of the workflow report, add this information in the Additional
Column section of the screen and then click Apply (save). When done, close
this window.
7. Apply (save) your Audit Task. Apply (save) the entire Audit Process Definition.
8. Click Run Once Now to create the report. Click View to see the report.
Note:
If data level security at the observed data level has been enabled (see Global
Profile settings), audit process output is filtered so that users see only the
information from their own databases.
Under the Report choices within Add an Audit Task are two procedural
reports, Outstanding Events and Event Status Transition. Add these two reports
to two new audit tasks to show details of all workflow events and transitions.
These two reports will not be filtered (observed data level security filtering will
not be applied). These two reports are available by default in the list of reports
only to admin user and users with the admin role.
The distributed search functionality of Quick Search for Enterprise enables you to
query data across an entire Guardium environment, potentially from any
Guardium machine within that environment.
When enabling Quick Search for Enterprise on systems not meeting these
requirements, a limited version of data search will be enabled that only supports
local data queries.
Procedure
1. Log in to the machine as a user or administrator with the CLI role.
2. Use the following GuardAPI command to enable Quick Search for Enterprise
functionality:
Results
Once enabled, see “Using Quick Search for Enterprise” to learn more about using
data search queries.
Attention:
v Distributed search functionality opens ports 8983 and 9983 on both Central
managers and collectors. The ports are opened when distributed search is
enabled and closed when it is disabled. To use distributed search, ensure that
bidirectional communication between Central managers and collectors on ports
8983 and 9983 is not blocked by any firewall.
v Indexed search data is retained for 3 days. Use the purge object Guardium CLI
command to change the retention period. For example, the following command
changes the retention period to 5 days: store purge object age 39 5. Note that
39 is the default object identification number associated with the search index.
For additional information, see Configuration and Control CLI Commands
reference information.
Related tasks:
“Using Quick Search for Enterprise”
This topic describes how to use essential features of Quick Search for Enterprise.
Related reference:
Quick Search for Enterprise CLI Commands
Use these CLI commands to configure Quick Search for Enterprise.
GuardAPI Quick Search for Enterprise Functions
Use these GuardAPI commands to enable, disable, or configure Quick Search for
Enterprise features and parameters.
To use the features described in this topic, Quick Search for Enterprise must be
enabled.
Quick Search for Enterprise may be used in either local or distributed modes. In
local search mode, searches are limited to the data available under the local
machine (the machine from which the search is being run). For example, a local
search run from an individual collector returns results from datasources under that
collector but not from any datasources under other collectors in the environment.
In distributed search mode, searches return data from across the entire Guardium
environment and results are not limited by the specific machine from which the
search is run. A topology tool is provided to conveniently narrow search results to
specific segments of the overall Guardium environment.
Topology view
About this task
A topology view is provided to help visualize and refine the data sources included
in search results. Using the topology view, it is possible to narrow search results to
specific segments of the overall Guardium environment.
Procedure
1. To invoke the topology view, click the topology view icon in the
search window toolbar to open the topology browser.
2. Hover the mouse over an object in the topology view to display detailed
information about that object.
3. Click an object in the topology view to select that object and narrow the search
results to only that object and its children if any exist. Use control-click to select
multiple objects in the topology view.
4. After exploring objects in the topology view and selecting a desired scope,
close the topology view by clicking the close icon or clicking outside the
topology browser. The search results update automatically to reflect the
available data based on the scope selected in the topology view.
Procedure
1. To invoke and use the investigation dashboard, click the investigation
dashboard icon in the search window toolbar.
2. Explore your data by interacting with the charts in the following ways:
a. Hover over an individual cell to display the specific values associated with
that cell.
After editing the settings, click OK to save your changes and view the updated
chart on the dashboard.
Outliers Detection
Outlier detection extends traditional database monitoring with increased
intelligence that helps security analysts understand risk based on relative change in
behavior.
Overview
The process of outlier detection works in two phases: a learning phase and an
analysis phase.
During the learning phase, outlier detection operates on data that is transparently
extracted from the collected audit data. That is, the outlier detection algorithm uses
data that is being collected normally for security and compliance reasons. If data is
not being audited already by a security policy, it is not available for Guardium to
analyze. The model is trained over a period of time and requires 3-4 weeks of data
to build a solid model and learn the normal behaviors of the environment. No
outlier indicators will be generated until sufficient training has taken place.
Example
The Guardium user may investigate incidents identified as outliers by using the
Search tab in the user view, by using the Quick Search function in the admin user
view, or by reviewing the Outlier Analytic List report.
Procedure
1. Log in to the collector as a user or administrator with the CLI role.
2. Use the following GuardAPI command to enable the outliers detection
function.
grdapi enable_outliers_detection schedule_interval=1 schedule_units=HOUR
v A new data mart is defined to extract data from GDM tables into CSV files
(default path: /var/dump/ANALYTIC/input).
Results
Once enabled, the outliers detection module is available from the Search tab in the
user view and from the Quick Search function in the admin user view.
Allow one month of data collection for effective modeling of the normal patterns
of database activity.
Related concepts:
“Quick Search for Enterprise” on page 289
Quick Search for Enterprise provides immediate access to your data without
requiring detailed knowledge of Guardium topology, aggregation, or
load-balancing schemes.
Related information:
GuardAPI Input Generation
GuardAPI Input Generation allows the user to take the output of one Guardium
report and feed it as the input for another Guardium entity, allowing users to use
prepared calls to quickly invoke API functionality.
Interpreting outliers
Guardium provides a convenient graphical interface for identifying and responding
to outliers detected by the algorithm.
The summary chart includes red and yellow indicators that reflect the severity or
total outliers score for a time interval. Red indicators reflect highly anomalous
events requiring immediate attention. Yellow indicators represent less extreme
anomalies that warrant attention as part of other or related investigations. The
outlier score is a calculated aggregate value based on the volume of outliers, the
severity of individual outliers, the predicted volume of outliers for a given time of
day, and other factors.
For example, on a system that typically identifies 0 outliers at 1 AM and 5-10
outliers at 1 PM during weekdays, the presence of two additional outliers (2
outliers at 1 AM, or 12 outliers at 1 PM) is more significant, and weighted more
heavily, than the hourly total alone would suggest.
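The relative weighting described above can be illustrated with a small sketch. This is illustrative only; the actual Guardium scoring algorithm is not published, and the `significance` function and its formula are assumptions chosen to make the point:

```python
def significance(observed, predicted):
    """Illustrative only: weight the deviation from the predicted hourly
    volume, so that 2 outliers when 0 are expected scores far higher than
    12 outliers when 10 are expected (the same absolute excess)."""
    excess = observed - predicted
    if excess <= 0:
        return 0.0
    # Deviation relative to the expected volume; +1 avoids division by zero.
    return excess / (predicted + 1)

# 2 outliers at 1 AM (0 predicted) vs. 12 outliers at 1 PM (10 predicted)
print(significance(2, 0))    # 2.0 -- two unexpected outliers dominate
print(significance(12, 10))  # ~0.18 -- the same excess barely registers
```

The point of the sketch is that the score reflects relative change, not the raw hourly count.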
Placing the cursor over one of the outlier icons provides detailed information
about outliers detected during that time period. To view other activities or outliers
that occurred during the same time period, click “Show activities” or “Show
outliers.”
Outlier reasons are assigned in combinations when needed. For example, an outlier
may be flagged as both rare and high volume if a seldom-seen condition suddenly
occurs many times.
Related information:
Anomaly Detection
The Anomaly Detection process runs every polling interval to create and save, but
not send, correlation alert notifications that are based on an alert's query.
By default, there are two groups of users and objects that are weighted or scored
more heavily by Guardium's machine-learning algorithm: Admin Users and
Sensitive Objects. However, you may have already established additional groups
that would also be useful for outlier detection. For example, you may have a group
of Suspicious Users or you may have several different groups of sensitive objects
that are aligned with different applications.
Procedure
1. This task requires that you know the internal group ID to use with the grdapi
command. To get the group ID, you can use the following command: grdapi
list_group_by_desc desc=[group name]. For example, if you have a group
named BadGuys, you can enter the following command to get its internal
group ID:
grdapi list_group_by_desc desc="BadGuys"
2. Once you know the desired ID, you can add the group or object to outlier
detection using one of the following commands.
v To add a group with the ID 1234:
grdapi set_outliers_detection_parameter privUsersGroupIds=1234
v To add sensitive objects with the IDs 333 and 156:
grdapi set_outliers_detection_parameter sensitiveObjectGroupIds=333,156
Results
The specified groups or sensitive objects have been added to outlier detection
and will be given additional weight by the algorithm.
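Conceptually, the extra weight works along these lines. The group contents and multipliers below are hypothetical; Guardium's internal scoring is not exposed:

```python
# Hypothetical example: events touching weighted groups score higher.
PRIV_USERS = {"admin", "dba1"}          # e.g. a group added with privUsersGroupIds
SENSITIVE_OBJECTS = {"PAYROLL", "SSN"}  # e.g. groups added with sensitiveObjectGroupIds

def weighted_score(base_score, db_user, obj):
    score = base_score
    if db_user in PRIV_USERS:
        score *= 2.0   # privileged-user boost (illustrative factor)
    if obj in SENSITIVE_OBJECTS:
        score *= 1.5   # sensitive-object boost (illustrative factor)
    return score

print(weighted_score(1.0, "admin", "PAYROLL"))  # 3.0
print(weighted_score(1.0, "qa4872", "TEMP"))    # 1.0
```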
For example, to ignore all activity from server 10.70.144.159, database ON1PARTR,
and any database user beginning with GUARD:
1. Remove any unnecessary fields by clicking on the appropriate icons.
2. Enter the specific values for the server and database fields.
3. Use the wildcard character (*) to expand values for the DB user field.
4. Click OK to commit the changes.
To include previously ignored events, view the Analytic User Feedback report,
double-click the previously-ignored event, and select Invoke >
delete_analytic_user_feedback.
If you have many items for exclusion, use the Guardium Group Builder and
populate any or all of the following groups as needed:
v Analytic Exclude DB User
v Analytic Exclude OS User
v Analytic Exclude Server IP
v Analytic Exclude Service Name
v Analytic Exclude Source Program
The Group Builder has options for bulk uploading including the ability to populate
from a query on a custom table.
The default report is a tabular report that reflects the structure of the query, with
each attribute displayed in a separate column. All presentation components of a
tabular report (the column headings, for example) can be customized. All graphical
reports are defined using the Report Builder. In addition to the start and from date
(query to and query from) parameters, values can now be displayed between the
beginning of the page and start of the table in all reports.
Before using the Report Builder, create a query using the Query Builder. See
“Using the Query Builder” on page 316.
The fastest way to create and view a report is by using the steps to Create a
Report, then select the report from My Dashboard.
Move back and forth between menu screens using the Back and Next buttons. The
back arrow in the web browser does not work for navigation between Guardium
screens.
Table 30. Report Icons
v Refresh
v Add a report
v Add to favorites
v Delete
v Clone
To access a report definition, select the Reports lifecycle icon and then click Report
builder.
Search for a report by choosing Domain, Query or Report title. The results display
in the Report Search Results panel.
v To locate a specific report, select that report from the Report Title list. The
selected report displays immediately in the Report Search Results panel.
For the remaining types of search, click the Search button after making entries in
one or more fields, or just click the Search button to list all reports available for
your Guardium account.
v To list all reports that use a specific query, select that query from the Query list.
v To list all reports for a specific chart type, select it from the Chart Type list.
If the search locates any reports, they display in the Report Search Results panel.
Click any of the following buttons:
v New - See Create a Report.
v Clone - See Clone a Report.
v Modify - See Modify a Report.
v Roles - See Security Roles. Assign roles to reports in Report Builder. Assigning
roles to reports while in Query Builder (Tracking) assigns the role only to the
query, not the report.
v Delete - See Remove a Report.
v Comment - See Comments.
Create a Report
1. To access a report definition, select the Reports lifecycle icon and then click
Report builder.
2. Click New to open the Create Report panel.
3. From the Query list, select a query value to be used by the report (for example,
Guardium Logins)
4. Enter a unique name for the report in the Report Title field.
Note:
A refresh icon appears in all graphical reports next to the help icon.
Modify a Report
1. Find the report to be modified. Go to the Report Builder finder menu.
2. Click Modify to open the Report Columns panel.
3. Continue with Customize the Report Presentation.
Clone a Report
1. Find the report to be cloned. Go to the Report Builder finder menu.
Remove a Report
Be aware that you cannot remove predefined reports, and you cannot remove
reports that are used in Audit Processes.
1. Find the report to be removed.
Tabular reports are limited to 5,000 rows of output, but when included in a
workflow process, any number of rows can be exported from the report task to a
CSV or CEF file.
Limits
The limit for the buttons when viewing a report (generate PDF, generate CSV, and
printable) is 30,000 rows. This is non-customizable.
The limit for the Populate From Query in Group and Alias Builder when run via
Run Once Now is 5,000 rows. This is non-customizable.
The limit for the Populate From Query in Group and Alias Builder when run via
Scheduling is 20,000 rows. This limit is customizable, via the CLI command,
show/store populate_from_query_maxrecs.
API Assignment
By default, the Guardium application comes with setup data that links many of the
API functions to reports; providing users, through the GUI, with prepared calls to
APIs from reporting data. Use API Assignment to link additional API functions to
predefined Guardium reports or custom reports.
For more information on using linked API functions, see the documentation on
GuardAPI Input Generation.
1. Locate the report. Go to the Report Builder finder menu.
2. Click API Assignment to open the API Assignment panel; showing the current
API functions that are mapped to the selected report.
3. Click an API function to display a pop-up window of the current API to Report
Parameter Mappings; showing the API parameters, if the API parameters are
required, any default values, and if any of the report fields are currently
mapped to those parameters.
If there are no fields in the report that are linked to API parameters, it might be
irrelevant to link an API function to a report. The mapping of API parameters
to report fields can be accomplished through both the GUI and the Guardium
CLI. For additional information on mapping API parameters to report fields,
see Mapping GuardAPI Parameters to Domain Entities and Attributes in the
GuardAPI Input Generation section.
4. Click the greater-than sign '>' to add the selected API function to the current
list of functions that are assigned to this report.
5. Click Apply to save the changes.
Report parameters
You can use parameters to control the contents and presentation of a report.
Creating dashboards
You can create one or more dashboards, add reports to them, and configure their
appearance.
Results
You have a dashboard that gives you easy access to some selected reports.
What to do next
Review the appearance of your dashboard. Is it easy to use, and to find the
information that you want? If not, you can configure it further.
Think about how you use your reports. What arrangement makes it easy to
achieve your goals? Experiment with these changes.
Procedure
1. Rearrange the reports. To move a report, place your cursor on the report’s title
bar, and drag it to a new location.
2. Choose a new number of columns by clicking 1, 2, or 3 in the Number of
columns area. By default, your reports are shown in two columns. If you need
more space for each report, click 1 to see how your reports look when they are
the full width of the dashboard. If you prefer to see more reports at one time,
try three columns.
3. Resize your reports. Drag the resize icon to make a report longer or shorter,
narrower or wider. If you adjust the width of a report, all the reports in that
column use the new width. If you change the number of columns, all columns
return to their default widths.
Procedure
1. Click on the Dashboard icon from the navigation.
2. Then click Create New Dashboard.
3. Click Add Report to select a report from all of the reports that you have
access to, including any new reports that you created.
4. Leverage filtering to quickly find the report you are interested in.
5. Click the report name to add it to your dashboard. Add as many reports to
your dashboard as you want, just by selecting each report.
Viewing a report
There are several ways to view a report, including your dashboard and UI search.
The following choices (with icons) permit editing and configuring of the report:
v Edit the query for this report
v Ad-hoc process for Run Once Now - Use this to invoke a call to GuardAPI
commands.
v Add to favorites
v Refresh
You can hide columns from view. Click the columns icon and clear the check boxes
for the columns that you want to hide.
You can sort report data by the contents of any column. Click the title of the
column on which you want to sort. To reverse the order, click the title again.
Sorting is always performed on the actual data values, ignoring any aliases that are
defined.
Note: If the PDF text is too small to read: the PDF report has a physical limit on
how far it can expand horizontally, given the page width. Because each line of the
report must fit on one line of the page, the typeface size shrinks to fit the data,
and a wide report may be forced into a very small typeface in order to display all
the data.
Graphical reports can be customized by clicking the Customize Chart icon. The
choices include converting the data to a line chart, changing the X-axis and Y-axis
orientation, converting the report to a pie chart or a stacked column chart.
When viewing reports that display Oracle information, a question mark (?)
occasionally indicates that the login information was not available, and the value
-1 indicates that an unknown number of records was affected. All Oracle sessions
are recorded, even those with missed logins.
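A consumer of these report values can decode the sentinels along these lines. The "?" and -1 markers are as documented above; the helper functions themselves are hypothetical:

```python
def describe_login(value):
    # "?" means the login information was not available to Guardium.
    return "login information unavailable" if value == "?" else value

def describe_records_affected(n):
    # -1 means an unknown number of records were affected.
    return "unknown" if n == -1 else str(n)

print(describe_login("?"))            # login information unavailable
print(describe_records_affected(-1))  # unknown
print(describe_records_affected(42))  # 42
```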
Refreshing reports
Some reports are configured to refresh their data automatically. On other reports,
you can refresh the data manually through the UI.
When you view a report that is configured to refresh automatically, the color of the
Circular Arrows Refresh icon for this report is green, indicating that the report is
refreshing itself automatically.
At a certain point, the report stops refreshing if no further changes are made to the
report and the color of the refresh icon turns from green to red. The point in time
where the color changes is equal to half of the GUI session timeout (which can be
found by running the CLI command, show session timeout).
For example, if the session timeout is the default 900 seconds, the Circular Arrows
Refresh icon on the Request Rate report is green for 450 seconds, then turns
red.
UI Customization - In "New Life Cycle" dialog and "New Group" dialog, groups
are limited to a maximum of 5 levels deep, so even with longer group names, all
levels of group names and node item text are visible on the navigation pane.
UI Customization - When user enters "<" or ">" in the textbox of "New Life Cycle"
dialog or "New Group" dialog, a popup message is displayed to indicate that "The
name cannot contain < or > special characters", and the "OK" button becomes
disabled.
UI Customization - In "New Life Cycle" dialog and "New Group" dialog, user can
enter a maximum of 50 characters in the text box.
Exporting a report
You can export a report to a PDF file or a file of comma-separated values.
You can export the contents of a report to a Portable Document Format (PDF) file,
and save the file or view it. In the report toolbar, click Export > Download as PDF
to create a PDF copy. Follow the prompt to save or view the file.
When you generate a large PDF file, the process can cause the UI to time out. If
you plan to generate large PDF files, consider doing so as part of an audit process,
or increasing the UI timeout value to avoid this problem.
You can also export the contents of a report to a comma-separated value (csv) file.
You can export either all the records (the entire report) in the report, or only the
display records (the data currently displayed).
In the report toolbar, click Export > Download all records or Export > Download
display records. You can save the results or select an application in which to view
them.
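The difference between the two CSV export options can be sketched as follows. The row data is invented for illustration, and Guardium performs the actual export server-side; the sketch only shows the all-records versus displayed-records distinction:

```python
import csv
import io

rows = [["user%d" % i, i] for i in range(100)]  # all records in the report
page = rows[:20]                                # records currently displayed

def to_csv(records):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["DB User", "Count"])  # header row
    writer.writerows(records)
    return buf.getvalue()

all_csv = to_csv(rows)      # "Download all records"
display_csv = to_csv(page)  # "Download display records"
print(len(all_csv.splitlines()), len(display_csv.splitlines()))  # 101 21
```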
Note: If editing a report and removing a column (for example, editing a report
with seven columns and removing one column, leaving six columns), when the
report is exported as a PDF file, the report will show the original seven columns.
If any drill-down actions are available on a tabular report, right-clicking a row of
the grid opens a context menu listing the available drill-down actions.
Creating a report
If the predefined reports do not meet your needs, you can create your own.
You choose a query on which this report is based, and the domain of the query. If
you must create a new query, do that before you create a report based on it.
Remember that there is distinction between queries and reports. A query describes
a set of information to be obtained from the collected data. A report describes how
the data returned by the query is presented. Refer to “Using the Query Builder” on
page 316 for further information on creating a query. Refer to “Domains, Entities,
and Attributes” on page 323 for further information on working with domains.
You might find it easier to clone a report and modify it than to create a report
from scratch.
Procedure
1. Click Reports > Report Configuration Tools > Report Builder to open the
Report Builder finder or filter menu. If you select Search at this point without
choosing any domain or query, a menu will appear with all queries listed.
Select a query and use the icons (Add New Report, Modify, Clone, or Delete)
to work with the queries.
2. From the Report Builder finder menu, click New .
3. The Create Report menu appears. Select a query and give the report a name.
Then click Next.
4. The next screen returns the table columns of the query selected. Customize or
use as is. Then click Next.
5. The Report Attributes menu appears. Choose a report type, either tabular or
chart. Then click Next.
6. Then submit the report for creation by clicking Save. An acknowledgement
screen will appear saying the data was successfully saved.
Data Mart
A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and
organizes the data in a generic fashion that can be used later for analysis and
reports. A Data Mart begins with user-defined data analysis and emphasizes
meeting the specific demands of the user in terms of content, presentation, and
ease-of-use.
A Data Mart is practical and efficient for all the Guardium predefined-reports. It
prepares the data in advance to avoid overload, full scans, and poor performance.
The Data Mart Configuration icon is available from any Predefined Report.
Highlights of benefits:
v Provide Guardium Analytic capability that supports full lifecycle of data
analysis.
v The analytic process starts from the Query Builder and Pivot Table Builder
where the users define their data analysis needs and then “Set As Data Mart”.
v The Data Mart extraction program runs in a batch according to the specified
schedule. It summarizes the data to hours, days, weeks, or months according to
the granularity requested and then it saves the results in a new table in
Guardium Analytic database.
v The data is then accessible to the users via the standard Reports and Audit
Process utilities, like any other traditional Domain/Entity. The Data Mart
extraction data is available under the DM domain and the Entity name is set
according to the new table name specified for the data mart data. Using the
standard Query Builder and Report Builder, users can clone the default query
and edit the Query and report, generate Portlet and add to a Pane.
v The summarization of data shrinks the data volume significantly. It eliminates
joins of many tables by storing the data analysis in an un-normalized,
pre-calculated table.
v The corporate view is supported by using the standard Aggregation utility for
the new Guardium Analytic tables. If there is a huge amount of detailed row
data at the higher levels of the Aggregation Hierarchy, the Selective Aggregation
feature, that enables aggregation of specific module(s), can be configured to
aggregate analytic data only.
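The summarization step in the bullets above can be pictured as a simple group-by on a truncated timestamp. This is a conceptual sketch only, not the Data Mart extraction program, and the event timestamps are invented:

```python
from collections import Counter
from datetime import datetime

# Invented audit-event timestamps for illustration.
events = [
    datetime(2024, 5, 1, 13, 5),
    datetime(2024, 5, 1, 13, 42),
    datetime(2024, 5, 1, 14, 1),
]

def summarize_hourly(timestamps):
    # Truncate each timestamp to the hour and count occurrences,
    # mimicking an un-normalized, pre-calculated summary table.
    return Counter(ts.replace(minute=0, second=0, microsecond=0)
                   for ts in timestamps)

summary = summarize_hourly(events)
print(summary[datetime(2024, 5, 1, 13, 0)])  # 2
```

The same idea extends to day, week, or month granularity by truncating further; the volume reduction comes from storing only the pre-aggregated buckets.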
The Data Mart builder is accessible via Query builder, Report Results, and
Pivot-Table view.
Access to the screen is enabled for users with Data Mart Building permission
(User Role Permission). The Set As Data Mart button is displayed only for users
with the appropriate permission.
Data Mart persistency - changes to the original Query, Report, or Pivot Table do
not affect the Data Mart; a snapshot of the original analysis definition is saved
together with the Data Mart upon creation.
If the Data Mart is based on Pivot Table, then the extraction process does not
calculate the Total line (sum of columns) and Percent Of Column is not supported.
In addition to the Data Mart definition, the following are created by the Data Mart
Definition process:
v New Domain and Entity
v Default Query
v Default Report and portlet
v New Data Mart table in the “DATAMART” new database to store the extracted
data
Data Mart – Query and Report Builder
The Data Mart definition process creates new Domain, Entity, default
Query and Report. The default Query and Report is accessible via the
Report Building menu.
Clicking Data Mart opens the Query Finder GUI; the Query, Report, and
Entity fields filter only Data Mart domains (domain name starts with -
DatamartDefinition.DOMAIN_PREFIX).
Report Builder GUI: The default Data Marts' reports and all other reports
that are related to Data Marts domains are available in the Report Builder
GUI.
Follow these steps:
1. As an Admin user, select the Data Mart icon.
2. Select New to create a new Data Mart or select from the list of
previously created Data Marts.
3. Complete the fields asking for Data Mart name and Table name
(Default is DM). Specify a time granularity and select an initial start
time from the calendar icon. Description is optional.
4. Use the Scheduler to schedule when to run this feature (Run Once
Now).
5. Use the Roles section to restrict Data Mart only to users with the
appropriate permission.
6. Save the configuration.
Default = Collector
GuardAPI commands
Use the following GuardAPI commands to make the Data Mart function
active and inactive.
grdapi datamart_set_active <Name>
grdapi datamart_set_inactive <Name>
All domains and their contents are described in the Domains, Entities, and
Attributes appendix.
There is a separate query builder for each domain, and access to each query
builder is controlled by security roles. Regardless of the domain, the same
general-purpose query-builder tool is used to create all queries. For detailed
instructions on how to build queries, see Queries.
In addition to the standard set of domains, users can define custom domains to
contain information that can be uploaded to the Guardium appliance. For example,
your company might have a table relating generic database user names (hr23455 or
qa4872, for example) to real persons (Paula Smith, John Doe). Once that table has
been uploaded, the real names can be displayed on Guardium reports, from the
custom domain. For more detailed information on how to define and use custom
domains, see External Data Correlation.
Queries
Use one of the many predefined queries that come with Guardium to get
information about your data. Use the Query Builder to work with queries.
Queries are different from reports. A query describes a set of data, whereas a
report describes how the data returned by a query is presented.
Once a query is completed, present the results of the query using reports. Reports
usually are presented in tabular form, but you can customize the layout of a report
as you like.
To use queries, open the Query Builder by clicking Comply > Custom Reporting >
Custom Query Builder. Choose a domain to query, select a main entity, and then
use the query as needed.
You cannot modify the predefined queries, but you can create a clone of a query
and modify the clone.
The main entity that you select for a query determines the following:
v The level of detail for the report. There is one row of data for each occurrence of
the main entity included in the report. The location of the main entity within the
hierarchy of entities is important in terms of what values can be displayed. The
attributes for any entities under the main entity can be counted, but not
displayed (since there might be many occurrences for each row). To choose this
level of detail, check the Sort by Count check box.
v The total count is a count of instances of the main entity included on that row
of the report, added as the last column of the report. To add or drop the count
column of the report, click the Add Count check box. This can improve
query/report performance in some cases.
v To add or drop the ability to display one row per value in the report (which can
improve query/report performance in some cases), click the Add Distinct check
box. This selection yields condensed reports.
v The time fields against which the Period From and Period To runtime
parameters are compared to select the rows of the report. The Query Builder
uses the main entity (among other parameters) to determine which time fields
are used when defining the Period From and Period To values. This can be
important for long-running sessions, such as when pooled sessions are kept
open by an application server. When applicable, the Period Start/Period End
from the Access Period entity is used, in other cases it will choose period values
according to the main entity:
– Session - the time stamp used is for the last update that is made to the
session entity
– Session Start - the starting time of the session entity is used
– Session End - the ending time of the session entity is used
– Full SQL - time stamp from Full SQL domain; query includes rows from the
Full SQL domain even if not linked to values (for example - when Log Full
Details is set, there are no values)
– Full SQL Values - time stamp from the Full SQL domain; query includes
rows only if they have values from the Full SQL domain even if not linked to
the Field domain
Note: Fields containing tuples (combined fields) are not supported in the Two
Stages execution in this release.
Note: The Main Entity drop-down list includes only primary entities.
However, access to secondary entities (for example Session Start and Session End)
can be done through its corresponding primary entity (for example, Session for
Session Start and Session End).
Sorting
By default, query data is sorted in ascending order by attribute value, with the sort
keys ordered as the attributes appear in the query. Aliases are ignored for sorting
purposes. The actual data values are always used for sorting. Attributes for which
values are computed by the query (Count, Min, Max, or Avg) cannot be sorted.
The last column of a tabular report is a count of main entity occurrences. To sort
on this count in descending sequence (in other words, listing the greatest number
of occurrences first), mark the Sorted by occurrences check box.
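Both sorting rules, ascending by the actual attribute value with aliases ignored, and descending by the occurrence count, can be sketched like this (the rows and aliases are invented for illustration):

```python
# (db_user, occurrence count) rows, plus aliases mapping users to real names.
rows = [("hr23455", 3), ("qa4872", 10), ("ab1234", 10)]
aliases = {"hr23455": "Paula Smith", "qa4872": "John Doe"}

# Default: ascending by the actual attribute value; aliases are ignored,
# so "John Doe" does not sort before "Paula Smith".
by_value = sorted(rows, key=lambda r: r[0])

# "Sorted by occurrences": descending by the trailing count column.
by_count = sorted(rows, key=lambda r: r[1], reverse=True)

print([r[0] for r in by_value])  # ['ab1234', 'hr23455', 'qa4872']
print([r[1] for r in by_count])  # [10, 10, 3]
```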
Timestamps
Creating a Query
1. Open the Query Builder for the appropriate domain.
2. Click New to open the New Query – Overall Details panel.
3. Type a unique query name in the Query Name box. Do not include apostrophe
characters in the query name.
4. Select the main entity for the query from the Main Entity list. Remember that
the main entity controls the level of detail that is available for the query, and
that it cannot be changed. Basically, each row of data returned by the query
will represent a unique instance of the main entity, and a count of occurrences
for that instance.
5. Click Next. The new query opens in the Query Builder panel. To complete the
definition, see one of the following topics:
v Query Builder Overview
v Modify a Query
Modifying a Query
You cannot modify the Guardium predefined queries, but you can clone a query
and modify the clone as needed.
1. Choose a domain and main entity to open the Query Builder for the query you
want to modify.
2. Click Clone, enter a new name for the query (apostrophes are not allowed),
and click Save.
3. Refer to the Query Builder Overview topic to modify any component of the
query definition.
The Query Fields pane lists the columns of data to be returned by the query.
The Field Mode menus indicate what to print for the field: its Value, Count
(number of distinct values), Min, Max, Average (AVG) or Sum for the row. The
Value selection is not available for attributes from entities greater than the main
entity in the entity hierarchy for the domain.
There are two ways to add a field to the Query Fields pane:
v Pop-Up Menu Method:
1. From the Entity List, click on the field to be added.
2. Select Add Field from the pop-up menu.
v Drag-and-Drop Method:
1. From the Entity List, click on the icon of the field name (not on the field
name itself), drag the icon to the Query Fields pane and release it.
To move a field up or down in the Query Fields pane, check the field's check box
and click the Up or Down icons to move the field up or down one row.
Beware of using the Full SQL attribute in a query. It may produce excessively large
reports, because each distinct value of the attribute (the complete SQL query string
in this case) will be returned in a separate row.
On the other hand, the report may contain no information at all, or many blank
columns where you are expecting Full SQL strings. Guardium captures Full SQL
only when directed to do so by policy rules - and the rules may not have been
triggered during the reporting period.
Do not confuse the Full SQL attribute with the ability to drill down to the SQL for
most queries in the Data Access domain having anything to do with SQL requests.
Query Conditions
Use the AND, OR and HAVING operators with parentheses to create query
conditions.
The AND, OR and HAVING operators are located in the Query Conditions title
bar in the Query Builder.
Note:
All conditions are independent. Group conditions together by adding left and right
parentheses around the conditions. Use brackets in complicated query conditions.
Add an AND operator or an OR operator to the end or middle of the condition list
using the add-condition menu or drag-drop the attribute's icon. Select and remove
conditions by clicking Delete. Save the query. If the generated SQL query is
invalid, the query will not save, and an error message results.
When a condition is selected, pressing the left parenthesis button adds one left
parenthesis condition before the first selected condition. Pressing the right
parenthesis button will add one right parenthesis condition after the first selected
condition. If there is no condition that is selected, pressing the parentheses buttons
has no effect.
When creating a query condition that uses parentheses, the parentheses appear in
the UI BEFORE the operator, but are applied AFTER the operator. For example, a
query condition is displayed as, this (AND that OR another). However, the actual
logic is, this AND (that OR another).
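The display-versus-logic behavior of parentheses can be made concrete with a sketch. The rendering rule is as documented above; the condition names and the evaluation helper are invented for illustration:

```python
# Displayed in the UI: the parenthesis appears BEFORE the operator...
displayed = "this (AND that OR another)"
# ...but it is applied AFTER the operator in the generated SQL.
effective = "this AND (that OR another)"

def truth(this, that, another):
    # Evaluate the effective logic: this AND (that OR another)
    return this and (that or another)

print(truth(True, False, True))   # True  -- 'another' satisfies the group
print(truth(True, False, False))  # False -- the parenthesized group fails
```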
There are two parts in the condition display panel: one starts with a WHERE
condition and another one starts with a HAVING condition.
In the HAVING part, the aggregate field has options: Count, Min, Max, and AVG.
The option SUM also applies to certain entities with ID in name (Session ID,
Global ID, Full SQL ID, Instance ID). If the HAVING button is not checked, the
condition is inserted into the WHERE part with the aggregate field as empty
string. If the HAVING button is checked, the condition is inserted into the
HAVING part and the aggregate field has options. After adding or removing a
condition, the condition options are updated. Pressing Save generates the SQL,
which is validated before saving. If validation fails (for example, a syntax error),
an alert error message is generated and a more detailed error description is put in
the log. If a condition is added to the wrong part (for example, the HAVING
button is set but the attribute icon is dropped on the WHERE part, or vice versa),
a not-matched alert message is generated. If the selected condition is in the
WHERE part but the HAVING button is set, adding the condition fails because
the setting is not matched.
The attributes Total Access, Failed SQLs, and Successful SQLs can be added only
under a HAVING clause (not the WHERE clause).
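The split between the two parts mirrors standard SQL: plain attribute conditions belong in WHERE, and aggregate conditions (Count, Min, Max, AVG) belong in HAVING. A conceptual sketch of how such a statement is assembled; the table and column names are invented, and this is not the Query Builder's actual code:

```python
def build_sql(where, having):
    # Assemble a grouped statement from the two condition lists, the way
    # the WHERE and HAVING panels divide conditions in the Query Builder.
    sql = "SELECT db_user, COUNT(*) FROM access_log"
    if where:
        sql += " WHERE " + " AND ".join(where)
    sql += " GROUP BY db_user"
    if having:
        sql += " HAVING " + " AND ".join(having)
    return sql

stmt = build_sql(["server_ip = '10.70.144.159'"], ["COUNT(*) > 100"])
print(stmt)
```

An aggregate such as COUNT(*) cannot appear in the WHERE clause in SQL, which is why attributes like Total Access can be added only under HAVING.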
Allowed queries must have one time stamp column and either at least one column
with Mode=Count OR the count flag set (or both). The query column to be
Note: There are four special words that are not allowed as the name of a
parameter: user, group, role, and page.
An error results if an attempt is made to save a query with any of these words
as a parameter name. This applies to two types of conditions:
v When creating a query condition with an operator such as =, <, or LIKE,
and then selecting Parameter: this field does not allow the special words.
v When creating a query condition with a DYNAMIC GROUP type operator
(IN, NOT IN, IN ALIAS, and so forth): this field does not allow the special words.
5. For a group operator, select a group from the list.
For most other operators, you must supply a value for the condition, or
indicate that a runtime parameter value (which must not contain exclamation
points) will be supplied later, when the query is run. In these cases, a
drop-down with three options appears. Do one of the following:
v Select Value and enter an exact value in the box.
v Select Parameter and enter a name for the runtime parameter (the name
must not contain spaces).
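Taken together, the rules above (no reserved words as parameter names, no spaces in names, no exclamation points in runtime values) can be sketched as simple validation helpers; this is an illustration of the stated rules, not Guardium's actual implementation:

```python
# Reserved words that may not be used as parameter names.
RESERVED = {"user", "group", "role", "page"}

def validate_parameter_name(name: str) -> None:
    # Reserved words and spaces are disallowed in parameter names.
    if name.lower() in RESERVED:
        raise ValueError(f"{name!r} is a reserved word")
    if " " in name:
        raise ValueError("parameter names must not contain spaces")

def validate_parameter_value(value: str) -> None:
    # Runtime parameter values must not contain exclamation points.
    if "!" in value:
        raise ValueError("values must not contain exclamation points")

validate_parameter_name("client_ip")  # accepted
```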
There is an Add Expression icon next to the Value, Parameter, Attribute selections.
Use this icon to enter query conditions, including user-defined string and
mathematical expressions.
Use this feature where the user needs to add a condition that is based not on the
entire content of the attribute as is, but on part of the attribute, a function of the
attribute, or a function that combines more than one attribute.
An example: return the location of the string 150.1 within the value 192.150.1.x,
where the string 150.1 begins at the fifth character of the value. Here, 150.1
represents all instances of Client IP matching the five characters listed.
When the function in the Expression field is run, it returns a value, and that
value is compared against the contents of the entry box.
Use the function INSTR(:attribute, '150.1') with a value of 5 in the entry box
next to the Add Expression icon to return the records with 150.1 at the fifth
position.
Another example: LENGTH(:attribute) >= 40, which matches any SQL statement
that is 40 or more characters long. The expression might or might not contain
references to the actual attribute and can also contain references to other attributes.
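SQLite provides INSTR and LENGTH with the same semantics as the expressions above, so both examples can be checked directly; the literal values come from the text:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# INSTR returns the 1-based position of the substring: '150.1'
# starts at the fifth character of '192.150.1.x'.
(pos,) = con.execute("SELECT INSTR('192.150.1.x', '150.1')").fetchone()
print(pos)  # 5

# LENGTH(:stmt) >= 40 matches statements of 40 or more characters;
# this 15-character statement does not match.
(matches,) = con.execute(
    "SELECT LENGTH(:stmt) >= 40", {"stmt": "SELECT * FROM t"}
).fetchone()
print(matches)  # 0
```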
Each domain contains a set of data related to a specific purpose or function (data
access, exceptions, policy violations, and so forth). For a description of all domains,
see Domains.
Each domain contains one or more entities. An entity is a set of related attributes,
and an attribute is basically a field value. For a description of all entities and
attributes, see Entities and Attributes.
A Guardium query returns data from one domain only. When the query is defined,
one entity within that domain is designated as the main entity of the query. Each
row of data returned by a query will contain a count of occurrences of the main
entity matching the values returned for the selected attributes, for the requested
time period. This allows for the creation of two-dimensional reports from entities
that do not have a one-to-one relationship.
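In SQL terms, designating a main entity amounts to grouping by the selected attributes and counting occurrences of the main entity in each group, which is what lets entities without a one-to-one relationship share a report row. A rough sketch with invented table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE session (server_ip TEXT, db_user TEXT);
INSERT INTO session VALUES
  ('10.0.0.1', 'alice'), ('10.0.0.1', 'alice'), ('10.0.0.2', 'bob');
""")

# One row per attribute combination, with a count of the main entity
# (here, Session) for each combination.
rows = con.execute("""
SELECT server_ip, db_user, COUNT(*) AS occurrences
FROM session
GROUP BY server_ip, db_user
ORDER BY server_ip
""").fetchall()
print(rows)  # [('10.0.0.1', 'alice', 2), ('10.0.0.2', 'bob', 1)]
```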
There is a separate query builder for each domain, and access to each query
builder is controlled by security roles. Thus each Guardium role typically has
access to a subset of domains, depending on the function of that role within the
company. Guardium admin role users typically have access to all reporting
domains.
Some of the attributes described in this appendix are available to users with the
admin role only. These are labeled: Reserved for admin role use only.
For users who do not have the admin role, these attributes will not be available
from the query builder.
Similarly, not all attributes are available for all database protocols. When using a
query builder, if you notice that an entity or attribute described in the
documentation is not listed in the Entities pane, that entity or attribute is not
available for the selected database type.
Domains
The following table describes the query builders and associated domains that are
provided with your Guardium system. Your company may have defined additional
custom domains.
Access to the query builder for each domain is controlled by security roles, so each
user role typically has access to a separate set of domains. Some domains are
available only when optional components are installed (CAS, for example).
On the default admin portal, all query builders can be opened from the menu of
the Tools > Report Building tab. On the default user portal, many query builders
can be opened from the Custom Reporting application: Monitor/Audit > Build
Reports.
Following a short description of the domain, the Description column lists the
default security role assigned for each domain, and indicates how to access the
domain from the default user portal (if available).

Roles: all. User portal: Monitor/Audit > Build Reports > Track data access

Aggregation/Archive (AGGREGATION/EXPORT/IMPORT): Aggregation and archiving
activity, including the date, time, and status of each operation (archive, send,
purge, etc.). Roles: admin. User portal: Not available

Alert (ALERT): All alerts generated and sent by Guardium. Roles: all.
User portal: Monitor/Audit > Build Reports > Track sent alerts

Application (Application Data): Connection, session, and application data
recorded for special non-Guardium applications (Siebel and SAP, for example).
Roles: admin. User portal: Not available

Audit Process (AUDIT TRAIL): The execution of audit processes and the
distribution of results. Roles: all. User portal: Monitor/Audit > Build Reports >
Audit Process builder

Auto-discovery (AUTODETECT DB DISCOVERY): Database auto-discovery activity,
including all processes that have been run, and the hosts and ports discovered.
Roles: all. User portal: Discover > DB Discovery > Auto-discovery Query Builder

CAS Changes (CAS Changes): All changes detected by CAS, including any changed
data recorded. Roles: cas. User portal: Not available

CAS Config (CAS Config): CAS instance configurations, describing the use of
templates on specific hosts. Roles: cas. User portal: Not available

CAS Host History: History of CAS changes applied to CAS agent hosts.

(COMMENT): Roles: all. User portal: Monitor/Audit > Build Reports > Comment
builder

Custom Domain Builder: Custom domains have been defined for uploading commonly
used tables and products (see Custom Table). A custom domain contains one or
more custom tables; if it contains multiple tables, you define the relationships
between the tables when defining the custom domain.

Custom Query Builder: User-defined domains can define any tables of data
uploaded to the Guardium appliance. Roles: all. User portal: Monitor/Audit >
Build Reports > Custom query builder

Custom Table Builder: A custom table contains one or more attributes that you
want to have available on the Guardium appliance. For example, you may have an
existing database table relating encoded user names to real names. In the
network traffic, only the encoded names will be seen. By defining a custom table
on the Guardium appliance, and uploading data for that table from the existing
table, you will be able to relate the encoded and real names.

DB Default Users Enabled: Non-credential scan - a process to scan a list of
databases and check whether default users are enabled. The default users, as
well as the list of servers to scan, are provided as parameters to the API. A
default group is provided for each database type with the default users and
passwords created by the database on every installation; customers can add to
or remove from that list. The groups are of type DB User/DB Password and the
names of the default groups are:

(Flat Log): Roles: none. User portal: Monitor/Audit > Build Reports > Flat Log
builder

(Group): Roles: all. User portal: Monitor/Audit > Build Reports > Group builder

Guardium Activity (USER ACTIVITY AUDIT): All modifications performed by
Guardium users to any Guardium entity, such as a report or query definition or
modification. Roles: admin. User portal: Not available

Guardium Login: All Guardium user login and logout information.

(Access Rules Violations): Roles: all. User portal: Monitor/Audit > Build
Reports > Policy violations summary builder

Replay Results: Replays the data stream from one datasource by another,
different datasource. Roles: none

Roles: all. User portal: Monitor/Audit > Build Reports > Rogue connections
builder

Security Assessment Result (Assessment Test Result Monitor): Records the
results of vulnerability assessment processes. Roles: none. User portal: Not
available

Sniffer Buffer Usage: Inspection engine statistics.
Custom Domains
Custom domains allow for user defined domains and can define any tables of data
uploaded to the appliance.
These custom entitlement (privileges) domains are used for entitlement reports,
which are available when logged in as a user. To see these reports, go to the
user tab DB Entitlements.
[Custom] Access
This domain contains all of the same entities as the standard Data Access domain.
It is provided as a custom domain to allow additional user-defined domains to be
built including information from this domain and any custom tables that have
been uploaded by the user. The [Custom] Access domain is meant to be cloned.
Because this domain is updated with each version, it is not advisable to create
reports directly on it. For a description of the entities included in the Access
domain, see the Access domain description in the Domains topic.
S-TAP info is a predefined custom domain which contains the S-TAP Info entity
and is not modifiable.
Based on this custom table and custom domain, there are two reports:
Detailed Enterprise S-TAP view shows, from the Central Manager, information on
all active and passive S-TAPs on all collectors and/or managed units.
If the Enterprise S-TAP view and the Detailed Enterprise S-TAP view look the
same, it is because only one S-TAP on one managed unit is being displayed. The
Detailed Enterprise S-TAP view looks different when more S-TAPs and more
managed units are involved.
These two reports can be chosen from the TAP Monitor tab of a standalone system,
but they will display no information.
DB Entitlement Domains
Along with authenticating users and restricting role-based access privileges to data,
even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have
the privileges required to perform their duties. This is also known as database user
rights attestation reporting.
DB Entitlement Reports use the Custom Domain feature to create links between the
external data on the selected database with the internal data of the predefined
entitlement reports. See Database Entitlements Reports for further information on
how to use predefined database entitlement reports. To see entitlement reports, log
on the user portal, and go to the DB Entitlements tab.
The predefined entitlement reports are listed as follows. They appear as domain
names in the Custom Domain Builder/Custom Domain Query/ Custom Table
Builder selections.
v Oracle DB Entitlements
v MYSQL DB Entitlements
v DB2 DB Entitlements
v SYBASE DB Entitlements
v Informix DB Entitlements
v MSSQL 2000 DB Entitlements
v MSSQL 2005/2008 DB Entitlements
v Netezza DB Entitlements
v Teradata DB Entitlements
v PostgreSQL DB Entitlements
Oracle
v ORA Accnts of ALTER SYSTEM - Accounts with ALTER SYSTEM and ALTER
SESSION privileges
v ORA Accnts with BECOME USER - Accounts with BECOME USER privileges
v ORA All Sys Priv and admin opt - Report showing all system privilege and
admin option for users and roles
v ORA Obj And Columns Priv - Object and columns privileges granted (with or
without grant option)
v ORA Object Access By PUBLIC - Object access by PUBLIC
v ORA Object privileges - Object privileges by database account not in the SYS
and not a DBA role
v ORA PUBLIC Exec Priv On SYS Proc - Execute privilege on SYS PL/SQL
procedures assigned to PUBLIC
v ORA Roles Granted - Roles granted to users and roles
v ORA Sys Priv Granted - Hierarchical report showing system privileges granted
to users, including recursive definitions (i.e., privileges assigned to roles,
and those roles assigned to users)
v ORA SYSDBA and SYSOPER Accnts - Accounts with SYSDBA and SYSOPER
privileges
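The "hierarchical" report idea (privileges assigned to roles, and roles to other roles or users) is essentially a recursive role expansion. Below is a sketch of that expansion using mock tables in SQLite; the real report reads Oracle dictionary views, not these invented stand-ins:

```python
import sqlite3

# Mock illustration of recursive role expansion: expand roles granted
# to roles, then join to system privileges. The tables are stand-ins,
# not the real Oracle views (DBA_ROLE_PRIVS, DBA_SYS_PRIVS).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE role_grants (grantee TEXT, granted_role TEXT);
INSERT INTO role_grants VALUES
  ('ALICE', 'APP_ADMIN'),
  ('APP_ADMIN', 'APP_READER');
CREATE TABLE sys_privs (grantee TEXT, privilege TEXT);
INSERT INTO sys_privs VALUES ('APP_READER', 'SELECT ANY TABLE');
""")

rows = con.execute("""
WITH RECURSIVE effective(grantee, role) AS (
    SELECT grantee, granted_role FROM role_grants
    UNION
    SELECT e.grantee, g.granted_role
    FROM effective e JOIN role_grants g ON g.grantee = e.role
)
SELECT DISTINCT e.grantee, p.privilege
FROM effective e JOIN sys_privs p ON p.grantee = e.role
""").fetchall()
print(sorted(rows))
```

Here ALICE ends up holding SELECT ANY TABLE even though the privilege was granted only to APP_READER, two role hops away.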
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
MYSQL DB Entitlements
MYSQL: The queries ending in _40 use the most basic version of the mysql schema
(for MySQL 4.0 and beyond). The information_schema has not changed since it
was introduced in MySQL 5.0, so there is a set of _50 queries, but no _51 queries.
The _50 queries work for MySQL 5.0 and 5.1 and for 6.0 when it comes out, since
the information_schema is not expected to change in 6.0. The queries ending in
_502 (MYSQL502) use the new information_schema, which contains much more
information and is much more like a true data dictionary.
v MYSQL Database Privileges 40
v MYSQL User Privileges 40
v MYSQL Host Privileges 40
v MYSQL Table Privileges 40
v MYSQL Database Privileges 500
v MYSQL User Privileges 500
v MYSQL Host Privileges 500
v MYSQL Table Privileges 500
v MYSQL Database Privileges 502
v MYSQL User Privileges 502
v MYSQL Host Privileges 502
v MYSQL Table Privileges 502
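Mapping a server version to the matching query set, as described above, can be sketched as follows; treating 5.0.2 as the exact cutover to the _502 set is an assumption based on the text:

```python
def entitlement_query_suffix(version: str) -> str:
    # Parse "major.minor.patch", tolerating short versions like "5.1".
    parts = (version.split(".") + ["0", "0"])[:3]
    major, minor, patch = (int(p) for p in parts)
    if (major, minor, patch) >= (5, 0, 2):
        return "_502"  # information_schema-based queries
    if (major, minor) >= (5, 0):
        return "_50"
    return "_40"       # basic mysql schema (MySQL 4.0 and later)

print(entitlement_query_suffix("4.1.22"))  # _40
print(entitlement_query_suffix("5.0.2"))   # _502
```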
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list details the minimal privileges required in the database table
(or view of the database table) in order for the entitlement to work.
Note: In addition to the privileges required, the user should connect to the MYSQL
database to upload the data.
Beginning with MySQL 5.0.2, and for all later versions, the entitlement queries
use this set of tables: information_schema.SCHEMA_PRIVILEGES, mysql.host,
information_schema.TABLE_PRIVILEGES, and information_schema.USER_PRIVILEGES.
If a datasource has a MYSQL database type, but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), then the
data upload will loop through all MYSQL databases that the user has access to.
DB2 DB Entitlements
The following domains are provided to facilitate uploading and reporting on DB2
DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are
available from the Custom Domain Builder/Custom Domain Query/ Custom Table
Builder selections. As with other predefined entities and reports, these cannot be
modified, but you can clone and then customize your own versions of any of these
domains or reports. To see entitlement reports, log on the user portal, and go to
the DB Entitlements tab.
v DB2 Column-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Database -level Privileges (CONNECT, CREATE, ETC.)
v DB2 Index-level Privilege (CONTROL)
v DB2 Package-level Privileges (on code packages – BIND, EXECUTE, ETC.)
v DB2 Table-level Privileges (SELECT, UPDATE, ETC.) DB2 Privilege Summary
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
If a datasource has a SYBASE database type, but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), then the
data upload will loop through all SYBASE databases that the user has access to.
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
Since all users have sufficient SELECT privileges on the system catalog, there
is no need to grant privileges to any user. Informix does not readily support
granting system catalog privileges to individual users; the grants that would
normally be issued are not required in this case.
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
Note: The entitlement domains for MSSQL2005 listed cover MSSQL2008 as well.
v MSSQL2005/8 Object privileges by database account not including default
system user.
v MSSQL2005/8 Role/System privileges granted To User
v MSSQL2005/8 Role/System Privilege granted to user and role including grant
option
v MSSQL2005/8 Object access by PUBLIC
v MSSQL2005/8 Execute Privilege on System Procedures and functions to PUBLIC
v MSSQL2005/8 Database accounts of db_owner and db_securityadmin Role
v MSSQL2005/8 Server account of sysadmin, serveradmin and security admin /*
only run against MASTER database */
v MSSQL2005/8 Object and columns privileges granted with grant option
v MSSQL2005/8 Role granted to user and role.
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
If a datasource has a MSSQL database type, but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), then the
data upload will loop through all MSSQL databases that the user has access to.
Netezza DB Entitlements
Note: There is no DB error text translation for Netezza. The error appears in the
exception description. Users can clone/add a report with the exception description
for Netezza as needed.
v Netezza Obj Privs by DB Username - Object privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Admin Privs by DB Username - Admin privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Group /Role Granted To User - Group (Role) granted to user
v Netezza Obj Privs By Group - Object privileges with or without grant option by
GROUP excluding PUBLIC.
v Netezza Admin Privs By Group - Admin privileges with or without grant option
by GROUP excluding PUBLIC.
v Netezza Admin Privs By DB Username, Group - Admin privileges with or
without grant option by database username, group excluding ADMIN account
and PUBLIC group.
v Netezza Obj Privs Granted - Object privileges granted with or without grant
option to PUBLIC.
v Netezza Admin Privis Granted - Admin privileges granted with or without grant
option to PUBLIC.
v Netezza Global Admin Priv To Users and Groups - Global admin privilege
granted to users and groups excluding ADMIN account.
v Netezza Global Obj Priv To Users and Groups - Global object privilege granted
to users and groups excluding ADMIN account.
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
Teradata DB Entitlements
Note: There are no such roles as System admin or Security admin in Teradata.
Users must create their own roles. These are some important system privileges
that would normally not be granted to a normal user: ABORT SESSION, CREATE
DATABASE, CREATE PROFILE, CREATE ROLE, CREATE USER, DROP
DATABASE, DROP PROFILE, DROP ROLE, DROP USER, MONITOR
RESOURCE, MONITOR SESSION, REPLICATION OVERRIDE, SET SESSION
RATE, SET RESOURCE RATE.
v Teradata Object privileges granted with granted option to users. Not including
DBC and grantee = 'All'.
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
PostgreSQL DB Entitlements
v PostgreSQL Priv On Databases Granted To Public User Role With Or Without
Granted Option. Privilege on databases granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.
v PostgreSQL Priv On Language Granted To Public User Role With Or Without
Granted Option. Privilege on Language granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Schema Granted To Public User Role With Or Without
Granted Option. Privilege on Schema granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without
Granted Option. Privilege on Tablespace granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.
v PostgreSQL Role Or User Granted To User Or Role. Role or User granted to user
or role including grant option. Run this once in any database. Ideally
PostgreSQL.
v PostgreSQL Super User Granted To User Or Role. Super user granted to user or
role. Run this once in any database. Ideally PostgreSQL.
v PostgreSQL Sys Privs Granted To User And Role. System privileges granted to
user and role. Run this once in any database. Ideally PostgreSQL.
Note: As of version 8.3.6, PostgreSQL does not support granting the admin
option to public. There are only functions, not stored procedures. There is no
support for column grants, only table grants. Public is a group, not a user, and
does not show up in pg_roles. The only privilege needed to run all these queries
is: GRANT CONNECT ON DATABASE PostgreSQL TO username;
For entitlements to be able to upload data from various datasources, the general
requirement is that the login used to access the database be able to read the
tables used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required in the database table (or view of the database table) in order for the
entitlement to work.
/*These are required on every database, including POSTGRES (By default these are
already granted to PUBLIC) */
If a datasource has a PostgreSQL database type, but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), then the
data upload will loop through all PostgreSQL databases that the user has access to.
For an overview of domains, entities, and attributes, see “Domains, Entities, and
Attributes” on page 323. For a description of all domains, see “Domains” on page
323.
Describes all available policies on the system. This is similar to the Installed
Policies entity, which is used for all installed policies on the system.
The entity list for Access Policy is: Access Policy Entity, Rule Policy Entity,
Rule Action Entity, and Alert Notification. See Rule Entity, Rule Action Entity,
and Alert Notification Entity for lists of their attributes.
Table 35. Access Period Entity

Audit Pattern: Test pattern used for a selective audit trail policy.

Timeout values depend on the number of sessions opened by each analyzer thread.
For each analyzer thread, the default values are as follows: if the number of
open sessions is greater than 0 and less than 250, the timeout is 60 minutes;
if at least 250 and less than 500, 30 minutes; if at least 500 and less than
750, 15 minutes; if at least 750 and less than 1200, 5 minutes; and if 1200 or
more, 2 minutes.
Construct ID: Uniquely identifies a command construct (for example, select a from b).

Total Access: Total count of construct instances for this access period.

Period Start Date: Date only from the period start attribute.

Timestamp: Initially, the Timestamp value is set the first time that a request
is observed on a client-server connection during an access period. By default,
an access period is one hour long, but this can be changed by the Guardium
administrator in the Inspection Engine Configuration (see the Guardium
Administrator Guide). Thereafter, for each subsequent request, it is updated
when the system updates the average execution time and the command count for
the period.

Period End: Date and time for the end of the access period.

Period End Date: Date only from the period end attribute.

Period End Time: Time only from the period end attribute.

Average Execution Time: The average command execution time during the period.
This applies to SQL statements only; it does not apply to FTP or Windows file
share traffic.

Failed Sqls (2): The number of failed SQL requests. See the note at the end of
the table.

Successful Sqls (2): The number of successful SQL requests. See the note at the
end of the table.

Total Records Affected (2): The total number of records affected. See the note
at the end of the table.

Avg Records Affected (2): The average number of records affected. See the note
at the end of the table.

Total Records Affected (Desc) (2): If the Total Records Affected attribute is a
character string instead of a number, that value appears here (for example,
Large Results Set or N/A).

Records Affected: The result-set count of the number of records that are
affected by each execution of an SQL statement.
Note: The records affected option is a sniffer operation that requires the
sniffer to process additional response packets and postpone logging of impacted
data, which increases the buffer size and might have an adverse effect on
overall sniffer performance. The most significant impact comes from very large
responses. To prevent a large amount of overhead from this operation, Guardium
uses a set of default thresholds that allow the sniffer to skip the processing
operation when they are exceeded.

Show Seconds: If the number of accesses per second is being tracked, this
contains counts for each second in the access period (usually one hour).

(2) Failed Sqls, Successful Sqls, Application Event ID, Total Records Affected,
Avg Records Affected, and Total Records Affected (Desc) are attributes that
appear only when the main entity for the query permits this level of detail.
They are not available if either Client/Server or Session is the main entity.
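The analyzer-thread timeout defaults quoted above reduce to a small lookup; the behavior for zero open sessions is not specified in the text, so returning 60 minutes in that case is an assumption:

```python
def analyzer_timeout_minutes(open_sessions: int) -> int:
    # Default per-thread timeouts as described in the table note.
    if open_sessions >= 1200:
        return 2
    if open_sessions >= 750:
        return 5
    if open_sessions >= 500:
        return 15
    if open_sessions >= 250:
        return 30
    return 60  # 0 < sessions < 250; also used for 0 (assumed)

print(analyzer_timeout_minutes(100), analyzer_timeout_minutes(600))  # 60 15
```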
The name assigned to an access rule when it was defined. This is available for
reporting only from the owning Policy Rule Violation entity (described later), when
an access rule violation is logged.
Table 37. Access Rule Entity

Timestamp: Updated at the start and end of the activity being logged (prepare
for archiving, encrypt, send, etc.).

Period Start: Starting time for the data being acted upon. Each archiving or
aggregation activity operates on one full day of activity.

Period End: Ending time for the activity being acted upon.

File Name: Name of the file used for the activity. Files created by the archive
and export operations are named as follows:
<daysequence>-<scp_host>-w<run_datestamp>-d<data_date>.dbdump.enc
For example:
732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc
Records Purged: If the activity type is Purge, the number of records purged.
Otherwise, N/A.

Original Timezone: The UTC offset. This is done in particular for aggregators
that have collectors in different time zones and so that activities that
happened hours apart do not seem as if they happened at the same time when
imported to the aggregator.
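The archive and export file naming convention described above can be unpacked mechanically. A hypothetical parser for that pattern (the field names mirror the placeholders in the template):

```python
import re

# <daysequence>-<scp_host>-w<run_datestamp>-d<data_date>.dbdump.enc
PATTERN = re.compile(
    r"^(?P<daysequence>\d+)-(?P<scp_host>.+)"
    r"-w(?P<run_datestamp>[\d.]+)"
    r"-d(?P<data_date>\d{4}-\d{2}-\d{2})\.dbdump\.enc$"
)

def parse_archive_name(name: str) -> dict:
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"not an archive file name: {name!r}")
    return m.groupdict()

info = parse_archive_name(
    "732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc"
)
print(info["scp_host"], info["data_date"])  # g1.guardium.com 2005-04-22
```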
ALERT_NOTIFICATION_ID: Identifies the alert notification.

Original Timezone: The UTC offset. This is done in particular for aggregators
that have collectors in different time zones and so that activities that
happened hours apart do not seem as if they happened at the same time when
imported to the aggregator.
This entity is created each time that the system observes an Application Events API
call (which sets these attribute values) or a stored procedure call that has been
identified as a Custom Identification Procedure (which maps stored procedure
parameters to these attributes).
Table 42. Application Events Entity

Timestamp: Created only once, when the event is logged. Do not confuse this
attribute with the Event Date attribute, which can be set using an API call or
from a stored procedure parameter. (See the Guardium Administrator Guide for a
description of the Application Events API and Custom Identification Procedures.)

Original Timezone: The UTC offset. This is done in particular for aggregators
that have collectors in different time zones and so that activities that
happened hours apart do not seem as if they happened at the same time when
imported to the aggregator.
This entity displays the user name from the App Event if the App Event exists.
Otherwise, the user name is displayed from the Construct Instance.

Table 43. App User Name Entity

APP User Name: Unique identifier for this App User Name entity.

Assessment Log Severity: The assessment test severity: Critical, Major, Minor,
Cautionary, Informational. This is an ordered list of severity classifications;
the highest severity is the first classification in the list and the lowest
severity is the last.
Assessment Result data source ID and Assessment Result ID are only available to
users with the admin role.
This entity is created for each task in the assessment results set.
Table 46. Assessment Result Header Entity
Attribute Description
Received By All Indicates whether or not these results have been received by all
receivers on the distribution list.
Filter Client IP Clients selected: exact IP address, address with wildcards (*), or empty
to select all.
Filter Server IP Servers selected: exact IP address, address with wildcards (*), or empty
to select all.
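The filter semantics described above (exact address, address with wildcards, or empty to select all) can be sketched as a small matcher. This is only an illustration of the rule, not Guardium's implementation, and the function name is invented:

```python
from fnmatch import fnmatch

def ip_filter_matches(filter_expr: str, ip: str) -> bool:
    """Illustrative matcher for the client/server IP filters described
    above: empty selects all, '*' acts as a wildcard, otherwise the
    filter must equal the address exactly. (Hypothetical helper.)"""
    if not filter_expr:          # empty filter selects every address
        return True
    return fnmatch(ip, filter_expr)

print(ip_filter_matches("", "10.0.0.7"))          # empty -> selects all
print(ip_filter_matches("10.0.0.*", "10.0.0.7"))  # wildcard match
print(ip_filter_matches("10.0.0.7", "10.0.1.7"))  # exact mismatch
```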
Assessment Result ID, Assessment ID, and Task ID are only available to users with
the admin role.
Test Type Type of assessment test (Observed, Predefined, Custom, Query based,
CVE)
Datasource Type Type of Datasource (DB2, Informix, MYSQL, ORACLE, SYBASE, etc.)
Threshold User-defined threshold that overrides the value defined upon the test’s
creation.
Keep Result The number of days the results will be kept by the system.
Days
Keep Results The number of results sets that will be kept by the system.
Quantity
Task Type A numeric value that indicates whether the task is a report, security
assessment, entity audit trail, privacy set, or classification process.
Aliases are defined for these types, so running reports with Aliases on
simplifies reading of the report output.
This entity contains the execution date for a set of audit process results.
Table 51. Audit Process Result Entity
Attribute Description
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
This entity is used with the IBM InfoSphere Change Data Capture (InfoSphere
CDC) replication solution that allows the replication to and from supported
databases. Maintenance of replicated databases can be used to reduce processing
overheads and network traffic.
IBM Guardium customers with Database Activity Monitoring will have access to
InfoSphere CDC.
This Guardium feature uses a Java CDC user exit to send value change information
to the Guardium collector.
User exits for InfoSphere CDC let the user define a set of actions that InfoSphere
CDC can run before or after a database event occurs on a specified table.
Table 55. Changed Data Values Entity
Attribute Description
Two files need to be installed on the database server for the Guardium agent
that interfaces with IBM's InfoSphere Change Data Capture (InfoSphere CDC)
application. They are in the sources/apps/GuardCDC/lib/ directory of the
build: protobuf-java-2.4.1.jar and GuardCdc.jar
Instructions for installation
Prerequisites - the InfoSphere Change Data Capture (InfoSphere CDC)
application must already be installed on the DB Server.
Steps to install the Guardium agent on the Database server:
This entity is created for each classification process rule that is fired.
Table 56. Classification Process Results Entity
Attribute Description
Queue DateTime Timestamp when the job was submitted to the classifier/assessment
queue.
Client/Server Entity
Timestamp Since all attributes in this entity contain static information, this
timestamp is created only once, when Guardium observes a request on
the defined client-server connection for the first time.
Network Network protocol used (e.g., TCP, UDP, etc.). Note that for K-TAP on
Protocol Oracle, this may display as either IPC or BEQ.
DB User Name Database user name. The DB user name is the user who connected to
the database, either locally or remotely.
Service Name Service name for the interaction. In some cases (AIX® shared memory
connections, for example), the service name is an alias that is used until
the actual service is connected. In those cases, once the actual service is
connected, a new session is started - so what the user experiences as a
single session will be logged as two sessions.
For Teradata, Service name contains the session logical host id value.
ClientIP/DBUser Paired attribute value consisting of the client IP address and database
user name.
Analyzed Client Applies only to encrypted traffic; when set, client IP is set to zeroes.
IP
Analyzed Client IP has a map for CEF source. If the query used for the
CEF does NOT contain the Client IP but contains the Analyzed Client IP,
the Analyzed Client IP is used for the source. If both are included in the
query, then Client IP takes precedence.
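The precedence rule above can be sketched as follows. The function name is invented here; this only illustrates the fallback logic:

```python
def cef_source(client_ip, analyzed_client_ip):
    """Sketch of the precedence described above: Client IP is used as
    the CEF source when the query includes it; otherwise the Analyzed
    Client IP is used. (Hypothetical helper name.)"""
    return client_ip if client_ip is not None else analyzed_client_ip

print(cef_source("10.1.2.3", "0.0.0.0"))  # both present -> Client IP wins
print(cef_source(None, "10.9.8.7"))       # only analyzed -> fallback
```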
Server IP/DB Paired attribute value consisting of Server IP address and database user
user name.
Client/ Server Client/Server by session is also a Main Entity. Access this secondary
by session entity by clicking on the Client/Server primary entity.
Note: For Access Tracking only, Client/Server Entity name will appear in the
pulldown menu as two possible entities - Client/Server and Client/Server By
Session.
Client/Server By Session will get count from Client/Server and date conditions
from Session.
Client/Server will get count from Client/Server and date conditions also from
Client/Server.
If the user chooses Client/Server, then the query will be populated with
ATTRIBUTE_ID = 1. If the user chooses Client/Server By Session, then the query
will be populated with MAIN_ATTRIBUTE_ID = 0.
Within Central Manager, shows the aggregate of all Sniffer Buffer Usage entities
that have been uploaded.
Table 59. CM Buffer Usage Monitor Entity
Attribute Description
Sniffer Buffer
Usage ID
Sniffer Packets Total number of connections that have been ignored due to throttling
Throttled since inspection engine was restarted.
Sniffer Total number of connections that were monitored and have ended since
Connections inspection engine was restarted.
Ended
Mysql Is Up Boolean indicator for internal database restart (1=was restarted, 0=not
restarted).
SqlGuard The time the record is inserted into the custom table.
Timestamp
Datasource The name of the data source used to upload the record.
Name
Command Entity
For each command, an entity is created for each parent node and position in which
the command appears in a command construct.
Table 60. Command Entity
Attribute Description
SQL Verb Main verb in SQL command (e.g., select, insert, delete, etc.).
Command ID and Construct ID are only available to users with the admin role.
Comments Entity
This entity describes a user comment. It is available in the Comments domain only,
which is restricted to admin users. This domain includes only sharable comments,
which are all comments except for those that run locally (see the Local Comments
entity).
Table 61. Comments Entity
Attribute Description
Comment Indicates the element to which the comment is attached - a query, audit
Reference process result, or another comment, for example.
Object The name of the object from which the comment was defined. For
Description example, a comment defined on a policy has an object description of
ACCESS_RULE_SET.
Database Error A database error code followed by a short text description of the error.
Text The error code is taken from the Exception Description attribute of the
Exception entity. Using the error code as a key, the error text is obtained
from an internal table on the Guardium appliance, which contains the
most common error messages (about 54,000 of them).
This entity (under CAS Config Tracking/ Monitored Item Details Entity) identifies
a data source.
Table 63. Data Source Entity
Attribute Description
Data source Type Data source type - Oracle, MS-SQL, DB2, Sybase, Informix, etc.
Shared Yes or No
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Timestamp A timestamp value created when Guardium records this instance of the
entity (every instance has a unique timestamp).
Probe Attempted Indicates if a probe for a supported database service has been attempted
on this port. T=yes, F=no.
DB Type If a probe of the port has found a supported database type, indicates the
type (DB2, Informix, MS SQL Server etc.)
Probe The date and time that this specific port was probed.
Timestamp
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Exception Entity
Exception Date and time created when this Exception entity was logged.
Timestamp
SQL string that The SQL string that caused the exception.
caused the
exception
User Name Database user name. On encrypted traffic, where correlation is required,
this value may not be available, but it is always available from the DB
User Name attribute in the Client/Server entity.
Link to more Optional link that is sometimes available, depending on the exception
information source.
about the
exception1
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Exception A text description of the exception type, from the following list. Most of
Description these should never be seen. See the notes in italic for the most common
exceptions and notes.
For this message, a database error code will be stored in the Exception
Description attribute of the Exception entity, and a text version of the
database error message will be available in the Database Error Text
attribute of the Database Error Text entity.
DB Protocol Exception
Login Failed
Security Exception
For this message, a custom class exception has been raised when
breaching code execution is blocked, such as when users use the Java
API to define their own alerts or assessments.
For this message, the IP address or DNS name of the database server
will be available in the Exception Description attribute of the Exception
entity
For this message, the IP address or DNS name of the database server
will be available in the Exception Description attribute of the Exception
entity
TCP ERROR
For this message, additional information about the error will be
included in the Exception Description attribute of the Exception entity.
Command ID Uniquely identifies the main command from the construct in which it
was referenced.
Object ID Uniquely identifies the object from the construct in which it was
referenced.
ORDER BY department
Having
    FROM table_name
    GROUP BY column_name1
Group By
    FROM table_name
    GROUP BY column_name1
Where
    FROM Users
SQL: simple, direct SQL command, for example, typed directly into the
CLI
Statement type is part of the FULL SQL entity and is audited only if you
have configured Log Full Details for this statement within the policy.
You cannot filter out specific statement types in the policy, for example,
to audit only SQL and BIND statements. You can, however, filter these out
in reports.
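The clause examples above are fragments; as a minimal sketch, here are complete statements for each of those clause types, run against a throwaway in-memory SQLite table. The table and column names echo the fragments, but the data and the exact statements are invented for illustration:

```python
import sqlite3

# Throwaway table for demonstrating the statement/clause types above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Users (name TEXT, department TEXT, salary INT)")
con.executemany("INSERT INTO Users VALUES (?, ?, ?)",
                [("ann", "hr", 50), ("bob", "hr", 60), ("cho", "it", 70)])

# Where: filter individual rows
rows = con.execute(
    "SELECT name FROM Users WHERE department = 'hr'").fetchall()
# Group By: one result row per group
groups = con.execute(
    "SELECT department, COUNT(*) FROM Users GROUP BY department").fetchall()
# Having: filter whole groups after aggregation
big = con.execute(
    "SELECT department FROM Users GROUP BY department "
    "HAVING SUM(salary) > 100").fetchall()
# Order By: sort the result set
ordered = con.execute(
    "SELECT name FROM Users ORDER BY department, name").fetchall()
print(rows, groups, big, ordered)
```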
Bind Variables For DB2/zOS, contains a comma-separated list of bind variable
Values values.
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Timestamp A timestamp value created when Guardium records this instance of the
entity (every instance has a unique timestamp).
Response Time The response time for the request in milliseconds. When requests are
monitored in network traffic, the response times are an accurate
reflection of the time taken to respond to the request (Guardium
timestamps both the client request and the server response).
Records Affected The number of records affected for each session. On reports using this
attribute, we suggest that you turn on aliases to properly display special
cases such as Large Result Set or N/A.
Returned Data Data returned for this request (if any, and if available).
Records Affected When the Records Affected is a string value instead of a number, that
(Desc) string is stored here. For example: Large Result Set or N/A.
Returned Data Number of rows returned from the SQL statement used in the policy
Count rule.
SQL: simple, direct SQL command, for example, typed directly into the
CLI
Statement type is part of the FULL SQL entity and is only audited if you
have configured Log Full Details for this statement within the policy.
You cannot filter out specific statement types in the policy, for example,
to audit only SQL and BIND statements. You can, however, filter these out
in reports.
Bind Variables For DB2/zOS, contains a comma-separated list of bind variable
Values values.
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Full SQL ID, Instance ID, and Succeeded are only available to users with the
admin role.
These entities are created only by the following policy rule actions: Log Full Details
With Values, and Log Full Details Per Session With Values.
Table 73. FULL SQL Values Entity
Attribute Description
Timestamp Date and Time Full SQL Values Entity was created.
This entity describes events that have occurred while using the Guardium
Installation Manager (GIM).
Table 74. GIM Events Entity
Attribute Description
Event Generator IP address of the client (i.e. DB-Server) which generated the event.
Group Entity
This entity describes a group that has been defined to Guardium.
Table 75. Group Entity
Attribute Description
Timestamp Date and time the group member was created or updated.
This entity describes a type of Guardium group (user, client IP address, command,
etc.).
Table 77. Group Type Entity
Attribute Description
Application Guardium application listed (for example, Query Builder, Policy Builder,
etc.).
An instance is defined in the internal Guardium database for each type of activity.
Table 81. Guardium Activity Types Entity
Attribute Description
Modified Entity The Guardium entity modified (a group definition, for example).
This entity is created each time a user logs in to the Guardium appliance.
Table 83. Guardium Users Login Entity
Attribute Description
User Name Created when the Guardium user logs in or out (there will be one entity
per Guardium session).
Host Entity
A CAS Host entity is created the first time that CAS is seen on a database server
host. It is updated each time that the online/offline status changes. The Host entity
is also available in the CAS Host History domain.
Table 84. Host Entity
Attribute Description
DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix, or N/A if the
change is to an operating system instance
Monitored Item The name of the changed item, from the Description (if entered),
otherwise a default name depending on the Type (a file name, for
example).
A host event entity is created each time an event is detected or signaled (see the
event types) by CAS.
Event Time Date and time that the event was recorded
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Incident Entity
Audit Pattern Test pattern used for a selective audit trail policy.
Sequence Sets the sequence order when there are multiple installed policies.
DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix; or N/A for an
operating system instance
User The user name that CAS uses to log onto the database; or N/A for an
operating system instance.
Port The port number CAS uses to connect to the database; or empty for an
operating system instance
DB Home Dir The home directory for the database; or empty for an operating system
instance
Join Entity
A join table is a way of implementing many-to-many relationships. Use the Join
entity to join tables in a SELECT SQL statement.
Table 92. Join Entity
Attribute Description
Timestamp Date and Time that the Join Entity was created.
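As a sketch of the many-to-many pattern described above, a join table can be created and then joined in a SELECT. All table and column names here are invented for illustration:

```python
import sqlite3

# Minimal sketch: a join table (user_group) implements a many-to-many
# relationship between users and groups, then a SELECT joins through it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE user_group (user_id INT, group_id INT);  -- the join table
INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
INSERT INTO groups VALUES (10, 'dba'), (20, 'audit');
INSERT INTO user_group VALUES (1, 10), (1, 20), (2, 10);
""")
rows = con.execute("""
    SELECT u.name, g.name
    FROM users u
    JOIN user_group ug ON ug.user_id = u.id
    JOIN groups g      ON g.id = ug.group_id
    ORDER BY u.name, g.name
""").fetchall()
print(rows)
```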
Comment Indicates the element to which the comment is attached - a query, audit
Reference process result, or another comment, for example.
Object The name of the object from which the comment was defined. For
Description example, a comment defined on an incident has an object description of
INCIDENT.
Location View
How to determine what days are not archived
Use a query (Tools tab > Report Building > Report Builder > query Location View)
that can be modified to create a report showing the files that are archived. This
report lists all the files with archive dates. Dates not on this report indicate that
those dates have not been archived. Run archive for the dates not on the list, if
required.
Table 94. Location View Entity
Attribute Description
Aggregator The Guardium system on which the file was generated. Note that this
can be a collector, not just an aggregator.
System Type The protocol used for archiving: SCP, FTP, Centera, or TSM.
Obsolete beginning with version 4.0 of Guardium. This was the only entity of the
Access Trace Tracking domain, which became obsolete with version 4.0 of
S-TAP. If you have old queries or reports using that domain, they will not work in
this release, and any database login information recorded in that domain would
pre-date the installation of version 4.0 of S-TAP.
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
For each threshold alert message sent, the message type, recipients, status, and
date of that message.
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Timestamp Date and time the change was recorded on the Guardium appliance.
This timestamp is created during the data upload operation. It is not the
time that the change was recorded on the audit database. To obtain that
time, use the Audit Timestamp entity.
Database Name DB2, Informix, Sybase, MS SQL Server only. Database name.
Audit PK For Sybase and MS SQL Server only. A primary key used to relate old
and new values (which must be logged separately for these database
types).
SQL Text Available only with Oracle 9. The complete SQL statement causing the
value change.
Triggered ID Unique ID (on this audit database) generated for the change.
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
This entity is created each time a monitored item changes. It identifies the
monitored item within the CAS instance, and points to the saved data for the
change.
Table 98. Monitored Changes Entity
Attribute Description
Sample Time Timestamp (date and time on host) that sample was taken
Saved Data Id Identifies the Saved Data entity for this change
Audit State Identifies the Host Configuration entity for this change
Label Id
Timestamp Date and time this change record was created on the server (Guardium
appliance server clock)
Owner Unix only. If the item type is a file, the file owner
Permissions Unix only. If the item type is a file, the file permissions
0 (zero) = File does not exist, but this file name is being monitored (it
never existed or may have been deleted)
Last Modified Timestamp for the last modification, taken from the file system at the
sample time
Group Unix only. If the item type is a file, the group owner
A Monitored Item Details entity is created for each monitored item in a CAS
instance.
Table 99. Monitored Item Details Entity
Attribute Description
Monitored Item Depending on the Audit Type, this is the OS or SQL script,
environment or registry variable, or file name. For a file pattern
defined in an item template, there is a separate monitored item
details entity for each file that matches the pattern, but no
monitored item details entity for the file pattern itself. If a file
pattern is used, it is always available in the Template Content attribute.
Audit Config Set Identifies the template set in the host configuration
Id
In Synch Indicates whether or not the template item definition on the server
matches the template item definition on the CAS host
Save Data When marked, previous version of the item can be compared with the
current version
Template The template entry that is the basis for this monitored item, set from the
Content Template entity Access Name attribute when the instance was created.
Typically this will be the same as the monitored item, but in the case
where a file pattern was used in the template, this will be the file
pattern
Object Entity
Object Id and Construct Id are available to users with the admin role only.
Describes an object-field entity. Note that fields with no objects will not appear
in reports that include the object.
Table 102. Object Field Entity
Attribute Description
This entity is created each time that a policy rule violation is logged. Not all policy
rule violations are logged - see the description of the rule actions in Chapter 11:
Building Policies. The access rule causing the violation will be available in the
dependent Access Rule Entity (described earlier).
Table 103. Policy Rule Violation Entity
Attribute Description
Application User Name of the user creating the policy rule violation.
Name
Full SQL String SQL string causing the policy rule violation.
Timestamp Created when the policy rule violation is logged. Not all policy rule
violations are logged - see the description of the rule actions in Chapter
11: Building Policies.
Message Sent The text of the policy rule violation message that was sent.
Application Application event ID (if any - these are set using the application events
Event Id API)
Severity Severity defined for the rule (the severity of an incident to which this is
assigned may be different).
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Violation Log Id is available to users with the admin role only.
Qualified Object Tuple - Server IP, Service name, DB name, DB user, Object
An instance is created for each database connection seen by the S-TAP Hunter
process, but not by S-TAP itself, indicating that the connection has bypassed the
access paths monitored by S-TAP.
Table 105. Rogue Connections Entity
Attribute Description
Timestamp A timestamp value created when the Guardium appliance records the
rogue connection reported by the Hunter.
IPC Type Type of inter-process communications used for the connection, which
may be from the following list:
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Rule Entity
Can be used for an installed policy rule entity or an access policy rule entity.
There is one for each rule of the installed or access policies. Apart from
the ID fields (which uniquely identify components in the internal database), all of
these fields are described in the Policies help topic.
v GDM_INSTALLED_POLICY_RULES_ID - Identifies an installed policy rule.
v ACCESS_RULE_ID - Identifies an access rule.
v Rule Description - From the policy definition.
v Rule Position - Position within the policy.
v Rule Type - Access, Exception, or Extrusion.
v LAST_ACCESSED - Last
v Client IP - From the rule definition.
v Client Net Mask - From the rule definition.
v Client IP Group - From the rule definition.
v Server IP - From the rule definition.
v Server IP Mask - From the rule definition.
v Client MAC - From the rule definition.
v Net Protocol - From the rule definition.
v Net Protocol Group - From the rule definition.
v Field - From the rule definition.
v Field Group - From the rule definition.
v Object - From the rule definition.
v Object Group - From the rule definition.
v Command - From the rule definition.
v Command Group - From the rule definition.
v Object-Field Group - From the rule definition.
v DB Type - From the rule definition.
v Service Name - From the rule definition.
v Service Name Group - From the rule definition.
v DB Name - From the rule definition.
v DB Name Group - From the rule definition.
v DB User - From the rule definition.
v DB User Group - From the rule definition.
Can be used for an installed policy rule action entity or an access policy rule
action entity. There is one for each rule of the installed or access policies.
v Sequence - Sequence of the action within the rule.
v Action
– Block the request - See Blocking Actions in Policies.
– Log or ignore the violation or the traffic - See Log or Ignore Actions in
Policies.
– Alert - See Alerting Actions in Policies.
A Saved Data entity is created each time a change is detected for an item being
monitored, if the Keep data box is marked for that item in the item template
definition.
Table 106. Saved Data Entity
Attribute Description
Timestamp Timestamp for when the saved data entity was recorded in the server
database
Change Identifies the monitored changes entity for this saved data entity
Identifier
Session Start Date and time session started. Session Start is also a Main Entity. Access
this secondary entity by clicking on the Session primary entity.
Session End Date and time the session ended. Session End is also a Main Entity.
Access this secondary entity by clicking on the Session primary entity.
Database Name Name of database for the session (MSSQL or Sybase only).
Session Ignored Indicates whether or not some part of the session was ignored
(beginning at some point in time).
Uid Chain For a session reported by Unix S-TAP (K-Tap mode only), this shows the
chain of OS users, when users su with a different user name. The values
that appear here vary by OS platform - for example, under AIX the
string IBM IBM IBM may appear as a prefix.
Note: For Solaris Zones, user ids may be reported instead of user names
in the Uid Chain.
Old Session ID Points to the session from which this session was created. Zero if this is
the first session of the connection.
Process ID The process ID of the client that initiated the connection (not always
available).
Duration (secs) Indicates the length of time between the Session Start and the Session
End (in seconds).
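The Duration (secs) attribute described above is simply the difference between the Session Start and Session End timestamps, expressed in whole seconds. A minimal sketch with sample values:

```python
from datetime import datetime

# Sketch of the Duration (secs) computation: Session End minus
# Session Start, in whole seconds. The timestamps are sample values.
start = datetime(2024, 1, 15, 9, 30, 0)   # sample Session Start
end   = datetime(2024, 1, 15, 9, 32, 45)  # sample Session End
duration_secs = int((end - start).total_seconds())
print(duration_secs)  # 165
```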
Original The UTC offset. This is done in particular for aggregators that have
Timezone collectors in different time zones and so that activities that happened
hours apart do not seem as if they happened at the same time when
imported to the aggregator.
Global ID, Session ID, and Access ID are only available to users with the admin
role.
Severity Entity
The system creates this entity at the interval set by the store system
netfilter-buffer-size CLI command (every 60 seconds by default).
Table 110. Sniffer Buffer Usage Entity
Attribute Description
Mysql Is Up Boolean indicator for internal database restart (1=was restarted, 0=not
restarted).
Sniffer Total number of connections that were monitored and have ended since
Connections inspection engine was restarted.
Ended
Sniffer Packets Total number of connections that have been ignored due to throttling
Throttled since inspection engine was restarted.
Bind Out Var Optional. Determines whether the entered SQL statement is a
procedural block of code that returns a value to be bound to an internal
Guardium variable, which is then used in the comparison against the
Compare To Value.
Compare To Compare value that will be used to compare against the return value
Value from the SQL statement using the compare operator.
Recommendation The Recommended text for fail that will be displayed when the test
Text Fail fails.
Recommendation The Recommended text for pass that will be displayed when the test
Text Pass passes.
Result Text Fail The Result text for fail that will be displayed when the test fails.
Result Text Pass The Result text for pass that will be displayed when the test passes.
Return Type The Return type that will be returned from the SQL statement.
SQL For Details A SQL statement that retrieves a list of strings, used to generate a
detail string of the form: detail prefix + list of strings.
SQL The SQL statement that will be executed for the test.
SQL Entity
This entity is created for each unique string of SQL. Values are replaced by
question marks - only the format of the string is stored.
Table 112. SQL Entity
Attribute Description
Truncated SQL Indicates whether the SQL has been truncated, where:
1 - true/yes, truncated
0 - false/no, not truncated
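As a rough illustration of how values can be replaced by question marks so that only the format of the string is stored: Guardium's actual normalization is internal, so this regex-based sketch is only an approximation of the idea:

```python
import re

def normalize_sql(sql: str) -> str:
    """Rough sketch of storing only the 'format' of a statement:
    string and numeric literals are replaced with question marks.
    (Illustrative only; not Guardium's real normalization.)"""
    sql = re.sub(r"'[^']*'", "?", sql)          # quoted string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # numeric literals
    return sql

print(normalize_sql("SELECT * FROM orders WHERE id = 42 AND city = 'NY'"))
```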
Template Entity
A CAS template entity is created for each item template within a template set. An
item is a specific file or file pattern, an environment or registry variable, the output
of an OS or SQL script, or the list of logged-in users.
Table 115. Template Entity
Attribute Description
Template ID A unique identifier for the item template within the set of all item
templates
Access Name Depending on the Audit Type, this is the OS or SQL script, environment
or registry value, or a file name or a file name pattern
Save Data Indicates if the Keep data checkbox has been marked. If so, previous
versions of the item can be compared with the current version
Editable Indicates whether or not this template can be modified. The default
Guardium templates cannot be modified. In addition once a template set
has been used in a CAS instance, it cannot be modified. In any case, a
template set can always be cloned and the cloned set can be modified
Template ID and Template Set ID are only available to users with the admin role.
A CAS Template Set entity is created for each template set, which is a set of
template items for a particular operating system or database.
Table 116. Template Set Entity
Attribute Description
Template Set Id A unique identifier for the template set, numbered sequentially
DB Type Database Type: Oracle, MS-SQL, DB2, Sybase, Informix, or N/A for an
operating system template
IsDefault Indicates whether or not this template is the default for the specified OS
Type and DB Type combination
Editable Indicates whether or not this template can be modified. The default
Guardium templates cannot be modified. In addition once a template set
has been used in a CAS instance, it cannot be modified. In any case, a
template set can always be cloned and the cloned set can be modified
Threshold String The threshold prompt for the test (e.g. Maximum Number of Different
IP's Allowed per user)
Test Result ID, Assessment Result ID, and Assessment Test ID are only available to
users with the admin role.
Checked From The starting date and time checked for by the alert condition.
Date
Checked To Date The ending date and time checked for by the alert condition.
Unit Utilization – For each unit, the maximum utilization level in the given
timeframe. A drill-down displays the details for a unit for all periods within
the timeframe of the report.
Unit Utilization Distribution – For each unit, the percentage of periods in the
report timeframe with utilization level Low, Medium, or High.
Host Name
Period Start
Number Of restarts
Sniffer Memory
Analyzer Queue
Logger Queue
Note: Each parameter has a value and a level which is calculated based on the
value and the thresholds.
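The note above can be illustrated with a small sketch. The level names (Low, Medium, High) and the idea of low/high thresholds come from the report descriptions; the exact rule Guardium applies is not documented here, so the mapping below is an assumption:

```python
def utilization_level(value, low_threshold, high_threshold):
    """Map a raw utilization value to a level using two thresholds.

    Assumed rule (not the documented Guardium algorithm): values below the
    low threshold are "Low", values at or above the high threshold are
    "High", and everything in between is "Medium".
    """
    if value < low_threshold:
        return "Low"
    if value >= high_threshold:
        return "High"
    return "Medium"
```

Any parameter in the Utilization Thresholds report (Sniffer Memory, Analyzer Queue, and so on) could be fed through a function like this to obtain its level.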
The predefined entitlement reports are listed as follows. They appear as domain
names in the Custom Domain Builder/Custom Domain Query/Custom Table Builder
selections:
v Oracle DB Entitlements Domains
v MYSQL DB Entitlements Domains
v DB2 DB Entitlements Domains
v DB2 for i 6.1 and 7.1 DB Entitlements Domains
v SYBASE DB Entitlements Domains
v Informix DB Entitlements Domains
v MSSQL 2000 DB Entitlements Domains
v MSSQL 2005 DB Entitlements Domains
v Netezza DB Entitlements Domains
v Teradata DB Entitlements Domains
v PostgreSQL DB Entitlements Domains
Oracle DB Entitlements
Oracle
v ORA Accnts of ALTER SYSTEM - Accounts with ALTER SYSTEM and ALTER
SESSION privileges
v ORA Accnts with BECOME USER - Accounts with BECOME USER privileges
v ORA All Sys Priv and admin opt - Report showing all system privilege and
admin option for users and roles
v ORA Obj And Columns Priv - Object and columns privileges granted (with or
without grant option)
v ORA Object Access By PUBLIC - Object access by PUBLIC
v ORA Object privileges - Object privileges by database accounts not in SYS
and not in a DBA role
v ORA PUBLIC Exec Priv On SYS Proc - Execute privilege on SYS PL/SQL
procedures assigned to PUBL
v ORA Roles Granted - Roles granted to users and roles
v ORA Sys Priv Granted - Hierarchical report showing system privileges granted
to users, including recursive definitions (that is, privileges assigned to
roles and these roles then assigned to users)
v ORA SYSDBA and SYSOPER Accnts - Accounts with SYSDBA and SYSOPER
privileges
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
MYSQL DB Entitlements
MYSQL: The queries ending in "_40" use the most basic version of the mysql
schema (for MySQL 4.0 and beyond). The information_schema has not changed
since it was introduced in MySQL 5.0, so there is a set of _50 queries, but no _51
queries. The _50 queries work for MySQL 5.0 and 5.1 and for 6.0 when it comes
out, since the information_schema is not expected to change in 6.0. The queries
ending in "_502" (MYSQL502) use the new information_schema, which contains
much more information and is much more like a true data dictionary.
v MYSQL Database Privileges 40
v MYSQL User Privileges 40
v MYSQL Host Privileges 40
v MYSQL Table Privileges 40
v MYSQL Database Privileges 500
v MYSQL User Privileges 500
v MYSQL Host Privileges 500
v MYSQL Table Privileges 500
v MYSQL Database Privileges 502
v MYSQL User Privileges 502
v MYSQL Host Privileges 502
v MYSQL Table Privileges 502
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list details the minimal privileges required, in the database table (or
view of the database table), in order for the entitlement to work.
Note: In addition to the privileges required, the user should connect to the MYSQL
database to upload the data.
The entitlement queries for all MySQL versions through MySQL 5.0.1 use this set
of tables: mysql.db, mysql.host, mysql.tables_priv, and mysql.user.
Beginning with MySQL 5.0.2, and for all later versions, the entitlement queries
use this set of tables: information_schema.SCHEMA_PRIVILEGES, mysql.host,
information_schema.TABLE_PRIVILEGES, and information_schema.USER_PRIVILEGES.
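The version split described above can be expressed as a small helper. The table names are taken from the text; the function itself is only an illustrative sketch, not part of the product:

```python
# Table sets named in the text: queries for versions through 5.0.1 read the
# mysql.* grant tables; 5.0.2 and later read information_schema (plus
# mysql.host).
LEGACY_TABLES = ["mysql.db", "mysql.host", "mysql.tables_priv", "mysql.user"]
MODERN_TABLES = [
    "information_schema.SCHEMA_PRIVILEGES",
    "mysql.host",
    "information_schema.TABLE_PRIVILEGES",
    "information_schema.USER_PRIVILEGES",
]

def entitlement_tables(mysql_version):
    """Return the entitlement source tables for a MySQL version string."""
    parts = tuple(int(p) for p in mysql_version.split("."))
    # Tuple comparison handles versions of differing lengths, e.g. "4.0".
    return MODERN_TABLES if parts >= (5, 0, 2) else LEGACY_TABLES
```

For example, a 5.0.1 server maps to the mysql.* set, while 5.0.2 maps to the information_schema set.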
DB2 DB Entitlements
The following domains are provided to facilitate uploading and reporting on DB2
DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are
available from the Custom Domain Builder/Custom Domain Query/ Custom Table
Builder selections. As with other predefined entities and reports, these cannot be
modified, but you can clone and then customize your own versions of any of these
domains or reports. To see entitlement reports, log on to the user portal and
go to the DB Entitlements tab.
v DB2 Column-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Database -level Privileges (CONNECT, CREATE, ETC.)
v DB2 Index-level Privilege (CONTROL)
v DB2 Package-level Privileges (on code packages – BIND, EXECUTE, ETC.)
v DB2 Table-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Privilege Summary
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
The following domains are provided to facilitate uploading and reporting on DB2
for i DB Entitlements. Each of the following domains has a single entity (with the
same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on
to the user portal and go to the DB Entitlements tab.
Object privileges granted to PUBLIC (Object type: Schema, Table, View, Package,
Routine, sequence, column, global variable, and XML schema)
Object privileges granted to grantee with GRANT OPTION (Object type: Schema,
Table, View, Package, Routine, sequence, column, global variable, and XML
schema)
All of the object privileges exclude default system schemas from a predefined
Guardium group called "DB2 for i exclude system schemas - entitlement report".
Add any schemas that should be excluded to this group.
SYBASE DB Entitlements
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
If a datasource has a SYBASE database type but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), the data
upload will loop through all SYBASE databases the user has access to.
SYBASE IQ Entitlements
The following custom table definitions are created to upload data (you can
ignore the IDs):
142 | SybaseIQ15 System Authority And Group Granted To User And Group
600 | SybaseIQ15 System Authority And Group Granted To Users And Groups
Grantee
606 | SybaseIQ15 Login Policy For User And Group With Login Option Setting
Descriptions of each follow; some are self-explanatory, while others need a few
extra words:
1. Privileges granted to users only, not including groups or membership in
groups.
8. Table and view privileges granted with grant option to users and groups.
Note: this is the only grant option type allowed in Sybase IQ; routines cannot
be granted with grant option.
10. Login policy assigned to users and groups with login option settings.
How to use GuardAPI to add a datasource to each of the Sybase IQ reports, and
how to execute them.
See the examples below on how to add a datasource to each of the new reports
and then execute each report.
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 Group granted to user and group"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 Login policy for user group with login"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 Object Access By Public"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 Object Privileges By DB User"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 System Authority And Group Granted To User"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 System Authority And Group Granted To User And Group"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 Table View priv granted with grant"
datasourceName="SybaseIQ15 entitlement 6"
grdapi create_datasourceRef_by_name application=CustomTables
objName="SybaseIQ15 User Group With DBA Perms Admin etc"
datasourceName="SybaseIQ15 entitlement 6"
grdapi upload_custom_data tableName=SYBASEIQ15_EXEC_PRIV_ON_PROC_FUNC_TO_PUBLIC
grdapi upload_custom_data tableName=SYBASEIQ15_GROUP_GRANTED_TO_USER_AND_GROUP
grdapi upload_custom_data tableName=SYBASE_OBJ_COL_PRIVS_GRANTED_WITH_GRAN
grdapi upload_custom_data tableName=SYBASEIQ15_OBJECT_ACCESS_BY_PUBLIC
grdapi upload_custom_data tableName=SYBASEIQ15_OBJECT_PRIVS_BY_DB_USER
grdapi upload_custom_data tableName=SYBASEIQ15_OBJECT_PRIVILEGES_BY_GROUP
grdapi upload_custom_data tableName=SYBASEIQ15_SYSTEM_AUTHORITY_AND_GROUP_GRANTED_TO_USER
grdapi upload_custom_data tableName=SYBASEIQ15_SYSTEM_AUTHORITY_AND_GROUP_GRANTED_TO_USER_AND_GRO
grdapi upload_custom_data tableName=SYBASEIQ15_TABLE_VIEWS_PRIV_GRANTED_WITH_GRANT
grdapi upload_custom_data tableName=SYBASEIQ15_USER_GROUP_WITH_DBA_PERMS_ADMIN_ETC
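Rather than typing each create_datasourceRef_by_name call by hand, the repeated pattern above can be generated. The report and datasource names below are copied from the examples; the generator itself is only a convenience sketch (the emitted lines would still be run through the Guardium CLI):

```python
# A few of the report (custom table object) names from the examples above.
REPORTS = [
    "SybaseIQ15 Group granted to user and group",
    "SybaseIQ15 Object Access By Public",
    "SybaseIQ15 Object Privileges By DB User",
]

def datasource_ref_command(report_name, datasource_name):
    """Build one grdapi create_datasourceRef_by_name invocation string."""
    return (
        "grdapi create_datasourceRef_by_name "
        "application=CustomTables "
        'objName="{0}" datasourceName="{1}"'.format(report_name, datasource_name)
    )

commands = [datasource_ref_command(r, "SybaseIQ15 entitlement 6")
            for r in REPORTS]
```

Printing `commands` yields one properly spaced grdapi line per report, avoiding the run-together parameters that are easy to produce when editing by hand.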
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements). The following list (with
comment line heading) details the minimal privileges required, in the database
table (or view of the database table), in order for the entitlement to work.
Because all users already have SELECT privileges on the system catalog, there
is no need to grant privileges to any user. Informix does not readily allow
granting system catalog privileges to users. The grants below would normally be
used, but in this case they are not required.
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
Note: The entitlement domains for MSSQL2005 listed below cover MSSQL2008 as
well.
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
If a datasource has an MSSQL database type but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), the data
upload will loop through all MSSQL databases the user has access to.
Netezza DB Entitlements
Note: There is no DB error text translation for Netezza. The error appears in the
exception description. Users can clone/add a report with the exception description
for Netezza as needed.
v Netezza Obj Privs by DB Username - Object privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Admin Privs by DB Username - Admin privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Group/Role Granted To User - Group (Role) granted to user
v Netezza Obj Privs By Group - Object privileges with or without grant option by
GROUP excluding PUBLIC.
v Netezza Admin Privs By Group - Admin privileges with or without grant option
by GROUP excluding PUBLIC.
v Netezza Admin Privs By DB Username, Group - Admin privileges with or
without grant option by database username, group excluding ADMIN account
and PUBLIC group.
v Netezza Obj Privs Granted - Object privileges granted with or without grant
option to PUBLIC.
v Netezza Admin Privs Granted - Admin privileges granted with or without grant
option to PUBLIC.
v Netezza Global Admin Priv To Users and Groups - Global admin privilege
granted to users and groups excluding ADMIN account.
v Netezza Global Obj Priv To Users and Groups - Global object privilege granted
to users and groups excluding ADMIN account.
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
Teradata DB Entitlements
Note: There is no such role as System or Security Admin in Teradata. Users
must create their own roles. These are some important system privileges that
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
PostgreSQL DB Entitlements
v PostgreSQL Priv On Databases Granted To Public User Role With Or Without
Granted Option. Privilege on databases granted to public, user, and role with
or without granted option. Run this on any database, ideally PostgreSQL.
v PostgreSQL Priv On Language Granted To Public User Role With Or Without
Granted Option. Privilege on Language granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Schema Granted To Public User Role With Or Without
Granted Option. Privilege on Schema granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without
Granted Option. Privilege on Tablespace granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.
Note: As of version 8.3.6, PostgreSQL does not support granting admin option to
PUBLIC. There are only functions, not stored procedures. There is no support
for column grants, only table grants. PUBLIC is a group, not a user, and does
not show up in pg_roles. The only privilege needed to run all these queries is:
GRANT CONNECT ON DATABASE PostgreSQL TO username;
For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.
/*These are required on every database, including POSTGRES (By default these are
already granted to PUBLIC) */
If a datasource has a PostgreSQL database type but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), the data
upload will loop through all PostgreSQL databases the user has access to.
Get the information that you seek faster by accessing over 600 predefined
reports already available from the Guardium application. These predefined
reports can be cloned and customized to the needs of the user.
All parameters and values are displayed on all reports. The parameters and
values can be edited from the Customize button in any report screen.
Use the search function of help to go to the specific report directly. Use quotation
marks around words or phrases to precisely define search terms.
Predefined reports are described in the online help in the following help
sub-topics:
v Predefined admin Reports - available to the admin user from the following tabs:
System View, Daily Monitor, Guardium Monitor, and Tap Monitor.
v Predefined Reports from Accessmgr (see Access Management overview): User
and Role Reports; Allowed Datasources; Allowed Servers; Databases Not
Associated; Datasources Not Associated.
Examples of predefined reports from the Guardium Monitor tab are shown.
Logins to Guardium
All values for this report are from the Guardium Logins entity. For the reporting
period, each row of the report lists the User Name, Login Succeeded (1=
Guardium Applications
For each Guardium application, each row lists a security role that is assigned, or
the word all, indicating that all roles are assigned.
Guardium Roles
This menu pane displays two reports: All Roles - Application Access and All
Roles - User.
All Roles - Application Access: For each role, this report lists the number of
applications to which it is assigned.
To list the applications to which a role is assigned, click the role and drill down to
the Record Details report.
All Roles - User: For each role, this report lists the number of users to which
it is assigned. To list the users to which a role is assigned, click the role
and drill down to the Record Details report.
Guardium Users
Lists each user, date of last activity, and number of roles assigned. For each user,
you can drill down to the Record Details report to see the roles that are assigned to
that user.
Table 125. Guardium Users
Domain                     Based on Query   Main Entity
internal - not available   User role        not available

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -100 MONTH
Period To            <=         NOW
The following default reports are shown on the Guardium Monitor tab for “Units Utilization”:
v Unit Utilization – For each unit the maximum utilization level in the specified
time frame. There is a drill-down that displays the details for a unit for all
periods within the time frame of the report.
v Unit Utilization Distribution: Per unit the percent of periods in the time frame of
the report with utilization levels Low, Medium, and High.
v Utilization Thresholds: This predefined report displays all low and high
threshold values for all Utilization parameters. Parameters: Number of restarts;
Sniffer Memory; Percent Mysql Memory; Free Buffer Space; Analyzer Queue;
Logger Queue; Mysql Disk Usage; System CPU Load; System Var Disk Usage.
v Unit Utilization Daily Summary - Host Name; Period Start; Max Number Of
requests; Max Number Of requests Level; Number of Requests % Increase; Max
System Var Disk Usage; Max System Var Disk Usage Level; System Var Disk
Usage % Increase; Max Mysql Disk Usage; Max Mysql Disk Usage Level; Mysql
Disk Usage % Increase; Max Overall Utilization Level
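The Unit Utilization Distribution report described above amounts to a simple percentage over periods. A minimal sketch of that computation, assuming one level per reporting period (the level names come from the report; everything else is illustrative):

```python
from collections import Counter

def utilization_distribution(period_levels):
    """Given one level ("Low"/"Medium"/"High") per period, return the
    percent of periods at each level, as the distribution report shows."""
    counts = Counter(period_levels)
    total = len(period_levels)
    return {level: 100.0 * counts.get(level, 0) / total
            for level in ("Low", "Medium", "High")}
```

For a unit with four periods at levels Low, Low, High, Medium, this yields 50% Low, 25% Medium, and 25% High.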
Access Map
Activity By Client IP
Aggregation Errors
Aggregation/Archive Log
Alerts Sent
Archive Candidates
Archive number
Archives attempted
Assessment 1
Assessment 10
Assessment 12
Assessment 13
Assessment 2
Assessment 3
Assessment 4
Assessment 5
Assessment 6
Assessment 8
Asset Status
Available Patches
Backup number
Backups attempted
Capture-Capture List
CAS Deployment
CAS Instances
CAS Templates
Catalog View
CIS vulnerability
Classifier Results
Client IP Audit
CLS_RESULT
Command Details
Commands List
CPU Tracker
CPU Usage
CVE compliance
Databases Discovered
DataSource Changes
DataSource Status
Data-Sources
Datasources Associated
DB Server List
DB Server Throughput
DB Server Throughput-Chart
DB2 z/OS Schema Privileges Granted To GRANTEE With GRANT Option V8 Only
DB2 z/OS Schema Privileges Granted To GRANTEE With GRANT Option V9 And
Higher
DB2 z/OS System Privileges Granted To GRANTEE With GRANT Option V10 And
Higher
DDL Commands
DDL Distribution
Discovered Instances
Dropped Requests
DW Dormant Objects
DW Dormant Objects-Fields
EF - Exception
EF - Logoff
EF - Logon
EF - SQL Summary
Exception Count
Exceptions By Client
Exceptions By Server
Exceptions By Type
Exceptions By User
Exceptions Details
Exceptions Distribution
Exceptions Monitor
Field Details
Fields List
Group Members
Guardium Logins
IMS Access
IMS Event
IMS Object
Installed Patches
Location View
Locator
Logging collectors
Lucene (Access)
Lucene (Exception)
Lucene (Violations)
Managed Units
My Restore Log
MYSQL DB Privs 40
No Traffic
Object Audit
Object Details
Objects List
Open Incidents
Open Sessions
Open Sessions By IP
Outstanding Events
Parser Exceptions
Policy Changes
Policy Violations
PostgreSQL Table View Sequence and Function Privs Granted With Grant Option
Purge number
Purges attempted
Replay Summary
Replay-Replay List
Request Rate
Restored Data
Retro_Request
Rogue Connections
Scheduled Jobs
Server IP Audit
Servers Accessed
Servers Associated
Session Count
Session Details
Sessions By Client IP
Sessions By Server IP
Sessions List
Slow queries
SQL Count
SQL Errors
Staging Data
S-TAP Events
S-TAP Status
S-TAP/Z Files
STIG compliance
SybaseIQ15 Login Policy For User And Group With Login Option Setting
SybaseIQ15 System Authority And Group Granted To Users And Groups Grantee
System/Security Activities
TCP Exceptions
Tests Exceptions
Throughput
Throughput-Chart
Unit Utilization
Used By View
User - Role
User Comments
Utilization Thresholds
Values Changed
VSAM Access
VSAM RLM
All parameters and values are displayed on all reports. The parameters and values
can be edited from the Customize button in any report screen.
Use the search function of help to go to the specific report directly. Use quotation
marks around words or phrases to precisely define search terms.
In the Guardium GUI, there is an icon (Ad-hoc process for run once now) that
invokes the GuardAPI call create_ad_hoc_audit_and_run_once.
1 - If this is a new process, one or more email receivers can be created from
the list (if any), with a content type as indicated in the emailContentType
parameter. A user receiver is also created for the logged-in user (the one
invoking the API) if the includeUserReceiver parameter is true.
2 - If this is an existing process, all email receivers are removed and
replaced with the emails from the new list (if any), with the content type as
defined in the emailContentType parameter. If the list is empty, all email
address receivers are removed. If there is already a receiver for the user, it
is NOT removed even if includeUserReceiver is false; however, if the parameter
is true and there is no such receiver, one is added.
The GuardAPI call that creates an ad hoc audit process keeps results for 7 days
(instead of 1 day). Results are deleted after 7 days.
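The receiver rules in points 1 and 2 can be summarized in a sketch. The function below is an illustration of the described behavior for an existing process, not the actual GuardAPI implementation; the receiver representation is assumed:

```python
def reconcile_receivers(existing, new_emails, user, include_user_receiver):
    """Apply the documented replacement rules for an existing process.

    ``existing`` is a list of receiver dicts like {"type": "email"/"user",
    "address": ...}. Email receivers are replaced wholesale by the new
    list; an existing user receiver is never removed, but one is added
    when include_user_receiver is true and none exists.
    """
    # Drop all email receivers, then add the new email list (may be empty).
    result = [r for r in existing if r["type"] != "email"]
    result += [{"type": "email", "address": e} for e in new_emails]
    # The user receiver is only ever added, never removed.
    has_user = any(r["type"] == "user" and r["address"] == user for r in result)
    if include_user_receiver and not has_user:
        result.append({"type": "user", "address": user})
    return result
```

Note how an existing user receiver survives even when include_user_receiver is false, matching the "NOT removed" rule above.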
Note: If data level security at the observed data level has been enabled (see Global
Profile settings), then audit process output will be filtered so users will see only
the information of their databases.
This alert only runs on Central Manager systems. S-TAP Host, S-TAP version,
S-TAP changed, timestamp and count are shown.
Summary of logins to the database using a database user name defined in the
Admin Users group. The report displays the client IP address from which the user
with administrative privileges logged into the database, database user name,
source program, session start date and time, and session total for that record.
Table 129. Admin User Logins
Domain   Based on Query      Main Entity
Access   Admin Users Login   Session

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -1 DAY
Period To            <=         NOW
Aggregation/Archive Log
This report lists Guardium aggregation activity by Activity Type. Each row of the
report contains the Activity Type, Start Time, File Name, Status, Comment,
Guardium Host Name, Records Purged, Period Start, Period End, and count of log
records for the row. You can limit the output by setting the Guardium Host Name
run-time parameter, which is set to % by default (to select all servers). The Records
Purged column contains a count of records purged only when the activity type is
Purge.
Table 130. Aggregation/Archive Log
Domain                      Based on Query            Main Entity
Aggregation/Export/Import   Aggregation/Archive Log   Agg/Archive Log

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -1 WEEK
Period To            <=         NOW
Guardium Host Name   LIKE       %
This menu pane displays two reports: All Roles - Application Access and All
Roles - User.
All Roles - User: For each role, this report lists the number of users to which
it is assigned. To list the users to which a role is assigned, click the role
and drill down to the Record Details report.
Table 132. All Roles - User
Domain                     Based on Query   Main Entity
internal - not available   Role - User      not available

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -100 MONTH
Period To            <=         NOW
Note: This report presents metadata and as such is not filtered through the Data
Level Security mechanism. This metadata could include database related
information such as Oracle SIDs.
Table 133. Application Objects Summary
Domain                Based on Query                Main Entity
Application Objects   Application Objects Summary   Application Objects

Run-Time Parameter   Operator   Default Value
ObjectNameLike       %          %
ObjectTypeNameLike   %          %
This report shows a detailed activity log for all tasks, including start and
end times. This report is available to admin users via the Guardium Monitor
tab. Audit tasks show start and end times; however, the start and end times of
Security Assessments and Classifications (which go to a queue) are the same.
The audit process has been expanded to support sign-off of specific rows,
beyond a user signing off on the entire audit process. This report displays a
list of what has been signed off and the status of specific rows.
Use this Audit Process Log to stop audit processes. Tasks can be stopped only
if they have not yet run or are currently running. Any further tasks that have
not started will not execute, and partial results will not be delivered. If
tasks are complete, stopping the audit process will not stop the sending of
the results. Stopping the audit process is done through a GuardAPI command
(invoke api) from the Audit Process Log report. Non-admin users see only the
lines belonging to them (without all the details, just the tasks). Admin users
see all the details and can stop anyone's runs; other users can stop only
their own runs.
Note:
Stopping the audit process does not cancel queries that are running against a
remote source, nor online reports using a remote source.
Stopping is not supported for Privacy Sets and External Feeds. This means that
if the Privacy Set task or the External Feed has started, it will finish even
if the process is stopped (as opposed to a query, which will be killed).
Login Name
Run ID
Timestamp
Audit Process ID
Audit Task ID
Event Type
Detail
Available Patches
Displays a list of available patches. There are no run-time parameters, and this
reporting domain is system-only.
Provides an extensive set of buffer usage statistics. See the description of the
Sniffer Buffer Usage entity for a description of the fields listed on this report.
Table 135. Buffer Usage Monitor
Domain         Based on Query       Main Entity
Buffer Usage   Buff Usage Monitor   Sniffer Buffer Usage Monitor

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -1 DAY
Period To            <=         NOW
CAS Deployment
This CAS report details the database type, OS name, host name, and OS type.
Table 136. CAS Deployment
Domain   Based on Query   Main Entity
CAS      CAS Deployment   N/A

Run-Time Parameter   Operator   Default Value
DB Type              Like       %
OS_Name              Like       %
Hostname             Like       %
OS_Type              Like       %
Changes (CAS)
CAS Change Details
For each monitored item, the changes are listed in order by owner.
This report lists the data saved for each change detected. This report is sorted by
host name, and then by the most recent modification time.
Table 138. CAS Saved Data
Domain        Based on Query   Main Entity
CAS Changes   CAS Saved Data   Saved Data

Run-Time Parameter   Operator   Default Value
Host_Name            Like       %
Monitored_Item       Like       %
Saved_Data_Id        Like       %
Configuration (CAS)
CAS Instances
This report lists CAS instance definitions (a CAS instance applies a template set to
a specific CAS host). The default sort order for this report is non-standard. The sort
keys are, from major to minor: Host Name (ascending), Instance (ascending) and
Last Status Change (descending).
Table 139. CAS Instances
Domain       Based on Query   Main Entity
CAS Config   CAS Instances    Monitored Item Details

Run-Time Parameter   Operator   Default Value
Host_Name            Like       %
OS_Type              Like       %
DB_Type              Like       %
Instance             Like       %
Connections Quarantined
CPU Usage
By default, displays the CPU usage for the last two hours. This graphical report is
intended to display recent activity only. If you alter the From and To run-time
parameters to include a larger timeframe, you may receive a message indicating
that there is too much data. Use a tabular report to display a larger time period.
Table 144. CPU Usage
Domain           Based on Query   Main Entity
Sniffer Buffer   CPU Usage        Sniffer Buffer Usage Monitor

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -2 HOUR
Period To            <=         NOW
Server type and client sources for each database type monitored.
Table 145. Databases by Type
Domain   Based on Query          Main Entity
Access   Number of db per type   Client/Server

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -1 DAY
Period To            <=         NOW
Databases Discovered
For the reporting period, for each Discovered Port entity where the DB Type
attribute value is NOT LIKE Unknown, this report lists the Probe Timestamp,
Server IP, Server Host Name, DB Type, Port, Port Type, and count of Discovered
Ports for the row.
The mapping between database users (invokers of SQL that caused a violation)
and email addresses for real-time alerts.
Table 147. DB Users Mapping List
Domain           Based on Query          Main Entity
Auto-discovery   DB Users Mapping List   Guardium Users Login

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -1 DAY
Period To            <=         NOW
You can restrict the output of this report using the Data Source Name run-time
parameter, which by default is set to “%” to select all datasources.
Table 149. Data Sources
Domain                     Based on Query   Main Entity
internal - not available   Data-Sources     not available

Run-Time Parameter   Operator   Default Value
Data Source Name     LIKE       %
Period From          >=         NOW -1 DAY
Period To            <=         NOW
Discovered Instances
Timestamp, Host, Protocol, Port Min, Port Max, KTAP DB Port, Instance Name,
Client, Exclude Client, Proc name, Named Pipe, DB Instance Dir, DB2 Shared Mem
Adjust, DB2 Shared Mem Client Position, DB2 Shared Mem Size.
Table 150. Discovered Instances
Domain      Based on Query         Main Entity
Exception   Discovered Instances   Exception

Run-Time Parameter   Operator   Default Value
Period From          >=         NOW -1 DAY
Period To            <=         NOW
The Data Mart extraction program runs in batch according to the specified
schedule. It summarizes the data to hours, days, weeks, or months according to
the granularity requested, and then saves the results in a new table in the
Guardium analytic database.
The data is then accessible to users via the standard Reports and Audit Process
utilities, like any other traditional domain/entity. The Data Mart extraction
data is available under the DM domain, and the entity name is set according to the
The extraction log consists of the following: Data Mart Name, Collector IP,
Server IP, from-time, to-time, ID, run started, run ended, number of records,
status, and error code.
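The granularity-based summarization described above can be pictured with a small sketch (illustrative Python only; the bucketing behavior and function names are assumptions, not the extraction program's actual implementation): each event timestamp is truncated to the start of its hour, day, week, or month bucket, and events are counted per bucket.

```python
from datetime import datetime, timedelta

def bucket_start(ts: datetime, granularity: str) -> datetime:
    """Truncate a timestamp to the start of its summarization bucket."""
    if granularity == "hour":
        return ts.replace(minute=0, second=0, microsecond=0)
    if granularity == "day":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    if granularity == "week":
        day = ts.replace(hour=0, minute=0, second=0, microsecond=0)
        return day - timedelta(days=day.weekday())  # back to Monday
    if granularity == "month":
        return ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    raise ValueError(f"unknown granularity: {granularity}")

def summarize(timestamps, granularity):
    """Count events per bucket, the way a summarization pass might."""
    counts = {}
    for ts in timestamps:
        bucket = bucket_start(ts, granularity)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts
```

With hourly granularity, two events at 10:05 and 10:40 fall into the same 10:00 bucket, while an 11:02 event starts a new one.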
This report lists Guardium export/import activity by Activity Type. Each row of
the report contains the Activity Type, Start Time, File Name, Status, Comment, and
count of log records for the row.
Table 151. Definitions Export/Import Log
Domain: Aggregation/Archive. Based on Query: Export-Import Definitions Log. Main Entity: Agg/Archive Log.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Dropped Requests
Tracks requests dropped by an inspection engine (Exception Description =
Dropped database request). In extremely rare, high-volume situations, some
requests may be lost. When this happens, the sessions from which the requests
were lost are listed in the Dropped Requests report.
Table 152. Dropped Requests
Domain: Exceptions. Based on Query: Dropped Requests. Main Entity: Exception.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Exception Count
For the reporting period, the total number of exceptions logged.
Table 153. Exception Count
Domain: Exceptions. Based on Query: Exception Count. Main Entity: Exception.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Enterprise S-TAP Association History reports how long each S-TAP has reported
to a specific Guardium system in a load-balancer environment.
This data will be transferred via CSV files. See External Data Correlation
(Bidirectional Interface) for further information.
Table 154. Export Sensitive Data to Discovery
Domain: Internal - not available. Based on Query: Export Sensitive Data to Discovery. Main Entity: Classification Process Results.
Run-Time Parameters: Period From >= NOW -3 HOURS; Period To <= NOW; Rule Description LIKE; Schema LIKE.
Assessments and Classifications run in their own separate process called the job
queue. Jobs are queued and have their status maintained while a listener
periodically polls the queue looking for waiting jobs to run.
Stopping
When you right-click a running job for drill-down, there is an option to stop
and cancel the running job. The job cannot be restarted at this point.
Halting
Running jobs are monitored to reduce the number of hung jobs that might cause
the job queue to become overloaded. If a job is inactive for 30 minutes, the
listener is terminated and restarted, effectively stopping the operation of the
job. Before the listener is restarted, a process called the cleaner runs and
sets the job's status from RUNNING to HALTED; then the listener is restarted. A
status of HALTED means the job was not able to run to completion.
Resubmitting
Sometimes the listener is restarted for reasons other than a job hanging, for
example when the machine is rebooted. When the cleaner halts the running jobs,
it checks whether each job has responded in the past 8 minutes. If it has, the
job is copied and that copy is resubmitted onto the job queue. The original
halted job still displays on the queue, with whatever results it was able to
process still available.
Monitoring
The mechanism by which jobs maintain their active status is by touching the
timestamp on the job queue record. It is important to note that the job queue
record is used for the entire job. Each individual classifier rule or assessment test
interacts with the timestamp for its parent process, and they do not have
individual timestamps that are monitored.
Assessments touch the timestamp after each test in the assessment is evaluated.
Most assessment tests run in a few seconds or less.
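The halt-and-resubmit behavior described above can be sketched as follows (a minimal illustration; the job-record fields, the function name, and any statuses beyond RUNNING, HALTED, and WAITING are assumptions, not the product's internal schema):

```python
from datetime import datetime, timedelta

HANG_LIMIT = timedelta(minutes=30)       # inactivity before the listener recycles
RESUBMIT_WINDOW = timedelta(minutes=8)   # recently active jobs get requeued

def clean(queue, now):
    """Halt RUNNING jobs and resubmit copies of recently active ones.

    Each job is modeled as a dict with 'name', 'status', and 'last_touch'
    (the timestamp the job touches after each test or rule completes).
    """
    resubmitted = []
    for job in list(queue):
        if job["status"] != "RUNNING":
            continue
        job["status"] = "HALTED"  # original keeps its partial results
        if now - job["last_touch"] <= RESUBMIT_WINDOW:
            copy = {"name": job["name"], "status": "WAITING", "last_touch": now}
            queue.append(copy)
            resubmitted.append(copy)
    return resubmitted
```

A job touched 5 minutes ago is halted and a WAITING copy is requeued; a job silent for 40 minutes is simply halted.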
Observed Tests
Displays a timestamp and description of all GuardAPI exceptions. These are
exceptions where the Exception Type ID is GUARD_API_EXCEPTION.
Table 160. Guardium API Exceptions
Domain: Exception. Based on Query: Guardium API Exceptions. Main Entity: Exception.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Guardium Applications
For each Guardium application, each row lists a security role assigned, or the word
all, indicating that all roles are assigned.
Table 161. Guardium Applications
Domain: internal - not available. Based on Query: All Guardium Applications. Main Entity: not available.
Run-Time Parameters: Period From >= NOW -100 MONTH; Period To <= NOW.
For the reporting period, each row of the report lists a group member. The
columns contain the following information: Group Description, Group Type, Group
Subtype, Timestamp (from the Group Member entity), Group Member, and a count
for the row.
You can restrict the output of this report using the run-time parameters, both
of which use the LIKE operator with a default value of %, which selects all
values.
Table 162. Guardium Group Details
Domain: Group. Based on Query: Guardium Group Details. Main Entity: Group Member.
Run-Time Parameters: Group Description LIKE %; Group Type LIKE %; Period From >= NOW -100 MONTH; Period To <= NOW.
Guardium Users
Lists each user, date of last activity, and number of roles assigned. For each user,
you can drill down to the Record Details report to see the roles assigned to that
user.
Table 163. Guardium Users
Domain: internal - not available. Based on Query: User Role. Main Entity: not available.
Run-Time Parameters: Period From >= NOW -100 MONTH; Period To <= NOW.
This report lists CAS host events. The default sort order for this report is
non-standard. The sort keys are, from major to minor: Host Name (ascending),
Instance and Event Time (descending).
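A mixed ascending/descending sort like this can be reproduced outside the product, for example in Python (an illustrative sketch; the row field names are assumptions). Because Python's sort is stable, sorting by the minor keys first and then by the major key yields the combined order:

```python
def sort_host_history(rows):
    """Sort rows by Host Name ascending, then Instance and Event Time
    descending. Python's sorts are stable, so the minor-key ordering
    survives the final major-key pass."""
    rows = sorted(rows, key=lambda r: (r["instance"], r["event_time"]),
                  reverse=True)              # minor keys, descending
    rows.sort(key=lambda r: r["host_name"])  # major key, ascending
    return rows
```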
Table 164. CAS Host History
Domain: CAS Host History. Based on Query: CAS Host History. Main Entity: Host Event.
Run-Time Parameters: Host_Name LIKE %; OS_Type LIKE %; Event_Type LIKE %.
Lists all inactive S-TAPs defined on the system. It has a single run-time
parameter, Period From, which is set to NOW -1 HOUR by default. Use this
parameter to control how you want to define inactive. This report contains the
same columns of data as the S-TAP Status report, with the addition of a count
for each row of the report.
Table 166. Inactive S-TAPs Since
Domain: internal - not available. Based on Query: Inactive S-TAPs Since. Main Entity: not available.
Run-Time Parameters: Period From >= NOW -1 HOUR.
Installed Patches
Displays a list of installed patches. There are no run-time parameters, and this
reporting domain is system-only.
Table 167. Installed Patches
Domain: internal - not available. Based on Query: Installed Patches. Main Entity: not available.
Run-Time Parameters: none.
Logins to Guardium
All values for this report are from the Guardium Logins entity. For the reporting
period, each row of the report lists the User Name, Login Succeeded (1=
Successful, 0=Failed), Login Date And Time, Logout Date And Time (which will be
blank if the user has not yet logged out), Host Name, Remote Address (of the user)
and count of logins for the row.
For the reporting period, the total number of logged real time alerts, listed by rule
description.
Table 169. Logged R/T Alerts
Domain: Policy Violations. Based on Query: Logged R/T Alerts. Main Entity: Policy Rule Violation.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
For the reporting period, the total number of threshold alerts logged.
Table 170. Logged Threshold Alerts
Domain: Alert. Based on Query: Logged Alerts. Main Entity: Threshold Alert Details.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Enterprise report on a Central Manager that shows which managed units are up.
Use this report in a Statistical Alert to send an email to an ADMIN anytime a
managed unit is down.
Table 172. Managed Units (Central Manager)
Domain: internal - not available. Based on Query: Managed Units. Main Entity: Managed Units.
Run-Time Parameters: Host Name LIKE %; Remote Data Source (drop-down); Show Aliases (radio buttons: On, Off, Default).
This report lists all the entities and attributes in Guardium reports and was
created to simplify the linkage between Guardium attributes and GuardAPI calls.
Use this report to also invoke create_constant_attribute,
create_api_parameter_mapping, delete_api_parameter_mapping, or
list_param_mapping_for_function.
Table 176. Query Entities and Attributes
Domain: Any of Guardium reporting domains. Based on Query: Any of the entities for the reporting domain. Main Entity: Any of the attributes within the entity.
Run-Time Parameters: Report Name LIKE (default not applicable).
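As a hypothetical illustration of the attribute-to-parameter linkage this report supports, the sketch below assembles a grdapi invocation string from a report row. The GuardAPI function names come from the list above, but every attribute name and parameter name in the example is an invented placeholder, not the documented signature of any GuardAPI function.

```python
def build_grdapi_call(function_name, row, param_mapping):
    """Assemble a grdapi command line from one report row.

    param_mapping maps report attribute names to API parameter names --
    the kind of linkage that create_api_parameter_mapping records.
    All attribute and parameter names used here are placeholders.
    """
    args = " ".join(
        f'{api_param}="{row[attr]}"'
        for attr, api_param in sorted(param_mapping.items())
    )
    return f"grdapi {function_name} {args}"
```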
Replay Statistics
This report shows Replay Statistics for Execution Start/End Date; Configuration
Name; Schedule Setup Name; Job Status; Statistic Description; Session ID;
Successful Queries; Failed Queries; Total Queries; Type; Active/Waiting/Completed
Tasks.
Table 177. Replay Statistics
Domain: Replay Results Tracking. Based on Query: Replay Statistics. Main Entity: Replay Result Statistics.
Replay Summary
For the reporting period, a measure of which queries failed or succeeded. The
Query Failed or Query Succeeded check box must be selected in the Replay
Configuration.
Table 178. Replay Summary
Domain: Replay Results. Based on Query: Replay Summary. Main Entity: Replay Results.
Run-Time Parameters: Query from date >= NOW -1 DAY; Query to date <= NOW; Results status % (N/A); Schedule setup name % (N/A).
Restored Data
Table 179. Restored Data
Domain: Restored Data. Based on Query: Restored Data. Main Entity: Restored Data.
Run-Time Parameters: Period From >= NOW -10 DAY; Period To <= NOW +10 DAY.
Rogue Connections
This report is available only when the Hunter option is enabled on Unix servers.
The Hunter option is only used when the Tee monitoring method is used. This
report lists all local processes that have circumvented S-TAP to connect to the
database.
Table 181. Rogue Connections
Domain: Rogue Connections. Based on Query: Rogue Connections. Main Entity: Rogue Connections.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Session Count
For the reporting period, the total number of different sessions open.
Table 184. Session Count
Domain: Access. Based on Query: Session Count. Main Entity: Session.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
SQL Count
For the reporting period, the total number of different SQL commands issued.
Table 185. SQL Count
Domain: Access. Based on Query: SQL Count. Main Entity: SQL.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
This report is displayed only when an inspection engine is added or changed. It
lists S-TAP configuration changes; each inspection engine change is displayed
on a separate row. Each row lists the S-TAP Host, DB Server Type, DB Port From,
DB Port To, DB Client IP, DB Client Mask, and Timestamp for the change.
Table 186. S-TAP Configuration Change History
Domain: internal - not available. Based on Query: Configuration Change History. Main Entity: not available.
Run-Time Parameters: none.
S-TAP Status
Displays status information about each inspection engine defined on each S-TAP
Host. This report has no From and To date parameters, since it is reporting current
status. Each row of the report lists the S-TAP Host, DB Server Type, Status, Last
Response, Primary Host Name, Yes/No indicators for the following attributes:
KTAP Installed, TEE Installed, Shared Memory Driver Installed, DB2 Shared
Memory Driver Installed, LHMON Driver Installed, Named Pipes Driver Installed,
and App Server Installed. In addition, it lists the Hunter DBS.
Note: The DB2 shared memory driver has been superseded by the DB2 Tap
feature.
Table 187. S-TAP Status
Domain: internal - not available. Based on Query: S-TAP Status. Main Entity: not available.
Run-Time Parameters: none.
S-TAP Verification
Lists all results of S-TAP verifications.
Table 188. S-TAP Verification
Domain: internal - not available. Based on Query: S-TAP Verification. Main Entity: S-TAP Verification Header.
Run-Time Parameters: Query from date >= NOW -3 HOUR; Query to date <= NOW.
S-TAP Events
Use this report for information on the S-TAP (from SOFTWARE_TAP_EVENT table
in internal database).
Table 189. S-TAP Events
Domain: internal - not available. Based on Query: S-TAP Events. Main Entity: not available.
Run-Time Parameters: none.
S-TAP info is a predefined custom domain that contains the S-TAP Info entity
and, like the entitlement domain, is not modifiable.
Based on this custom table and custom domain, there are two reports:
Enterprise S-TAP view shows, from the Central Manager, information on an active
S-TAP on a collector or managed unit. (If there are duplicates for the same
S-TAP engine, one active and one inactive, the report uses only the active
one.)
Detailed Enterprise S-TAP view shows, from the Central Manager, information on
all active and inactive S-TAPs on all collectors and managed units.
If the Enterprise S-TAP view and Detailed Enterprise S-TAP view look the same,
it is because only one S-TAP on one managed unit is being displayed. The
Detailed Enterprise S-TAP view would look different if more S-TAPs and more
managed units were involved.
These two reports can be chosen from the TAP Monitor tab of a standalone system,
but they will display no information.
Alert: See Viewing an Audit Process Definition for the Inspection Engines and
S-TAP alert, which alerts on any activity related to inspection engine and
S-TAP configuration.
A predefined query and report are available, but they are not added to any
panels.
The query/report displays All S-TAP Hosts and the last response (heartbeat) sent
by each host.
The purpose of this query is to make it possible to define an alert that
triggers when an S-TAP on a host has not responded for a given period of time.
The input parameters are Last Response From and Last Response To.
For each S-TAP reporting to this Guardium appliance, this report identifies the
S-TAP Host, S-TAP Version, DB Server Type, Status (active or inactive), Last
Response Received (date and time), Primary Host Name, and true/false indicators
for: KTAP, TEE, MS SQL Server Shared Memory, DB2 Shared Memory, Local TCP
monitoring, Named Pipes Usage, and Encryption.
This report has no run-time parameters, and is based on a system-only query that
cannot be modified.
STAP/Z Files
STAP/Z provides files with raw data collected from DB2 (on z/OS) containing
DB2 events, SQL statements, and so on. This report lists an Interface ID, UA
file name (Un-normalized Audit Event), UT file name (Un-normalized Audit Event
text), UH file name (Un-normalized Audit Event host variables), File Status,
Total Number of Events Processed, Number of Events Failed, and Timestamp.
This report has two run-time parameters, FileName LIKE % and FileStatus LIKE %.
It is based on a system-only query that cannot be modified.
TCP Exceptions
For the reporting period, for each exception where the Exception Description of the
Exception Type entity is TCP/IP Protocol Exception, a row of this report lists the
following attribute values from the Exception entity: Exception Timestamp,
Exception Description, Source Address, Destination Address, Source Port,
Destination Port, and count of Exceptions for that row.
Table 190. TCP Exceptions
Domain: Exceptions. Based on Query: TCP Exceptions. Main Entity: Exception.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW.
Templates (CAS)
CAS Templates
This report lists CAS templates. By default, all template items are listed.
Table 191. CAS Templates
Domain: CAS Templates. Based on Query: CAS Templates. Main Entity: Template.
Tests Exceptions
Throughput
For each Access Period in the reporting period, each row lists the Period Start time,
the count of Server IP addresses, and the total number of accesses (Access Period
entities).
You can restrict the output of this report using the Server IP run time parameter,
which by default is set to % to select all IP addresses.
Table 193. Throughput
Domain: internal - not available. Based on Query: DB Server Throughput. Main Entity: not available.
Run-Time Parameters: Period From >= NOW -1 DAY; Period To <= NOW; Server IP LIKE %.
Throughput (graphical)
This report is a Distributed Label Line chart version of the tabular Throughput
report. It plots the total number of accesses over the reporting period, one data
point per Period Start time.
You can restrict the output of this report using the Server IP run time parameter,
which by default is set to % to select all IP addresses.
The User Activity Audit Trail menu selection displays two reports. In addition,
from each of those reports, a third report can be produced. See:
v User Activity Audit Trail
v System/Security Activities
v Detailed Guardium User Activity (Drill-Down)
For the reporting period, for each User Name seen on a Guardium User Activity
Audit entity, each row displays the Guardium User Name, an Activity Type
Description (from the Guardium Activity Types entity), a Count of Modified Entity
values, the Host Name, and the total number of Guardium Activity Audits entities
for that row.
From any row of this report, the Detailed Guardium User Activity report is
available as a drill-down report.
Table 195. User Activity Audit Trail
Domain: Guardium Activity. Based on Query: User Activity Audit Trail. Main Entity: Guardium User Activity Audit.
Run-Time Parameters: Host Name LIKE %; Period From >= NOW -1 DAY; Period To <= NOW.
System/Security Activities
For the reporting period, for each User Name seen on a Guardium User Activity
Audit entity, each row displays the Guardium User Name, an Activity Type
Description (from the Guardium Activity Types entity), a Count of Modified Entity
values, the Host Name, and the total number of Guardium Activity Audits entities
for that row.
From any row of this report, the Detailed Guardium User Activity report is
available as a drill-down report.
This report is not available from the menu, but can be opened for any row of the
User Activity Audit Trail report, or the System/Security Activities report. For the
selected row of the report, based on the User Name and Activity Type Description,
this report lists the following attribute values, all of which are from the Guardium
User Activity Audit entity, except for the Activity Type Description, which is from
the Guardium Activity Types entity: User Name, Timestamp, Modified Entity,
Object Description, All Values, and a count of Guardium User Activity Audits
entities for the row.
Table 197. Detailed Guardium User Activity (Drill-Down)
Domain: Guardium Activity. Based on Query: Detailed Guardium User Activity. Main Entity: Guardium User Activity Audit.
Run-Time Parameters: Activity Type Description (value from calling report); Period From >= NOW -1 DAY; Period To <= NOW; User Name (value from calling report).
Warning: Users should be aware that activities of the root user, and other
sensitive system accounts, are logged. Drilling down into the activity of these
users may show sensitive commands and passwords that were entered on the
command line. Therefore, whenever possible, users should not enter sensitive
command-line information that they do not want to appear on this drill-down
report.
Displays for each Guardium audit process: a description, login name, action
required (review or approve), status, user who has signed or reviewed, and
execution date of the specified task.
Table 198. User To-Do Lists
Domain: internal - not available. Based on Query: Users To-do List. Main Entity: not available.
Sharable user comments are all comments except for inspection engine, installed
policy, and audit process results comments. For each sharable user comment, this
report lists the date created, the type of item to which it applies (an alert, for
example), the user who created the comment, and the contents of the comment.
Note: Comments defined for inspection engines, installed policies, or audit process
results can be viewed from the individual definitions, but they cannot be displayed
on a report.
Table 199. User Comments - Sharable
Domain: Comments. Based on Query: Comments Defined. Main Entity: Comments.
Run-Time Parameters: Period From >= NOW -2 MONTH; Period To <= NOW.
The following default reports are provided on the Guardium Monitor tab, “Units
Utilization”:
v Unit Utilization: For each unit, the maximum utilization level in the given
timeframe. A drill-down displays the details for a unit for all periods
within the timeframe of the report.
v Unit Utilization Distribution: Per unit, the percentage of periods in the
timeframe of the report with utilization levels Low, Medium, and High.
v Utilization Thresholds: This predefined report displays all low and high
threshold values for all utilization parameters. Parameters: Number of
restarts; Sniffer Memory; Percent Mysql Memory; Free Buffer Space; Analyzer
Queue; Logger Queue; Mysql Disk Usage; System CPU Load; System Var Disk Usage.
v Unit Utilization Daily Summary: Host Name; Period Start; Max Number Of
Requests; Max Number Of Requests Level; Number Of Requests % Increase; Max
System Var Disk Usage; Max System Var Disk Usage Level; System Var Disk
Usage % Increase; Max Mysql Disk Usage; Max Mysql Disk Usage Level; Mysql
Disk Usage % Increase; Max Overall Utilization Level.
Table 200. Unit Utilization Levels
Domain: Internal - not available. Based on Query: Unit Utilization Distribution. Main Entity: Unit Utilization Levels.
Run-Time Parameters: none.
Values Changed
For the reporting period, this report provides detailed information about
monitored value changes. All attribute values displayed are from the Monitor
Values entity. The query this report is based upon has a non-standard sorting
sequence, as follows:
v Server IP
v DB Type
v Audit Timestamp
v Audit Table Name
v Audit Owner
The query this report is based upon has a number of run-time parameters, all of
which use the LIKE operator and default to the value %, meaning all values will
be selected.
For each monitored value selected, a row of the report lists the Timestamp, Server
IP, DB Type, Service Name, Database Name, Audit Login Name, Audit Timestamp,
Audit Table Name, Audit Owner, Audit Action, Audit Old Value, Audit New
Value, SQL Text, Triggered ID, and a count of Change Columns entities for that
row.
Table 201. Values Changed
Domain: Value Changed. Based on Query: Values Changed. Main Entity: Changed Columns.
Run-Time Parameters: Audit Action LIKE %; Audit Login Name LIKE %; Audit Owner LIKE %; Audit Table Name LIKE %; DB Type LIKE %; Period From >= NOW -1 DAY; Period To <= NOW; Server IP LIKE %.
Note: If data level security at the observed data level has been enabled (see
Global Profile settings), then audit process output is filtered so that users
see only the information for their own databases.
Displays the number of servers and clients for each monitored database type
(default time period is the current day).
Request Rate
By default, displays the request rate for the last two hours. This graphical report is
intended to display recent activity only. If you alter the From and To run-time
parameters to include a larger timeframe, you may receive a message indicating
that there is too much data. (Use a tabular report to display a larger time period.)
For each server type (DB2, Informix, etc.), a row of this report displays the total
number of sessions that were open during the reporting period (by default, the last
three hours).
For each SQL Verb from the DML Commands group that references an Object
Name in the Sensitive Objects group, this report displays a row for each Access
Period, Client IP, and Source Program, with a total count of objects referenced in
that row. Although the report title contains the word Executions, there is no
guarantee that all commands reported were actually executed.
For each object in the Sensitive Objects group, displays a row for each Client IP
and Source Program that referenced the object during the reporting period, and a
count of object references.
Activity By Client IP
Database Servers
For each Server IP address accessed during the reporting period, a row of the
report displays the Server Type, Database Name, Service Name, a count of source
programs accessing that server, and the total number of sessions for that row.
Two VSAM predefined reports: VSAM Detailed Access and VSAM RLM.
For every policy rule violation logged during the reporting period, this report
provides the Timestamp from the Policy Rule Violation entity, Access Rule
Description, Client IP, Server IP, DB User Name, Full SQL String from the Policy
Rule Violation entity, Severity Description, and a count of violations for that row.
You cannot access the query that this report is based upon (Policy Violations List
with Severity), but you can clone the report.
Exceptions Distribution
Each wedge of the pie chart represents the proportion of exceptions for each
Exception Description attribute value (from the Exception Type entity) that was
logged during the reporting period.
As with any chart, you can drill down on the pie chart to display the tabular
version of the query on which the chart is based. Several exception reports are
accessible from this tabular report (or from drill-downs of it) but are not
included on any menu.
Exceptions Monitor
A count of exceptions logged during the reporting period. One datapoint is created
each time that you refresh the report on your portal.
For each failed login attempt during the reporting period, lists the User Name,
Source Address, Destination Address, and Database Protocol Type for the server
the user was attempting to log into.
SQL Errors
Exception Count
The total number of exceptions (Exception entities) logged during the reporting
period.
Lists all logins by database users who are members of the Terminated DB User
group. Each row lists a DB User Name, Client IP, Server IP, Server Type, Source
Program, last login time (the maximum value of the Session Start attribute), and
the count of sessions for the row.
Last login recorded during the reporting period for each member of the Active
Users group. All members of the group will be listed, even if there were no logins
during the reporting period. This is unlike most other reports based on members
of a group. In the “normal” case, if no activity is found for a member, that member
is not listed.
Each row lists a DB User Name, Client IP, Server IP, Server Type, Source Program,
last login time (the maximum value of the Session Start attribute), and the count of
sessions for the row.
Listing of members in the Active Users group who have had no activity during the
reporting period. This report will be empty if all users have had activity during the
reporting period.
The Active Users group is pre-defined, but empty at installation time. It must be
populated by someone at your location. The query that this report is based upon
(Active Users with no Activity) cannot be accessed from any query builder.
Lists failed login attempts by database users who are members of the Terminated
DB User group. This report will be empty if there were no failed login attempts by
anyone in this group during the reporting period.
Excessive Errors per period: Displays the number of errors per period; for
example, more than N errors in 60 minutes for the same Client IP address,
Server IP address, Server Type, and database user name.
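The per-period error counting can be sketched as follows (illustrative Python only; fixed hourly buckets approximate the report's 60-minute periods, and the tuple layout is an assumption):

```python
from collections import defaultdict
from datetime import datetime

def excessive_errors(errors, n):
    """Count errors per hour for each (client_ip, server_ip, server_type,
    db_user) combination and return the groups that exceed n in any hour.

    Each error is (timestamp, client_ip, server_ip, server_type, db_user).
    """
    counts = defaultdict(int)
    for ts, client, server, stype, user in errors:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        counts[(hour, client, server, stype, user)] += 1
    return {key: c for key, c in counts.items() if c > n}
```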
Users inactive since: Shows the User and Last Session Start for all users that
have access records and whose latest Session Start time is earlier than 90 days
ago. (An inactive user is missed if they never once logged in, or if all of
their old logins have been purged.)
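The inactivity test can be sketched as follows (an illustrative Python fragment; the session layout is an assumption):

```python
from datetime import datetime, timedelta

def inactive_users(sessions, now, days=90):
    """Return users whose most recent Session Start is older than `days`.

    sessions: iterable of (user, session_start) pairs. As the report
    notes, a user with no access records at all is never listed.
    """
    last = {}
    for user, start in sessions:
        if user not in last or start > last[user]:
            last[user] = start
    cutoff = now - timedelta(days=days)
    return {u: t for u, t in last.items() if t < cutoff}
```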
For each DB User Name included in the Admin Users group, who had one or
more sessions during the reporting period, each row lists the Client IP, DB User
Name, Source Program, Session Start time, and Count of Sessions for that row.
For each DB User Name included in the DB Predefined Users group, who had one
or more sessions during the reporting period, each row lists the DB User Name,
Client IP, Server IP, Source Program, Database Name, Service Name, and Count of
Sessions for that row.
For each SQL Verb included in the Administrative Commands group that was seen
during the reporting period, this report lists the SQL Verb, Depth, Object Name,
and Client IP, and a count of objects referenced.
For each Object Name included in the Administration Objects group that was seen
during the reporting period, each row lists the Object Name, Client IP, Server IP,
Service Name, Database Name, Source Program, DB User Name, and Count of
Objects for that row.
For each SQL Verb from the DML Commands group that references an Object
Name in the Administration Objects group, this report displays a row for the DB
User Name, Client IP, Server IP, Server Type, Service Name, Database Name, SQL
Verb, Object Name, and Count of Objects referenced in the row.
For each SQL Verb from the BACKUP Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
For each SQL Verb from the REVOKE Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
For each SQL Verb from the KILL Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
For each SQL Verb from the DBCC Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, SQL statement, and Count of Objects referenced
in the row.
For each SQL Verb from the GRANT Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
Privileged Account Utilization: Shows the User, Verb, and the count of periods
within which the Verb was performed by a User in the Admin Users group.
Privileged User Access of Business Objects: Shows the User, Verb, and Object
where the User is in the Admin Users group and the Verb was performed on an
Object that is in a selected group of Business Objects.
For each SQL Verb from the CREATE Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
DDL Commands
For each SQL Verb from the DDL Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Server Type, SQL Verb, and
Count of Commands referenced in the row.
All ALTER commands issued. The report displays the client IP from which the
DDL was requested, server IP address, service name, database user name, source
program, database name, object name, and main SQL verb (a specific DDL
command) for each combination of client IP/DDL command listed on that specific
line.
For each SQL Verb from the ALTER Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
DDL Distribution
This bar graph displays the distribution of commands seen from the DDL
Commands group during the reporting period. For each command seen, a single
bar represents the total number of objects affected.
For each SQL Verb from the DROP Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.
For each DB User Name for which session data was collected during the reporting
period, each line of this report displays the count of Client IP addresses from
which the user logged in, and a total number of sessions.
This report displays reporting period activity from a single Client IP address,
which is specified as a run time parameter. Each row of the report displays the
Client IP, Source Program, SQL Verb, Depth (of sentence within the SQL
command), an Object Name, and a count of times that object was referenced for
that row.
Sessions List
This report lists all database sessions for the reporting period. For each session, the
report displays the session (entity) Timestamp, the Session Start (timestamp), and
related session attributes.
As with most reports, drill-down reports are available. There are a number of
session reports that are accessible from this report, but are not included on any
menu. This includes the following reports, with the run time parameters for those
reports set by using values from the selected row of the report:
Table 206. Sessions List
Report | Run-time Parameters
Sessions by Client IP | Server IP, Server Type
Sessions by Server IP | Server Type
Sessions by Source Program | Server Type, Server IP
Sessions by User | Server Type, Server IP
Sessions Details by Server | Server Type, Server IP
Commands List
This report lists all SQL Verbs seen during the reporting period. At the outermost
level, commands are grouped by the Period Start time from the Access Period
entity, which is usually one hour, on the hour. Your Guardium administrator can
modify the access period length by changing the logging granularity, which is one
hour by default. For each Access Period in the reporting period, each row lists the
access Period Start time, a SQL Verb, Depth of the verb in the SQL statement,
Parent (a pointer to the owning verb), and a count of occurrences for the row.
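The Depth and Parent columns together encode how verbs nest within a statement. As a rough, non-Guardium sketch (the row shapes and verb values here are invented for illustration), rows carrying a depth and a parent pointer can be rebuilt into an indented outline:

```python
# Illustrative sketch only: rebuild a verb hierarchy from rows that carry a
# Depth and a Parent pointer, the way the Commands List report encodes nested
# SQL verbs. Row shapes and values are invented for the example.
rows = [
    {"id": 1, "verb": "INSERT", "depth": 0, "parent": None},
    {"id": 2, "verb": "SELECT", "depth": 1, "parent": 1},  # subquery of the INSERT
]

def render(rows):
    """Return an indented outline, one line per verb, indented by depth."""
    by_parent = {}
    for row in rows:
        by_parent.setdefault(row["parent"], []).append(row)
    lines = []
    def walk(parent):
        for row in by_parent.get(parent, []):
            lines.append("  " * row["depth"] + row["verb"])
            walk(row["id"])
    walk(None)
    return "\n".join(lines)

print(render(rows))
```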
Objects List
This report lists all objects seen during the reporting period. At the outermost
level, objects are grouped by the Period Start time from the Access Period entity,
which is usually one hour, on the hour. Your Guardium administrator can modify
the access period length by changing the logging granularity, which is one hour by
default. For each Access Period in the reporting period, each row lists the access
Period Start time, an Object Name, and the count of occurrences for that row.
This report displays reporting period activity for a single Object Name, which is
specified as a run time parameter. Each row of the report displays the Client IP,
Source Program, SQL Verb, Depth (of sentence within the SQL command), an
Object Name, and a count of times that object was referenced for that row.
Archive Candidates
This report lists objects (database tables or stored procedures, for example) that
have not been accessed for an extended period of time. You cannot access the
query this report is based upon.
This report produces a highly detailed listing for each DB User Name seen in the
reporting period, which is one hour by default for this report. Each row of the
report lists a DB User Name, Client IP, Server IP, Period Start, Source Program,
SQL (from the SQL entity), and a count of occurrences during the access period.
This report displays reporting period Full SQL attribute values that have been
logged for a single DB User Name, which is specified as a run time parameter.
Each row of the report displays the Full SQL ID, Timestamp (of the Full SQL
entity), Client IP, DB User Name, Session Start, Source Program, Full SQL, and a
count of occurrences for the row.
This report displays reporting period Full SQL attribute values that have been
logged for a single Client IP, which is specified as a run time parameter. Each row
of the report displays the Full SQL ID, Timestamp (of the Full SQL entity), Client
IP, DB User Name, Session Start, Source Program, Full SQL, and a count of
occurrences for the row.
There are five predefined reports that use monitored data to show object names.
These reports all start with the prefix DW (Data Warehouse). See the topic How to
report on dormant tables and columns for further information on how to use these
predefined reports.
DW Dormant Objects
Shows all the members of one group that are not members in a second group, with
a focus on dormant tables. For example, this report shows objects that are in the all
objects group, but have not been used in a Select.
Shows all the members of one group that are not members in a second group, with
a focus on dormant tables and columns. In this instance, groups are a 2-tuple type
(members that are a composite of a pair of value attributes). For example, this
report shows objects that are in the all objects and fields group, but have not been
used in a Select.
Use this report to populate the group called DW EXECUTE Objects with a set of
stored procedure names that are being executed. Then use indirect mapping in
Group Builder/Auto Generate Calling Prox to generate all the objects being used
within these procedures.
This report shows all object names that have been accessed through a SELECT
statement.
This report shows all object and field names that have been accessed through a
SELECT statement.
For the reporting period, this report lists the longest running queries, with the
longest average execution time first. For each query, the report lists the Client IP,
Server IP, SQL, Period Start (from the Access Period entity), Average Execution
Time, and the count of occurrences for the row. You cannot access the query this
report is based upon.
Throughput
This report produces a count of all Server IPs seen, and total accesses, during the
reporting period. At the outermost level, accesses are grouped by the Period Start
time from the Access Period entity, which is usually one hour, on the hour. Your
Guardium administrator can modify the access period length by changing the
logging granularity, which is one hour by default. Each row lists the Period Start
time, the count of Server IPs seen, and a total count of accesses for the row.
You can restrict the output of this report using the Server IP run time parameter,
which by default is set to “%” to select all IP addresses.
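The “%” default behaves like a SQL LIKE wildcard and matches every value. As an illustrative sketch (not Guardium code; the IP addresses are invented), LIKE-style matching can be emulated like this:

```python
import re

# Hedged sketch of how a SQL LIKE pattern such as the default "%" run-time
# parameter matches values: "%" matches any sequence of characters and
# "_" matches exactly one character.
def like(pattern, value):
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, value) is not None

print(like("%", "10.0.0.7"))        # the default "%" matches every Server IP
print(like("10.0.%", "10.0.0.7"))   # restrict output to one subnet
```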
Throughput (Graphical)
This report is a Distributed Label Line chart version of the tabular Throughput
report, plotting the total number of accesses over the reporting period, one data
point per Period Start time.
You can restrict the output of this report using the Server IP run time parameter,
which by default is set to “%” to select all IP addresses.
For the reporting period, this report displays a double bar for each type of
database server for which traffic was seen. Each double bar is labeled with the
server type. For each server type, the first bar represents the number of Client IPs,
and the second bar represents the total number of Server IPs.
DB Server List
This report lists all database servers seen during the reporting period. It displays
the Server Type, Server IP, Server OS, Server Host Name, Server Description, and
the total count of Client/Server entities for that row (the total number of clients).
Number of active Guardium audit processes that contain one or more privacy set
tasks. When central management is used, this report contains data on the Central
Manager only, and is empty on all managed units (the standard message, No data
found for requested query, displays). This report has non-standard run time
parameters: there are no from and to date parameters, so all audit processes
containing one or more privacy set tasks will be reported. You can clone the query
that this report is based upon (Number of Active Privacy Set Processes), but you
cannot clone or regenerate the default report. The cloned query will have all of the
standard run-time parameters (including the from and to dates).
Displays the Guardium Job Queue. For each job, lists the Process Run ID, Process
Type, Status, Cls/Asmt Process Id, Report Result Id, Cls/Asmt Description, Audit
Task Description, Queue Time, Start Time, End Time, and Data Sources.
Table 207. Guardium Job Queue
Domain | Based on Query | Main Entity
internal - not available | Guardium Job Queue | not available
Run-Time Parameter | Operator | Default Value
Job Description | LIKE | %
For the reporting period, for each Discovered Port entity where the DB Type
attribute value is NOT LIKE Unknown, this report lists the Probe Timestamp,
Server IP, Server Host Name, DB Type, Port, Port Type, and count of Discovered
Ports for the row.
Table 208. Databases Discovered
Domain | Based on Query | Main Entity
Auto-discovery | Databases Discovered | Discovered Port
Run-Time Parameter | Operator | Default Value
Period From | >= | NOW -1 DAY
Period To | <= | NOW
Data Sources
This report appears on the default layout for both administrators and users. See
Data Sources on the Predefined Reports - Common page.
This report appears on the default layout for both administrators and users. See
Data Source Version History on the Predefined Reports - Common page.
Displays the Guardium Job Queue. For each job, lists the Process Run ID, Process
Type, Status, Cls/Asmt Process Id, Report Result Id, Cls/Asmt Description, Audit
Task Description, Queue Time, Start Time, End Time, and Data Sources.
Comply Tab
Outstanding Audit Process Reviews
For each Guardium user Login Name, this report lists the number and type of
outstanding Guardium audit processes. An outstanding audit process has a Status
attribute value (in the Task Results To-Do-List entity) other than Reviewed or
In the Currently Installed Policy panel, this special report displays the installed
policy name, the number of rules it contains, and the number of baseline rules. You
cannot access the query this report is based upon.
For the reporting period, this report displays the number of policy violations
logged.
This report displays a bar representing the total number of alerts logged during the
reporting period, for each type of threshold alert logged, based on the Alert
Description attribute of the Threshold Alert Details entity.
This report displays a bar representing the total number of alerts logged during the
reporting period, for each type of real-time alert logged, based on the Access Rule
Description attribute of the Policy Rule Violation entity.
Violations/Incidents
Shows, for a selected replay configuration, the staged SQL. By default, the value of
the config ID is empty; modify the runtime parameter through the customize
option and enter the config ID that you want to see.
Table 209. Staged Data
Domain | Based on Query | Main Entity
Replay Statistics | Replay Results | Statistics
Run-Time Parameter | Operator | Default Value
Client IP | LIKE | %
Configuration ID | | (empty)
DB Name | LIKE | %
DB User | LIKE | %
Full SQL | LIKE | %
Remote Data Source | LIKE | none
Server IP | LIKE | %
Aliases | Radio | none
Source Program | LIKE | %
Replay Statistics
This report shows Replay Statistics for Execution Start/End Date; Configuration
Name; Schedule Setup Name; Job Status; Statistic Description; Session ID;
Successful Queries; Failed Queries; Total Queries; Type; Active/Waiting/Completed
Tasks.
Table 210. Replay Statistics
Domain | Based on Query | Main Entity
Replay Statistics | Replay Results | Statistics
Run-Time Parameter | Operator | Default Value
Period From | >= | NOW - 1 MONTH
Period To | <= | NOW
Data Source | LIKE | none
Session ID (greater than) | >= | 0
Session ID (less than) | <= | 99999999999
Aliases | Radio | none
Type | LIKE | %
A listing of all the Captures that have been configured and have a Replay
associated with them. This listing is used to examine the differences between
captured SQL and replayed SQL on a target database system. If a capture
configuration has not been replayed, it does not appear in the list.
Table 212. Capture/Replay > Capture-Replay List
Based on Query: Capture-Replay List
Run-Time Parameter | Operator | Default Value
Data Set (from) | LIKE | %
Data Set (to) | LIKE | %
Period From | >= | NOW -1 MONTH
Period To | <= | NOW
Data Source | | none
Aliases | Radio | none
A listing of all the Replays that have been performed against the same capture
configuration.
Table 213. Capture/Replay > Replay-Replay list
Based on Query: Replay-Replay List
Run-Time Parameter | Operator | Default Value
Data Set | LIKE | %
Shows the Full SQL: the staged data that was used and that was executed during
replay.
Summary Comparison provides a high-level look into the differences in the capture
and replay, consisting of:
Compare Avg Execution Time - how the execution time differed between capture
and replay
Compare SQL Exceptions - how the number of SQL exceptions differed between
capture and replay
Compare Rows Retrieved - how the number of rows returned differed between
capture and replay
Compare SQL Failures - how many SQL failures there were between capture and
replay
Workload Exceptions - shows the SQL that generated exceptions during replay
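The comparisons above are simple deltas between a capture metric and its replay counterpart. A minimal sketch, with metric names and values invented for illustration:

```python
# Invented illustration of the summary comparison described above: diff a few
# capture metrics against their replay counterparts. Metric names and numbers
# are made up for the example.
capture = {"avg_exec_ms": 12.0, "sql_exceptions": 3, "rows_retrieved": 900}
replay  = {"avg_exec_ms": 15.5, "sql_exceptions": 5, "rows_retrieved": 900}

# Positive delta means the replay saw more than the capture did.
deltas = {metric: replay[metric] - capture[metric] for metric in capture}
print(deltas)
```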
The box displays the output of the Linux VMSTAT command. If you are familiar
with that command, these statistics should be familiar to you.
Table 214. Current Status Monitor
Field Description
procs The number of processes:
Data Sources
Lists all datasources defined: Data-Source Type, Data-Source Name, Data-Source
Description, Host, Port, Service Name, User Name, Database Name, Last Connect,
Shared, and Connection Properties.
You can restrict the output of this report using the Data Source Name run time
parameter, which by default is set to “%” to select all datasources.
Table 215. Data Sources
Domain | Based on Query | Main Entity
internal - not available | Data-Sources | not available
Run-Time Parameter | Operator | Default Value
Data Source Name | LIKE | %
Period From | >= | NOW -1 DAY
Note: When scheduling this audit process, check that the FROM/TO dates for
each report make sense for the process interval being defined (for example, it
doesn’t make sense to have a reporting period of one day if the audit process runs
only once a week - you will miss six days of activity).
Note:
The default user portal contains a My New Reports pane, but the default
admin portal does not. If your portal does not contain a My New Reports
pane, you will receive an error message.
To add a tab to the outer-most row of tabs, click the Customize word link.
When creating a My New Reports pane, be sure to:
v Use the exact spelling shown.
v Define the pane with a Menu pane layout .
The step procedure for adding a My New Reports pane for an admin user is:
(1) click the Add Pane button, type My New Reports, and click Apply;
(2) highlight My New Reports and click the Edit Layout button; (3) specify the
Menu pane layout; (4) click Save.
In order to see meaningful data in the tabular report within the My New
Reports pane, click the Customize button on the same line as the title of
the tabular report to access the run-time parameters.
5. Set the report parameters:
a. Select the report in the Report Finder and click Search. The application
takes you to Report Search results. Click Modify to open up additional
configuration menus.
b. In the Report Column Descriptions panel,
v Optionally override the Report Title. The default is from the report
definition. You can modify the title on most subsequent panels.
There is a separate Query Builder for each domain, and it is opened from the
Query Finder for that domain (see Open the Query Finder section). By default, the
Query Builder panel name is Custom Reporting for a user portal, but for admin
role users, the Query Builder panel takes its name from the menu selection that is
used to open the query builder (Access Tracking, Exceptions Tracking, Alert
Tracking, etc).
After determining which domain to use, do one of the following to open the Query
Finder for that domain:
v Users with the admin role: Select Tools > Report Building, and then select one of
the Query Builders from the menu. The Query Builders all end with the word
Tracking (Access Tracking, for example).
v All Others: Select Monitor/Audit > Build Reports, and select one of the Query
Builder buttons from the panel.
Either one of these options opens the Query Finder for the selected domain.
To locate and view a query definition in the Query Builder, there are several
options:
1. Use the Query Finder - see Use the Query Finder.
2. From a report portlet that is based on the query, click Edit this Report's Query
in the toolbar of the report.
3. If the query is used in a report on your portal, and you know some portion of
the report name, use the Portal Search tool, and then open the query.
4. From the Customize Portlet panel for a report that is based on the query, click
Edit this Query next to the query name on the panel.
Create a Query
1. Open the Query Finder for the appropriate domain (see Open the Query
Finder).
2. Click New to open the New Query – Overall Details panel.
3. Type a unique query name in the Query Name box. Do not include apostrophe
characters in the query name.
4. Select the main entity for the query from the Main Entity list. The main entity
controls the level of detail that is available for the query, and it cannot be
changed later. Basically, each row of data that is returned by the query represents
a unique instance of the main entity, and a count of occurrences for that instance.
5. Click Next. The new query opens in the Query Builder panel. To complete the
definition, see next section on Query Fields.
The Query Fields pane basically lists the columns of data to be returned by the
query.
There are two ways to add a field to the Query Fields pane:
v Pop-Up Menu Method:
1. Click the field to be added.
2. Select Add Field from the pop-up menu.
v Drag-and-Drop Method:
1. Click the icon of the field (not on the field name).
2. Drag the icon to the Query Fields list and release it.
Regardless of the method that is used, the field is added to the end of the list.
1. Open the Query Finder for the appropriate domain (see Open the Query
Finder).
2. Use the Query Finder to open the query to use for the report.
3. Do one of the following:
To add a tabular report to the end of an existing menu layout, first click
Generate Tabular and then click Add to Pane on the panel. Then navigate to
the desired menu layout and click it. To redo an existing tabular report, click
Regenerate.
To add a tabular report to the My New Reports tab, click Add to My New
Reports in the panel. (If no tabular report portal has been generated yet for the
query, you need to click Generate Tabular first.)
Note: The default user portal contains a My New Reports pane, but the default
admin portal does not. If your portal does not contain a My New Reports pane,
you will receive an error message. If it does not exist, you can create this pane
anywhere on your portal (see Customize the Portal). If you create a My New
Reports pane, be sure to:
v Use the exact spelling shown.
v Define the pane with a Menu pane layout.
In order to see meaningful data in the tabular report within My New Reports
pane, click Customize next to the title of the tabular report in order to access
the run-time parameters (change the time from and now).
A domain provides a view of the stored data. Each domain contains a set of data
related to a specific purpose or function (data access, exceptions, policy violations,
and so forth).
Each domain contains one or more entities. An entity is a set of related attributes,
and an attribute is basically a field value. A query returns data from one domain
only. When the query is defined, one entity within that domain is designated as
the main entity of the query. Each row of data returned by a query will contain a
count of occurrences of the main entity matching the values returned for the
selected attributes, for the requested time period. This allows for the creation of
two-dimensional reports from entities that do not have a one-to-one relationship.
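The counting behavior described above can be sketched as grouping rows by the selected attribute values and counting main-entity occurrences per combination. This is an illustration only, not Guardium code; the pretend main entity (Session) and the data values are invented:

```python
from collections import Counter

# Illustration only (invented data): each query row is a unique combination of
# the selected attribute values, plus a count of main-entity occurrences
# matching that combination, as described above.
sessions = [  # pretend main entity: Session
    {"db_user": "SCOTT", "client_ip": "10.0.0.7"},
    {"db_user": "SCOTT", "client_ip": "10.0.0.7"},
    {"db_user": "ADAMS", "client_ip": "10.0.0.9"},
]
selected = ("db_user", "client_ip")  # the attributes chosen for the query

# One Counter key per unique attribute combination; the value is the count.
counts = Counter(tuple(s[attr] for attr in selected) for s in sessions)
for key, n in sorted(counts.items()):
    print(*key, n)
```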
Domains
See Domains for a description of all Guardium domains. On the default admin
portal, all query builders can be opened from the menu of the Tools > Report
Building tab. On the default user portal, many query builders can be opened from
the Custom Reporting application: Monitor/Audit > Build Reports.
See Entities and Attributes for a description of all attributes contained in each
entity. The two illustrations show a list of entities for the Data Access Domain and
the attributes available under the Client/Server entity.
There are six levels in the entity hierarchy for this domain.
Table 217. Entities Hierarchy
Number | Entities | Description
1 | Client/Server, Session | Each client/server connection has one or more sessions. Each session has one or more requests.
2 | Application Events | Each request has some combination of this entity.
3 | Full SQL Values, Full SQL, SQL, Access Period | Each request has some combination of these entities.
4 | Command | Each request may contain commands.
5 | Object | Each command may contain objects.
6 | Object-Command, Field, Object-Field | Each object may contain these entities.
Main Entity
Build Queries
Procedure
1. Begin a query definition
From the user portal, go to the Custom Reporting application.
a. Click the Monitor/Audit tab.
b. Click the Build Report tab.
c. Click Track data access to open the Query Finder. See the following
examples.
Next, you use a report that uses monitored data to show all object names that have
participated in a SELECT statement. There are predefined reports for this in
Guardium 8, all starting with the prefix DW (Data Warehouse). Then, use the
output to populate one of the predefined groups.
Finally, use a predefined report that shows all members in the first group that are
not members in the second group.
There are two sets of such reports and groups: one that focuses on tables and
one that focuses on tables and columns. The only difference is that in the latter
case, groups are of a 2-tuple type (members that are a composite of a pair of value
attributes, referred to as a tuple).
Let's look at an example from start to finish involving an Oracle database and the
EMP user.
Procedure
1. Upload all the tables from the system catalog. Do this by creating a custom
table.
Prerequisites
a. Define datasource/test database connection
b. Upload data (create custom table)
c. Create new domain (merge custom tables with existing reports)
See External Data Correlation for further information.
The following example is available from User > Monitor/Audit > Build Reports
> Custom Table Builder > Upload Definition > Import Table Structure. When
the configuration is complete, click the Retrieve button.
Report - After
Now, populate the group DW SELECT Accessed Objects group from the report,
filling in the filtering attributes that you require.
The following example is available from User > Monitor/Audit > Build Reports
> Group Builder > Choose DW Select Accessed Objects > Populate from
Query> DW Select Object Access.
When done, click the Save button.
For this scenario, we will generate API function calls to populate the Data Security
User Hierarchy.
1. To begin, let's show the current Data Security User Hierarchy for the user
scott
2. To invoke an API function we must find a report that currently has the
desired API functions linked to it. Since creating a user hierarchy is related to
users, selection of a user report should yield good results. For this scenario
we've selected the User - Role report.
5. Click the API that you want to invoke to bring up the API Call Form for the
Report and Invoked API Function.
6. Fill in the Required Parameters and any non-Required Parameters for the
selected API call. Many of the parameters are pre-filled from the report but
may be changed to build a unique API call. For specific help in filling out
required or non-required parameters please see the individual API function
calls within the GuardAPI Reference guide.
7. Use the drop-down list to select the Log level, where Log level represents the
following (0 - returns ID=identifier and ERR=error_code as defined in Return
b. If Generate Script is selected: Open the generated script with your favorite
editor, or optionally save it to disk to edit and execute at a later time,
replacing any of the empty parameter values (denoted by '< >') contained
within the script.
Note: Empty parameters may remain in the script, as the API call will
ignore them.
Example Script
# A template script for invoking guardAPI function create_user_hierarchy :
# Usage: ssh cli@a1.corp.com < create_user_hierarchy_api_call.txt
# replace any < > with the required value
#
grdapi create_user_hierarchy userName=jkoopmann parentUserName=scott
c. Execute the CLI function call.
Example Call
$ ssh cli@a1.corp.com < create_user_hierarchy_api_call.txt
10. Validate. For this scenario it is a redisplay of the Data Security User Hierarchy.
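When many hierarchy entries are needed, a script like the one above can be generated mechanically. This is a hypothetical sketch, assuming you have exported (user, parent) pairs from a report; the user names and the build_script helper are invented, while the grdapi line format mirrors the example script shown earlier:

```python
# Hedged sketch: generate a grdapi script like the template above from a list
# of (user, parent) pairs, e.g. values exported from a report. The names here
# are invented; only the grdapi line format follows the documented example.
pairs = [("jkoopmann", "scott"), ("adams", "scott")]

def build_script(pairs):
    """Return the text of a grdapi script, one create_user_hierarchy per pair."""
    lines = ["# replace any < > with the required value"]
    for user, parent in pairs:
        lines.append(
            f"grdapi create_user_hierarchy userName={user} parentUserName={parent}"
        )
    return "\n".join(lines)

print(build_script(pairs))
```

The resulting text can then be saved and fed to the CLI over ssh, as in the Example Call above.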
This scenario uses a custom report with mapped parameters to report fields. Please
see additional scenarios further in this section for additional information.
1. To begin, let's show the current Data Security User Hierarchy for the user scott
2. Click on the Invoke... icon to display a list of APIs that are mapped to this
report
3. Click the API that you want to invoke to bring up the API Call Form for the
Report and Invoked API Function. Invoking an API call from a report for
multiple rows produces an API Call Form that displays and enables the
editing of all records displayed on the screen (dependent on the fetch size), to a
maximum of 20 records.
Note: Empty parameters may remain in the script as the API call will
ignore them.
Example Script
# A template script for invoking guardAPI function create_user_hierarchy :
# Usage: ssh cli@a1.corp.com < create_user_hierarchy_api_call.txt
# replace any < > with the required value
#
grdapi create_user_hierarchy userName=ADAMS parentUserName=SCOTT
grdapi create_user_hierarchy userName=JOHNY parentUserName=SCOTT
grdapi create_user_hierarchy userName=MARY parentUserName=SCOTT
grdapi create_user_hierarchy userName=SCOTT parentUserName=SCOTT
grdapi create_user_hierarchy userName=SCOTT parentUserName=SCOTT
c. Execute the CLI function call.
Example Call
$ ssh cli@a1.corp.com < create_user_hierarchy_api_call.txt
Value-added: Through a GUI, create a user-defined constant that can be used to
fill in a parameter in an API function call.
1. From our report, we can modify it to have a field that we could use for
parameter mappings.
5. Clicking the Invoke now button produces an API Call Output status
showing that the constant was created.
7. The newly created constant can now be mapped for the report. Double-click
the new row and select the Invoke... option.
9. Fill in the functionName and the parameterName and click on the Invoke
now button.
12. To validate the new constant's usage, double-click on a row and select the
Invoke... option.
14. Now the parentUserName is populated from the newly added constant. Click
the Invoke now button.
Value-added: Through a GUI, quickly and easily map API parameters to custom
report fields to be used in API function calls.
1. By default, a newly created custom report does not have any API functions
linked to it. This can be seen in the preceding custom report, where
double-clicking a row only produces a list of additional drill-down reports to
run, but lacks the Invoke option.
3. The API Assignment panel shows all the API functions assigned to the
selected report. Notice for our scenario the report selected has no API
functions assigned to it.
5. At this point, none of the report fields are mapped to the API parameters; if
you invoke the API call now, none of the parameters will have values. To add
the API parameter mappings, open the Query Entities & Attributes report
and create the mappings. Since our report for this scenario uses the
Client/Server entity within the ACCESS RULES VIOLATIONS domain, filter
the report by using the Customize button, modifying the report to display
only the Client/Server entity.
9. Now, when we go back to the Report Builder for our report and look at the
API Assignment, clicking the create_user_hierarchy API function displays
the API - Report Parameter Mapping with our mapping of userName to the
report field Client/Server.DB User Name.
11. Now when we invoke the create_user_hierarchy API function through our
report the parameter userName will be populated from the report. To see this,
go back to the report and double-click on a row and then click on the Invoke...
option.
13. Notice that the userName is now populated from the report field.
15. Verify that the new Data Security User Hierarchy has been added.
The first time that an optional external feed task runs, the necessary internal
representation of the audit sources will be created. One limitation is that data that
is time-stamped with a date earlier than the audit source creation date cannot be
stored. This means that the first time the task runs, it will only export data for the
current date. On subsequent executions of the task following that date, any data
from that date forward can be exported. (In other words, the next day, you will be
able to export that day's data plus the prior day's data.)
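The growing export window can be sketched as follows: the earliest exportable date is the audit source creation date, so the first run covers one day and each later run covers everything from creation forward. The dates and the exportable_dates helper below are invented for illustration:

```python
from datetime import date, timedelta

# Rough sketch of the limitation described above: data stamped earlier than
# the audit-source creation date cannot be stored, so the first run exports
# only the current date, and later runs export from the creation date forward.
# All dates here are invented.
def exportable_dates(creation_date, run_date):
    """Dates whose data the external feed task can export on run_date."""
    days = (run_date - creation_date).days
    return [creation_date + timedelta(days=i) for i in range(days + 1)]

created = date(2024, 3, 1)  # first run creates the audit sources on this day
print(len(exportable_dates(created, created)))                       # first run: 1 day
print(len(exportable_dates(created, created + timedelta(days=1))))   # next day: 2 days
```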
If you have not yet started to define a compliance workflow automation process,
see Create a Workflow Process before performing this procedure.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click External Feed.
3. Select the feed type from the Feed Type list. (The controls that appear next
depend on the feed type selected.) One predefined feed type is Object Last
Referenced.
Note: You must map an external feed before attempting to use this feature.
4. Select an event type from the Event Type list.
5. Select a report from the Report list. Depending on the report selected, a
variable number of parameters appear in the Task Parameters pane.
6. In the Extract Lag box, enter the number of hours by which the feed is to lag,
and mark the Continuous box to include data up to the time that the audit task
runs. Extract Lag only works when the Continuous box is marked.
7. In the Datasources pane, identify one or more datasources for the external feed.
For instructions on how to define or select datasources, see Datasources.
8. Enter all parameter values in the Task Parameters pane. The parameters will
vary depending on the report selected. Count column is not supported in
External Feed.
9. Click Apply.
Related concepts:
“Building audit processes” on page 195
Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and, escalations.
Related tasks:
Procedure
1. Generate a report with the data you would like to transfer using an external
feed. You can do this from a central manager, aggregator, or stand-alone
Guardium instance, provided that system can access the report data you
require.
2. From the CLI, run grdapi create_ef_mapping reportName="My report". In
addition to establishing the mapping, the create_ef_mapping function
also generates a sample create table statement to be used in subsequent steps.
3. On the Guardium system where your report is defined, search /var/log/guard
for a filename like ef_sample_[my_report].sql. This file contains the example
create table statements. You must modify the statements in this file to match
the requirements of your external database. After modifying the file, run the
statements against your external database to create the target tables.
4. The external feed should now be available for use in workflow processes
defined through the audit process builder. See the “Optional External Feed” on
page 547 documentation for additional information.
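If many report tables are involved, the edit in step 3 can be scripted. This is a hypothetical sketch: the sample DDL string and the retarget helper are invented, and the real file found under /var/log/guard must still be adapted to your external database's type and naming requirements before you run it:

```python
# Hypothetical sketch of step 3: adapting the generated sample DDL before
# running it against the external database. The sample statement and the
# rename performed here are invented for illustration only.
sample_ddl = 'CREATE TABLE "MY_REPORT" ("HOST" VARCHAR(255), "COUNT" INT);'

def retarget(ddl, old, new):
    """Point the CREATE TABLE statement at a different target table name."""
    return ddl.replace(f'"{old}"', f'"{new}"', 1)

print(retarget(sample_ddl, "MY_REPORT", "EF_MY_REPORT"))
```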
Related concepts:
“Optional External Feed” on page 547
External feeds allow you to send Guardium report data directly to an external
database.
It is easy to create a Distributed Report. Simply define it via the Distributed Report
screen, add it to a pane, and it is ready for use.
Furthermore, this feature optionally makes use of data marts on the Central
Manager to enable scheduled collection of aggregated data over time. In essence,
the distributed report data is stored on the Central manager as a flat table, so no
complex joins are required to create the report you want, which can significantly
improve response time for these enterprise reports.
Distributed report data can be gathered from Collectors, Aggregators, and even
Central Managers. The default distributed versions of the reports include the
host name of the unit responsible for that data.
About this task - In this example, we see how to get a broader view and
correlation insight for Exceptions (for example, SQL Errors) that are recorded on
specific collectors.
Summary of steps
Prerequisites – create a group of Managed Units via the Central Management screen.
1. Create Distributed Report.
2. Review the data gathered.
3. Create additional summary reports on the data gathered.
Procedure
1. The Distributed Report Builder is available from (admin) Tools > Report
Building > Distributed Report Builder.
2. Click New.
3. Select Based on Report from the list (the list shows the User-Defined Reports).
For this example, choose Exceptions Details.
4. Scroll down the screen to specify the Managed Units to include in this
distributed report. For this example, choose two groups from the Group list
and, in addition, a few managed units from the Managed units list. In this
example, leave the Central Manager unchecked (if the Central Manager is also
an Aggregator, it might need to be included).
6. Click Apply to create the Distributed Report. The next screen appears while
saving the new Distributed Report.
Note: The line saying ‘Distributed Report status – click here for details’
shows the status of data gathering. If data is missing from managed units, the
line is colored red; clicking the line navigates to a detailed report of
status per unit per hour.
11. The data is gathered from all the specified Managed Units and stored in a
new designated entity (table). This entity is now available via the Query
Builder and Report Builder for creating additional queries and reports
against the new table. The option to build additional queries and reports is
also available via the Distributed Report result screen. Click Edit the
query for this report. Because this default report cannot be changed, click
Clone, name it, and remove all attributes except Date, User Name, Exception
Type Description, and Sum Of Count Of Exceptions.
1. Agent-based – using software installed on each endpoint (for example, a
database server). Agents can determine aspects of the endpoint that cannot
be determined remotely, such as an administrator’s access to sensitive data
directly from the database console.
2. Passive detection – discovering vulnerabilities by observing network traffic.
3. Scanning – interrogating an endpoint over the network through credentialed
access.
To aid in finding individual vulnerabilities while viewing the CVE names for
specific databases, the user, when configuring tests through the Security
Assessment Builder, can select the CVE radio button for the desired database.
To keep CVEs current within the Guardium solution, Guardium downloads the most
current CVE database and uses it to populate a database table with all current
CVE entries and candidates. Guardium then programmatically compares the
downloaded CVE data with the CVE data already in the Guardium Vulnerability
Assessment repository, producing a list of new CVEs for review. The Guardium
Database Security Team then manually reviews these candidates for the Guardium
Vulnerability Knowledgebase, tests them, and adds the relevant ones to the GA
Guardium Vulnerability Assessment Knowledgebase. These tests are tagged with
the appropriate CVE number, and once in the GA repository, these tests can
run automatically using the Guardium Vulnerability Assessment application.
Note: When using an expiring product license key, or license with a limited
number of datasources, the following message may appear: Cannot add
datasource. The maximum number of datasources allowed by license has been
reached. The License valid until date and Number of datasources can be seen on
the System Configuration panel of the Administrator Console. A Vulnerability or
Classification process with N datasources is counted as N scans every time it
runs.
The list is not categorized by DBMS type or test name, but each exception group
name clearly indicates the DBMS type and the test name.
MongoDB
Developed in 2007, MongoDB is a NoSQL, document-oriented database. MongoDB
uses JSON documents with dynamic schemas (this format is called BSON). In
MongoDB, a collection is the equivalent of a RDBMS table while documents are
equivalent to records in an RDBMS table.
MongoDB is the largest and fastest growing NoSQL database system. It tends to
be used as an operational system and as a backend for web applications because
of the ease of programming with non-relationally formatted data, such as the
JSON documents often found in web applications.
v First NoSQL database supported for Guardium Vulnerability Assessment (VA)
v First non-JDBC database connection. Connection uses a Java driver.
v MongoDB data sources support SSL server and client/server connections with
SSL client certificates.
For self-signed certificates, Guardium imports the server certificate behind
the scenes. Customers can also import their own certificates. Certificates
also work on the Central Manager and are pushed down to collectors.
The Mongo CAS Assessment template allows you to specify multiple paths in the
datasource to scan various components of the file system.
CLI commands
snif_mongo export
snif_mongo list
1. Compress all the .ready files in the auditlog directory; the --remove_file
option removes the original files after compression.
4. If the user quits the operation or the export fails, the .ready files are
put back.
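The compress-and-remove behavior described above can be sketched as follows. This is an illustrative outline only, not the actual snif_mongo implementation; the directory path and function name are hypothetical:

```python
import gzip
import os
import shutil

def export_ready_files(auditlog_dir, remove_file=False):
    """Compress every .ready file in auditlog_dir to a .gz file;
    with remove_file=True, delete the originals afterward
    (mirroring the --remove_file option described above)."""
    exported = []
    for name in sorted(os.listdir(auditlog_dir)):
        if not name.endswith(".ready"):
            continue
        src = os.path.join(auditlog_dir, name)
        dst = src + ".gz"
        # Stream-copy the file contents into a gzip archive.
        with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
            shutil.copyfileobj(f_in, f_out)
        if remove_file:
            os.remove(src)
        exported.append(dst)
    return exported
```

If the export fails partway, the real tool restores the .ready files; a robust sketch would wrap the loop in a try/except that deletes any partial .gz output.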
Teradata Aster
Aster Data
Aster Data, acquired by Teradata in 2011, is typically used for data
warehousing and analytic applications (OLAP). Aster Data created a framework
called SQL-MapReduce that allows the Structured Query Language (SQL) to be
used with MapReduce. It is most often associated with clickstream kinds of
applications.
A security assessment should be created to execute all tests on the queen node;
all database connections for Aster Data go through the queen node only.
Testing on worker and loader nodes is only required when performing CAS tests
(File permission and File ownership).
Privilege tests loop through all the databases in a given Aster’s instance.
SAP HANA
Deployment Steps
1. Vulnerability Assessment is deployed from the Guardium system.
2. User runs a Guardium-supplied script against the target database to create a
role with the appropriate privileges. User then creates a datasource connection
to the database.
3. Create a security assessment, then select your datasources and desired tests to
execute.
4. When the execution is done, a report is created showing which tests passed
and which failed, along with detailed hardening recommendations.
v Password policies
v Security APARs
v Entitlement reports
Procedure
1. Use the Group Builder to create a group of users that will use VA.
Open the Group Builder by clicking Setup > Tools and Views > Group Builder.
The next step uses a script for a group named gdmmonitor.
2. Run the following script on your DB2 for i system to grant privileges needed
for executing VA to the group. This is done outside the Guardium system using
a database native client.
grant select on SYSIBMADM.FUNCTION_INFO to gdmmonitor;
grant select on SYSIBMADM.FUNCTION_USAGE to gdmmonitor;
grant select on SYSIBMADM.GROUP_PROFILE_ENTRIES to gdmmonitor;
grant select on SYSIBMADM.SYSTEM_VALUE_INFO to gdmmonitor;
grant select on SYSIBMADM.USER_STORAGE to gdmmonitor;
grant select on Qsys2.Authorizations to gdmmonitor;
grant select on SYSIBMADM.USER_INFO to gdmmonitor;
grant select on QSYS2.SYSSCHEMAAUTH to gdmmonitor;
grant select on QSYS2.SYSTABAUTH to gdmmonitor;
grant select on QSYS2.SYSPACKAGEAUTH to gdmmonitor;
grant select on QSYS2.SYSROUTINEAUTH to gdmmonitor;
grant select on QSYS2.SYSSEQUENCEAUTH to gdmmonitor;
grant select on QSYS2.SYSCOLAUTH to gdmmonitor;
For IBM DB2 for i v7.1 and higher, also run these statements:
grant select on QSYS2.SYSVARIABLEAUTH to gdmmonitor;
grant select on QSYS2.SYSXSROBJECTAUTH to gdmmonitor;
3. Create a JDBC connection to your DB2 for i system. Open the Datasource
Finder by clicking Setup > Tools and Views > Datasource Definitions, and then
select Security Assessment from the Application Selection menu.
a. Click New and enter the appropriate information. For Connection Property,
enter “property1=com.ibm.as400.access.AS400JDBCDriver;translate
binary=true”.
4. Create an assessment using the Assessment Builder. Open the Assessment
Builder by clicking Harden > Vulnerability Assessment > Assessment Builder.
a. Enter a description for the assessment.
b. Add the datasource created in the previous step by clicking Add
Datasource, selecting the datasource from the Datasource Finder, and
clicking Add.
Note: You must click Apply to save the assessment before you can
configure tests.
5. Add tests to the assessment by clicking Configure Tests. Click the IBM for i
tab, select the tests that you want to add, and click Add Selections.
6. Click Return to go back to the Security Assessment Finder. Run the test by
clicking Run Once Now, or schedule the test using Audit Process Builder.
Open the Audit Process Builder by clicking Discover > Classifications > Audit
Process Builder.
7. Click View Results to view the details of all the executed tests, including
recommendations for improving your score.
Predefined Tests
Predefined tests are designed to illustrate common vulnerability issues that may be
encountered in database environments. Because of the highly variable nature of
database applications and the differences in what is deemed acceptable in various
companies or situations, some of these tests may be suitable for certain databases
but totally inappropriate for others (even within the same company). Most of the
predefined tests are customizable to meet the requirements of your organization.
Additionally, to keep your assessments current with industry best practices and
protect against newly discovered vulnerabilities, Guardium distributes new
assessment tests and updates on a quarterly basis as part of its Database Protection
Subscription Service. Please refer to Guardium Administration Guide for more
details.
Behavioral Tests
This set of tests assesses the security health of the database environment by
observing database traffic in real time and discovering vulnerabilities in the
way information is being accessed and manipulated.
Configuration Tests
As an example, the current categories, with some high-level tests, for configuration
vulnerabilities include:
v Privilege
– Object creation / usage rights
– Privilege grants to DBA and individual users
– System level rights
v Authentication
– User account usage
– Remote login usage
– Password regulations
v Configuration
– Database specific parameter settings
– System level parameter settings
v Version
– Database versions
– Database patch levels
v Object
– Installed sample databases
– Recommended database layouts
– Database ownership
Query-based Tests
A query-based test is either a predefined or a user-defined test that can be
quickly and easily created by defining or modifying an SQL query. The query is
run against a database datasource and the results are compared to a predefined
test value. See Define a Query-based Test for additional information on
building a user-defined query-based test.
CAS-based Tests
Guardium also comes preconfigured with some CAS template items of type OS
Script that can be used for creating a CAS-based test. These tests can be seen
through the CAS Template Set Definition panel and have a name that contains
the word Assessment. For instance, the Unix/Oracle set for assessments is named
Guardium Unix/Oracle Assessment. Additionally, any added template that involves
file permissions will also be used for permission and ownership checking.
See Modify a Template Set Item to view these template sets and the items with
type OS Script.
CVE Tests
You can create a new query-based test by using any of these approaches:
New Start from the beginning and define all the fields.
Clone Clone an existing query-based test.
Modify
Modify an existing query-based test.
Procedure
1. Open the Assessment Builder by clicking Harden > Vulnerability Assessment
> Assessment Builder.
2. From the User-defined tests, click Query-based Tests.
3. Click New, Clone or Modify to open the Query-based Test Builder.
4. Enter a unique Test Name.
5. Select a Database Type.
6. Select a Category.
7. Select a Severity.
8. Optional: Enter a Short Description for the test.
9. Optional: Enter an External Reference for the test.
10. Enter the Result text for pass that will be displayed when the test passes.
11. Enter the Result text for fail that will be displayed when the test fails.
12. Enter the SQL statement that will be run for the test.
Use the following convention to add and reference group members within a
SQL statement:
If the group has no members, the database would return an error; in this case
the reference is replaced with a single pair of quotation marks, like this:
Select ... from DBA_GRANTS where ... AND USER in (’’) and ...
Use the following convention to replace a reference to a specific alias (of a
specific group type) with the actual alias:
For example:
Select ... from USER_OBJECTS where ... AND OBJECT_TYPE =
'~~A~GroupType~TYPE~~'
If there is an alias to TYPE of group type GroupType, it replaces the string,
and the resulting SQL will look like:
Select ... from USER_OBJECTS where ... AND OBJECT_TYPE = 'TYPE'
where TYPE is the actual alias.
13. Optional: Enter a SQL Statement for Detail, a SQL statement that retrieves a
list of strings to generate a detail string of Detail prefix + list of strings. See
the example in Detail prefix.
Note: The detail generated is displayed only when the query-based test fails,
allowing you to enter a SQL statement that retrieves the information that
caused the test to fail and helps identify the cause of failure.
Results
You can add this newly created query-based test to an assessment.
Assessments
Assessments are a group of tests that scan database infrastructures for
vulnerabilities and provide an evaluation of database and data security health with
real-time and historical measurements.
Creating an assessment
Create an assessment, or modify or clone an existing assessment.
Note: You cannot assign roles to an assessment until you have assigned roles
to the datasources it is based on.
6. Click Apply to save the assessment.
Click CAS Support to supply appropriate data for an assessment.
You can also Add Comments to any assessment to document or log what
changes were made to assessments and why.
Results
Procedure
1. Open the Group Builder by clicking Setup > Tools and Views > Group
Builder.
2. Select VA Tests Exception from the Group Type menu to view the list of
predefined exception groups.
3. Select a group from the Modify Existing Groups menu and click Modify.
4. Add the group members that you want to exclude from the VA test.
5. Open the Assessment Builder by clicking Harden > Vulnerability Assessment
> Assessment Builder. Select an assessment from the Security Assessment
Finder and click Configure Tests.
All the Database Objects privilege tests exclude default system schemas from
Guardium groups.
Procedure
1. Create or modify an assessment by opening the Assessment Builder. Open the
Assessment Builder by clicking Harden > Vulnerability Assessment >
Assessment Builder.
After clicking the Add button, the datasource will appear in the Datasources
section of the Security Assessment Builder.
6. Click Configure Tests to add tests to the assessment. In the Tests available for
addition panel, click the tab for the appropriate datasource you created, select
the tests you want to add to the assessment, and click Add Selections. Use the
radio buttons to filter the tests to be added. See Predefined Tests, Query Based
Tests, or CVE Tests for assistance.
Note: You cannot assign roles to the assessment until you have assigned roles
to the datasources the assessment is based on.
8. Save your assessment by clicking Apply. The assessment can now be run
against the selected datasources.
Running an assessment
After an assessment is created, it must be run to produce results.
Assessments run in serialized mode, one after the other. If more than one
assessment is scheduled to run, they are queued. This queue can be viewed
through the Guardium Job Queue report.
Clicking the Run Once Now button will enter the assessment into the queue and
immediately run it. A short period of time is required for the job to be executed
and become viewable. See Viewing assessment results for more information on the
results of an assessment.
You can optionally define and schedule an automated process for running an
assessment definition. The Audit Process Finder panel is the starting point for
creating or modifying an audit process schedule. See Compliance Workflow
Automation for assistance in defining an audit process.
Assessment Identity
Assessment Selection
Use the drop-down menu to select and display past results for an assessment. The
latest result is displayed by default.
View log
When clicked, the Execution Log will be displayed in a new window that shows
the runtime execution of the assessment test. A timestamp, along with events, and
messages can aid in the debugging of issues that might have caused certain tests to
fail.
Results Summary
A tabular graph summarizes all the tests that were executed within this
assessment. The X-axis represents the test’s severity (CRITICAL, MAJOR, MINOR,
CAUTION, or INFO). The Y-axis represents the type of test (Privilege,
Authentication, Configuration, Version, or Other). Within the grid is the
representation of the number of tests that have either Passed, Failed, or had an
Error when trying to execute. These numbers are directly related to the detail for
the assessment tests that is given under the Assessment Test Results section.
If you would like to change the filtering from what is currently applied, use the
following two options to filter the results as you would like:
Reset Filtering - Removes all filtering options selected through the Filter / Sort
Controls options.
The assessment results include a count of the number of tests and the number of
passed tests in each of these categories:
v CIS tests
v CVE tests
v STIG tests
These values are displayed in the assessment result viewer and available for
reporting as part of the VA results domain.
Datasource Details
When expanded, the Datasource Details section will show all of the datasources
that were referenced within this assessment including the datasource's specific
environmental information.
The reference links are clickable (each opens a new window). Either section is
absent when there is no corresponding record for a result.
You can generate a PDF version of Assessment result by clicking Download PDF.
Use the Download XML button to open two menu choices: Download as SCAP
xml and Download as AXIS xml. Choose one of these selections in order to
download to your workstation an XML file representing the displayed assessment
results. The file can be formatted for Security Content Automation Protocol (SCAP)
XML or Apache EXtensible Interaction System (AXIS) XML, which is used by
QRadar.
VA summary
The following table lists the information displayed per test and database key
in the VA summary table: test result by unique identifier; cumulative failed
age; first failed date; last failed date; last passed date; and last scanned
date. This information is tracked, and users can create a report on it.
VA Summary
The default key is Host, Port, and Instance Name. In addition to these three
original elements, the key may also include the datasource name.
In prior releases you created and populated tables in the gdmmonitor schema:
v GDMMONITOR.OS_GROUP
v GDMMONITOR.OS_USER
These tables are replaced by tables in the CKADBVA schema:
v CKADBVA.CKA_OS_GROUP
v CKADBVA.CKA_OS_USER
Procedure
1. Install Guardium 9.1.
2. Copy create_CKADBVA-schema_tables_zOS.sql from the /var/log/guard/
gdmmonitor_scripts directory on your Guardium system to your database
server. Run the fileserver command on your database server to retrieve the
file.
3. The script contains instructions that describe steps to be performed before and
after running the script. Read these instructions and run the script.
4. Populate the new tables with data similar to the data that was stored in the old
tables.
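As a sketch of step 4, if the old and new tables have compatible column layouts (an assumption you must verify against the instructions inside the script), the data can be carried forward with a direct insert-select:

```sql
-- Sketch only: assumes compatible column layouts between the old
-- gdmmonitor tables and the new CKADBVA tables; confirm against the
-- instructions in create_CKADBVA-schema_tables_zOS.sql before running.
INSERT INTO CKADBVA.CKA_OS_GROUP SELECT * FROM GDMMONITOR.OS_GROUP;
INSERT INTO CKADBVA.CKA_OS_USER  SELECT * FROM GDMMONITOR.OS_USER;
```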
Results
Your system is now configured to use current vulnerability assessment tests.
What to do next
Assess your Resource Access Control Facility (RACF) privileges, whether they
are granted within the database or external to it. The tests that comprise the
RACF vulnerability assessments identify the access control for object
privileges, database privileges, and system privileges.
In order to use these tests, you must obtain and install IBM Security zSecure Audit,
Version 2.1. This product enables the commands that are used in these tests to
interact with RACF.
Procedure
1. Upgrade the database schema used to support vulnerability assessment on your
database server.
2. Install zSecure Audit on your database server. Use the instructions and tools
that are provided with zSecure Audit to learn how to populate approximately
24 tables in the CKADBVA schema to support the new zSecure tests.
3. The zSecure team will issue a PTF that enables zSecure Audit to work with
Guardium vulnerability assessment. Obtain this PTF and apply it according to
the accompanying instructions.
Results
Your system is now configured to take advantage of the new zSecure tests.
What to do next
Choose the new tests that you want to run to assess your RACF vulnerabilities.
Configure and run the tests.
CAS Agent
CAS is an agent installed on the database server that reports to the Guardium
system whenever a monitored entity has changed, whether in content, ownership,
or permissions. You install a CAS client on the database server system,
using the same utility that is used to install S-TAP. CAS shares configuration
information with S-TAP, though each component runs independently of the other.
Once the CAS client has been installed on the host, you configure the actual
change auditing functions from the Guardium portal.
The CAS server is configured to use only a few of the available processors on the
Guardium system. The number of processors that CAS uses is determined by using
the parameter divide_num_of_processors_by. This parameter is stored in the
cas.server.config.properties file and its default value is 2. The number of
available processors on the Guardium system is divided by this value. This ensures
that even when CAS uses 100% of the CPU on the allocated processors, the rest of
the processors are available for use by other applications.
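The allocation described above amounts to simple integer division; a minimal sketch (the clamp to a minimum of one processor is an assumption, not documented behavior):

```python
def cas_processors(available, divide_num_of_processors_by=2):
    """Number of processors the CAS server uses, per the
    divide_num_of_processors_by parameter in
    cas.server.config.properties (default 2).
    Clamped to at least 1 here as an assumption."""
    return max(1, available // divide_num_of_processors_by)

# On an 8-processor system with the default divisor, CAS uses
# 4 processors, leaving 4 available for other applications.
```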
In addition to the basic security SSL provides, Guardium provides CAS Server
authentication support on the CAS client that runs on the database server. This
guarantees that the CAS client communicates only with Guardium's CAS server.
Unauthenticated connections and Common Names (CN) mismatches will be
reported in the CAS log file.
When configured, the CAS server loads a signed certificate and a private key
at startup and assigns them to a server socket on which it accepts
connections. On the database server side, the CAS client supports the
following connection modes:
1. Non-secure connection (use_tls='0').
2. Secure connection without authentication (use_tls='1',
guardium_ca_path=NULL). This mode forces the use of SSL as the means of
communication with the CAS server (that is, SSL without server
authentication).
3. Secure connection with server authentication (use_tls='1',
guardium_ca_path=<public key location>). The public key is used by the CAS
client to authenticate the CAS server. The public key (ca.cert.pem) is
located under <install_dir>/etc/pki/certs/trusted.
ca.cert.pem is a file containing Root Certificate Authority certificates
(which are self-signed). In browser terms, these would be trusted CA
certificates, such as VeriSign's.
All gmachine certificates are issued and signed by the root authority; that is
how they are validated and how the chain of trust is established.
It is possible to set guardium_ca_path with either the full path, including
the actual public key file name, or just the directory name
(<install_dir>/etc/pki/certs/trusted), in which case all the public keys in
that directory are used to authenticate the server. If guardium_ca_path is set
to a file or directory that doesn't contain the public key, the connection
attempt will fail.
4. Secure connection with server authentication and common name verification.
This mode has an additional check in which the certificate CN from the server
is compared with the one set in the parameter sqlguard_cert_cn. If
sqlguard_cert_cn is NULL or empty, this check is disabled. Otherwise, it must
be set to the same CN as Guardium's self-signed certificate ('gmachine').
Note: All the parameters mentioned are from the guard_tap.ini file.
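Putting the parameters together, a guard_tap.ini fragment for mode 3 (server authentication) might look like the following sketch; the values shown are illustrative, and <install_dir> stands for your actual installation directory:

```ini
; Sketch of CAS TLS settings in guard_tap.ini (mode 3)
use_tls=1
guardium_ca_path=<install_dir>/etc/pki/certs/trusted/ca.cert.pem
; Common-name check (mode 4); leave empty to disable the check
sqlguard_cert_cn=gmachine
```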
If you attempt to use an older CAS agent to communicate with the updated CAS
server using SSL, you will see this message in the log file on the CAS agent
system:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
You might also see this message in the CAS log file on the Guardium system:
javax.net.ssl.SSLHandshakeException: Client requested protocol SSLv3 not
enabled or not supported
If you want to use a non-SSL connection between the CAS agents and the CAS
server, you can continue to use your existing CAS agents.
Template Set
A CAS template set contains a list of item templates, bundled together, that
share a common purpose, such as monitoring a particular type of database
(Oracle on Unix, for example). A template set is one of two types:
v Operating System Only (Unix or Windows)
v Database (Unix-Oracle, Windows-Oracle, Unix-DB2, Windows-DB2, etc.)
A database template set is always specific to both the database type and the
operating system type.
Note: CAS should not be asked to monitor more than 10,000 files per client.
Monitored Entity
The actual entity being monitored can be a file (its content and properties),
the value of an environment variable or Windows registry entry, or the output
of an OS command, script, or SQL statement.
CAS Instance
CAS Configuration
A CAS configuration defines one or more CAS instances, each of which identifies a
template set to be used to monitor a set of items on that host.
For each supported operating system and database type, Guardium provides
preconfigured, default template sets for monitoring a variety of databases on
either Unix or Windows platforms. A default template set is one that is used as a
starting point for any new template set defined for that template-set type. A
template-set type is either an operating system alone (Unix or Windows), or a
database management system (DB2, Informix, Oracle, etc.), which is always
qualified by an operating system type - for example, UNIX-Oracle, or
Windows-Oracle. Many of the preconfigured, default template sets are used within
Guardium's Vulnerability Assessments where, for example, known parameters, file
locations, and file permissions can be checked. See Vulnerability Assessment for
additional information.
You cannot modify a Guardium default template set, but you can clone it and
modify the cloned version. Each of the Guardium default template sets defines a
set of items to be monitored. Make sure that you understand the function and use
of each of the items monitored by that default template set and use the ones that
are relevant to your environment. After defining a template set of your own, you
can designate that template set as the default template set for that template-set
type. After that, any new template sets defined for that operating system and
database type will be defined using your new default template set as a starting
point. The Guardium default template set for that type will not be removed; it will
remain defined, but will not be marked as the default.
For example, the predefined CAS template set for Oracle contains these templates,
among others:
v $ORACLE_HOME/oradata/../.*dbf
v $ORACLE_HOME/oradata/../.*ctl
v $ORACLE_HOME/oradata/../.*log
v $ORACLE_HOME/../init.*.ora
As you can see, these file-pattern templates all start with the same root,
$ORACLE_HOME (NOTE: This is not necessarily the $ORACLE_HOME
environment variable defined on your database server; by preference, CAS uses the
datasource field “Database Instance Directory” as the value for $ORACLE_HOME).
It is possible that in a production environment your Oracle data files will not be in
the same directory tree, or even on the same device, as your log files, and your
Oracle configuration files might be in still another location.
You might create additional CAS templates using absolute paths to allow CAS to
find and monitor all of your Oracle files, for example:
v /u01/oradata/mydb/*.dbf
v /u02/oradata/mydb/*.dbf
v /u03/oradata/mydb/*.dbf
v /u01/oradata/mydb/*.ctl
v /u02/oradata/mydb/*.ctl
v /u03/oradata/mydb/*.ctl
v /home/oracle11/admin/mydb/bdump/*.log
v /home/oracle11/product/11.1/db_1/dbs/init*.ora
You can even use additional environment variables that are defined in your Oracle
instance account. As an example, if you have variables defined as $ORA_DATA1,
$ORA_DATA2 and $ORA_SOFT you can use:
v $ORA_DATA1/mydb/*.dbf
v $ORA_DATA2/mydb/*.dbf
v $ORA_DATA1/mydb/*.ctl
v $ORA_DATA2/mydb/*.ctl
v $ORA_SOFT/admin/mydb/bdump/*.log
v $ORA_SOFT/product/11.1/db_1/dbs/init*.ora
If you need to specify more than one pattern, use the bar symbol (|) to separate
patterns. If you want to add the profiles of your mysql users to the previous entry,
replace the previous example with this:
user_profile_files=.*db2.*=.profile|.*mysql.*=.profile
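The bar-separated convention above can be read as a list of regex=filename entries; an illustrative parser follows (the exact semantics of user_profile_files are assumed from the example, and this is not Guardium code):

```python
def parse_user_profile_files(value):
    """Split a user_profile_files setting into (user_regex, file) pairs.
    Entries are separated by '|'; each entry has the form
    <user-matching regex>=<profile file name>."""
    pairs = []
    for entry in value.split("|"):
        regex, _, filename = entry.partition("=")
        pairs.append((regex, filename))
    return pairs
```

For the example value above, this yields one pair for the db2 users and one for the mysql users.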
When the CAS client starts on the host, it looks for a checkpoint file that it may
have written to the system. This file tells CAS what it was doing the last time it
was running. CAS then connects to its Guardium system. If it has found a
checkpoint file, CAS will ask the Guardium system to verify its version of its
monitoring assignment against what is stored in the Guardium database. While the
CAS client and the Guardium system have been disconnected, there may have
been changes to the assignment. When any differences are resolved, CAS will
resume monitoring. If CAS does not find a checkpoint file, it will ask the
Guardium system what it should do. If the Guardium system finds the CAS host
in its database, then the associated template sets will be sent to the CAS client,
expanded into monitored items, and monitoring will begin. If the Guardium
system cannot find the CAS host in its database, it will add it to the database and
send the default template set for the CAS host operating system.
When connectivity is lost between the CAS client and the Guardium system, it
may take up to five minutes (the wait time for a CAS client to expect a
message from the Guardium system) for the client to discover that it has lost
contact with the primary Guardium system, though this may happen sooner if a
communication error is detected.
If the CAS client loses its connection to the Guardium system or cannot make
an initial connection, it opens a failover file and begins writing to it the
messages that it would have sent to the Guardium system. The path to this
failover file is stored in guard_tap.ini under the name cas_fail_over_file.
When communication is reestablished, the CAS client shuts down and restarts,
sends all messages stored in the failover file to the Guardium system, and
deletes the file. If the CAS client was unable to make the initial connection,
it uses the checkpoint file to determine what to monitor and continues doing
what it was doing before communication failed.
When communication is lost, the client also starts a thread which periodically tries
to reconnect with the primary Guardium system. The number of times CAS will
attempt to reconnect, and the average time interval between reconnect attempts,
are configurable parameters. It will try to reconnect for a period of time set in
guard_tap.ini with the name cas_server_failover_delay. After that time has
passed, the client will also try to connect to any secondary servers identified in
guard_tap.ini. The secondaries will be tried in the order of the value of the
primary attribute listed in the SQL_Guard sections of guard_tap.ini. When
primary is not 1, it is a secondary. While the client is connected to a secondary
server it will continue to try to reconnect to the primary server.
You can specify one or more secondary Guardium systems when configuring the
CAS client. In failover mode, CAS only tries to reconnect to its primary server
until the time specified by cas_server_failover_delay in guard_tap.ini is
exceeded. At that time, CAS begins trying to connect to any of the secondary
servers, as well as its primary server (which is always the first server it tries to
connect with during any reconnect attempt). While it is connected to a secondary
server, CAS continues to try to reconnect to its primary server.
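Taken together, the failover parameters live in guard_tap.ini. The fragment below illustrates the idea only: the section names, parameter placement, and all values are illustrative examples, not a definitive configuration; consult the installed file on your database server.

```ini
; Illustrative guard_tap.ini fragment (layout and values are examples only)
[TAP]
tap_ip=10.10.9.55
cas_fail_over_file=/var/tmp/cas_failover.dat
cas_server_failover_delay=30

; The primary server has primary=1; servers with higher primary values
; are secondaries, tried in order during failover.
[SQLGUARD_0]
sqlguard_ip=10.10.9.240
primary=1

[SQLGUARD_1]
sqlguard_ip=10.10.9.241
primary=2
```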
Changes to the CAS client configuration can be made only from the primary
server, and only while the host is online. Whenever the configuration of the CAS
client is changed on the primary server and the Guardium system is in a
standalone configuration, an export file is saved on the host. If the CAS client
connects to a secondary server, the saved export file is imported from the host to
the secondary server.
Various failover and connect parameters can be modified through S-TAP Control
Change Auditing.
Rules of Failover

Rule #  Guardium system  Fails over to                Valid
1       stand-alone      stand-alone                  Yes
2       managed          managed (same manager)       Yes
3       managed          managed (different manager)  No
Be sure to perform this procedure only while the selected CAS host is connected to
its primary server.
1. Export the definition of the CAS host (see the previous section).
2. On each secondary server:
v Delete the old CAS host definition that you want to replace.
v Import the definitions that were exported from the primary server (see
Importing CAS Hosts, previous).
The CAS client agent can avoid sending change notifications to the CAS server
based on predefined settings.
The CAS client agent looks for a parameter, ignore_change_alerts, in the CAS
client agent's cas.client.config.properties configuration file.
If the parameter is not found or not set, the CAS client works unchanged and the
Ignore change alerts functionality is not enabled (that is, the CAS client alerts on
any file change).
If the parameter is set, the CAS client agent does not send change notifications for
the change types specified in the parameter value.
For example, to avoid sending change notifications on OWNER and GROUP
changes, set the parameter as follows:
ignore_change_alerts=OWNER+GROUP
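The suppression rule can be sketched as a simple set membership test. The change-type names and the + separator follow the example above; everything else here is illustrative, not CAS's actual code:

```python
# Change types listed in ignore_change_alerts are suppressed.
value = "OWNER+GROUP"
ignored = set(value.split("+"))

def should_alert(change_type):
    # A notification is sent only if its change type is not ignored.
    return change_type not in ignored

print(should_alert("OWNER"))    # False: suppressed
print(should_alert("CONTENT"))  # True: still alerted
```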
Note: At initial installation, or when a new template is defined, the first scan of
the files is performed and these files appear in the CAS changes report regardless
of the Ignore change alerts settings.
If this scenario occurs, delete the datasource and change the tap_ip parameter to
the correct database server hostname/IP.
CAS Templates
Guardium provides a set of CAS templates, one for each type of data repository.
OS Script
File
File Pattern
Additionally, the Guardium Unix/DB2 Assessment: UNIX - DB2 for Unix set
includes the following templates:
This test monitors that the SETUID bit on DB2GOVD has been disabled
This test monitors that the SETUID bit on DB2STOP has been disabled
File ownership
This test monitors file ownership, and changes thereto, of DB2 files.
File permissions
This test monitors file permissions, and changes thereto, of DB2 files.
OS Script
File
Additionally, the Guardium Unix/Informix Assessment for Unix set includes the
following templates:
File ownership
This test monitors file ownership, and changes thereto, of Informix files.
File permissions
This test monitors file permissions, and changes thereto, of Informix files.
OS Script
File
Designates a file to be tracked and monitored. The path to the file can be absolute,
or relative to the $ORACLE_HOME variable. The value of the $ORACLE_HOME
variable is the value you set in the Database Instance Directory field of the
Datasource Definition panel. (This is assumed to name a single file. Environment
variables from the OS user environment can be used in the file name and will be
expanded. For example, $HOME/START.sh will name the startup script in the Oracle
user's home directory.)
File Pattern
Designates a group of files to be tracked and monitored. The path to the files can
be absolute, or relative to the $ORACLE_HOME variable. Set the value of the
$ORACLE_HOME variable in Database Instance Directory on the Datasource
Definition panel. A .. in the path indicates one or more directories between the
portion of the path before it and the portion of the path after it. A .+ in the path
indicates exactly one directory between the portion of the path before it and the
portion of the path after it. For example: $ORACLE_HOME/oradata/../*.dbf
A file pattern is just a short-hand for creating many single-file identifications
from a single identification string. A file pattern can be viewed as a series of
regular expressions separated by slashes (/). A file is matched if each element of
its full path can be matched by one of the regular expressions, in order. If an
element of the pattern is an environment variable, it is expanded before the match
begins. If .. is one of the elements of the pattern, it matches zero or more
directory levels. For example, /usr/local/../foo matches /usr/local/foo and
/usr/local/gunk/junk/bunk/foo. Using more than one .. element in a file pattern
should not be necessary and is discouraged, because it makes the pattern very
slow to expand. Because of confusion with its use in regular expressions, \ cannot
be used as a separator as it might be in Windows. The file pattern shown
previously is not correct, because *.dbf is not a valid regular expression; it should
be .*dbf.
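The matching rules described above can be sketched in Python. This illustrates the stated rules (per-element regular expressions, environment-variable expansion, and .. matching zero or more directory levels); it is not CAS's actual implementation:

```python
import os
import re

def matches(pattern, path):
    # Each element of the pattern is a regular expression; ".." matches
    # zero or more directory levels. Illustrative sketch only.
    pelems = [e for e in pattern.split("/") if e]
    felems = [e for e in path.split("/") if e]

    def rec(pi, fi):
        if pi == len(pelems):
            return fi == len(felems)
        elem = os.path.expandvars(pelems[pi])  # expand env vars first
        if elem == "..":
            # ".." may consume zero or more path elements
            return any(rec(pi + 1, fi + k)
                       for k in range(len(felems) - fi + 1))
        if fi < len(felems) and re.fullmatch(elem, felems[fi]):
            return rec(pi + 1, fi + 1)
        return False

    return rec(0, 0)

print(matches("/usr/local/../foo", "/usr/local/foo"))                 # True
print(matches("/usr/local/../foo", "/usr/local/gunk/junk/bunk/foo"))  # True
print(matches("/usr/local/oradata/../.*dbf",
              "/usr/local/oradata/db/system01.dbf"))                  # True
```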
ADMIN_RESTRICTIONS Is On
File ownership
This test monitors file ownership, and changes thereto, of the Oracle data files,
logs, executables, etc.
File permissions
This test monitors file permissions, and changes thereto, on the Oracle data files,
logs, executables, etc.
This test scans the Oracle log files for occurrences of error strings.
If the template item is not specified as part of the Database Instance Directory in
the MongoDB datasource definition, the item will be skipped over and not
scanned.
Note: For CAS scripts to work, you must enable login for the MongoDB account
on the MongoDB server. To enable login, log in as root, run the command chsh
mongod, and when prompted for the new shell, enter /bin/bash.
Note: You can create your own template with multiple file paths for any type of
datasource. When creating your own template, we recommend that you use the
Unix/MongoDB as a reference. To create a new template for a MongoDB
datasource, you can clone and modify the Unix/MongoDB template.
Note: The VA solution for MongoDB clusters can be run on mongos, and on the
primary node and all secondary nodes for replica sets.
File Ownership
This test checks whether the files are owned by the correct owner and belong to
the correct group according to the definition within the CAS template.
File Permission
This test checks whether the file permission is properly set according to the
definition within the CAS template.
This test checks for these events (FATAL, ERROR, DEBUG, ABORT, and PANIC)
in these two log files: /nz/kit/log/postgres/pg.log and
/nz/kit/log/startupsvr/startupsvr.log
OS Script
File
Designates a file to be tracked and monitored. The path to the file can be absolute,
or relative to the $ORACLE_HOME variable. The value of the $ORACLE_HOME
variable is the value you set in the Database Instance Directory field of the
Datasource Definition panel. (This is assumed to name a single file. Environment
variables from the OS user environment can be used in the file name and will be
expanded. For example, $HOME/START.sh will name the startup script in the Oracle
user's home directory.)
File Pattern
Designates a group of files to be tracked and monitored. The path to the files can
be absolute, or relative to the $ORACLE_HOME variable. Set the value of the
$ORACLE_HOME variable in Database Instance Directory on the Datasource
Definition panel. A .. in the path indicates one or more directories between the
portion of the path before it and the portion of the path after it. A .+ in the path
indicates exactly one directory between the portion of the path before it and the
portion of the path after it. For example: $ORACLE_HOME/oradata/../*.dbf
A file pattern is just a short-hand for creating many single-file identifications
from a single identification string. A file pattern can be viewed as a series of
regular expressions separated by slashes (/). A file is matched if each element of
its full path can be matched by one of the regular expressions, in order. If an
element of the pattern is an environment variable, it is expanded before the match
begins. If .. is one of the elements of the pattern, it matches zero or more
directory levels. For example, /usr/local/../foo matches /usr/local/foo and
/usr/local/gunk/junk/bunk/foo. Using more than one .. element in a file pattern
should not be necessary and is discouraged, because it makes the pattern very
slow to expand. Because of confusion with its use in regular expressions, \ cannot
be used as a separator as it might be in Windows. The file pattern shown
previously is not correct, because *.dbf is not a valid regular expression; it should
be .*dbf.
ADMIN_RESTRICTIONS Is On
File ownership
This test monitors file ownership, and changes thereto, of the Oracle data files,
logs, executables, etc.
File permissions
This test monitors file permissions, and changes thereto, on the Oracle data files,
logs, executables, etc.
This test scans the Oracle log files for occurrences of error strings.
unix_domain_socket_marker=<key>
Example 1:
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ORCL))))
unix_domain_socket_marker=ORCL
Example 2:
Where there is more than one IPC line in listener.ora, use a common
denominator of all the keys:
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST
Guardium uses a string search in the path so LISTENER will work for all four and
should be used in this case:
unix_domain_socket_marker=LISTENER
File Ownership
This test checks whether the files are owned by the correct owner and belong to
the correct group according to the definition within the CAS template.
File Permission
This test checks whether the file permission is properly set according to the
definition within the CAS template.
Registry Variable
Searches the Windows registry for specific key values that are required by
security assessment tests.
OS Script
File
File Pattern
File ownership
This test monitors file ownership, and changes thereto, of Sybase files.
File permissions
This test monitors file permissions, and changes thereto, of Sybase files.
Click Harden. The CAS functions are listed under the Configuration Change
Control (CAS Application) header.
The CAS Configuration Navigator panel is the starting point for creating or
modifying CAS Template Sets.
Open the CAS Configuration Navigator panel by clicking Harden > Configuration
Change Control (CAS Application) > CAS Template Set Configuration.
Use the CAS Configuration Navigator panel to modify an existing CAS template
set. Once a template set is in use on any CAS host, the modifications that you can
make to that template set are limited. You will be able to make minor changes to
various elements of the definition, but you will not be able to add or remove
templates.
1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to modify and click Modify to open the
CAS Template Set Definition panel.
4. Make your desired changes and click Apply to save them.
create_cas_template
create_datasource
create_cas_host_instance
CAS Hosts
A Configuration Auditing System (CAS) host configuration defines one or more
CAS instances.
Once you have defined one or more CAS template sets, and have installed CAS on
a database server, you are ready to configure CAS on that host. A CAS host
configuration defines one or more CAS instances. Each CAS instance specifies a
CAS template set, and defines any parameters needed to connect to the database.
For each database server on which CAS is installed, there is a single CAS host
configuration, which typically contains multiple CAS instances - for example, one
CAS instance to monitor operating system items, and additional CAS instances to
monitor individual database instances.
v Define a CAS Instance
v Modify a CAS Instance
v Delete a CAS Instance
v Disable a CAS Instance
Note: A CAS instance cannot be defined if the host is offline or if this is a
secondary Guardium system for the host.
5. Click Add Datasource to open the Datasource Finder panel.
Note: If no compatible datasource is available for this template set on this host,
you can click New to open the Datasource Definition panel and add a
datasource.
6. Select the datasource that you want to add to the template set, and click Add
to add it to the template set.
Access to CAS configuration functions is restricted to the admin user and to users
who have been assigned the CAS role.
Open the CAS Configuration Navigator panel by clicking Harden > Configuration
Change Control (CAS Application) > CAS Host Configuration.
In the Host Instance Definitions panel, click a Monitored Items link to view the
complete list of items monitored in the Monitored Items Definitions panel. The
following table describes the components seen on the Monitored Items Definitions
panel for this Host Configuration.
Every monitored item refers to raw data, a character object on the host: the result
of an SQL query, the output of an OS script, or the contents of a file. The size of
that character object is computed. If the item is a file, the permissions, owner,
group, and last-modified time are also checked. If any of these have changed since
the last time the item was checked, the change is noted.
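The change check described above amounts to comparing a snapshot of the item's tracked attributes with the previous snapshot. A minimal sketch for a file item, assuming the tracked fields are size, permissions, owner, group, and last-modified time (the exact fields CAS records may differ):

```python
import os

def snapshot(path):
    # Capture the attributes the check described above tracks for a file.
    st = os.stat(path)
    return {"size": st.st_size, "mode": st.st_mode,
            "owner": st.st_uid, "group": st.st_gid, "mtime": st.st_mtime}

def changed(previous, path):
    # A change is noted when any tracked attribute differs from the
    # previous snapshot.
    return snapshot(path) != previous
```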
Table 220. View Monitored Item Lists

Component   Description
Select Box  Check the Select Box to edit a monitored item individually or as a
            group.
list_cas_hosts
create_cas_host_instance
delete_cas_host_instance
list_cas_host_instances
update_cas_host_instance
CAS Reporting
This section describes Configuration Auditing System (CAS) reporting.
The admin user has access to all query builders and default reports. The admin
role allows access to the default CAS reports, but not to the CAS query builders.
The CAS role allows access to both the default CAS reports and the query builders.
v Accessing CAS Query Builders
v Accessing Default CAS Reports
v CAS Reporting Domains
This section describes how to access the CAS Query Builders from the
administrator and user portals. For help on how to use the query builders or
report builders, see Queries or Reports.
View the default reports related to CAS by clicking Harden > Reports.
Template Entity

Attribute    Description
Template ID  A unique identifier for the template set, numbered sequentially
Access Name  Depending on the Audit Type, this is the OS or SQL script,
             environment or registry value, or a file name or file name pattern

Host Entity

Attribute    Description
Host Name    Database server host name (may display as an IP address)
OS Type      Operating system: UNIX or WIN
Is Online    Online status (yes or no) when the record was written

Host Entity

Attribute    Description
Host Name    Database server host name
OS Type      Operating system: Unix or Windows
Is Online    Current online status (Yes/No)

Host Event

Attribute    Description
Event Time   Date and time that the event was recorded
Event Type   Identifies the event being recorded:
Drill-Down Reports

Report          Description
Record Details  Displays the saved data included in the Count of Saved Data
                column

Drill-Down Reports

Report           Description
View Difference  Displays the difference between the selected data and the prior
                 version
CAS Status
Open the Configuration Auditing System Status by clicking Manage > Change
Monitoring > CAS Status
For each database server where CAS is installed and running, and where this
Guardium system is configured as the active Guardium host, this panel displays
the CAS status, and the status of each CAS instance configured for that database
server.
If you have trouble distinguishing the colors of the status indicator lights, hover
your mouse over a status light and a text box displays the current status.
Note: The TAP_IP entry in the guard_tap.ini file is required. If TAP_IP is missing,
CAS does not start and an error message is logged in the log file on the CAS
client.
There are several situations where you may need to stop or start the CAS agent on
a monitored system.
Note: If you want to stop and restart the CAS agent, you can do so by clicking
Manage > Change Monitoring > CAS Status.
Use this procedure to restart the CAS agent only when it has been stopped by
editing the /etc/inittab file as described previously.
1. Edit the file /etc/inittab.
2. Find the line:
#cas:2345:respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/gua
3. Uncomment the line shown in step 2 by removing the # in the first character
position. Depending on the operating system, the comment character may be
different.
4. Save the file.
5. Enter the following command to restart the CAS agent: init -q
This feature works only with MySQL, MS SQL and Oracle databases.
Prerequisites
1. An Amazon account.
2. One or more RDS instances under the Amazon account.
3. Amazon credentials, including:
Access Key ID
Identifies the user as the party responsible for service requests. It must be
included in each request. It is not confidential and does not need to be
encrypted.
Secret Access Key
Is associated with the Access Key ID and is used to calculate a digital
signature that is included in the request. The Secret Access Key is a secret,
and only the user and AWS should have it.
Amazon RDS requires the clock time of the Guardium system to be correct
(within 15 minutes). A larger discrepancy results in an Amazon error: if the
difference between the request time and the current time is too large, the request
is not accepted.
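The clock-skew requirement can be illustrated with a small sketch; the 15-minute tolerance is approximate, and the function name is ours, not an AWS API:

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=15)

def request_accepted(request_time, aws_time):
    # AWS rejects a request whose timestamp differs from its own clock
    # by more than roughly 15 minutes.
    return abs(aws_time - request_time) <= MAX_SKEW

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(request_accepted(now - timedelta(minutes=10), now))  # True
print(request_accepted(now - timedelta(minutes=20), now))  # False
```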
If the Guardium system time is not correct, set the correct time by using the
following CLI commands:
show system ntp server
store system ntp server (An example is ntp server: ntp.swg.usma.ibm.com)
store system ntp state on
See the terminology terms section at the end of this help topic for Amazon
definitions.
Step procedures
1. Configure discovery of Amazon Relational Databases Systems (RDS).
2. Create datasource definitions for each discovered datasource.
3. Run Guardium Vulnerability Assessment (VA) tests automatically for
discovered RDS.
Buttons

Table 221. Menu buttons

Menu screen buttons  Description
Discover             Use this button after adding Access Key and Secret Access
                     Key values.
Errors               Use this button to read all error messages.
An Amazon credential Access Key ID and Secret Access Key are required in
order to access RDS.
The Discover button stays disabled while the Access Key ID or Secret Access Key
field is empty.
Entering any text in Secret Access Key enables the Discover button; the secret
access key is not validated at this point.
Discovery of RDS without a valid Access Key ID and Secret Access Key results in
an error, which can be seen by clicking the Errors button.
One error message for each selected RDS indicates the problem with the invalid
Access Key ID or Secret Access Key.
The Filter text field limits the number of regions shown in the list. For example,
entering “west” in the filter text displays only regions with the word “west” in
their names.
Selecting the Amazon region check box selects all the shown regions.
All RDS belonging to the account, with the access key and secret access key for
the selected regions, are displayed in a list.
The number of discovered RDS can be reduced by entering text in the filter field.
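The filter behavior described above can be sketched as a case-insensitive substring match; the region names below are illustrative:

```python
regions = ["US East (N. Virginia)", "US West (N. California)",
           "EU West (Ireland)", "Asia Pacific (Tokyo)"]

def filter_regions(regions, text):
    # Show only regions whose name contains the filter text,
    # ignoring case.
    return [r for r in regions if text.lower() in r.lower()]

print(filter_regions(regions, "west"))
# ['US West (N. California)', 'EU West (Ireland)']
```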
The Associate an RDS with a Guardium security group button does nothing if the
security group already exists, the RDS already has that security group, and the IP
address of the Guardium system has already been assigned to the security group.
If the selected RDS does not have a Guardium security group assigned to it,
clicking the button creates one.
Clicking the Add IP Range to Guardium Security Group button opens a dialog
box.
Use the menu shown to add the IP address range to the Guardium Security group.
Unlike adding IP addresses, which can be done from both the RDS console and
the Amazon RDS discovery page, deleting an IP address can be done only from
the Amazon RDS console.
Clicking the button creates a new datasource definition, or updates the existing
one with the new user name and password.
When the datasource definition is created, the new datasource definition
information is displayed in the Guardium Datasource column, and the Datasource
Definition and Launch Vulnerability Assessment buttons become enabled.
After creating a datasource for the RDS, you can go to the Guardium Datasource
Definition page and modify the configuration.
The Datasource Definition button opens the Datasource Definition panel with all
necessary information already filled in. The information on this panel is the same
as in the Guardium Datasource Definition for a non-Amazon database. You can
modify the existing information or add additional information for the datasource.
Use the Test Connection button on this page to test the connection to Amazon
RDS.
Note: The security group must allow Internet access. Click on the Errors button to
read any error messages.
In the Result dialog, the user can give a description for the Vulnerability
Assessment and Audit Process (otherwise default names are used), enter email
addresses to be added as receivers for the audit process, and determine whether
the user executing it should also be added as a receiver.
Once the user submits the execution, a Vulnerability Assessment is created with
all the selected datasources and all the relevant tests for the datasource types
included. An audit process is created that contains the Vulnerability Assessment,
and the execution is submitted.
Note: The description (default or user defined) is used to identify the security
assessment. If a security assessment with that description is already defined, the
existing assessment is used; otherwise a new security assessment is created.
If the security assessment is already present, the datasources the user checked are
added (if not already in the assessment). If the assessment has no tests at all, all
available tests are added (the same as for a new assessment); however, if the
assessment existed before and has some tests, the tests remain untouched (none
are added and none are removed).
Finally, an audit process is created (unless one already exists) with the same
description and one task, the security assessment, and execution of the process is
submitted.
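The reuse rules above amount to a get-or-create with careful handling of tests. A sketch, using a data model of our own invention rather than Guardium's API:

```python
def ensure_assessment(assessments, description, selected_datasources,
                      available_tests):
    # Reuse an existing assessment with the same description,
    # otherwise create a new one with all available tests.
    a = assessments.get(description)
    if a is None:
        a = {"datasources": set(), "tests": set(available_tests)}
        assessments[description] = a
    # Add any newly checked datasources; already-present ones are unaffected.
    a["datasources"] |= set(selected_datasources)
    # Only an assessment with no tests at all receives the full test list;
    # otherwise its tests are left untouched.
    if not a["tests"]:
        a["tests"] = set(available_tests)
    return a
```

An audit process with the same description would then be created, unless one already exists, and submitted.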
For additional information on how to view and work with VA results, go to the
“Viewing assessment results” on page 582 help topic.
Terminology terms
AWS (Amazon Web Services): A set of services delivered by Amazon that can be
used to meet the needs for a cloud-based application.
Regions: Compute power you use from Amazon (EC2 and EBS volumes) runs in a
physical datacenter; there are currently five datacenter regions you can use:
Northern Virginia, Northern California, Ireland, Singapore, and Tokyo.
Availability Zones: Each physical region is further broken down into zones, where
a zone is an independent section of a datacenter that adds redundancy and fault
tolerance to a given region.
Amazon Virtual Private Cloud (VPC): Amazon Virtual Private Cloud (Amazon
VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual
network that you've defined. This virtual network closely resembles a traditional
network that you'd operate in your own data center, with the benefits of using the
scalable infrastructure of AWS.
RDS (Amazon Relational Database Service): A web service that makes it easy to set
up, operate, and scale a relational database in the cloud. A relational database
(MySQL) that is hosted and managed by Amazon, and made available to
developers who do not want to manage their own database platform.
Secret Access Key: A 40-character sequence. Each Access Key ID has a Secret
Access Key associated with it. This key is a long string of characters (not a file)
that is used to calculate the digital signature that must be included in the request.
The Secret Access Key is a secret, and only the user and AWS should have it.
Amazon RDS Security Group: Security groups control the access that traffic has in
and out of a DB instance. Three types of security groups are used with Amazon
RDS: DB security groups, VPC security groups, and EC2 security groups. In simple
terms, a DB security group controls access to a DB instance that is not in a VPC, a
VPC security group controls access to a DB instance (or other AWS instances)
inside a VPC, and an EC2 security group controls access to an EC2 instance.
Additional Information
docs.aws.amazon.com
aws.amazon.com/rds
Exporting Results (CSV, CEF, PDF) . . . . .. 121 Subscribing to Support updates . . . . .. 229
Export/Import Definitions . . . . . . . .. 122 Problems and solutions . . . . . . . . .. 230
Distributed Interface . . . . . . . . . .. 127 User Interface . . . . . . . . . . .. 230
iii
Policies . . . . . . . . . . . . .. 233 S-TAPs and other agents . . . . . . .. 251
Reports . . . . . . . . . . . . .. 236 GIM . . . . . . . . . . . . . .. 260
Assess and Harden . . . . . . . . .. 241 Installing Your Guardium System . . . .. 260
Configuring your Guardium system . . . .. 242
Access Management . . . . . . . . .. 246 Index . . . . . . . . . . . . . .. 265
Aggregation . . . . . . . . . . . .. 247
Central Management . . . . . . . . .. 249
Chapter 1. Configuring your Guardium system
You can configure several aspects of your Guardium system to enable you to meet
your business goals effectively and efficiently.
System Configuration
Most of the information on the System Configuration panel is set by using the CLI
at installation time.
For instructions on how to configure the system, or to modify any other System
Configuration settings, see Modify the System Configuration.
A valid license is required to use various functions within the appliance. If a
license is entered after the system starts, the GUI must be restarted.
The Guardium® administrator defines the system shared secret in the System
Configuration. The system shared secret is used for two general purposes:
v To encrypt files that are exported from the appliance by archive/export activities
v To establish secure communications between Central Managers and managed
units
If you are using Central Management and/or aggregation, you must set the System
Shared Secret for all related systems to the same value.
Note: The applied changes do not take effect until the Guardium system is
restarted. After you apply configuration changes, click Restart to stop and restart
the system.
Table 1. System Configuration Panel Reference

Unique Global Identifier
    This value is used for collation and aggregation of data. The default
    value is a unique value that is derived from the MAC address of the
    machine. Do not change this value after the system begins monitoring
    operations.
System Shared Secret
    Any value that you enter here is not displayed. Each character you
    type is masked.
License Key
    The license key is inserted in the configuration during installation.
    Do not modify this field unless you are instructed to do so by
    Technical Support. You might need to paste a new product key here
    if optional components are being added.

To display the network interfaces installed on the unit, use the show
network interface inventory CLI command. For example:

show network interface inventory
Current network card configuration:
Device | Mac Address       | Member of
-----------------------------------------
eth0   | 00:50:56:3b:c3:73 |
eth1   | 00:50:56:8a:0d:fa |
eth2   | 00:50:56:8a:0d:fb |
eth3   | 00:50:56:8a:00:c1 |

Note: The "Member of" column shows which NICs are in a bond pair, if
bonding exists.
Note: The secondary IP address and its associated port are NOT
related to the high availability feature, which provides fail-over
support via IP Teaming for the primary connection. For more
information about the high-availability option, see the store network
interface commands in the CLI Appendix.
SubNet Mask (Secondary)
    Optional. The subnet mask for the secondary System IP Address.
Default Route / Secondary Route
    The IP address of the default router for the system. / The IP address
    of the secondary router.
Primary Resolver / Secondary Resolver / Tertiary Resolver
    The IP address for the Primary Resolver (DNS) is required. The
    secondary and tertiary resolvers are optional.
Test Connection
    Click Test Connection to test the connection to the corresponding
    DNS (Domain Name System) server. This only tests that there is
    access to port 53 (DNS) on the specified host; it does not verify that
    this is a working DNS server. A message box indicates whether the
    DNS server responded.
Stop
    Click Stop to shut down the system.
Restart
    Click Restart to stop and then restart the system. You will be
    prompted to confirm the action.
Apply
    Click Apply to save the changes. The changes are applied the next
    time the system restarts.
Inspection Engine Configuration

The inspection engine extracts SQL from network packets; compiles parse trees that
identify sentences, requests, commands, objects, and fields; and logs detailed
information about that traffic to an internal database.
You can configure and start or stop multiple inspection engines on the Guardium
appliance.
Inspection engines are also defined on S-TAPs. If S-TAPs report to this Guardium
appliance, be sure the appliance does not monitor the same traffic as the S-TAP. If
that happens, the analysis engine will receive duplicate packets, will be unable to
reconstruct messages, and will ignore that traffic.
Selecting IP addresses
Each inspection engine monitors traffic between one or more client and server IP
addresses. In an inspection engine definition these are defined using an IP address
and a mask. You can think of an IP address as a single location and a mask as a
wild-card mechanism that allows you to define a range of IP addresses.
IP addresses have the format: n.n.n.n, where each n is an eight-bit number (called
an octet) in the range 0-255.
The mask is specified in the same format as the IP address: n.n.n.n. A zero in any
bit position of the mask serves as a wildcard. Thus, the mask 255.255.255.240
combined with the IP address 192.168.1.3 matches all values from 0-15 in the last
octet, since the value 240 in binary is 11110000. But it only matches the values
192.168.1 in the first three octets, since 255 is all 1s in binary (in other words, no
wildcards apply for the first three octets).
Specifying binary masks can be confusing. In practice, however, IP addresses are
usually grouped in a hierarchical fashion, with all of the addresses in one
category (desktop computers, for example) grouped together in one of the last two
octets. Therefore, the numbers you see most often in masks are either 255 (no
wildcard) or 0 (all wildcards).
Thus a mask 255.255.255.255 (which has no zero bits) identifies only the single
address specified by IP address (192.168.1.3 in the example).
Alternatively, the mask 255.255.255.0, combined with the same IP address matches
all IP addresses beginning with 192.168.1.
The IP address 0.0.0.0, which is sometimes used to indicate all IP addresses, is not
allowed by Guardium. To select all IP addresses when using an IP address/mask
combination, use any non-zero IP address followed by a mask containing all zeroes
(for example: 1.1.1.1/0.0.0.0).
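The address/mask semantics described above can be expressed in a few lines of code. This is an illustrative sketch (the function names are ours, not part of Guardium): only the bit positions where the mask is 1 must agree with the configured address.

```python
def ip_to_int(dotted):
    """Convert a dotted-quad IPv4 string to a 32-bit integer."""
    a, b, c, d = (int(octet) for octet in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d


def ip_matches(candidate, address, mask):
    """True if candidate falls in the range defined by address/mask.

    Zero bits in the mask act as wildcards, so only the bits where
    the mask is 1 must agree with the configured address.
    """
    m = ip_to_int(mask)
    return (ip_to_int(candidate) & m) == (ip_to_int(address) & m)
```

With mask 255.255.255.240 and address 192.168.1.3, any last octet from 0 to 15 matches; with mask 0.0.0.0, every address matches, which is why 1.1.1.1/0.0.0.0 selects all IP addresses.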
Note: The applied changes do not take effect until the inspection engines are
restarted. After applying inspection engine configuration changes, click the Restart
button to stop and restart the system (using the new configuration settings).
Note: For HTTP support, there are Inspection Engine configuration limitations.
The following Inspection Engine settings are not supported for HTTP: Default
Capture Value; Default Mark Auto Commit; Log Request Sql String; Log
Sequencing; Log Exception Sql String; Log Records Affected; Compute Avg.
Response Time; Inspect Returned Data; Record Empty Sessions.
Table 2. Settings that Apply to All Inspection Engines

Default Capture Value
    Default value is false. Used by the Replay function to distinguish
    between transactions and capture values: if you have a prepared
    statement, assigned values are captured and replayed. If you want
    to replay your captured prepared statements as prepared statements,
    check this check box for the captured data.
Default Mark Auto Commit
    Default value is true. Because different databases use different
    auto-commit models, this value is used by the Replay function to
    explicitly mark up the transactions and auto-commit after each
    command.
    Note: If the check box is checked, commits and rollbacks are
    ignored. Databases currently supported include DB2®, Informix®, and
    Oracle.
Log Request Sql String
    If enabled, this option automatically logs DB2 application events
    that use the procedure WLM_SET_CLIENT_INFO. These events are logged
    only if an application in the environment issues them. They can be
    added to reports by using attributes from the Application Events
    entity.
Log Sequencing
    If marked, a record is made of the immediately previous SQL
    statement, as well as the current SQL statement, provided that the
    previous construct occurs within a short enough time period.
Log Exception Sql String
    If marked, when exceptions are logged, the entire SQL statement is
    logged.
Logging Granularity
    The number of minutes (1, 2, 5, 10, 15, 30, or 60) in a logging
    unit. If requested in a report, Guardium summarizes request data at
    this granularity. For example, if the logging granularity is 60, a
    certain request occurred n times in a given hour. If the check box
    is not marked, exactly when the command occurred within the hour is
    not recorded. But if a rule in a policy is triggered by a request, a
    real-time alert can indicate the exact time. When you define
    exception rules for a policy, those rules can also apply to the
    logging unit. For example, you might want to ignore 5 login failures
    per hour, but send an alert on the sixth login failure.
Max. Hits per Returned Data
    When returned data is being inspected, indicate how many hits
    (policy rule violations) are to be recorded.
Ignored Ports List
    A list of ports to be ignored. Add values to this list if you know
    your database servers are processing non-database protocols and you
    want Guardium to not waste cycles analyzing non-database traffic.
    For example, if you know the host on which your database resides
    also runs an HTTP server on port 80, you can add 80 to the ignored
    ports list, ensuring that Guardium will not process these streams.
    Separate multiple values with commas, and use a hyphen to specify an
    inclusive range of ports. For example:
    101,105,110-223
Buffer Free: n %
    Display only. n is the percent of free buffer space available for
    the inspection engine process. This value is updated each time the
    window is refreshed. There is a single inspection engine process
    that drives all inspection engines; this is the buffer used by that
    process.
Restart Inspection Engines
    Click Restart Inspection Engines to stop and restart all inspection
    engines.
Add Comments
    Click Comment to add comments to the Inspection Engine
    Configuration.
Apply
    Click Apply to save the configuration.
    Note: Any global changes made (and saved by using Apply) do not
    take effect until you restart the inspection engines. However,
    individual inspection engine attributes, such as exclude and
    sequence order, take effect immediately.
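The Ignored Ports List syntax (comma-separated values, hyphen for an inclusive range) is simple to mimic. The following is an illustrative sketch of such a parser, not Guardium code:

```python
def parse_port_list(spec):
    """Expand a Guardium-style port list such as "101,105,110-223"
    into a set of individual port numbers."""
    ports = set()
    for item in spec.split(","):
        item = item.strip()
        if not item:
            continue
        if "-" in item:
            # A hyphen denotes an inclusive range of ports.
            low, high = (int(p) for p in item.split("-", 1))
            ports.update(range(low, high + 1))
        else:
            ports.add(int(item))
    return ports
```

For example, parse_port_list("101,105,110-223") yields 116 ports: 101, 105, and every port from 110 through 223 inclusive.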
Note: When IPC traffic is sent from the Greenplum database, it is logged on
the Guardium system as PostgreSQL traffic. When TCP traffic is sent from the
Greenplum database, the inspection engine logs it as Greenplum database
traffic. For TCP traffic, Guardium determines the database according to the
port (port 5432 for Greenplum). For IPC traffic, Guardium uses named pipes,
and for the Greenplum database the Guardium system uses PostgreSQL as the
name of the database. When both PostgreSQL and Greenplum databases are on
the same system, their IPC traffic is logged in DB_PROTOCOL according to the
first PostgreSQL/Greenplum database inspection engine set in the
guard_tap.ini file.
5. In the DB Client IP/Mask boxes, enter a list of clients (a client host from
which the database connection was initiated) to be monitored (or excluded if
the Exclude DB Client IP box is marked). The clients are identified by IP
addresses and subnet masks. There are detailed instructions on how to use
these fields in the overview.
Click the plus sign to add an additional IP address and subnet mask. Click
the minus sign to remove the last IP address and subnet mask.
6. In the DB Server IP/Mask boxes, enter a list of database servers (where a
database sits) to be monitored. The servers are identified by IP addresses and
subnet masks. There are detailed instructions on how to use these fields in the
overview.
Click the plus sign to add an additional IP address and subnet mask. Click
the minus sign to remove the last IP address and subnet mask.
7. In the Port box, enter a single port or a range of ports over which traffic
between the specified clients and database servers will be monitored. Most
often, this should be a single port.
Warning: Do not enter a wide range of ports, just to be certain that you have
included the correct one! You may cause the inspection engine to bog down
attempting to analyze traffic on ports that carry no database traffic or traffic
that is of no interest for your environment.
8. Mark the Active on startup box if this inspection engine should be started
automatically on start-up.
9. Mark the Exclude DB Client IP box if you want the inspection engine to
monitor traffic from all clients except for those listed in the DB Client
IP/Mask list. Be sure that you understand the difference between this and
the Ignore protocol selection: this option includes all traffic except
traffic from the listed IP addresses. To ignore a specific set of clients
without including all other clients, define a separate inspection engine
for those clients and use the Ignore protocol.
10. Click Add to save the definition.
11. Optionally reposition the inspection engine in the list of inspection engines.
Filtering mechanisms defined in the inspection engines are executed in the
order in which the engines are listed. If necessary, reposition the new
inspection engine configuration, or any
existing configurations, using the Up and/or Down buttons in the border of
the definition.
12. Optionally click Start to start the inspection engine just configured. The Start
button will be replaced by a Stop button, once the engine has been started.
13.
Note: If you provide a value for TAP_IDENTIFIER and the value contains
spaces, Guardium will automatically replace the spaces with hyphens. For
example, the value “Sample description” will become “Sample-description”.
If you are no longer using an inspection engine, we suggest that you remove the
definition, so that it is not restarted accidentally.
1. Click Manage > Activity Monitoring > Inspection Engines to open the
Inspection Engines.
2. If the inspection engine to be removed has not been stopped, click Stop.
3. To remove an inspection engine, click Delete.
Portal Configuration
You can keep the Guardium appliance Web server on its default port (8443) or
reset the portal. We strongly recommend that you use the default port.
1. Click Setup > Tools and Views > Portal to open the Portal.
2. If it is not marked, mark the Active on Startup checkbox (this should never be
disabled).
3. Set the HTTPS Port to an integer value between 1025 and 65535.
4. Click Apply to save the value. (The Guardium security portal will not start
listening on this port until it is restarted.) Or click Revert to restore the value
stored by the last Apply operation.
5. Click Restart to restart the Guardium Web server if you have made and saved
any changes. You can now connect to the unit on the newly assigned port.
Note: To re-connect to the unit after it has restarted with the new port number,
you must change the URL used to open the Guardium Login Page on your
browser.
The Guardium Portal Configuration defines how user passwords are
authenticated when logging in to the Guardium appliance. There are three
choices: Guardium (local), RADIUS, and LDAP.
The Portal configuration screen under Setup > Tools and Views > Portal is used
for the following:
1. To define how a user password is authenticated.
2. To restart the GUI to reset the authentication type.
When you define a username and password by using the accessmgr role, the
password defined for each user is used when logging in to the Guardium
appliance.
The RADIUS connection allows login authentication through a radius server. The
Radius/RSA server can be defined using both a password and a SecurID token
number. The SecurID token numeric password is displayed via a hardware token.
The Radius/RSA server is defined on a Windows server. The security RSA SecurID
token is also defined and stored on the Radius server and does not have to be
downloaded in order for the Radius portal to work.
The LDAP connection will work when the password is defined and stored on a
given LDAP server. In order for a user to use the LDAP portal and to login, a user
account name must be imported from the LDAP server first. Use the User LDAP
Import function available from the accessmgr account to define the LDAP location
and then import the LDAP users. The password does not have to be uploaded.
Note: Default .psml structures for user and role can be defined, via the GUI, by
the admin user. See Portlet Editor for further information.
generate-role-layout
Parameters
user - The name of the user whose layout will be used as a model for the role
layout. If the user does not exist, you will receive the following error message: No
such user '<user>'.
If either parameter contains spaces (for example, John Doe as the user, or DBA
Managers as the role), replace the space characters with underscore characters
(John_Doe, DBA_Managers).
Configure Authentication
By default, Guardium user logins are authenticated by Guardium, independent of
any other application.
For the Guardium admin user account, login is always authenticated by Guardium
alone. For all other Guardium user accounts, authentication can be configured to
use either RADIUS or LDAP. In the latter cases, additional configuration
information for connecting with the authentication server is required.
When an alternative authentication method is used, all Guardium users must still
be defined as users on the Guardium appliance. It is only the authentication that is
performed by another application.
While user accounts and roles are managed by the accessmgr user, the
authentication method used is managed by the admin user. This is a standard
separation-of-duties best practice.
Note:
This attribute identifies a user for LDAP authentication. The Access Manager
should be made aware of what attribute is used here, since the Access
Manager performs the LDAP User Import operation. See LDAP User Import
for further information about importing LDAP users.
If a user is using SamAccountName as the RDN value, the user must use
either a =search or =[domain name] in the full name.
Global Profile
The Global Profile panel defines defaults that apply to all users.
By default, for any new report, or for any report that is contained in a default
layout, aliases are not used.
An alias provides a synonym that substitutes for a stored value of a specific
attribute type. It is commonly used to display a meaningful or user-friendly name
for a data value. For example, Financial Server might be defined as an alias for IP
address 192.168.2.18.
If you want to see aliases by default, you can change the default aliases setting for
all reports, as follows:
v Click Setup > Tools and Views > Global Profile to open the Global Profile.
v Mark the Use Aliases in Reports unless otherwise specified check box.
v Click Apply.
PDF files created by various Guardium components (audit tasks, for example) have
a standard page footer. To customize that footer:
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the PDF Footer Text field, enter the text to be printed at the foot of each
page.
3. Click Apply.
Named Template
Message templates are used to generate alerts.
This feature lets you define multiple message templates and use different
templates for different rules. Previously, only a single message template was
available for all rules and all receiver types.
To add, modify and delete named message templates, click Edit. When creating a
new named template, the starting value of the string is a copy of whatever is
currently in the Message template of the Global Profile. "R/T Alert" is the only
level of severity permitted.
Predefined message templates have been created for the SIEM solutions
ArcSight, EnVision, and QRadar. The Guardium system comes preloaded with
certified (agreed-upon) templates to integrate with these SIEM solutions.
The Named Template builder can select from two template types - Real-time Alerts
and Audit Process Report.
Use the Audit Process Report to audit process tasks. The CSV generated will use
the Named Template to adjust the content.
Click Edit Named Templates. Choose an SIEM and then click Modify. Select
Real-time Alerts or Audit Process Report.
After editing, the multiple message templates can be selected from within the
Policy Builder menu.
Adding the QRadar template allows sending real-time alerts or Audit Process
Report to QRadar using the LEEF Format (this is QRadar's format).
Follow the steps to send real-time alerts or Audit Process Results to the QRadar
SIEM.
Real-time alert, Guardium to QRadar
1. Create a real-time alert.
2. Write to syslog.
3. Select the template type (Real-time Alert).
4. Forward to Q1 Labs QRadar SIEM (via LEEF mapping/predefined
message template) - choose the QRadar Named Template from the Global
Profile.
5. From the CLI, run the CLI command "store remotelog" to forward the
syslog messages to QRadar.
Audit Process Report, Guardium to QRadar
Click Harden > Vulnerability Assessment > Audit Process Builder to
open the Audit Process Builder.
1. Create an Audit Process report (Audit Process Builder)
2. Write to syslog
3. Select Template type (Audit Process Report)
4. Forward to Q1 Labs QRadar SIEM (via LEEF mapping/ predefined
message template) – choose QRadar Named Template from Global
Profile
5. From the CLI, run the CLI command "store remotelog" to forward the
syslog messages to QRadar.
For example, here is the default LEEF template for the Databases
Discovered report:
LEEF:0|IBM|Guardium|9.0|Databases Discovered|Time Probed=${1}|Server IP=${2}|Server Host N
Here are the report columns that are mapped to the template:
Time Probed Server IP Server Host Name DB Type Port Port Type
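The ${n} placeholders in the LEEF template are positional references to the report columns listed above. The following sketch shows how such a substitution could work; the shortened template and the sample row values are ours, for illustration only:

```python
import re


def render_leef(template, columns):
    """Replace ${n} placeholders with the n-th (1-based) column value."""
    return re.sub(r"\$\{(\d+)\}",
                  lambda m: str(columns[int(m.group(1)) - 1]),
                  template)


# Hypothetical report row: Time Probed, Server IP, Server Host Name, ...
row = ["2014-01-15 10:00", "192.168.2.18", "findb01", "ORACLE", "1521", "TCP"]
line = render_leef(
    "LEEF:0|IBM|Guardium|9.0|Databases Discovered|"
    "Time Probed=${1}|Server IP=${2}|Server Host Name=${3}",
    row)
```

Rendering the template against the row produces one LEEF event line per report row, which is what gets written to syslog.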
1. Check Export to CSV file and Write to Syslog.
2. Select the Named Template, LEEF Discovered Databases
3. Configure Remote Syslog by using the store remotelog command. For
example:
Threshold: %%alertThreshold
Query period: %%alertQueryFromDate - %%alertQueryToDate
Alert Classification: %%classification
Category: %%category
Severity: %%severity
Recommended Action: %%recommendation
Customize real-time alerts and email
Control whether the email subject is prefixed with the Guardium appliance
name.
Control whether the email subject appears in the email body.
The naming template parameter %%applianceHostName lets Guardium
users add the appliance hostname to Named Templates (at any position
in the subject or body).
To accomplish this, use two fields in ADMINCONSOLE_PARAMETERS
table:
APPEND_APPLIANCENAME_SUBJECT
APPEND_SUBJECT_IN_BODY
Use the following CLI commands to control the content of these fields:
show alerter email append_name_subject
store alerter email append_name_subject
    Show or store the flag to append the appliance name to the email subject.
show alerter email append_subject_body
store alerter email append_subject_body
    Show or store the flag to append the email subject to the beginning of
    the email body.
Each time the value in CLI changes, it takes effect immediately on the
outgoing emails.
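The effect of the two ADMINCONSOLE_PARAMETERS flags can be sketched as follows. This is illustrative pseudologic for the behavior described above, not Guardium source code:

```python
def compose_alert_email(subject, body, appliance_name,
                        append_name_subject, append_subject_in_body):
    """Apply the two flags controlled by the alerter email CLI commands."""
    if append_name_subject:
        # APPEND_APPLIANCENAME_SUBJECT: prefix the subject with the
        # appliance name.
        subject = f"{appliance_name}: {subject}"
    if append_subject_in_body:
        # APPEND_SUBJECT_IN_BODY: repeat the subject at the top of the body.
        body = subject + "\n" + body
    return subject, body
```

With both flags set, an alert from appliance guard01 with subject "Policy violation" goes out as "guard01: Policy violation", and that same subject line is repeated as the first line of the body.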
CSV Separator
By default, the same Guardium user can log in to an appliance from multiple IP
addresses. You can disable concurrent logins from the same user. When disabled,
each Guardium user will be allowed to log in from only one IP address at a time.
If a user closes their browser without logging out, the connection will time out due
to inactivity, so the user account will not be blocked for long.
Note: When the feature is disabled, an Unlock button appears next to the
Enable button. You can click Unlock to allow a second user to log in with this
user account, from a different IP address. This is provided for support
purposes.
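The behavior described above — one active IP address per user when concurrent logins are disabled, plus a support Unlock — can be sketched like this (an illustrative model, not Guardium's implementation):

```python
class LoginGuard:
    """Track one active login IP per user, mimicking the disabled
    concurrent-login mode described above."""

    def __init__(self):
        self.active_ip = {}

    def try_login(self, user, ip):
        current = self.active_ip.get(user)
        if current is None or current == ip:
            self.active_ip[user] = ip   # same IP may log in again
            return True
        return False                    # a second IP is refused

    def unlock(self, user):
        # Equivalent of the Unlock button: clear the recorded IP so a
        # login from a different IP address is allowed.
        self.active_ip.pop(user, None)
```

A login from a second IP address is refused until the first session times out or an administrator unlocks the account.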
This feature assumes that specific Guardium users are responsible for certain
specific databases. Therefore a mechanism exists that will filter results,
system-wide, in a way that each user will only be able to see the information
from those databases that the user is responsible for.
Note: The datasec-exempt role is activated when data level security is enabled
and the datasec-exempt role has been assigned to a user.
3. Additional choices include:
v Show-all - Permits the logged-in viewer to see all the rows in the result
regardless of who these rows belong to. When used with the Datasec-exempt
role permits an override of the data level security filtering.
v Include indirect records - Permits the logged-in viewer to see the rows that
belong to the logged-in user, but also all rows that belong to users under the
logged-in user in the user hierarchy.
Note: If data level security at the observed data level is enabled, then audit
process escalation is allowed only to users at a higher level in the user hierarchy.
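The filtering modes above (default, Show-all, Include indirect records) can be summarized in a small sketch. This is our own simplification of the described behavior, treating each result row as an (owner, data) pair:

```python
def visible_rows(rows, viewer, subordinates,
                 include_indirect=False, show_all=False):
    """Filter (owner, data) rows according to data level security.

    show_all: see every row (requires the datasec-exempt role).
    include_indirect: also see rows owned by users below the viewer
    in the user hierarchy.
    """
    if show_all:
        return list(rows)
    allowed = {viewer}
    if include_indirect:
        allowed.update(subordinates)
    return [row for row in rows if row[0] in allowed]
```

By default a viewer sees only their own rows; Include indirect records adds rows from users beneath them in the hierarchy; Show-all (with the datasec-exempt role) overrides the filtering entirely.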
Default Filtering
Default setting for the online viewer and for audit process results distribution.
Set the size of the custom database table (in MB). The default value is 4000 MB.
The Global Profile menu includes a Current Usage button. Click Current Usage
to show values for INNODB, MYISAM, and Total.
Note: The custom size limit is tested before data is imported, so a single
import can exceed the maximum size limit. After the limit is exceeded, the next
import is prevented.
Change the ports that can be used to send files over SCP and FTP.
In the Global Profile, the ports for Export and Patch Backup can be changed. The
default port for ssh/scp/sftp is 22. The default port for FTP is 21.
Note: A port value of 0 in the Guardium GUI indicates that the default port is
being used and does not need to be changed.
Note: The name of the uploaded logo file cannot contain a single quotation mark,
double quotation mark, less than sign, or greater than sign.
Encrypt Must Gather was added to the Global Profile. The default value is cleared
(do not encrypt). If it is cleared, must gather output is only compressed, not
encrypted. When the check box is checked, all future must gather output will be
encrypted.
Alerter Configuration
No e-mail messages, SNMP traps, or alert related Syslog messages will be sent
until the Alerter is configured and activated.
Other components create and queue messages for the Alerter. The Alerter checks
for and sends messages based on the polling interval that has been configured for
it.
For correlation alerts and appliance alerts to be produced, Anomaly Detection must
also be started. For real-time alerts to be produced, a security policy must be
installed.
Set the frequency that the Alerter checks for and sends
messages
1. Click Setup > Tools and Views > Alerter to open the Alerter or click Protect >
Database Intrusion Detection > Alerter to open the Alerter.
2. Enter the Polling Interval, in seconds.
3. Click Apply.
Note: All remaining items in this topic are in the SMTP section of the Alerter
panel.
2. Enter the IP address for the SMTP gateway, in the IP Address box.
3. Enter the SMTP port number (it is almost always 25) in the Port box.
4. Optional: Click the Test Connection hypertext link to verify the SMTP address
and port. This only tests that there is access to specified host and port. It does
not verify that this is a working SMTP server. A dialog box is displayed,
informing you of the success or failure of the operation.
Note: If this SMTP server uses authentication, you must supply a valid User
Name and Password for that mail server in the following two fields.
Otherwise, those fields can be blank.
5. Enter a valid user name for your mail server in the User Name box if your
SMTP server uses authentication.
6. Enter the password for the user in the Password box if your SMTP server uses
authentication. Re-enter it in the Re-enter Password box.
7. In the Return E-mail Address box, enter the return address for e-mail sent by
the system. This address is usually an administrative account that is checked
often.
8. Select Auth in the Authentication Method if your SMTP server uses
authentication. Otherwise, select None. When Auth is selected, you must
specify the user name and password to be used for authentication.
9. Click Apply to save the configuration.
Note: The Alerter will not begin using a new configuration until it is
restarted.
10. Click Restart to restart the Alerter with the new configuration.
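As with the DNS check earlier, the SMTP Test Connection is only a TCP reachability probe. A minimal sketch of such a probe (illustrative, not Guardium code):

```python
import socket


def tcp_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    Like the Test Connection link, success only proves the port is
    reachable, not that a working SMTP server is listening there.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A probe like this answers only "can I open a socket to port 25 on that host"; verifying that a real SMTP server answers would require speaking the protocol (for example, reading the 220 greeting banner).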
Note: All remaining items in this topic are in the SNMP section of the Alerter
panel.
2. In the IP Address box, enter the IP address to which the SNMP trap will be
sent.
3. Optional: Click the Test Connection hypertext link to verify the SNMP address
and port (162). This only tests that there is access to specified host and port. It
does not verify that this is a working SNMP server. A dialog box is displayed,
informing you of the success or failure of the operation.
4. In the "Trap" Community box, enter the community name for the trap. Retype
the community in the Retype Community box.
5. Click Apply to save the configuration.
Note: The Alerter will not begin using a new configuration until it is restarted.
6. Click Restart to restart the Alerter with the new configuration.
Anomaly Detection
The Anomaly Detection process runs every polling interval to create and save, but
not send, correlation alert notifications that are based on an alert's query.
This notification is run according to the schedule defined for each alert. See
“Alerter Configuration” on page 22 for more information about sending
notifications.
The Anomaly Detection process uses the results of a correlation alert's query, which
looks back over a specified period of time, and the correlation alert's threshold, to
determine whether a condition is satisfied (an excessive number of failed logins,
for example).
Note: Anomaly Detection does not play a role in the production of real-time alerts,
which are produced by security policies.
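Conceptually, each polling run counts the events returned by the correlation alert's query inside its look-back window and compares that count to the alert threshold. The sketch below illustrates the idea only; the event structure and parameter names are invented for this example and do not reflect Guardium internals:

```python
from datetime import datetime, timedelta

def correlation_alert_fires(event_times, now, lookback_minutes, threshold):
    """Count qualifying events (for example, failed logins) inside
    the look-back window and compare the count to the threshold."""
    window_start = now - timedelta(minutes=lookback_minutes)
    count = sum(1 for t in event_times if window_start <= t <= now)
    return count >= threshold
```

For a failed-logins alert, event_times would be the timestamps of failed login records, the look-back period might be 60 minutes, and the threshold the number of failures considered excessive.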
Session Inference
Session Inference checks for open sessions that have not been active for a specified
period of time, and marks them as closed.
4. In the Max Inactive Period box, enter the number of minutes of inactivity after
which a session is marked closed. The default is 720 (minutes).
5. Click Apply to store the values in the configuration database. Session Inference
will not begin using a new configuration until it is restarted.
6. Click Restart to restart Session Inference with the new configuration.
To stop Session Inference, open the Session Inference panel and click Stop.
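The inference rule amounts to comparing each open session's last activity against a cutoff derived from the Max Inactive Period. The following sketch is illustrative only; the session structure is an assumption made for this example:

```python
from datetime import datetime, timedelta

def infer_closed_sessions(sessions, now, max_inactive_minutes=720):
    """Return the ids of open sessions whose last activity is older
    than the inactivity limit (720 minutes by default, matching the
    Max Inactive Period default). `sessions` maps a session id to
    its last-activity timestamp."""
    cutoff = now - timedelta(minutes=max_inactive_minutes)
    return [sid for sid, last_seen in sessions.items() if last_seen < cutoff]
```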
IP to Hostname Aliasing
The IP-to-Hostname Aliasing function accesses the Domain Name System (DNS)
server to define hostname aliases for client and server IP addresses.
There are two separate sets of IP addresses: one for clients, and one for servers.
When IP-to-Hostname Aliasing is enabled, alias names will replace IP addresses
within Guardium where appropriate.
1. Click Protect > Database Intrusion Detection > IP-to-Hostname Aliasing to
open IP-to-Hostname Aliasing.
2. Mark the check box for Generate Hostname Aliases for Client and Server IPs
(when available) to enable hostname aliasing.
A second check box, Update existing Hostname Aliases if rediscovered, is now
available.
3. Mark the check box to update a previously defined alias that does not match
the current DNS hostname (usually indicating that the hostname for that IP
address has changed). You may not want to do this if you have assigned some
aliases manually. For example, assume that the DNS hostname for a given IP
address is dbserver204.guardium.com, but that server is commonly known as
the QA Sybase Server. If QA Sybase Server has been defined manually as an
alias for that IP address, and the check box for Update existing Hostname
Aliases if rediscovered is marked, that alias will be overwritten by the DNS
hostname.
4. Click Apply to save the IP-to-Hostname Aliasing configuration.
5. Do one of the following:
v Click Run Once Now to generate the aliases immediately.
v Click Define Schedule to define a schedule for running this task.
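The behavior described in step 3 can be sketched as a small update loop. The resolver is injected as a callable so the decision logic is visible; in practice it would be a reverse DNS lookup (for example, socket.gethostbyaddr). Names and structure here are illustrative, not Guardium's implementation:

```python
def refresh_aliases(aliases, ips, resolve, update_existing=False):
    """Generate or refresh hostname aliases for a set of IPs.

    aliases: dict mapping IP -> current alias (possibly set manually).
    resolve: callable returning the DNS hostname for an IP, or None.
    If update_existing is False, an IP that already has an alias is
    left alone, so manually assigned aliases survive rediscovery.
    """
    for ip in ips:
        hostname = resolve(ip)
        if hostname is None:
            continue  # no DNS entry; leave any existing alias as-is
        if ip not in aliases or update_existing:
            aliases[ip] = hostname
    return aliases
```

With update_existing left False, a manually assigned alias such as QA Sybase Server is preserved even when DNS reports dbserver204.guardium.com for that address.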
System Backup
Use the System Backup function to define a backup operation that can be run on
demand or on a scheduled basis. Use the Patch Backup function to create the
backup profile settings.
System Backup
System backups are used to back up and store all the data and configuration
values necessary to restore a server in case of hardware corruption.
To restore backed up system information, use the restore system CLI command.
The CLI command, diag, can also be used, provided that the diag role is defined
for the given user.
Note: System restore must be done to the same patch level of the system backup.
For example, if a customer backed up the appliance when it was on Version 7.0,
Patch 7 and then wants to restore this backup into a newly-built appliance, then
there is a need to first install Version 7.0, Patches 1 to 7 on the appliance and only
then to restore the file.
v If the operation succeeds, the configuration will be saved.
6. To run or schedule the system backup operation, do one of the following:
v Click Run Once Now to run the operation once.
v Click Modify Schedule to schedule the operation to run on a regular basis.
7. Click Done when you are finished.
Note: When performing a system backup and restore from one server that has
GIM defined to another server, you must configure GIM failover to the restore
server. This GIM configuration applies to both a Backup Central Manager and a
System backup and restore.
Change the ports that can be used to send files over SCP and FTP.
For System Backup or Patch Backup - Set the protocol (SCP or FTP) and specify
Host, Directory and Port. The default port for ssh/scp/sftp is 22. The default port
for FTP is 21.
The archive process will check the size of the static tables and make sure there is
room in /var to create the archive.
An error is logged in the log file and displayed in the GUI if /var usage exceeds 50%. For example:
ERROR: /var backup space is at 60% used. Insufficient disk space for backup.
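The space check can be pictured as follows. The 50% threshold and the message wording follow the example error; the function itself is an illustrative sketch (on a live system, the usage figures would come from something like shutil.disk_usage on /var):

```python
def var_backup_space_error(used_bytes, total_bytes, threshold_pct=50):
    """Return an error string if /var usage exceeds the threshold,
    else None. The message format mirrors the documented example."""
    pct = round(100 * used_bytes / total_bytes)
    if pct > threshold_pct:
        return ("ERROR: /var backup space is at %d%% used. "
                "Insufficient disk space for backup." % pct)
    return None
```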
Patch Backup
Patch Backup in the GUI duplicates the functions available in the CLI command
store backup profile. Use this function to maintain the backup profile data
(patch mechanism).
All four fields must be filled in - backup destination host, backup destination
directory, backup destination username, and backup destination password.
Enter 0 or press the Enter key to use the default port. Then, click Apply.
Prerequisites
1. An Amazon account.
2. Registration for the S3 service.
3. Amazon S3 credentials are required in order to access Amazon S3. These
credentials are:
v Access Key ID - identifies the user as the party responsible for service
requests. It must be included in each request. It is not confidential and
does not need to be encrypted (20-character alphanumeric sequence).
v Secret Access Key - associated with the Access Key ID and used to calculate
a digital signature that is included in each request. The Secret Access Key
is a secret that only the user and AWS should have (40-character sequence).
It is a long string of characters (not a file) that is used to calculate the
digital signature included in the request.
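The division of labor between the two credentials, where the Access Key ID is sent in the clear to identify the caller while the Secret Access Key stays on the client and only ever produces a keyed hash, can be illustrated with a simplified signing sketch. This is not the actual AWS signature algorithm (real requests follow AWS's documented Signature Version scheme); it shows only the keyed-hash idea:

```python
import base64
import hashlib
import hmac

def sign_request(secret_access_key, string_to_sign):
    """Compute a base64 HMAC-SHA256 signature over the request string.
    Only the signature and the Access Key ID travel with the request;
    the secret itself is never transmitted."""
    digest = hmac.new(secret_access_key.encode(),
                      string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()
```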
There are two archive operations available on the Administration Console, in the
Data Management section of the menu:
v Data Archive backs up the data that has been captured by the appliance, for a
given time period.
v Results Archive backs up audit tasks results (reports, assessment tests, entity
audit trail, privacy sets and classification processes) as well as the view and
sign-off trails and the accommodated comments from work flow processes.
When Guardium data is archived, there is a separate file for each day of data.
The archive function creates signed, encrypted files that cannot be tampered with.
The names of the generated archive files should not be changed. The archive
operation depends on the file names created during the archiving process.
System backups are used to backup and store all the necessary data and
configuration values to restore a server in case of hardware corruption.
All configuration information and data is written to a single encrypted file and
sent to the specified destination, using the transfer method configured for backups
on this appliance.
The Aggregation/Archive Log report can be used to verify that the operation
completes successfully. There should be multiple activities listed for each Archive
operation, and the status of each activity should be Succeeded.
Regardless of the destination for the archived data, the Guardium catalog tracks
where every archive file is sent, so that it can be retrieved and restored on the
system with minimal effort, at any point in the future.
When catalog entries are imported from another system, those entries will point to
files that have been encrypted by that system. Before restoring or importing any
such file, the system shared secret of the system that encrypted the file must be
available on the importing system.
Amazon S3 archive and backup option is enabled by default in the Guardium GUI.
To enable Amazon S3 via Guardium CLI, run the following CLI commands:
store storage-system amazon_s3 archive on
store storage-system amazon_s3 backup on
Amazon S3 requires that the clock time of the Guardium system be correct
(within 15 minutes); otherwise, requests result in an Amazon error. If there is
too large a difference between the request time and the current time, the
request will not be accepted.
If the Guardium system time is not correct, set the correct time using the following
CLI commands:
show system ntp server
store system ntp server (An example is ntp server: ntp.swg.usma.ibm.com)
store system ntp state on
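Before relying on NTP, the 15-minute rule can be expressed as a simple comparison. This sketch is illustrative; obtaining the reference time (for example, from an NTP query) is outside its scope:

```python
from datetime import datetime, timedelta

def clock_within_s3_skew(local_time, reference_time,
                         max_skew=timedelta(minutes=15)):
    """Return True if the local clock is within Amazon S3's accepted
    skew of the reference clock; requests outside this window are
    rejected by S3."""
    return abs(local_time - reference_time) <= max_skew
```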
User Interface
Use the System Backup screen (Manage > Data Management > System Backup) to
configure the backup. After enabling Amazon S3 through the CLI commands,
Amazon S3 will appear in the list of protocols.
http://aws.amazon.com/console/
1. Click S3.
2. Click the bucket that you specified in the Guardium UI.
2. CONFIGURATION
Please enter the number of your choice: (q to quit) 1
1. SCP
2. CONFIGURED DESTINATION
Please enter the number of your choice: (q to quit) 2
Make sure destination is configured in the GUI under the <System
Backup> option
Please wait, this may take some time.
Performing a DEFAULT backup, config=
System Backup and System Restore
Access CLI.
CLI> restore system
1. SCP
2. FTP
3. TSM
4. CENTERA
5. AMAZONS3
7. SOFTLAYER
8. SFTP
Please enter the number of your choice: (q to quit) 7
Enter the SoftLayer Authentication Endpoint URL:
Enter Softlayer Object Storage Container name:
Enter Softlayer X-Auth-User:
Enter X-Auth-Key:
Enter a file name from list:
Authenticate success!
Download file success!
Select your recovery type, for most cases, use the normal option:
1. normal
2. upgrade
Procedure
1. Click Setup > Patch Backup to open the Patch Backup panel.
2. Choose the method of file transfer.
3. Enter the name of the host and the directory where the information is to be
stored.
4. Enter a user name and password to own the file on the destination host.
Follow this procedure to configure permissions for all socket connections that
are used by custom classes.
1. Click Setup > Evaluations > Communication Permissions to open the
Communication Permissions.
2. Click Add permission To Socket Connection to expand that pane.
3. Enter the IP address or Host name for the host.
4. Enter a Port number for the socket connection.
5. Enter a description.
6. Click Save.
Chapter 2. Access Management Overview
Access management consists of four tasks: account administration, maintenance,
monitoring, and revocation.
There are two predefined users on a Guardium appliance: accessmgr and admin.
v accessmgr is the user name assigned to the access manager. By default, the access
manager is the only user authorized to manage user accounts and security roles.
v admin is the user name assigned to the (primary) Guardium administrator. By
default, the administrator does not have authority to manage user accounts or
security roles. The admin user has a more extensive set of privileges.
Note:
Admin and accessmgr roles cannot be assigned to the same user. The same user
may hold both of these roles only through a legacy situation or as a result of
an upgrade; current use does not allow the two roles to be assigned to the same
user.
In the past, when a unit was upgraded, the accessmgr role was assigned to the
admin user, and the accessmgr user was disabled. In this upgrade situation, it was
necessary to first log in as admin and enable the accessmgr user, then log in as
accessmgr (with initial password “accessmgr”, the system prompted the user to
change it), and remove the accessmgr role from the admin user.
The following predefined reports are available to the accessmgr user.
User and Role Reports
Defining and modifying users (see Manage Users) involves deciding both who will
be using the Guardium system and to what roles (see Manage Roles) they will be
assigned. A role is a group of users, all of whom are granted the same access
privileges.
Note: The admin and access manager roles are pre-existing; other roles are
created by the access manager.
Datasources Associated
This report identifies Datasource Name, Host, Service Name, Login Name and
Association Type. This information comes from the choices made in the
User-Database Associations activity. See the Data User Security - Hierarchy and
Associations help topic.
Datasources Not Associated
This report lists datasources that are not associated with any users. It
identifies Datasource Name, Datasource Type, Host, and Service Name. This
information comes from the choices made in the User-Database Associations
activity. See the Data User Security - Hierarchy and Associations help topic.
Servers Associated
This report identifies Server IP, Service Name, Login Name and Association Type.
This information comes from the choices made in the User-Database Associations
activity. See the Data User Security - Hierarchy and Associations help topic.
Understanding Roles
Assign a role to a Guardium user to grant them specific access privileges. Some
examples of roles are: CLI, admin, accessmgr, CAS, and user.
The access manager defines roles, and assigns them to users and applications.
When a role is assigned to an application or the definition of an item (a specific
query, for example), only those Guardium users who are also assigned that role
can access that component.
If no security roles are assigned to a component (a report, for example), only the
user who defined that component and the admin user can access it. At installation
time, Guardium is configured with a default set of roles, and a default set of user
accounts.
When user definitions are imported from an LDAP server, the groups to which
they belong can optionally be defined as roles. For more information, see
“Importing Users from LDAP” on page 48.
Object types that can be assigned to roles: Alert; Audit process (Discover Sensitive
Data scenario); Baseline; Custom domain; Custom table; Classifier policy (Discover
Sensitive Data scenario); Custom workflow; Data source; Group; Query; Policy;
Privacy set; Report; Security assessment; or SQL application.
Each default role comes with a default layout. When a user logs in for the first
time, that user's initial layout is determined by the roles assigned. After the initial
login, adding or removing roles will not alter the user's layout. After a role is
removed, if the user attempts to access reports or applications that are no longer
authorized, a not authorized message will be produced.
Note: When assigning roles to a user, the admin and access manager role cannot
be assigned to the same user.
Note: Admin role and object owner have access to all objects by default.
Note: Taking a base role and customizing (with additional navigation items), and
then copying this customized role, will result in a loss of the customization if the
customized or copied role is reset to default.
Default Roles
The Guardium system is pre-configured to support users who fall into four
broadly defined default roles: admin, user, access manager, and investigations. The
Guardium access manager can create new roles as well. Users must always be
assigned one of the default roles, but might be assigned any number of other roles,
as well.
Note: If data level security at the observed data level is enabled (see Global
Profile settings), then audit process escalation is allowed only to users at a higher
level in the Data Hierarchy (see Access Manager). The Datasec-exempt user can
escalate, without restrictions, to anyone.
Table 4. Default Roles
Default Role Description
user Provides the default layout and access for all common users. This role can
not be deleted.
To run GrdAPI or CLI commands without admin rights, click the role CLI
for Admin Console in the User Role Permissions selection.
See the topic, diag CLI Command, on how to manage the diag role.
inv Provides the default layout and access for investigation users. An
investigation user must have the restore-to database name of INV_1,
INV_2 or INV_3, as the Last Name in their user definition. This is not
enforced by the GUI, but is required for the application to function
properly. When assigned, the user role must also be assigned. This role
can not be deleted.
Note: The Run an Ad-Hoc Audit Process button is available on all report
screens for all users except investigation (INV) user.
datasec-exempt Data Security - Exempt. This role is activated when Data level security is
enabled (see Global Profile in Administration Console) and the
datasec-exempt role has been assigned. If the user has this role, a Show
all check box appears in all reports. If checked, all sniffed data records are
shown (no filter is applied). This role cannot be deleted in the Role
Browser.
review-only A user that is specified by this role can view only results (Audit,
Assessment, Classifier), Audit Results and the To Do List. This role cannot
be deleted in the Role Browser.
Users with this role are allowed to enter comments in the audit process
viewer (not workflow or comments/data per row, but comments at the
process/result level).
Sample Roles
In addition to the default roles, a set of sample roles is also defined.
Table 5. Sample Roles
Sample Role Description
dba Users who have a database-centric view of security, allowing access to
database-related reports and tracking of database objects
infosec Users who have an information security focus, including tracking access
to the database, and handling network requests, audits, and forensics
netadm Users who have a network-centric view, including IP sources for database
requests
appdev Application developers, architects, and QA personnel who have an
application-centric focus and want to track and report on SQL streams
generated by an application
audit Auditors and others who need to view audit reports
Note: If trying to copy this role, an embedded message will appear
explaining that not all aspects of this role can be copied. The message is:
"Create a new role using the layout and permission from the "audit" role.
Special privileges and actions associated with the "audit" role will not be
copied."
audit-delete This role is used to track or log when an audit process result has been
deleted. Users with the audit-delete role can delete reports. Admin users
can also delete reports. Tracking is done through the User Activity Audit
Trail report.
admin-console-only A user that is specified by this role can only access the admin console tab.
cas Configuration Auditing System (CAS)
vulnerability-assess A user that is specified by this role can view only vulnerability results.
diag A user that is specified by this role can access and run the diag
commands in CLI.
workload-replay-admin A user that is specified by this role can define and modify the workload-replay functions.
workload-replay-user A user that is specified by this role can run the workload-replay functions.
fam A user that is specified by this role can define and modify the File
Activity Monitor functions.
BaselII Accelerator - Basel II. This role can not be deleted.
SOX Section 404 requires that companies establish and maintain an
adequate internal control structure and procedures for financial reporting.
In Central Manager environments, all User Accounts, Roles, and Permissions are
controlled by the Central Manager. To administer any of these definitions, you
must be logged in to the Central Manager (and not to a managed unit).
Create a Role
1. Log in as accessmgr, and open the User Role Browser by clicking Access >
Access Management > Role Browser.
2. Click Add Role to open the Role Form panel.
3. Enter a unique name for Role Name and click Add Role.
Remove a Role
1. Open the User Role Browser by clicking Access > Access Management > Role
Browser.
2. Click Delete for any role (some roles cannot be removed, and do not have the
Delete option). This opens the Role Form for the role.
3. Click Confirm Deletion. A message displays informing you that all references
to the role are removed, and you will be asked to confirm the action.
4. Click OK to confirm the deletion, or Cancel to abort the operation.
Examples of roles include user, admin, and audit. Using roles allows you to easily
define permissions for an entire group of users. Only access managers can create
new roles and assign users to that role. As part of role creation, access managers
can also customize the navigation menu and permissions for that role.
The process is the same if you find that the All Roles check box is already
deselected: simply select or deselect the individual roles to grant or revoke
access to the application.
When All Roles is selected for a particular application, every
currently-defined role will have access to that application.
Limit access from the role
Limit access from the role by navigating to the Role Browser > Manage
Permissions screen and moving individual applications from the Accessible
applications list to the Inaccessible applications list.
When managing permissions or customizing the navigation menu for a
new role, the defaults shown in the Accessible applications list reflect
any application with the All Roles check box selected on the Role
Permissions > Edit Application Role Permissions screen.
It is also possible to restrict access to specific tools by hiding menu items using the
Role Browser > Customize Navigation Menu tool. This approach limits access
without altering the default application permissions, but it may be less secure than
a permissions-based approach.
Best Practice: Copy and edit predefined roles to establish the desired permissions
and navigation menu. This approach allows you to revert to the original role if
needed.
Related tasks:
“How to create a role with minimal access”
This topic explains how to create a new role with minimal access permissions, for
example an auditor role that can only access the Audit Process To-Do List and
view specific reports.
Procedure
1. Create a new role.
a. Log in as accessmgr, navigate to Access > Access Management, and select
the Role Browser.
b. Click the Add Role button, give the role a name, and click the Add Role
button to create the new role.
2. Manage permissions so the new role can only access the Audit Process To-Do
List and the Report Builder (which is required for viewing reports).
e. Deselect the Assign check box next to the user role. Deselecting the user role
prevents the new user from inheriting the default user access and
permissions.
f. Click Save to commit your changes.
Related concepts:
“Managing roles and permissions” on page 38
Roles and permissions provide different levels of access to users based on their job
duties.
Manage Users
Log in as the access manager (user name accessmgr) to add user accounts,
enable or disable user accounts, import members from LDAP, or edit user
permissions. Open the User Browser and browse the user accounts by clicking
Access > Access Management > User Browser.
Defining and modifying users involves deciding both who will be using the
Guardium system and to what roles they will be assigned. A group of users can all
have the same role and the same access privileges if you so choose. For more
information on roles, see “Understanding Roles” on page 34.
Note: A default layout can be defined for a role, so that any new user assigned
that role will have that layout. See Generate New Layout in the CLI Reference.
Regardless of how users are defined to the Guardium system, the Guardium
administrator can configure the system to authenticate users via Guardium, LDAP,
or Radius.
When getting started with your Guardium system, an important early task is to
identify which groups of users will use the system, and what their function will
be. For example, an information security group might use Guardium for alerting
and troubleshooting purposes while a database administrator group might use
Guardium for reporting and monitoring. When deciding who will access the
Guardium system, keep in mind that sensitive company data can be picked up by
the system. Therefore, be very aware of who will be able to access that data.
Once you decide which groups of users will use the Guardium system (and for
what purpose), collect the following information for each user:
v User’s first and last name
v User account name (the name they will use to log in)
v User’s email address
v User’s function/role with Guardium
Locked Accounts
1. Open the User Browser by clicking Access > Access Management to view the
list of users.
2. Click Edit for any user, clear the Disabled check box, and click Update User to
save changes.
Note: If the admin user account becomes locked, use the unlock admin CLI
command to unlock it (see Configuration and Control CLI Commands in the
CLI Reference).
Note: When adding a user manually, from either the Add User panel or User
LDAP Import, if there is no first name and/or last name, the login name will
be used.
3. Enter a password and confirm it in the Password (confirm) box. The
password you assign is temporary; the user is required to change it
following their first login.
Note: Passwords are case sensitive. When password validation is enabled (the
default), the password must be eight or more characters in length, and must
include at least one uppercase alphabetic character (A-Z), one lowercase
alphabetic character (a-z), one digit (0-9), and one special character from the
following set: @$%^&.;!-+=_
4. Enter the user’s first and last name in the respective fields.
Note: Restrictions apply to the last name for those users assigned the
Investigation Data Restore role (inv). If you want to assign a user the
investigator role, their last name must be INV_1, INV_2, or INV_3. The UI will
not restrict you from entering something different in this field, but the
application will not function properly unless the last name is entered as
shown. Further, the investigator cannot be assigned any additional roles; they
must have the inv role only. This is the only case where a user or admin role
is not required.
5. (Optional) Enter the user’s email address.
6. (Caution) The Disabled check box is checked by default. We suggest that you
defer clearing the check box and enabling the account until after the correct
set of roles has been assigned for the user.
It is much simpler to assign the roles first, so that the user has all components
in their layout the first time they log in. When a user logs in for the first time,
their layout is built using all of the roles assigned at that time. If roles are
added later, the user has access to everything available to that role, but will
have to add reports or applications particular to that role manually.
7. Click Add User to save the new user account definition and close the panel.
This completes the user definition. We suggest that you add the appropriate roles
for the user before informing them of their password for the initial login. See
“Understanding Roles” on page 34 for more information.
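The password rules in step 3 (eight or more characters, with at least one uppercase letter, one lowercase letter, one digit, and one special character from the set @$%^&.;!-+=_) can be expressed as a simple check. This sketch is illustrative only and is not Guardium's actual validator:

```python
import string

SPECIALS = set("@$%^&.;!-+=_")

def password_is_valid(password):
    """Apply the documented password rules for the default
    password-validation setting."""
    return (len(password) >= 8
            and any(c in string.ascii_uppercase for c in password)
            and any(c in string.ascii_lowercase for c in password)
            and any(c in string.digits for c in password)
            and any(c in SPECIALS for c in password))
```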
Open the User Browser and click Search Users to easily filter users by role. When
you select a user, you have the option to enable or disable the user. Because users
are disabled by default, this menu can be very useful to easily change the status of
many users.
Note: Changing a user's password will require the user to change it following
their next login.
Note: Alerts that were sent to a deleted user will now be sent to the admin;
however, this will not take effect until the access policy is reinstalled.
Note: Best practices dictate a Full Update Active User-DB Map after changing
the User Hierarchy.
When you make a change to a hierarchy or to a database association (via UI or
GuardAPI), this change DOES NOT take effect automatically. The Periodic
Update will NOT pick up this change, unless it is the FIRST time the Periodic
Update has run. Otherwise, the user MUST click Full Update or run the Full
Update GuardAPI command for their changes to take effect.
A periodic update of the user hierarchy is run every 10 minutes automatically.
This cannot be run manually. This is an incremental update, meaning that it is
only looking at new server IPs or Service Names that have been sniffed since
the last time the periodic update was run. It compares the existing hierarchy
and associations against the new IPs/Service Names and determines what
users should have access to these IPs/Service Names.
A full update of the user hierarchy is NOT run automatically. It is only run
when the user executes it, either via the UI or GuardAPI function. This
compares ALL IPs/Service Names to the existing hierarchy and associations to
determine who has access to what.
Use the Data Security User-DB Association to find, assign, or remove users from
available servers and service names (databases).
1. Open the User-DB Association panel by clicking Data Security > User-DB
Association.
2. Select the check boxes of the Server & Service Name Suggestion to find
databases and service names to associate to users. Choices include:
v Observed Accesses - Observed traffic from Guardium internal database table
GDM_Access
v Datasource Definitions - Existing datasource definition information such as
name, database type, authentication information, and location of datasource.
v S-TAP Definitions - Existing S-TAP definition information such as the IP
address of the database server and the IP address of the Guardium host that
will receive data from S-TAP.
v Auto-Discovered Hosts - Hosts discovered by the Guardium Auto-discovery
process that were not previously known. Guardium's Auto-discovery
application can be configured to probe the network, searching for and
reporting on all databases discovered.
v Guardium Install Manager (GIM)-Discovered Systems - Hosts discovered by
the GIM that were not previously known.
3. Click Go to find and display available servers, service names, and currently
associated users.
Note: When traversing the node tree, numerical indicators are displayed next
to each server and service name to provide a count of direct and descendant
users that have been associated. The indicators take the format of [nn] for
direct association and (mm) for descendant association (a server or service
name within the current server has a user associated to it for example).
Likewise, when viewing the users associated to a server or service name, if
there is a user associated to a higher-level node in the tree, that user will be
displayed.
4. Click a server or service name node to display associated users. With any node
selected, you can do one of the following:
v Click Add User to add a new user-DB association, click any users you want
to add, and then click Add.
v Click Add Group to add a new group-DB association. When Add Group is
selected, groups that were created using the Group Builder for group type
Guardium Users will be displayed. Select the group you'd like to add and
click Add.
5. Right-click any server or service name node, and you are presented with
options to do one of the following:
v Highlight the server
v Expand or collapse the server
v Find a server
v Add server, service name, or unnamed service
v Delete the server
6. Add an IP or IP/Service Name pair using the IP and Service Name fields
before the tree structure.
Note: The Find button can be used to search the IP/Service Name tree
structure. IP strings may be entered as partials or include the wild card * such
that 192.168 and 192.168.*.* are both valid. Numeric values cannot trail the use
of any wild card or be used with the wild card to form an octet. Service Name
names may include the wild card % anywhere within their name.
7. Click Full Update Active User-DB Map to fully apply all recent changes to the
active User-DB association map.
Note: Best practices dictate a full update of the active User-DB map after
changing the User-DB Association.
A full update of the user hierarchy is NOT run automatically. It is only run
when the user executes it, either via the Full Update Active User-DB Map
button or the GuardAPI function. This compares ALL IPs/Service Names to the
existing hierarchy and associations to determine who has access to what.
A periodic update of the user hierarchy runs automatically every 10 minutes
(it cannot be run manually). This update looks only at new server IPs or
Service Names that have been sniffed since the last time the periodic update
was run. It compares the existing hierarchy and associations against the new
IPs/Service Names and determines which users should have access to these
IPs/Service Names.
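The wildcard rules for the Find search in step 6 can be sketched as a small validator. The octet-level interpretation here is an assumption drawn from the examples in the note (each dot-separated part is either all digits or a lone *, and no numeric octet may follow a wildcard); the appliance performs this check internally.

```shell
#!/bin/sh
# valid_ip_search STRING -> exit 0 if STRING follows the Find rules sketched above.
valid_ip_search() {
  _rc=0
  seen_wild=0
  old_ifs=$IFS; IFS=.
  set -f                                       # no globbing while we split on dots
  for part in $1; do
    case $part in
      '*') seen_wild=1 ;;                      # a whole-octet wildcard
      ''|*[!0-9]*) _rc=1; break ;;             # empty octet, or digits mixed with *
      *) [ "$seen_wild" -eq 1 ] && { _rc=1; break; } ;;  # digits after a wildcard
    esac
  done
  set +f
  IFS=$old_ifs
  return $_rc
}
valid_ip_search "192.168.*.*" && echo "192.168.*.* is valid"
```

Both example strings from the note (192.168 and 192.168.*.*) pass this check, while forms such as 19* or 192.*.0.1 are rejected.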
Procedure
1. Log in as accessmgr and open the User Browser by clicking Access > Access
Management > User Browser.
2. Click Add User from the User Browser panel.
3. Fill in the User Form, clear the Disabled check box to enable the user upon
creation, and click Add User.
When a user is initially created, they do not have the privilege to log in to
the CLI and execute any of the GuardAPI commands. For example, if we try to
use one of the CLI accounts (guardcli1,...,guardcli5) as the newly created
user, we are quickly disconnected and told that the user does not have the
necessary role defined.
$ ssh -l guardcli1 192.168.1.89
guardcli1@192.168.1.89's password:
Last login: Tue Aug 10 18:37:25 2010 from 192.168.1.14
Welcome guardcli1 - your last login was Tue Aug 10 18:37:26 2010
Please enter your GUI login (one with ADMIN or CLI role defined):johnsmith
No such user or user does not have the necessary role defined.
Connection to 192.168.1.89 closed.
4. From the User Browser panel, click Roles for any user to bring up the User
Role Form panel.
5. Check the CLI check box, and click Save to grant the user CLI access.
Now when the user tries to use one of the CLI accounts (guardcli1,...,guardcli5)
as the newly created user, they are asked for a password and granted access
to the CLI.
$ ssh -l guardcli1 192.168.1.89
guardcli1@192.168.1.89's password:
Last login: Tue Aug 10 18:39:01 2010 from 192.168.1.14
Welcome guardcli1 - your last login was Tue Aug 10 18:39:02 2010
You can run the import operation on demand, or schedule it to run on a periodic
basis. You can elect to have only new users imported, or you can have existing
user definitions replaced. In either case, LDAP groups can be imported as
Guardium roles.
Note:
When adding a user via Access Management (either from Add User or LDAP user
import), if there is no first name and/or last name, the login name will be
used.
This LDAP configuration menu screen has tool tips for certain menu choices. Move
the cursor over a menu choice (such as Object Class for user), and a short
description will appear.
Guardium CLI users cannot authenticate in the LDAP environment, as there is no
privilege separation for the CLI users.
The attribute that will be used to identify users is defined by the Guardium
administrator, in the User RDN Type box of the LDAP Authentication
Configuration panel. See Configure LDAP Authentication for further information.
The default is uid, but you should consult with your Guardium administrator to
determine what value is being used. If a user is using SamAccountName as the
RDN value, the user must use either a =search or =[domain name] in the full
name. Examples: SamAccountName=search, SamAccountName=dom
Note: In order to configure LDAP user import, the accessmgr user must have the
privilege to run Group Builder. In certain situations, when changes are made to the
role privilege, accessmgr's privilege to Group Builder can be taken away. This
results in an inability to successfully save or run the LDAP user import. Go to the
access management portal and select Role Permissions from the choices. Choose the
Group Builder application and make sure that there is a checkmark in the All Roles
box or a checkmark in the accessmgr box.
1. Open the LDAP User Import panel by clicking Access > Access Management
> LDAP User Import.
See Example of Tivoli® LDAP Configuration at the end of this help topic for
reference in filling out the required information.
2. For LDAP Host Name, enter the IP address or host name for the LDAP server
to be accessed.
3. For Port, enter the port number for connecting to the LDAP server.
4. Select the LDAP server type from the Server Type menu.
5. Check the Use SSL Connection check box if Guardium is to connect to your
LDAP server using an SSL (secure socket layer) connection.
6. For Base DN, specify the node in the tree at which to begin the search. For
example, a company tree might begin like: DC=encore,DC=corp,DC=root
7. For Attribute to Import, enter the attribute that will be used to import users
(for example: cn). Each attribute has a name and belongs to an objectClass.
8. Check the Clear existing group members before importing check box if you
want to delete all existing group members before importing.
9. For Log In As and Password, enter the user account information that will
connect to the Guardium server.
10. For Search Filter Scope, select One-Level to apply the search to the base level
only, or select Sub-Tree to apply the search to levels beneath the base level.
11. For Limit, enter the maximum number of items to be returned. We
recommend that you use this field to test new queries or modifications to
existing queries, so that you do not inadvertently load an excessive number of
members.
Note: The Status indicator in the Configuration - General section will change
to "LDAP import currently set up for this group as follows", and the Modify
Schedule and Run Once Now buttons will be enabled. You can now import
from your LDAP server.
If LDAP Import has not yet been configured, you must perform Configure LDAP
User Import before performing this procedure.
1. Open the LDAP User Import panel by clicking Access > Access Management >
LDAP User Import.
Table 6. Example of Tivoli LDAP Configuration (continued)
LDAP Host Name                        Values
Limit
Attribute to Import as User Login     cn (Configurable through Portal)
Search filter
Object Class for User                 Fill with Default Value - |(objectClass=organizationalPerson)(objectClass=inetOrgPerson)(objectClass=person)
Import Roles                          Add a Checkmark
Attribute to Import as Role           cn
Role Search Base DN                   Fill with Default Value - cn=sample realm,o=sample
Role filter
Object Class for Role                 Fill with Default Value - |(objectClass=groupOfNames)(objectClass=group)(objectClass=groupOfUniqueNames)
Attribute in User to Associate Role   Fill with Default Value - memberOf
Attribute in Role to Associate User   Fill with Default Value - member
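The object-class list shown for Object Class for User can be combined into a standard LDAP OR filter, for example to preview matching users with ldapsearch before running the import. This is a sketch; the host and base DN in the comment are placeholders, not product defaults.

```shell
#!/bin/sh
# Build "(|(objectClass=...)(objectClass=...)...)" from the Table 6 defaults.
classes="organizationalPerson inetOrgPerson person"
filter=""
for c in $classes; do
  filter="${filter}(objectClass=$c)"
done
filter="(|$filter)"
echo "$filter"
# Example preview (placeholder host and base DN):
# ldapsearch -H ldap://ldap.example.com:389 -b "DC=encore,DC=corp,DC=root" \
#   -s one "$filter" cn
```

The -s one scope corresponds to the One-Level choice in the Search Filter Scope step; -s sub corresponds to Sub-Tree.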
Follow these steps to enable and use Guardium data security features:
1. Enable Data Security
2. Create a User Hierarchy
3. Create a User to Database Association
4. Filter Results
When data security features are used with the Classification feature (which
discovers and classifies sensitive data found in multiple places of the database), the
Data Level Security prevents a specified user from seeing classifier results from a
specified datasource (datasource definition). Using Data Level Security can also
prevent a specified user from seeing Audit Task results when the task type is
Classifier.
Note: The status indicator icon for Data level security filtering now indicates
that filtering is enabled.
You can verify that Data level security filtering is enabled by referencing the
Services Status panel (Setup > Services Status).
Log in as accessmgr and open the User Hierarchy by clicking Data Security > User
Hierarchy.
Note: Depending on the configuration, inheritance can also take place where the
parent inherits the data-level security of the child.
The User-DB Association feature maps users to specific databases to ensure that
users see only data that they are permitted to view.
Log in as accessmgr and open the User-DB Association by clicking Data Security >
User-DB Association.
Note: Once the map is fully updated, you will see a tree listing all your
servers. Click any node in the tree to view which users are currently associated
with that node.
If you are using a dual-stack configuration, there is a root node and two trees
of addresses to choose from. One tree is for the IPv4 addresses, and the longer
tree is for the IPv6 addresses.
Add a user or group to a node by selecting the node and clicking Add user or
Add group.
Central Management
On a Central Management appliance, there is also a box on the User-Database
Associations screen that allows a user to create database associations based on data
from a managed node. Select a remote source from this box, which appears only on
Central Management appliances. There is also a check box to get data from ALL
managed nodes.
Filter Results
Data level security at the observed data level requires the filtering of data for
specific users and the specific databases they are responsible for.
Filtering at the system level is based on the User Hierarchy and User-DB
Association so that users will see only information from their assigned databases
for the various reports, audit processes, security assessments, and so on, within the
Guardium system.
Log in as the admin user and use the Global Profile to filter results. Open the
Global Profile by clicking Setup > Global Profile.
v Default filtering:
– Show all - This option is available only if the user logged in has the special
role datasec-exempt defined, which allows the user to see all data as if there
was no data level security.
– Include indirect records - This check box shows the viewer not only the rows
that belong to the user logged in, but also all the rows that belong to other
users within that hierarchy.
v Audit Process Escalation: Escalation is allowed for tasks of this type only to
users who have the datasec-exempt role. Users without the datasec-exempt role
are not shown in the escalation list.
Escalate results to all users - A check mark in this check box escalates audit
process results (and PDF versions) to all users, even if data level security at the
observed data level is enabled. The default setting is enabled. If the check box is
cleared, then audit process escalation will be allowed only to users at a higher
level in the user hierarchy and to users with the datasec-exempt role. If the
check box is cleared and there is no user hierarchy, then no escalation is
permitted.
v PDF and CSV generation for results distribution (attached to email) will use the
default global profile values set in the Administration Console parameters.
v PDF and CSV generated from the viewer will use the same filtering as in the
screen.
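The escalation rules above reduce to a small decision function. This is a sketch of the logic as described, not a product API; the three flags are assumptions standing in for the checkbox state, the datasec-exempt role, and the target user's position in the hierarchy.

```shell
#!/bin/sh
# can_escalate ESCALATE_TO_ALL IS_EXEMPT IS_HIGHER
#   ESCALATE_TO_ALL: 1 if "Escalate results to all users" is checked (the default)
#   IS_EXEMPT:       1 if the target user holds the datasec-exempt role
#   IS_HIGHER:       1 if the target user is higher in the user hierarchy
can_escalate() {
  [ "$1" -eq 1 ] && return 0   # checked: results may go to all users
  [ "$2" -eq 1 ] && return 0   # exempt users may always receive results
  [ "$3" -eq 1 ] && return 0   # higher-level users in the hierarchy
  return 1                     # cleared, no exemption, no hierarchy: denied
}
can_escalate 0 0 1 && echo "higher-level user: allowed"
```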
Note:
The Data Security User to Database Association filters reports only from the
following domains: Access; Exception; and, Policy Violations (as well as custom
domains using these domains or tables from these domains). All other domains
(reports) are not filtered by the Data Security User to Database Association.
Users with admin role will be able to see event types on all roles (the information
will still be filtered based on observed data level security parameters).
If Data Level Security is turned on, predefined entities added to a custom domain
need to be in the same domain(s) for the data level security filtering to work
properly.
If Data Level Security is on, and two predefined entity subjects are trying to send
data from two domains (not Custom Domains) that are using a filtering policy,
then the sending of the two predefined entity subjects will not be permitted. Data
Level Security can only enforce one kind of filtering policy (for example, there can
be only one policy depending on server_ip/service_name and one policy
depending on datasource).
The Data Security User Hierarchy represents the parent-child relationships between
users, allowing for the creation and enforcement of data-level security by
permitting the parent of a hierarchy to look at specified servers and databases, but
not the children. Depending on the configuration, inheritance can also take place,
in which case the parent inherits the data-level security of the child.
Procedure
1. Log in as accessmgr and click Data Security > User Hierarchy.
2. Select a user from the Users drop-down menu to display it in the Data Security
User Hierarchy pane. This example uses john smith as a user.
3. To add a user to john smith's hierarchy, right-click on the user in the Data
Security User Hierarchy pane, and select Add user from the drop-down menu.
4. After clicking Add user from the drop down list, the Add user dialog appears.
Select one or more users that you would like to add to the user's hierarchy, and
then click Add.
5. After adding the users to a hierarchy, the Data Security User Hierarchy panel
is refreshed, allowing you to drill down and see the new hierarchy.
6. Repeat the steps until all required users are defined to the data security user
hierarchy.
Aggregation
Collect and merge information from multiple Guardium units into a single
Guardium Aggregation appliance to facilitate an enterprise view of database usage.
Aggregation Process
v Accomplished by exporting data on a daily basis from the source appliances to
the Aggregator (copying daily export files to the aggregator).
v The aggregator then goes over the uploaded files, extracts each one, and merges
it into the internal repository on the aggregator.
For example, if you are running Guardium in an enterprise deployment, you may
have multiple Guardium servers monitoring different environments (different
geographic locations or business units, for example). It may be useful to collect all
data in a central location to facilitate an enterprise view of database usage. You can
accomplish this by exporting data from a number of servers to another server that
has been configured (during the initial installation procedures) as an aggregation
appliance. In such a deployment, you typically run all reports, assessments, audit
processes, and so forth, on the aggregation appliance to achieve a wider, though
not necessarily enterprise-wide, view.
Note: The Aggregator does not collect data; it is used to present the data from the
collectors.
Appliance Types
Collector
Used to collect database activity, analyze it in real time and log it in the
internal repository for further analysis and/or reacting in real-time
(alerting, blocking, etc.).
Use this unit for the real-time capture and analysis of the database activity.
Terminology
Table 7.
Term                 Description
Guardium Appliance   The physical or virtual Guardium box; can be either a “collector” or an “aggregator” (with or without central management)
Guardium Unit        See Guardium Appliance
Manager Unit         An appliance configured as Central Manager
Managed Unit         An appliance managed by the Central Manager
Standalone Unit      An appliance not in a Central Manager environment
Purge                For the best performance, purge all data that is not needed. Purge to free disk space.
Archive              Compress the data of a single day into an encrypted file and send it to the aggregator.
Hierarchical Aggregation
Guardium also supports hierarchical aggregation, where multiple aggregation
appliances merge upwards to a higher-level, central aggregation appliance. This is
useful for multi-level views. For example, you may need to deploy one aggregation
appliance for North America aggregating multiple units, another aggregation
appliance for Asia aggregating multiple units, and a central, global aggregation
appliance merging the contents of the North America and Asia aggregation
appliances into a single corporate view. To consolidate data, all aggregated
Guardium servers export data to the aggregation appliance on a scheduled basis.
The aggregation appliance imports that data into a single database on the
aggregation appliance, so that reports run on the aggregation appliance are based
on the data consolidated from all of the aggregated Guardium servers.
The system shared secret is used in the following cases:
v When an aggregated unit signs and encrypts data for export to the aggregator.
v When any unit signs and encrypts data for archiving.
v When an aggregator imports data from an aggregated unit.
v When any unit restores archived data.
For aggregation to work, the shared secret must be set and be the same for the
aggregator and all aggregated collectors.
To export data to an aggregation appliance, follow the procedure. You can define a
single export configuration for each Guardium unit.
1. Click Manage > Aggregation & Archive > Data Export to open Data Export.
2. Check the Export data box to open additional options for exporting data.
3. In the boxes following Export data older than, specify a starting day for the
export operation as a number of days, weeks, or months prior to the current
day, which is day zero. These are calendar measurements, so if today is April
24, all data captured on April 23 is one day old, regardless of the time when
the operation is performed. To archive data starting with yesterday’s data, enter
the value 1.
4. Optionally, use the boxes following Ignore data older than to control how many
days of data will be exported. Any value specified here must be greater than
the Export data older than value, so you always export at least two days of
data. If you leave Ignore data older than blank, you export data for all days
older than the value specified in the Export data older than row. It is
recommended to always set the Ignore data older than value; otherwise you will
be exporting the exact same days over and over again, overloading the network
and the aggregator with redundant data (that will be ignored).
5. The Export Values box is checked by default. In some cases, where the collector
resides in a country that prohibits the export of data, and the aggregation
appliance resides in another country, you would want to clear the Export
Values check box, which would mask all fields containing database values.
6. In the Host box, enter the IP address or DNS host name of the aggregation
appliance to which this system's encrypted data files will be sent. There is also
an option to enable a secondary aggregator for exporting data to more than
one aggregator: two Host boxes are available, the first one required, while the
Secondary Host is optional. This unit and the aggregation appliance
to which it is sending data must have the same System Shared Secret. If not,
the export operation works, but the aggregation appliance that receives the data
is not able to decrypt the exported file and the Import will fail. See System
Shared Secret in “System Configuration” on page 1 for more information. The
Shared Secret must be identical on both the exporting system and the receiving
system; otherwise, the configuration on the exporting system will not be saved
and a message will report that a test file could not be sent to the receiving
system.
7. Click the Apply button to save the export and purge configuration for this unit.
When you click the Apply button, the system attempts to verify that the
specified aggregator host will accept data from this unit. If the operation fails,
the following message is displayed and the configuration will not be saved: A
test data file could not be sent to this host. Please confirm the hostname or IP
address is entered correctly and the host is online. If the Apply operation
succeeds, the buttons in the Scheduling panel become active.
8. Click Run Once Now to run the operation one time.
9. Click Modify Schedule to schedule this operation to run on a regular basis.
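The calendar arithmetic in steps 3 and 4 can be sketched with GNU date (an assumption about the local toolchain; the appliance computes this internally). With Export data older than = 1 and Ignore data older than = 3, a run on April 24 covers the three days April 21 through April 23:

```shell
#!/bin/sh
# Sketch only: GNU date assumed; values match the April 24 example above.
today=2010-04-24     # day zero
export_older=1       # "Export data older than"
ignore_older=3       # "Ignore data older than" (must be > export_older)
newest=$(date -d "$today - $export_older day" +%F)
oldest=$(date -d "$today - $ignore_older day" +%F)
echo "export window: $oldest .. $newest"
```

Leaving Ignore data older than blank would make the window open-ended on the old side, which is why the text recommends always setting it.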
Stopping Export
Note: An export cannot be stopped after the Run Once Now button has been
clicked.
Importing Data
The Guardium collector units export encrypted data files to another Guardium
appliance configured as an aggregation appliance. The encrypted data files reside
in a special location on the aggregation appliance until the aggregation appliance
executes an import operation to decrypt and merge all data to its own internal
database.
Note: To avoid the possibility of importing files that have not completely arrived,
the aggregation appliance will not import files that have changed in the last two
minutes.
Table 9. Importing Data
Topic     Description
Function  Import and merge the imported data into the internal databases of the Aggregator.
Schedule  Executed on a daily basis.
Stopping Import
Note: An import cannot be stopped once the Run Once Now button is clicked.
Archiving and purging data on a regular basis is essential for the health of your
Guardium system. For the best performance, we strongly recommend that you
archive and purge all data that is not needed. Important: purge to free disk space.
For example, if you only need three months of data on the Guardium appliance,
archive and purge all data that is older than 90 days.
The archive and purge process frees space and preserves information for future
use. You should periodically archive and purge data from standalone units and
from aggregation units. Guardium's archive function creates signed, encrypted
files that cannot be tampered with. Archive files are transferred to and stored on
external systems such as file servers or storage systems.
Note:
If both Archive and Purge are scheduled, Purge will run after Archive.
Data that was archived on a collector can be restored either on another collector or
an aggregator server. Restoring of data that was archived on an aggregator to a
collector machine is not supported.
Archiving data on an aggregator system: on the first day of the month, all static
tables are archived. On all other days, only data added since the last archive
is archived. This methodology is the same as that used by collectors. Adding the
static tables to the normal purge process eliminates orphans, freeing up disk
space and improving report performance.
Archive and export of static tables on an aggregator includes full static data only
on the first day of the month (archive) or when the export configuration changes
(export). Use the CLI commands store archive_table_by_date [enable |
disable] or show archive_table_by_date. Other relevant CLI commands are store
aggregator clean orphans and show aggregator clean orphans.
Scheduling Data Management tasks: default schedule times are supplied when
the unit is built, and these can be amended accordingly. The Data Management
tasks should be scheduled at less busy times, for example, overnight. They should
be spaced out so that they do not overlap (one task should finish before the next
one starts).
Aggregator Data Archive applies when dealing with an Aggregator/Central Manager
that performs Data Imports and Data Archives. A default or common setting is to
have the Data Archive archive data older than one day, ignoring data older than
two days. If the Data Archive is scheduled to run BEFORE the Data Imports from
other Collector(s)/Aggregator(s), then the Archive will NOT contain the Imports
meant for that day's Archive. Imagine the following schedule: Data Archive runs at
30 minutes past midnight; Data Imports run at 6:00 AM for data older than 1 day,
ignoring data older than 2 days. When the Archive happens, it will not archive any
relevant yesterday data, because no Imports for that day's data have yet occurred.
In this example, the Data Archive should be re-scheduled to occur AFTER the Data
Import(s) have finished. This way the Archive correctly contains data for yesterday.
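The ordering pitfall above can be expressed as a simple sanity check; the times below are the hypothetical schedule from the example, in minutes after midnight, not values read from any real configuration.

```shell
#!/bin/sh
# Times from the example schedule above, in minutes after midnight.
archive_start=30       # Data Archive at 00:30
import_start=360       # Data Imports at 6:00 AM
# The archive must start AFTER the imports have finished, or yesterday's
# imported data will be missing from that day's archive.
if [ "$archive_start" -le "$import_start" ]; then
  echo "WARNING: Data Archive runs before Data Imports; reschedule it to run later"
fi
```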
Table 10. Archiving and Purging Data
Topic          Description
Purge Function Delete old records from the appliance (typically, older than 60 days) to free up space and speed up access to the internal database. Purging is based on dates (deleting whole days' worth of data), but will not delete records that are still “in use” (for example, open sessions).
Schedule       The default purge activity is scheduled every day at 5:00 AM. For a new install, a default purge schedule is installed that is based on the default value and activity.
It may be necessary to run reports or investigations on this data at some point. For
example, some regulatory environments may require that you keep this
information for three, five, or even seven years in a form that can be queried.
The following sections describe how to define and schedule archiving and how to
restore from an archive.
Note: The archive and restore operations depend on the file names generated
during the archiving process. DO NOT change the names of archived files.
Archive data files can be sent to an SCP or FTP host on the network, or to an EMC
Centera or TSM storage system (if configured). You can define a single archiving
configuration for each unit. To archive data to another host on the network and
optionally purge data from the unit, follow the procedure.
1. Click Manage > Aggregation & Archive > Data Archive to open Data
Archive.
2. Check the Archive checkbox to expose additional fields for the archive
process.
3. In the boxes following Archive data older than, specify a starting day for the
archive operation as a number of days, weeks, or months prior to the current
day, which is day zero. These are calendar measurements, so if today is April
24, all data captured on April 23 is one day old, regardless of the time when
the operation is performed. To archive data starting with yesterday’s data,
enter the value 1.
4. Optionally, use the boxes following Ignore data older than to control how
many days of data will be archived. Any value specified here must be greater
than the value in the Archive data older than field. If you leave the Ignore
data older than row blank, you archive data for all days older than the value
specified in the Archive data older than row. This means that if you archive
daily and purge data older than 30 days, you archive each day of data 30
times (before it is purged on the 31st day). Depending on the archive options
configured for your system (using the store storage-system CLI command),
you may have EMC Centera or TSM options on your panel. If you select one
of those archive destinations, see the appropriate topic.
a. EMC Centera Archive and Backup
b. TSM Archive and Backup
5. Enter the IP address or DNS host name of the host to receive the archived
data.
6. In the Directory box, identify the directory in which the data is to be stored.
How you specify this depends on whether the file transfer method used is
FTP or SCP. For FTP, specify the directory relative to the FTP account home
directory. For SCP, specify the directory as an absolute path.
7. In the Username box, enter the user name to use for logging onto the host
machine. This user must have write/execute permissions for the directory
specified in the Directory box.
8. In the Password box, enter the password for the user, then enter it again in
the Re-enter Password box.
9. Data Purge
10. Check the Purge checkbox to purge data, whether or not it is archived. When
this box is marked, the Purge data older than fields display. It is important to
note that the Purge configuration is used by both Data Archive and Data
Export. Changes made here will apply to any executions of Data Export and
vice-versa. In the event that purging is activated and both Data Export and
Data Archive run on the same day, the first operation that runs will likely
purge any old data before the second operation's execution. For this reason,
any time that Data Export and Data Archive are both configured, the purge
age must be greater than both the age at which to export and the age at which
to archive.
11. If purging data, use the Purge data older than fields to specify a starting day
for the purge operation as a number of days, weeks, or months prior to the
current day, which is day zero. All data from the specified day and all older
days will be purged, except as noted otherwise. Any value specified for the
starting purge date must be greater than the value specified for the Archive
data older than value. In addition, if data exporting is active (see Exporting
Data to an aggregation appliance), the starting purge date specified here must
be greater than the Export data older than value. There is no warning when
you purge data that has not been archived or exported by a previous
operation. The purge operation does not purge restored data whose age is
within the do not purge restored data timeframe specified on a restore
operation. For more information, see Restoring Archived Data.
12. Click Apply to verify and save the configuration changes. When you click the
Apply button, the system attempts to verify the specified Host, Directory,
Username, and Password by sending a test data file to that location.
13. Click Run Once Now to run the operation once.
14. Click Modify Schedule to schedule the operation to run on a regular basis.
The general-purpose task scheduler is opened.
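The constraint described in steps 10 and 11 — the purge age must be greater than both the export age and the archive age — can be checked mechanically. The values below are illustrative, in days, and are not read from any real configuration.

```shell
#!/bin/sh
# Illustrative ages in days; purge must exceed both export and archive ages.
export_older=1
archive_older=1
purge_older=30
max=$export_older
if [ "$archive_older" -gt "$max" ]; then max=$archive_older; fi
if [ "$purge_older" -le "$max" ]; then
  echo "invalid: purge age must be greater than both export and archive ages" >&2
  exit 1
fi
echo "purge configuration OK"
```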
Restoring
Unless you are restoring data from the first archive created during the month, you
will need to restore multiple days of data. That is because when restoring data,
Guardium needs to have all of the information that it had when the data being
restored was archived. After the archive was created, some of that information may
have been purged due to a lack of use. All information needed for a restore
operation is archived automatically, the first time that data is archived each month.
So, when restoring data, you can restore the first day of the month and all the
following days until the desired day, or restore the desired day and then the first
day of the following month.
For example, to restore June 28th, either restore June 1st through June 28th, or
restore June 28th and July 1st.
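Following the rule above, the set of archives needed for the first approach can be enumerated; this is a sketch using GNU date (an assumption about the local toolchain), not a product command.

```shell
#!/bin/sh
# Restore 2010-06-28: take the first-of-month archive and every day up to the target.
target=2010-06-28
first=$(date -d "$target" +%Y-%m-01)
d=$first
restore_list=""
while :; do
  restore_list="$restore_list $d"
  [ "$d" = "$target" ] && break
  d=$(date -d "$d + 1 day" +%F)
done
echo "restore:$restore_list"
# Alternative per the text: restore the target day plus the first day of the
# following month, i.e. 2010-06-28 and 2010-07-01.
```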
To restore archived data files (as opposed to a system backup), use the GUI
screen called Catalog Archive. The archive and restore operations depend on the
file names generated during the archiving process. DO NOT change the names of
archived files. If a generated file name is changed, the restore operation will not
work.
1. Click Manage > Aggregation & Archive > Data Restore to open Data Restore.
2. Enter a date in the From box, to specify the earliest date for which you want
data.
3. Enter a date in the To box, to specify the latest date for which you want data.
4. In the Host Name box, optionally enter the name of the Guardium appliance
from which the archive originated.
5. Click Search.
6. In the Search Results panel, mark the Select box for each archive you want to
restore.
7. In the Don't purge restored data for at least box, enter the number of days that
you want to retain the restored data on the appliance.
8. Click Restore.
9. Click Done when you are finished.
Troubleshooting
When there is a problem, more detailed logging should be started using CLI
command: CLI>agg debug start
On any escalation to Technical Support, please supply detailed log files of the time
when the problem occurred.
Use the Support-based CLI commands to organize and sort material important to
troubleshooting.
Central Management
In a central management configuration, one Guardium unit is designated as the
Central Manager. That unit can be used to monitor and control other Guardium
units, which are referred to as managed units. Un-managed units are referred to as
stand-alone units.
The concept of a local machine can refer to any machine in the Central
Management system. There are some applications (Audit Processes, Queries,
Portlets, etc.) which can be run on both the Managed Units and the Central
Manager. In both cases, the definitions come from the Central Manager and the
data comes from the local machine (which might also be the Central Manager).
Once a Central Management system is set up, customers can use either the Central
Manager or a managed unit to create or modify most definitions. Keep in mind
that most of the definitions reside on the Central Manager, regardless of which
machine does the actual editing.
Note:
v Using the Remote Source function, a user on the Manager can run any report on
the managed unit (the user must have the correct role privileges) and view data
and information of that managed unit.
v CAS template definitions are shared between all units of a federated
environment just like all other definitions (reports, policies, alerts, etc.)
v It is recommended that a user run CAS Reports on a manager, especially CAS
Reports relating to CAS configurations, hosts, and templates.
v If you use the Custom Domain Builder to create a report that uses some or all
remote tables (tables that live on the manager in a Central Manager
environment, such as Datasource or Comments), this report does not work on a
managed node. No data will be returned.
v The Central Management page of a manager will no longer automatically refresh
itself based on a certain interval. This page will timeout based on the GUI
timeout of the system.
v After some time of inactivity, the system will log you out automatically and ask
you to sign in again. The length of the GUI timeout can be set via the CLI
command show/store session timeout (default is 900 seconds). Status lights will
refresh every five minutes when the session is active.
v If a user is attempting to synchronize or upload any data from the Central
Manager to managed nodes, all nodes that are involved in this type of activity
MUST be on the SAME version of Guardium.
v During the Central Management Redundancy Transition, it can take up to five
minutes for the Unit type Sync to occur depending on how many units are
defined in the Central Management environment.
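The GUI timeout rule described above can be sketched as follows (illustrative only; the function and constant names are assumptions, not Guardium code): a session expires once idle time exceeds the configured timeout, 900 seconds by default.

```python
# Illustrative sketch, not Guardium code: GUI session expiry check.
# The default comes from the CLI setting mentioned above (store session timeout).
DEFAULT_SESSION_TIMEOUT_SECONDS = 900

def session_expired(idle_seconds, timeout_seconds=DEFAULT_SESSION_TIMEOUT_SECONDS):
    """Return True when the idle time has exceeded the GUI timeout."""
    return idle_seconds > timeout_seconds
```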
68 Administration
Guardium Component Services
Identify Guardium components and the locations from which they are taken in a
central management environment.
Table 11. Guardium Component Services
Component Description
Users, Roles and Permissions: Central Manager controls the definition of users, roles, groups and datamart tables for all
managed systems. The Central Manager exports the complete set of user, security role,
group, and datamart tables definitions on a scheduled basis or on demand. The managed
units update their internal databases on an hourly basis. As a result, there might be a delay
of up to an hour between the time users, roles, permissions or datamart tables are added or
modified on the Central manager and the time that the managed unit applies those
updates.
Note: If you have Guardium users or security roles that are defined on an existing
stand-alone unit that is about to be registered for central management, those definitions
will not be available after the system is registered, unless those users and security roles
have also been defined on the Central Manager. You cannot administer users or security
roles on a managed unit. Those definitions can be administered only when logged on to the
Central Manager. When a unit is unregistered for central management, all added users and
security roles are removed leaving only the default users (admin, accessmgr). When
installing an Accelerator add-in product (PCI, SOX, etc.), in a Central Manager
environment, install it first on the Central Manager and then on the managed unit. Add
any roles and users as required for the Accelerator on the Central Manager (and those will
be synchronized with the managed unit from there). Accelerator documentation is
contained within the Accelerator module. See an overview of PCI Accelerator at the end of
this Component Services table.
Aliases and Groups: For all processes that automatically generate aliases or groups (for example, importing
user groups from LDAP, group generation from queries, alias generation from queries, or
the classifier), if the same group or alias is automatically generated on more than one
managed machine (managed by the same manager), it might conflict with an existing
group or alias; the existing group or alias is not replaced.
Audit Processes The definitions of the Audit Process itself and all of its corresponding tasks are saved to the
Central Manager and available to all managed units. However, Schedules, Results, and
To-Do lists are saved on the local machine. This means that the same Audit Process tasks
can be run on all Managed Units, plus the Central Manager. But it can be run at different
times on different machines, which can be useful if the Managed Units have different peak
load periods. Each machine has its own set of results, which are based on the data that the
machine has collected; and each machine has its own set of To-Do lists for all users. Audit
Process definitions are exported from the Central Manager to the managed units as part of
the user synchronization process (see Synchronizing Portal User Accounts). When audit
process results have been produced, the results are available to users, but on managed
units, there might be a delay of up to an hour before reports or monitors such as
Outstanding Audit Process Reviews are updated.
Queries: Each query can get database information from only a single machine. Queries that
require information from both Central Manager definitions and Managed Unit data
show no data, or show missing data.
When regenerate portlet is called on a Central Manager, it also sends a management (https)
request to all managed units to regenerate the portlet (with the report ID). When regenerate
is called on a managed unit from the screen (not from a management request), it sends a
management request to the manager to refresh the portlet, which in turn sends the request
to all units. A persistence mechanism handles management requests when a unit is down;
see the sections within this topic on registration and policy installation.
From the Central Manager, reports and audit processes can use data from a managed unit
but not managed aggregators. The managed unit is selected as a run-time parameter, is
referred to as a remote datasource, and presented as a filtered drop-down selection list
containing only managed units. When an audit process references a remote datasource, that
audit process can be run from the Central Manager only, so it will not appear in a list of
audit processes that are displayed on a managed unit.
Note: Certain reports, on a Central Manager, of domain Sniffer Buffer Usage (for example,
Request Rate, CPU Usage, Buffer Usage Monitor) will NOT display any data. The reports
will be empty.
Security Assessment Like the Audit Process, the definition of the Security Assessment itself is saved to the
Central Manager. But the results are saved on the local machine. This means that the same
Security Assessment can be run on all Managed Units, plus the Central Manager.
Baselines Baselines are always saved on the Central Manager. However, baselines are GENERATED
using the logged data that is local to the machine on which it is generated. Therefore, if
you want to include constructs from all Managed Units, you must regenerate the baseline
on ALL Managed Units and merge the new results into the existing baseline.
Comments Comments can be saved on either the local machine or the Central Manager, depending on
what the comment is associated with. If the Comment is associated with a definition that
resides on the Central Manager, then it is also saved on the Central Manager. If the
Comment is associated with a Result on the local machine, OR something specific to a
Managed Unit (like an Inspection Engine), the Comment is also saved on the local machine.
Schedules Schedules are always saved on the local machine, even when the definition is saved on the
Central Manager.
Non-Central Manager Tasks: When a server is configured as a Central Manager, you must be aware of the tasks that
cannot be performed on that unit, but rather must be performed on other (non-Central
Manager) units. Inspection engines cannot be defined on the Central Manager and can be
created only on the Managed Units. But Inspection engines can be viewed from the Central
Manager.
Upgrade Considerations: It is recommended to have your Central Manager and managed units on the same
version. Upgrade the Central Manager first, and then the managed units. Having a
manager on a different version than its managed units should be temporary; it is highly
recommended to upgrade all managed units to the same version as the manager. Run
Sync (Refresh) on all managed nodes after upgrading, so that these managed nodes
recognize the software version that they are running.
PCI Accelerator for Compliance: The PCI Data Security Standard consists of twelve basic requirements. Many of the
requirements are focused on protecting physical infrastructure (for instance, Requirement 1:
Install and maintain a firewall configuration to protect data) or implementing procedural
best practices (for instance, Requirement 5: Use and regularly update anti-virus software).
However, an extra emphasis is placed on real-time monitoring and tracking of access to
cardholder data and continuous assessment of database security health status (for instance,
Requirement 10: Track and monitor all access to network resources and cardholder data).
Other tools in the Guardium family of solutions that help meet these regulations
include the following:
v Cardholder Database Access Map - A graphical map of access between cardholder
database access clients and servers. This map, which is located under the access map
capabilities, provides an at-a-glance view of activities by access type, content, and
frequency.
v PCI Compliance Report Card - A detailed view of cardholder databases access security
health that is used to automate the compliance processes with continuous real-time
snapshots customized for user-defined tests, weights, and assessments. The Report Card
can be generated using security assessment.
v Full Audit Trail - The non-intrusive generation of a full audit trail for data usage and
modifications that are required by regulatory compliance.
v Automated Scheduling - Automated scheduling of PCI work flows, audit tasks, and
dissemination of information to responsible parties across the organization.
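The storage-locality rules in Table 11 can be summarized in a short sketch (illustrative only; the function name and artifact labels are assumptions, not a Guardium API): definitions live on the Central Manager, while results, schedules, to-do lists, and inspection engines stay on the local machine.

```python
# Illustrative sketch (not a Guardium API): where an artifact is stored in a
# Central Management environment, per the locality rules in Table 11.
def storage_location(artifact):
    """Return 'central_manager' or 'local' for a given artifact kind."""
    on_central_manager = {
        "audit_process_definition",      # definitions are saved to the CM
        "security_assessment_definition",
        "baseline",                      # baselines are always saved on the CM
        "comment_on_definition",         # comments follow what they are attached to
    }
    on_local_machine = {
        "audit_process_result",          # results stay on the machine that ran them
        "security_assessment_result",
        "todo_list",
        "schedule",                      # schedules are always local
        "comment_on_result",
        "inspection_engine",             # defined only on managed units
    }
    if artifact in on_central_manager:
        return "central_manager"
    if artifact in on_local_machine:
        return "local"
    raise ValueError(f"unknown artifact kind: {artifact}")
```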
The following table can help identify which components are taken from which
location in a central management environment.
Table 12. Components and Location in Central Manager Environment
Central Manager                Managed Unit
Users                          System Configuration
Security Roles                 Inspection Engines
Application Role Permissions   Alerter (configuration)
Queries                        Anomaly Detection
Reports                        Session Inference
Time Periods                   IP-to-Hostname Aliasing
Alerts                         System Backup
Security Assessments           Aggregation / Archiving
Users, Security Roles, Audit Process Definitions, and Groups are exported from the
Central Manager to all managed units on a scheduled basis, as described later.
Note: Application Role Permissions can also be changed by the administrator from
any managed unit. When this happens, the permissions are changed for all
managed units.
Make one machine the Central Manager
The first step is to make one machine into a Central Manager. Select a machine,
then complete the following steps.
1. Log in to the CLI of the machine that you want to make the Central Manager.
2. Enter store unit type manager. This step makes the machine a Central
Manager; however, it is not yet managing anything.
After you have a Central Manager, you must connect the other machines into a
Central Management system. For security reasons, the communications between
the machines must be encrypted by using the same shared secret. To set the
shared secret, complete the following steps.
1. Click Setup > Tools and Views > System to open System.
2. Set the shared secret to the same string on all systems.
Registering Units:
You can register Guardium units for central management either from the Central
Manager or from the unit itself. Regardless of how the registration is done, the
Central Manager and all managed units must have the same system shared secret.
If the unit to be managed is already registered for central management with
another manager, unregister that unit from that manager before you register it with
the new manager. Be sure to understand exactly what happens to that unit when it
is registered and unregistered for central management.
Note: If the user that is logged in to a managed unit does not exist on the Central
Manager, the session is invalidated when the unit is registered with a Central
Manager.
After registration, all definitions of reports, queries, groups, policies, audits, and
more are retrieved from the Central Manager.
If you know the unit that is registered is online and accessible from the Central
Manager, but its status remains offline, then complete the following steps.
v Verify that the unit to be managed is online, accessible, and operational by using
a browser window to log in to the Guardium system on that unit.
v Click Refresh for the unit.
v Check that you entered the correct IP address for the unit.
v Check that the unit has the same shared secret as the Central Manager.
On a managed unit, you can use the GUI to register the unit with the Central
Manager. Alternatively, you can use the CLI register command as described in
Registering a Managed Unit with the CLI.
1. Click Manage > Install Management > Registration to open Registration.
2. For Central Management Host IP, enter the IP address of the Central Manager.
3. For Port, enter the https port for the Central Manager (usually 8443).
4. Click Register.
After you register on the managed unit, it initiates communication with the Central
Manager, and nothing more needs to be done.
Note: The central management unit must be online and accessible by this unit
when you register for central management. In contrast, when you register units for
management from the central management unit, you can register units that are not
currently accessible.
When you unregister a unit, always unregister it from the Central Manager. This
method is the only way that the Central Manager decrements its count of managed
units.
Unregistering from the managed unit does NOT unregister the unit on the Central
Manager. The Central Manager still counts that unit as a managed unit for
licensing purposes and considers the unit managed, which might prevent another
unit from being registered with the Central Manager. The unregister function on
the managed unit is included for emergency use ONLY: if a manager is no longer
in service, you must unregister the unit locally before you can register it to
another manager.
If you unregister a unit from the managed unit, it still shows on the Central
Manager screen. Clicking Refresh for that unit re-registers it. Clicking any other
operation for that unit produces a message that the unit is no longer managed and
removes it from the manager.
On a managed unit, you can use the GUI to unregister the unit from the Central
Manager. Alternatively, you can use the CLI unregister command as described in
Unregistering a Managed Unit with the CLI.
1. Log in to the Guardium GUI of the unit to be managed as the admin user.
2. Click Manage > Install Management > Registration to open Registration.
3. Click Unregister.
After unregistration, all definitions of reports, queries, groups, policies, audits,
and more are retrieved from the local database; the definitions that are stored on
the Central Manager are no longer accessible.
If you are unsure about how to verify this, contact Guardium Support before you
unregister the unit.
Unregistering a managed unit from the Central Manager screen removes it from
the managed unit list and sets the unit to be a stand-alone unit.
Note: The product key of the unit is removed; unless the unit is registered to
another manager, the product key must be re-entered manually.
To unregister a Managed Unit by using the CLI, complete the following steps.
1. On the Managed Unit, log in to the CLI.
2. Type unregister management.
After you have unregistered from the Managed Unit, it severs communication with
the Central Manager, and nothing more needs to be done.
As mentioned earlier, the Central Manager controls the definition of Users, Security
Roles, Groups, and datamart tables for all managed units. The Central Manager
makes an encrypted and signed copy of its complete set of User and Security
Roles. In addition, the Central Manager transmits that information to all managed
units. Furthermore, some other definitions that are required for local processing
(Groups and Group members, Audit processes, Aliases, and more) are also copied.
The managed units then update their internal databases on an hourly basis, which
means that there might be a delay of up to an hour before these roles or datamart
tables can be used.
Note: Use caution when setting the schedule so that it does not interfere with
other scheduled jobs, such as Import, which can otherwise fail to start.
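As a rough model of the delay described above (a sketch under stated assumptions, not Guardium code): the Central Manager exports on its own schedule, and each managed unit applies the received data only on its hourly import cycle, so the worst-case visibility delay adds the two intervals.

```python
# Illustrative model, not Guardium code: worst-case time before a change made
# on the Central Manager becomes usable on a managed unit.
def worst_case_delay_minutes(export_interval_min, import_interval_min=60):
    """A change made just after an export waits one full export interval,
    then the received copy waits one full hourly import cycle."""
    return export_interval_min + import_interval_min
```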
Procedure
Click Manage > Central Management > Portal User Sync to manage portal user
synchronization.
1. Click Modify Schedule to change the user synchronization task schedule by
using the standard task scheduler.
2. If the task is actively scheduled, click Pause to stop further scheduled
executions.
3. If the task is paused, click Resume to start running the task again (according to
the defined schedule).
4. Click Run Once Now to run the synchronization task immediately.
Note: The scheduled task (or Run Once Now) covers only the collection of
data and its transmission to the managed units. The managed units might
not use that data to update their user tables until up to 1 hour after it is
received.
In an existing Guardium environment, refer to the procedure outlined to develop a
plan for implementing central management. If you are converting an existing
Guardium unit to a Central Manager, keep in mind that a Central Manager cannot
monitor network traffic. For example, inspection engines cannot be defined on a
Central Manager.
1. Select a system shared secret to be used by the Central Manager and all
managed units. For more information, see the system shared secret in System
Configuration.
2. Install the Central Manager unit or designate one of the existing systems as the
Central Manager. In either case, use the store unit type command to set the
manager attribute for the Central Manager.
3. Any definitions from the stand-alone unit that you want to have available in
the central management environment must be exported before the stand-alone
unit is registered for management. Later, those definitions are imported on the
Central Manager. BEFORE exporting or importing any definitions, follow the
procedure that is outlined for each stand-alone unit that is to become a
managed unit. Read through the introductory information under
Export/Import Definitions.
v Decide which users, security roles, queries, reports, groups, time periods,
alerts, security assessments, audit processes, privacy sets, baselines, policies,
and aliases from the stand-alone system you want to have available after the
system becomes a managed unit. Ignore any components on the stand-alone
system you do not want to have available.
v Compare the security roles and groups that are defined on the stand-alone
unit with those defined on the Central Manager. Under central management,
a single version of these definitions applies to all units. If a security role with
the same name exists on both systems and it is used for different purposes,
add a new role on the Central Manager and assign the new role to the
appropriate definitions after they are imported.
v If the same group name exists on the stand-alone unit and the Central
Manager but it has different members, create a new duplicate group on the
stand-alone system, taking care to select a group name that does not exist on
the Central Manager. In all of the definitions to be exported, change the old
group name references to new group name references.
v Record all security roles that are assigned to the definitions that are exported
from the stand-alone system. When definitions are imported, they are imported
WITHOUT roles, so you must reassign them manually.
v Check the application role permissions on each system. If any security roles
assigned to an application on the stand-alone unit are missing from the
Central Manager, add them to the Central Manager.
v Export all queries, reports, groups, time periods, alerts, security assessments,
audit processes, privacy sets, baselines, policies, and aliases from the
stand-alone system that you want to have available after the system becomes
a managed unit. (See Export/Import Definitions) Do not export users or
security roles. If you are unsure about a definition, export it in a separate
export operation so that you can decide in the future whether to import that
definition to the Central Manager. After you register for central management,
none of the old definitions from the stand-alone unit are available.
v On the stand-alone unit, create PDF versions of audit process results and store
them in an appropriate location. Under central management, only the audit
results produced under central management are available.
Use the following steps when you migrate a CAS collector with active instances to
a managed unit.
1. Export the CAS host definitions from the stand-alone collector.
2. Manage the stand-alone collector.
3. Restart the CAS host from the GUI of the now managed collector.
4. Import the CAS host definition to the manager.
5. Restart the CAS host from the GUI of the managed collector again.
After these steps are performed, the CAS collector has the same instances and
monitors the same files that it did when it was a stand-alone unit.
Note: The CAS data that was collected while the unit was stand-alone is deleted.
No CAS data is collected again unless a file changes, and there is no collected
baseline data for each file.
Table 13. Monitoring Managed Units (continued)
Control Description
Unselect all Clears the selection of all managed units.
Check box Mark this box to select the unit for the wanted operation.
Refresh unit information Refreshes all information that is displayed in the expanded view of that unit and
issues new requests to that unit. This action also causes a full user
synchronization cycle.
Reboot unit Reboots the unit at the operating system level. By default, the Guardium portal
is started at startup.
Restart unit portal Restarts the Guardium application portal on the managed unit. You can then log
in to that unit to do Guardium tasks (defining or removing inspection engines,
for example).
View unit SNMP attributes Opens the SNMP Viewer pane in a separate window. Clicking the refresh icon in
the SNMP Viewer pane refreshes the data in the window.
View unit syslog Opens the Syslog Viewer in a separate window, displaying the last 64 KB of
syslog messages. Clicking the Refresh icon in the Syslog Viewer pane refreshes
the data in the window.
Shortcut to unit portal Opens the Guardium login page for the managed unit, in a separate browser
window.
Unit Name The host name of the managed unit. If you hold the mouse pointer over the unit
name, its IP address displays as a tooltip. If the host name changes on the unit,
the Central Manager no longer sees that unit when automatically refreshing the
Online status. If you suspect that the host name was changed, click Refresh on the
toolbar to obtain the changed host name and update the displayed current Online
status and other information for that unit.
Online Indicates whether the unit is online. If the green indicator is lit, the unit is
online; if the red indicator is lit, the unit is offline. The Central Manager
refreshes this status at the refresh interval that is specified in the central
management configuration (1 minute by default). If an error occurred connecting
to a unit, the error description can be viewed as a tooltip. Hover the mouse
indicator over that unit's record in the management table.
From here, depending on status, you might stop or start the inspection engine.
The information that is displayed for each inspection engine is as follows (this
information is fetched from the managed unit when Refresh is clicked, not
on every ping):
From-IP/Mask - A list of the IP addresses and subnet masks of the clients whose
database traffic to the To-IP/Mask addresses the inspection engine monitors.
Ports - The ports on which database clients and servers communicate; can be a
single port, a list of ports, or a range of ports.
Distribute Uploaded JAR files Click Harden > Configuration Change Control (CAS Application) > Customer
Uploads. Then, enter the name of the file to be uploaded, or click
Browse to locate and select that file. Upload one driver at a time.
Click Upload. You are notified when the operation completes, and the uploaded
file is displayed. This action brings the uploaded file to the Central
Manager.
Select a check box of the managed unit or units where these JAR files are to be
distributed. Click Distribute Uploaded JAR files.
Distribute Patch Backup Settings This setting distributes the following to selected units:
PATCH_BACKUP_FLAG; PATCH_AUTOMATIC_RECOVERY_FLAG;
PATCH_BACKUP_DEST_HOST; PATCH_BACKUP_DEST_DIR;
PATCH_BACKUP_DEST_USER; PATCH_BACKUP_DEST_PASS
Distribute Authentication Config Select the managed units that receive the distribution of the Central Management
authentication.
Some of these configurations do not take effect until the portal is restarted
(Anomaly Detection, Session Inference). Other processes, such as the Alerter,
need to be restarted, either directly through the admin portal of the managed
unit, or by rebooting all relevant managed units from the manager.
Distribute Configurations does not restart the managed units; there is a
separate icon for each managed unit to be restarted.
After distribution, a message states that the managed units must be
restarted for all the configurations to take effect.
Each parameter that has scheduling has a second check box. When this second
box is checked, this parameter's scheduling is distributed.
Alerter
Active on Startup check box. Each time the appliance restarts, the Alerter is
activated automatically.
The Alerter must be manually restarted on the managed units through the admin
portal (Admin Console > Alerter). Because this restart cannot be done from the
Central Manager, you can instead reboot the managed units from the manager to
get the same effect.
Anomaly Detection
Active on Startup check box. Each time the appliance restarts, Anomaly
Detection is activated automatically.
Procedure
1. Click Setup > Tools and Views > Policy Installation to open Currently
Installed Policies and the Policy Installer.
2. From the Policy list, select the policy that you want to install.
3. From the list, select an installation action. The available installation actions
include the following items:
a. Install and Override - deletes all installed policies and installs the selected
policy instead.
b. Install last - installs the selected policy as the last one in the sequence,
after all currently installed policies, so that it has the lowest priority.
c. Install first - installs the selected policy as the first one in the sequence,
before all currently installed policies.
After you select an installation action, you are informed of the success (or
failure) of each policy installation. If a selected unit is not available (it might
be offline or a link might be down), the Central Manager informs you of that
fact and continues attempting to install the new policy for a maximum of
seven days (on the condition that the unit remains registered for central
management).
Note: If you install a policy from the Central Manager, the selection of Run
Once Now (and scheduler) updates existing groups within the installed
policies.
To load changes to rules, including addition and subtraction of groups, you
must either:
a. Initially install policies from the Collector, or
b. Reinstall policies from the Collector or Central Manager.
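The seven-day installation retry behavior described in this procedure can be sketched as follows (illustrative only; the names and exact retry mechanics are assumptions, not Guardium internals):

```python
# Illustrative sketch, not Guardium code: the Central Manager retries a policy
# installation on an unreachable unit for up to seven days, provided the unit
# stays registered for central management.
from datetime import datetime, timedelta

RETRY_WINDOW = timedelta(days=7)

def should_retry(first_attempt, now, unit_registered):
    """Keep retrying while the unit is registered and within the window."""
    return unit_registered and (now - first_attempt) <= RETRY_WINDOW
```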
To view a map that shows all managed units, click Manage > Central
Management to open Central Management. Then, click Show Distributed Map to
display a map of the central manager unit and all managed units.
Patch distribution provides visibility and control over patch installation, status,
and history. In a Central Management cluster, it provides a way to install patches
on managed units from the Central Manager.
When you install a patch, you can specify a date and time to indicate when the
patch is installed. If no date and time is entered, or if now is entered, the
installation request is immediate.
Note: A patch that is installed successfully can be installed again. This fact is
important for batched patches. A warning informs you if the patch is already
installed.
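The date-and-time handling described above might look like the following sketch (illustrative only; the function name and the date format are assumptions, not Guardium's actual parser): an empty value or now means install immediately.

```python
# Illustrative sketch, not Guardium code: resolving a patch installation
# request time. An empty value or "now" means immediate installation; the
# "YYYY-MM-DD HH:MM" format here is an assumed example format.
from datetime import datetime

def resolve_install_time(requested, current_time):
    """Return the effective installation time for a patch request."""
    if not requested or requested.strip().lower() == "now":
        return current_time
    return datetime.strptime(requested, "%Y-%m-%d %H:%M")
```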
Log in to the Guardium GUI of the unit to be managed as the admin user:
Procedure
1. Click Reports > Guardium Operational Reports > Installed Patches to open
the Installed Patches report.
2. Do one of the following steps:
a. Click Patch Distribution - Patch Distribution opens a new screen, displays
an available patch list with dependencies, and allows you to select a
patch and install it on all selected units. The list of available patches is
constructed by evaluating the currently installed patches on each of the
selected units along with the dependency list of available patches.
Patches that are available but not installable (a dependent patch is
missing) are disabled and cannot be selected. Only one patch can be
selected and installed at a time. After a patch is selected and the
installation is pushed, a command is sent to all selected units to install
that patch. This installation process happens in the background.
b. Click Patch Installation Status. The Patch Installation Status screen
displays, for each unit, failed installations and discrepancies, for example,
a patch that is installed on only some of the units, regardless of whether it
failed on the other units or was never installed there.
c. Click Delete to delete the patch file from the Central Manager and remove
the patch from the Available Patches list.
See the CLI commands store system patch installation and delete
scheduled-patch.
Patch Management troubleshooting - Problem: a patch is not showing in the
available patch list.
Check whether:
a. The patch file does not exist in /var/log/guard/patches/.
b. The Central Manager and managed units are on different versions.
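The patch-selectability rule described in this section can be sketched as follows (illustrative only; the data shapes and function name are assumptions, not Guardium code): a patch is offered for installation only when every patch it depends on is already installed on every selected unit.

```python
# Illustrative sketch, not Guardium code: filter the available patch list so
# that patches with a missing dependency on any selected unit are excluded
# (in the GUI they are shown disabled).
def installable_patches(available, dependencies, installed_by_unit):
    """available: iterable of patch names.
    dependencies: dict mapping patch -> set of prerequisite patches.
    installed_by_unit: dict mapping unit -> set of installed patches."""
    result = []
    for patch in available:
        deps = dependencies.get(patch, set())
        # Installable only if every selected unit already has all prerequisites.
        if all(deps <= installed for installed in installed_by_unit.values()):
            result.append(patch)
    return result
```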
Distribute Configuration
Configurations and their schedules, can be distributed, either all or individually,
between the Central Manager and the managed units.
Procedure
1. Select the managed units that receive the configurations.
2. Click Distribute Configurations to display the Distribute Configurations
window.
3. Check the appropriate boxes for those Configurations that you would like
distributed. Use the check box in the header to select all configurations.
4. Check the appropriate boxes for those Schedules that you would like
distributed. Use the check box in the header to select all schedules. If a
configuration is not scheduled, there is no check box for it; 'n/a' is displayed
instead.
5. Click Distribute to distribute the configurations and schedules.
6. Optional: Click Cancel to abort the distribution.
Distribute Authentication Configuration
Procedure
1. Ensure that authentication is configured (Configure Authentication) on both the
central manager and the managed unit. For example, if LDAP authentication is
being used, ensure that LDAP is configured on both the central manager and
the managed unit.
2. Select the managed units to receive the distribution of the central management
authentication.
3. Click Distribute Authentication Config to distribute the authentication
configuration to all managed units selected.
Note: Data (collected data, audit results, and custom table data) is not included.
Note: Failover with Central Manager load balancing - After failover, if the new
Managed Units connect and then disconnect right away, the correct DB_USER will
not be sent until the failover message is received.
Note:
v IMPORTANT: Wait approximately one hour to be sure that at least TWO
Backup CM sync files have completed.
v Backup CM sync files are scheduled approximately every 30 minutes.
v The process runs on the CM to create a backup CM sync file and copy
that file to the directory on the Backup CM.
Start the Backup CM process after two sync file processes have completed
Shut down the Primary CM Guardium Server
If you cannot shut down the Primary CM, go directly to the Backup CM,
log in as admin, select Setup > Tools and Views, choose Central
Management, and click Make Primary CM. Skip to the section
“Steps to start the Backup CM configuration to become the Primary CM”
in this document.
1. Wait approximately five minutes and log in again as admin in the GUI
of the Backup CM.
2. Once the Primary CM is shut down completely, you can continue to
the next step.
Note:
If you are logged into the Primary CM and it goes down, you get a
message indicating that the connection has timed out.
Steps to start the Backup CM configuration to become the Primary CM
The secondary CM will not be responsive for approximately five minutes.
Login after five minutes and the Make Primary CM link will be available.
The link is available under the admin login and (Setup > Tools and Views
> Central Management).
1. When the Primary Server goes down, you get a message on the
Backup CM: “Unable to connect to Remote Manager, consider switching
to (the name of the backup CM)”.
2. If you decide to switch:
a. Login as admin
b. Select Setup > Tools and Views.
c. Click Make Primary CM. Do not click the Make Primary CM link
more than once; stay on this screen and do not select anything else
while this process runs. A log file is created that you can view to
see the progress and completion of the process. Be patient, as this
process takes a while to complete. As a safeguard, clicking the
button more than once does not change the current running process.
d. Within seconds you should get a message: “Are you sure you want
to make this unit the primary CM?” Click OK.
e. A few seconds later you will get a message stating “This
may take a few minutes”. The time it takes for the Backup CM to
become the Primary CM depends on the amount of data backed up
from the Backup CM sync file and the number of managed nodes
that switch to the Backup CM, which becomes the Primary CM.
Click OK.
As soon as you click OK, a log file named
load_secondary_cm_sync_file.log is created; it lets you view the
progress of the switch through to the completion of the Backup CM
switch process. This file can be viewed from your GUI. The following
steps indicate how to view this log file.
f. The final message takes a while to appear on the screen; it is the
last message before the Backup CM switch completes: “GUI will
restart now. Try to login again in a few minutes and the Backup CM
will now become the Primary CM”. Click OK.
Wait a few minutes for the Backup CM to become Primary and for
all the managed nodes to complete switching over to the new
Primary CM.
While the CM Backup Process is running – viewing the progress log file
From the Backup CM while the Make Primary CM process is running, you
can do the following to view the progress of the Backup CM becoming the
Primary CM.
Prerequisite: You will need the IP of the server you are connected to in
order to view the log files.
1. Log in as cli to your Backup CM server from a PuTTY session.
2. From the CLI, run fileserver <IP> 3600, where <IP> is the IP address of
the server; for example: fileserver 9.70.32.122 3600
Investigation Center
Investigation Center is an extension of the Aggregation Servers. Investigation Users
(once defined) can restore data and results of selected historic dates and perform
forensic investigation. Once the days (dates) are restored, the investigation users
can define and view reports using the standard Guardium UI, only in the scope of
the investigated dates.
Each Guardium appliance maintains a Catalog of all the data and results archived.
The Catalog contains information about each archive, its location, and the
credentials needed to access it. The Catalog is exported from the collectors and
merged into a complete Catalog on the Aggregation Server as part of the
aggregation process.
With the Catalog in place, investigation users can now select the desired dates for
restoration and these dates will automatically be uploaded to the Investigation
Center and merged into that investigation user’s view. In addition to merging
collectors’ Catalogs through the Aggregation Server, it is also possible to Export
and Import Catalogs from Setup > Tools and Views.
An investigation user for the most part utilizes the same query and report
definitions as any other user would. The biggest difference is that the investigation
user sees only data selected for his investigation database (multiple investigators
can be configured to share an INV database). Selected data can be restored from
archive or, for data that has not yet been purged, viewed from the current
database. An investigation user can also restore archived audit process results
and view them.
Caution: Role inv is a special role which will cause the user to be connected to a
separate, investigation-only internal database. It should be combined with the role
user and in general it is incompatible with all other roles.
Note: To correctly configure an investigation user, the user's Last Name must be
set to the name of one of the three investigation databases, INV_1, INV_2, or
INV_3 (case-sensitive).
Note: The Run an Ad-Hoc Audit Process button is available on all report screens
for all users except investigation (INV) user.
If the user is INV, then the audit process definition menu screen will permit the
following:
If the user is not INV, the audit process finder will not display any audit process
owned by an investigation user (regardless of the roles assigned).
When an audit process is run on INV data, the result title is appended with the
words Executed on Investigation center by and the name of the INV user.
A comment is attached to the results specifying the dates and source hosts of the
data mounted on the Investigation database at execution time.
The results can be viewed either from the Audit Process Builder or from the
results navigation list.
Results of audits run on Investigation center cannot be archived and the results are
discarded when investigation data is discarded.
Investigation Context
Guardium’s Investigation Center supports one to three concurrent investigation
periods, dubbed INV_1, INV_2, and INV_3; each can hold separate historic data
and provides the means for forensic investigation of that period. When creating an
investigation user, the user's last name must be either INV_1, INV_2, or INV_3
to associate that user with one of the investigation databases. When logged into
the Investigation Center (using one of the investigation users), a label specifies
the selected investigation period.
GUI
A user with the investigation role will see two additional tabs that are particular to
the Investigation Center.
v Auditing tab gives access to restored audit process results
v Volume management tab allows the user to set or modify the investigation
period, select audit process results to restore and discard data at the end of an
investigation.
After logging into the Guardium interface as a user with the inv role:
1. Click Manage > Aggregation & Archive > Data Restore to open the Data
Restore Search Criteria.
2. C
3. Click Data Restore to open the Restored Data panel. If a prior restore was
performed, this panel will display the currently mounted data periods being
used. At this point, you may click Discard Data to un-mount all previously
mounted data periods.
4. Click Re-Select Investigation Period to open the Data Restore Search Criteria
panel.
5. Enter the start date in the From: box for the beginning of the time period you
wish to search.
6. Enter the end date in the To: box for the end of the time period you wish to
search.
7. Optionally, enter a Host name to filter the result set on the host name.
8. Click Search to view the result set - this will search the catalog for all archives
matching the search criteria.
9. From the result set produced, check the Select box(es) of those periods you
wish to restore. You may also click Select All or Unselect All to speed the
selection process.
10. Click Restore to restore the selected periods. Depending on the number of
periods to restore, and whether the datasets are local to the system, the restore
process could take a long time.
11. You can monitor the progress of the restore process in the View Restore Log
panel.
Note: Data of any day restored to Investigation Center that falls within the merge
period is also merged into the Guardium application database and is visible by
non-inv users.
After logging into the Guardium interface as a user with the inv role:
1. Click the Volume Management tab.
2. Click Audit Results Restore to open the Restored Results panel. If a prior
restore was performed, this panel will display the currently restored results
being used. At this point, you may click Discard Data to un-mount all
previously mounted results.
3. Click Audit Results Restore to open the Results Restore Search Criteria panel.
4. Enter the start date in the From: box for the beginning time period you wish
to search.
5. Enter the end date in the To: box for the ending time period you wish to
search.
The restore log provides a view of past and current restore attempts from
Archive/Restore, filtered for the user currently logged in. This log enables the user
to validate a successful restore for both data and audit results.
After logging into the Guardium interface as a user with the inv role: Click Restore
Log to open My Restore Log. From this panel you will be able to see the status of
all restore attempts.
After logging into the Guardium interface as a user with the inv role:
1. Click the Auditing tab.
2. Click the Results Navigation link to open the Audit Process Finder panel.
3. From the drop-down list (if there are audit processes), select a process.
4. Click View to open another window and view the available reports for the
audit results.
Chapter 4. Managing your Guardium system
Management tasks include monitoring your system’s health and managing artifacts
such as groups, domains, and notifications.
Guardium Administration
Guardium administrators perform various administration and maintenance tasks.
Any user assigned the admin role is referred to as a Guardium administrator. This
is distinct from the admin user account.
The Guardium admin role has privileges that are not explicitly assigned to that
role. For example, when a user with the admin role displays a list of privacy set
definitions, all privacy sets defined on the Guardium system display, and the user
with the admin role can view, modify, or delete any of those definitions. When a
user without the admin role accesses the list of privacy sets, that user will see only
those privacy sets that he or she owns (i.e. created), and all privacy sets that have
been assigned a security role that is also assigned to that user.
Use of the diag CLI command requires an additional password, which can be the
password of any user with the admin role.
If automatic account lockout is enabled (a feature that locks a user account after a
specified number of login failures), the admin user account may become locked
after a number of failed login attempts. If that happens, use the unlock admin CLI
command to unlock it.
Note: The access manager (accessmgr) can unlock accounts from the User Browser.
Open the User Browser by clicking Access > Access Management > User Browser.
available to that user. When the admin user performs any actions on another user's
to-do list, that fact is noted in the audit process activity log, for example, User
admin signed results on behalf of user x.
When definitions are exported, all roles are removed, and the owner is changed to
the admin user. This is the only way to control how the definition will be used on
the importing system.
The next time the admin user logs in, access manager functionality will be
available to them. This is possible for the admin user only (and not for other users
having the admin role).
Note:
The same user may have both the admin and accessmgr roles through a legacy
situation or as a result of an upgrade. However, current releases do not allow the
two roles to be assigned to the same user.
In the past, when a unit was upgraded, the accessmgr role was assigned to the
admin user, and the accessmgr user was disabled.
In this situation, to configure the accessmgr and admin, log in as admin and enable
the accessmgr user, then log in as accessmgr (the default initial password
is guardium), and remove the accessmgr role from the admin user.
Certificates
Check certificates periodically to avoid loss of function. Use CLI commands to
obtain and install new certificates.
Certificate Expiration
Expired certificates will result in a loss of function. Run the show certificate
warn_expire command periodically to check for expired certificates. The command
displays certificates that will expire within six months and certificates that have
already expired. The user interface will also inform you of certificates that will
expire. To see a summary of all certificates, run the command show certificate
summary.
For more information, see the full list of Certificate CLI Commands.
New Certificates
To obtain a new certificate, generate a certificate signing request (CSR) and contact
a third-party certificate authority (CA) such as VeriSign or Entrust. Guardium does
not provide CA services and does not ship systems with certificates other than the
ones that are installed by default. The certificate must be in PEM format and
include the BEGIN and END delimiters. The certificate can either be pasted from the
console or imported through one of the standard import protocols.
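A certificate pasted from the console must contain the PEM BEGIN and END delimiters described above. The following minimal Python sketch (illustrative only; it is not part of Guardium) checks a certificate text for those delimiters before you attempt an import:

```python
def looks_like_pem_certificate(text: str) -> bool:
    """Return True if the text contains a PEM certificate block
    with a BEGIN delimiter followed by a matching END delimiter."""
    begin = "-----BEGIN CERTIFICATE-----"
    end = "-----END CERTIFICATE-----"
    return begin in text and end in text and text.index(begin) < text.index(end)

# A structurally valid (but fake) PEM wrapper, for illustration only.
sample = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIIB...base64 payload...\n"
    "-----END CERTIFICATE-----\n"
)
print(looks_like_pem_certificate(sample))              # True
print(looks_like_pem_certificate("no delimiters"))     # False
```

This only verifies the delimiters, not the certificate's validity; the CA and the Guardium import itself perform the real checks.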
You can generate a certificate signing request (CSR) with one of the following
commands:
v create csr alias - This command creates a certificate request with an alias.
v create csr gui - This command creates a certificate request for the Tomcat GUI.
v create csr sniffer - This command creates a certificate request for the sniffer.
Note: Do not perform this action until after the system network configuration
parameters have been set.
To install a new certificate through the command line interface, use one of the
following commands:
v store certificate gim - This command stores GIM certificates in the keystore.
v store certificate gui - This command stores tomcat certificates in the
keystore.
v store certificate keystore - This command asks for a one-word alias to
uniquely identify the certificate and store it in the keystore.
v store certificate mysql - This command stores mysql client and server
certificates.
v store certificate stap - This command stores S-TAP certificates.
v store certificate sniffer - This command stores sniffer certificates.
To install a new certificate key through the command line interface, use one of the
following commands:
v store cert_key mysql - This command stores the certificate key of a mysql client
and server.
v store cert_key sniffer - This command stores the sniffer certificate key.
You can choose to restore certificates and certificate keys with the backup or
default parameter. Use the backup parameter to restore a certificate to the last
saved certificate. Use the default parameter to restore a certificate to the original
certificate that Guardium supplied.
Changes in Commands
New Commands
Deprecated Commands
The following commands have been deprecated.
v csr
v store certificate console
v store system key
v show system key
v store system certificate
v show system certificate
Unit Utilization Level
Use unit utilization reports to identify under- and over-utilized collectors in your
Guardium system. Unit utilization reporting is not available on systems without a
Central Manager.
Open the unit utilization reports by clicking Manage > Reports > Unit Utilization,
and then selecting one of the reports.
There are four unit utilization reports that you can use:
1. Buff Usage Monitor
2. CPU Tracker
3. Enterprise Buffer Usage Monitor
4. Unit Utilization
Utilization Parameters
All parameters except for number of restarts are averaged for a specific unit over a
specific time range. The number of restarts is a count of the sniffer restarts during
a specific time range based on the different PIDs.
Thresholds
For each parameter there are two thresholds defined that separate three utilization
levels: Low, Medium, and High.
Utilization levels:
v Low: value is less than Threshold1
v Medium: value is greater than Threshold1, and less than Threshold2
v High: value is greater than Threshold2
There is also an overall utilization level for each unit. For each period of time, this
level is the highest of all parameter levels during that period.
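The two-threshold classification and the overall level can be expressed as small functions. This is an illustrative sketch, not Guardium code, and the threshold values in the example calls are hypothetical; the document does not state which level a value exactly equal to a threshold falls into, so this sketch assigns it the higher level.

```python
def utilization_level(value: float, threshold1: float, threshold2: float) -> str:
    """Classify a parameter value into Low/Medium/High using two thresholds.
    Boundary values are assumed to fall into the higher level."""
    if value < threshold1:
        return "Low"
    if value < threshold2:
        return "Medium"
    return "High"

def overall_level(levels: list) -> str:
    """The overall unit level is the highest of all parameter levels."""
    order = {"Low": 0, "Medium": 1, "High": 2}
    return max(levels, key=order.__getitem__)

print(utilization_level(10, 50, 80))             # Low
print(utilization_level(60, 50, 80))             # Medium
print(overall_level(["Low", "Medium", "Low"]))   # Medium
```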
Reporting
View the four unit utilization reports by clicking Manage > Reports > Unit
Utilization.
The Unit Utilization Levels tracking option allows you to create custom queries
and reports.
Note: Each parameter is classified into three levels based on the values of the
thresholds.
GuardAPI commands:
v listUtilizationThresholds
v updateUtilizationThresholds
v reset_unit_utilization
CLI commands:
v store monitor gdm_statistics
v show monitor gdm_statistics
Customer Uploads
Database Activity Monitor Content Subscription (previously known as Database
Protection Subscription Service) supports the maintenance of predefined
assessment tests, SQL based tests, CVEs, APARs, and groups such as database
versions and patches.
Uploads are used to keep information current and within industry best practices to
protect against newly discovered vulnerabilities. Distribution of updates is done on
a quarterly basis.
Use Customer Uploads to upload the following: DPS update files, Oracle JDBC
drivers, MS SQL Server JDBC drivers, and the DB2 for z/OS license jar.
Note: If a custom group exists with the same name as a predefined Guardium
group, the upload process adds “Guardium” in front of the name of the
predefined group.
1. Open Customer Uploads by clicking Harden > Configuration Change Control
(CAS Application) > Customer Uploads.
2. For DPS Upload, click Browse to locate and select the file to be uploaded.
Note: Reference the Import DPS pane to see what files have been uploaded.
3. For Upload DB2 z/OS License jar, click Browse to locate and select the file.
4. Use Upload Oracle JDBC driver or Upload MS SQL Server JDBC driver to
upload open source drivers. After uploading, you will see the databases added
to the Datasource finder. Upload one driver at a time.
Note: There are two instances where open source drivers are recommended
over Oracle Data Direct drivers or MS SQL Data Direct drivers.
a. To support Windows Authentication for MS SQL Server. In all other uses,
the Data Direct driver pre-loaded in the Guardium appliance is sufficient.
b. When using the Value Change Tracking application for Oracle version 10 or
higher, the open source driver is recommended in order to support using
streams instead of triggers.
Use keywords to search and download open source JDBC drivers (for example:
open source JDBC driver for MS SQL).
5. Use the Central Manager to distribute the .jar file to managed units. After the
file is successfully uploaded, the GUI needs to be restarted on the Central
Manager and the managed units.
Note:
If you will be exporting and importing definitions from one unit to another, be
aware that subscribed groups are not exported. When exporting definitions that
reference subscribed groups, you must ensure that all referenced subscribed groups
are installed on the importing unit (or central manager in a federated
environment).
Note: If the DPS stops for any reason (for example, a server restart or a GUI
restart), it is recommended to wait 30 minutes before starting the DPS upload
process again.
Use a TAB Delimited file (.TXT) when creating and saving a Datasource Upload
file from the Customer Upload functionality.
If you choose to use a comma delimited file structure (.CSV), it will not
behave as intended if any column value contains a comma.
Follow these steps:
1. If using Excel, save the file as a TAB Delimited (.TXT) file.
2. If using OpenOffice or LibreOffice, save a .CSV file with TAB
delimiters.
3. Log in as admin and open Customer Uploads by clicking Harden >
Configuration Change Control (CAS Application) > Customer
Uploads.
4. For Upload CSV to Create/Update Datasources, click Browse..., and
select the tab delimited file.
Create Datasource for CSV uploaded via the Upload CSV menu
Follow these steps to create a Tab Delimited .TXT formatted file containing
datasource information. This Tab Delimited .TXT file can then be used with the
Customer Upload function in the Guardium application to create many datasource
types.
The function to import datasources was not always compatible with every
Guardium software release; this procedure enables the uploading of any
datasource.
Table 15. create_datasource
Parameter Description
application Required. Identifies the application for which the datasource is being
defined. It must be one of the following:
ChangeAuditSystem
Access_policy
MonitorValues
DatabaseAnalyzer
AuditDatabase
CustomDomain
Classifier
AuditTask
SecurityAssessment
Replay
Stap_Verification
compatibilityMode Tells the processor what compatibility mode to use when
monitoring a table. Choices are Default or MSSQL 2000.
conProperty Optional. Use only if additional connection properties must be
included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.
type The datasource type. One of the following:
DB2
DB2 for i
Informix
MS SQL Server
MySQL
NA
Netezza
Oracle (DataDirect)
Oracle (SID)
PostgreSQL
Sybase
Sybase IQ
Teradata
TEXT
TEXT:FTP
TEXT:HTTP
TEXT:HTTPS
TEXT:SAMBA
user Optional. User for the datasource. If used, password must also be
used.
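The conProperty format in Table 15 (property=value pairs separated by commas) can be parsed in a few lines. This is an illustrative sketch only; the JDBC property names in the example are hypothetical.

```python
def parse_con_properties(con_property: str) -> dict:
    """Split a 'property=value,property=value' string into a dict,
    following the conProperty format described in Table 15."""
    result = {}
    for pair in con_property.split(","):
        if not pair.strip():
            continue  # ignore empty segments
        name, _, value = pair.partition("=")
        result[name.strip()] = value.strip()
    return result

# Hypothetical JDBC connection properties, for illustration only.
print(parse_con_properties("oracle.net.CONNECT_TIMEOUT=5000,defaultRowPrefetch=50"))
```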
Notes:
1. Each of the column names must be included in the Excel spreadsheet SAVED
as a TAB delimited (.TXT) file.
2. The Created Datasource name (what is shown when looking for the datasource)
is made up of both the name column and the type column.
3. Upload file MUST be saved as a Column Tab Delimited file type.
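The file-format rules in these notes can also be followed programmatically. The sketch below uses Python's csv module with a tab delimiter to produce a .TXT upload file; it is illustrative only, and the "host" column and all row values are hypothetical examples, not the full required header set (the real upload needs every required column, including name and type).

```python
import csv

# Hypothetical rows; a real upload needs the full set of required columns.
rows = [
    {"name": "payrolldb", "type": "Oracle (SID)", "host": "db1.example.com"},
    {"name": "hrdb", "type": "MS SQL Server", "host": "db2.example.com"},
]

with open("datasources.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "type", "host"], delimiter="\t")
    writer.writeheader()   # the column names must appear in the saved file
    writer.writerows(rows)

# Tab delimiters avoid the problem described above: a comma inside a
# column value no longer splits the row into extra columns.
print(open("datasources.txt").read().splitlines()[0])  # header: name<TAB>type<TAB>host
```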
Steps to create and upload txt file in a Text CSV format file and add Datasource
Data
1. Create the Excel spreadsheet file and save it as a Tab Delimited .TXT file with
the required column headers and datasource data to support the datasource
import capability.
2. Create and save your .txt file to your PC or UNIX/Linux device for uploading
into the Guardium application.
3. Login as admin and open Customer Uploads by clicking Harden >
Configuration Change Control (CAS Application) > Customer Uploads
4. From Upload CSV to Create/Update Datasources, click Browse and select the
.txt file containing the tab delimited datasource information.
5. Click Upload.
A message will display showing the status of the values uploaded from the .txt file:
1. New: datasource members that were newly added in the uploaded file have the
status NEW.
2. Update: uploading the same datasource after making changes to it gives an
Update status.
3. Fail: failed datasources or errors are displayed.
Say that you set up a policy that sends a real-time alert whenever there are more
than three failed log-ins in 5 minutes. To protect against this possible intrusion,
you must make sure that the policy was installed, and that the alerter is on.
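The example policy amounts to a sliding-window count of failures. The sketch below is a generic illustration of that logic, not Guardium's implementation; the threshold (more than three failures) and window (5 minutes) come from the example above.

```python
from collections import deque

WINDOW_SECONDS = 5 * 60   # 5 minutes, from the example policy
THRESHOLD = 3             # more than three failed log-ins triggers an alert

def should_alert(failure_times: list) -> bool:
    """Return True if more than THRESHOLD failures fall inside any
    WINDOW_SECONDS-long window (times are epoch seconds)."""
    window = deque()
    for t in sorted(failure_times):
        window.append(t)
        while window[0] < t - WINDOW_SECONDS:
            window.popleft()          # drop failures outside the window
        if len(window) > THRESHOLD:
            return True
    return False

print(should_alert([0, 60, 120, 180]))     # four failures in 3 minutes -> True
print(should_alert([0, 400, 800, 1200]))   # failures spread out -> False
```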
Use the Services Status panel to verify that both of these services are configured
properly. If for some reason the policy didn't install correctly, click Policy
Installation to go to Policy Installer, view the currently installed policies, and make
the necessary changes.
Note: Clicking any service takes you to its configuration page, where you can turn
the service off or on and also view the status of the service.
In an aggregation environment, data can be archived from the collector, from the
aggregator, or from both locations. Most commonly, the data is archived only once,
and the location from where it is archived varies depending on your requirements.
Archive files can be sent using SCP or FTP protocol, or to an EMC Centera or TSM
storage system (if configured). You can define a single archiving configuration for
each Guardium system.
Archive and export activities use the system shared secret to create encrypted data
files. Before information encrypted on one system can be restored on another, the
restoring system must have the shared secret that was used on the archiving
system when the file was created.
Perform System Backup tasks by clicking Manage > Data Management > System
Backup. You can also perform backup tasks from the CLI.
Default Purging
v The default value for purge is 60 days
v The default purge activity is scheduled every day at 5:00 AM.
v For a new install, a default purge schedule is installed that is based on the
default value and activity.
v When a unit type is changed to a managed unit or back to a standalone unit, the
default purge schedule is applied.
v The purge schedule will not be affected during an upgrade.
v When purging a large number of records (10 million or more), a large batch
size setting (500k to 1 million) is most effective. Using a smaller batch size or
NULL causes the purge to take hours longer. Smaller purges finish quickly, so a
large batch size setting is only relevant for large purges.
Note: Setting batch size is not available in the UI. Use the GuardAPI command
grdapi set_purge_batch_size batchSize to set batch size.
Note: If you leave this field blank, you archive data for all days older than
the value specified in Archive data older than. This means that if you archive
daily and purge data older than 30 days, you archive each day of data 30
times (before it is purged on the 31st day).
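The overlap described in this note is easy to quantify. The short simulation below is illustrative only: it counts how many daily archive runs include one given day of data, assuming each run archives every day at least one day old and the purge removes data once its age exceeds 30 days.

```python
def times_archived(archive_older_than: int = 1, purge_older_than: int = 30) -> int:
    """Count the daily archive runs that include one given day of data.

    Assumes a daily archive of every day whose age (in days) is at least
    archive_older_than, and a purge that removes the data once its age
    exceeds purge_older_than.
    """
    count = 0
    for age in range(purge_older_than + 1):  # the day exists at ages 0..30
        if age >= archive_older_than:
            count += 1                       # this run archives the day again
    return count

print(times_archived())  # 30 -- matches "you archive each day of data 30 times"
```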
5. Check the Archive Values check box to include values from SQL strings in the
archived data. If this box is cleared, values are replaced with question mark
characters on the archive (and hence the values will not be available following
a restore operation).
6. Select a Protocols option, and fill in the appropriate information. Depending
on how your Guardium system has been configured, one or more of these
buttons might not be available. For a description of how to configure the
archive and backup storage methods, see the description of the show and
store storage-system commands.
7. Perform the appropriate procedure, depending on the storage method
selected:
v Configure SCP or FTP Archive or Backup
v Configure EMC Centera Archive or Backup
v Configure TSM Archive or Backup
8. Check the Purge check box to define a purge operation.
IMPORTANT: The Purge configuration is used by both Data Archive and Data
Export. Changes that are made here apply to any executions of Data Export
and vice versa. In the event that purging is activated and both Data Export
Note:
There is no warning when you purge data that has not been archived or
exported by a previous operation.
The purge operation does not purge restored data whose age is within the do
not purge restored data timeframe that is specified on a restore operation.
10. Click Apply to save the configuration changes. The system attempts to verify
the configuration by sending a test data file to that location.
v If the operation fails, an error message is displayed and the configuration
will not be saved.
v If the operation succeeds, the configuration is saved.
11. To run or schedule the archive and purge operation, do one of the following:
v Click Run Once Now to run the operation once.
v Click Modify Schedule to schedule the operation to run on a regular basis.
12. Click Done when you are finished.
Note: A zero (0) for the port indicates that the default port is being used
and that there is no need to change it.
4. For Username and Password, enter the credentials for the user logging on to
the SCP or FTP server. This user must have write/execute permissions for the
directory that is specified in Directory.
For Windows, a domain user is accepted with the format of domain\user
5. Click Apply to save the configuration.
Configure EMC Centera Archive or Backup
This backup or archiving task copies files to an EMC Centera storage system
off-site. A license is needed with user name and password from EMC. Four main
actions are needed for this task:
1. Establish account with an EMC Centera on the network (IP addresses and a
ClipID are needed)
2. Configure the data and/or configuration files from a Guardium system
3. Define and export a library
4. Confirm that your files are stored on the EMC Centera storage system.
CLI action
Open System Backup by clicking Manage > Data Management > System Backup.
Select EMC Centera; the following information must be provided:
1. For Retention, enter the number of days to retain the data. The maximum is
24855 (68 years). If you want to save it for longer, you can restore the data later
and save it again.
2. For Centera Pool Address, enter the Centera Pool Connection String; for
example: 10.2.3.4,10.6.7.8?/var/centera/us1_profile1_rwe.pea
Note: This IP address and the .PEA file come from EMC Centera. The
question mark is required when configuring the path. The .../var/centera/...
path name is important, as the backup might fail if the path name is not
followed. The .PEA file provides permissions, username, and password
authentication for each Centera backup request.
3. Click Upload PEA File to upload a Centera PEA file to be used for the
connection string. The Centera Pool Address is still needed.
Note: If the message Cannot open the pool at this address.. appears, check
the length of the Guardium system host name. A timeout issue has been reported
with Centera when using host names that are fewer than four characters in
length.
4. Click Apply to save the configuration. The system attempts to verify the
Centera address by opening a pool using the connection string specified. If the
operation fails, you will be informed and the configuration will not be saved.
5. Click Run Once Now to perform the backup using the uploaded .PEA file.
Confirm that your files have been copied to the EMC Centera. The names of the
files and a ClipID are required for this task.
Restore Data
v Before restoring from EMC Centera, a pea file must be uploaded to the
Guardium system, via the Data Archive panel.
v Before restoring or importing a file that was encrypted by a different Guardium
system, make sure that the system shared secret used by the Guardium system
that encrypted the file is available on this system (otherwise, it will not be able
to decrypt the file). See About the System Shared Secret in “System
Configuration” on page 1.
v Before restoring on a Guardium collector run the CLI command stop
inspection-core to stop the inspection-core process.
To restore data:
1. Open Data Restore by clicking Manage > Data Management > Data Restore.
2. Enter a date in From to specify the earliest date for which you want data.
3. Enter a date in To to specify the latest date for which you want data.
4. For Host Name, optionally enter the name of the Guardium system from which
the archive originated.
5. Click Search.
6. In the Search Results panel, check the Select check box for each archive you
want to restore.
7. In the Don't purge restored data for at least field, enter the number of days
that you want to retain the restored data on the system.
8. Click Restore.
9. Click Done when you are finished.
Note: The restore of data archived from a collector should be done only to: the
same collector; an aggregator; or, a different collector dedicated to investigation
that is not part of an aggregation cluster. In the case of a crashed collector, a
system backup can be restored onto a new, clean collector.
Use this feature to archive and back up data from Guardium to Amazon S3.
Prerequisites
1. An Amazon account.
2. Registration for the S3 service.
3. Amazon S3 credentials, which are required to access Amazon S3. These
credentials are:
v Access Key ID - identifies the user as the party responsible for service
requests. It must be included in each request. It is not confidential and
does not need to be encrypted (a 20-character alphanumeric sequence).
v Secret Access Key - associated with the Access Key ID and used to calculate
a digital signature that is included in each request. The Secret Access Key
is a secret, and only the user and AWS should have it (a 40-character sequence).
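Given the formats described above, a quick sanity check of the two credential values can be sketched. The function and its checks are illustrative only; they are not part of Guardium or the AWS API.

```python
import re

# Illustrative shape checks only, based on the lengths described above:
# Access Key ID: 20-character alphanumeric; Secret Access Key: 40 characters.
ACCESS_KEY_RE = re.compile(r"^[A-Za-z0-9]{20}$")

def looks_like_s3_credentials(access_key_id: str, secret_access_key: str) -> bool:
    """Return True when both values match the documented shapes."""
    return bool(ACCESS_KEY_RE.match(access_key_id)) and len(secret_access_key) == 40
```

Such a check catches copy-and-paste errors before the backup configuration is saved.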
When Guardium data is archived, there is a separate file for each day of data.
System backups are used to backup and store all the necessary data and
configuration values to restore a server in case of hardware corruption.
All configuration information and data is written to a single encrypted file and
sent to the specified destination, using the transfer method that is configured for
backups on this system.
Use the Aggregation/Archive Log report in Guardium to verify that the operation
completes successfully. Open the Aggregation/Archive Log by clicking Manage >
Reports > Data Management > Aggregation/Archive Log. There should be
multiple activities that are listed for each Archive operation, and the status of each
activity should be Succeeded.
Regardless of the destination for the archived data, the Guardium catalog tracks
where every archive file is sent, so that it can be retrieved and restored on the
system with minimal effort, at any point in the future.
A separate catalog is maintained on each system, and a new record is added to the
catalog whenever the system archives data or results.
When catalog entries are imported from another system, those entries will point to
files that have been encrypted by that system. Before restoring or importing any
such file, the system shared secret of the system that encrypted the file must be
available on the importing system.
The Amazon S3 archive and backup option is not enabled by default in the
Guardium GUI. To enable it from the Guardium CLI, run the following commands:
store storage-system amazon_s3 archive on
store storage-system amazon_s3 backup on
Amazon S3 requires that the clock time of the Guardium system be correct
(within 15 minutes); otherwise, Amazon returns an error. If the difference
between the request time and the current time is too large, the request is
not accepted.
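The 15-minute tolerance can be expressed directly; this sketch only illustrates the rule and is not an AWS or Guardium API.

```python
from datetime import datetime, timedelta

# Amazon rejects requests when the local clock drifts more than ~15 minutes
# from the request time; this function simply expresses that tolerance.
MAX_SKEW = timedelta(minutes=15)

def clock_within_tolerance(local_time: datetime, aws_time: datetime) -> bool:
    """True if the two clocks differ by no more than the 15-minute window."""
    return abs(local_time - aws_time) <= MAX_SKEW
```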
If the Guardium system time is not correct, set the correct time using the following
CLI commands:
show system ntp server
store system ntp server (An example is ntp server: ntp.swg.usma.ibm.com)
store system ntp state on
User Interface
Use the System Backup to configure the backup. Open the System Backup by
clicking Manage > Data Management > System Backup.
http://aws.amazon.com/console/
1. Click S3.
2. Click the bucket that you specified in Guardium UI.
Guardium catalog
When you archive data from your Guardium system, the Guardium catalog tracks
where every archive file is sent, so that it can be retrieved and restored.
You can archive a catalog, export a catalog to external storage, or import a catalog
that has been stored.
When catalog entries are imported from another system, those entries point to files
that have been encrypted by that system. Before you restore or import any such
file, the system shared secret of the system that encrypted the file must be
available on the importing system. You can use the aggregator backup keys file
and aggregator restore keys file CLI commands to copy the shared secrets from
one Guardium system to another.
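As a toy model of that dependency (not Guardium's actual mechanism or file format), one can picture each archive carrying a fingerprint of the shared secret that encrypted it, with restore possible only when the same secret is present on the importing system:

```python
import hashlib

# Toy model only: an archive records a fingerprint of the shared secret
# that encrypted it, and restore succeeds only when the importing system
# holds the same secret. Function names are illustrative.
def secret_fingerprint(shared_secret: bytes) -> str:
    return hashlib.sha256(shared_secret).hexdigest()

def can_restore(archive_fingerprint: str, local_secret: bytes) -> bool:
    """True when the local shared secret matches the one that made the archive."""
    return secret_fingerprint(local_secret) == archive_fingerprint
```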
Archiving a catalog
Procedure
1. Click Manage > Data Management > Catalog Archive.
2. You can display available catalog entries for a range of dates, or add a catalog
entry. To display catalog entries:
a. Enter a date in From to specify the earliest date for which you want data.
b. Enter a date in To to specify the latest date for which you want data.
c. Optional: For Host Name, enter the name of the Guardium system from
which the archive originated.
d. Click Search.
To add a catalog entry:
a. Click Add.
b. Enter a File Name.
c. Enter a Host Name.
d. Enter the Path for the file.
Note:
For FTP: specify the directory relative to the FTP account home directory
For TSM: Specify the directory as an absolute path of the original location.
e. Enter a User Name and Password for access to this location.
f. In the Retention field, enter the number of days this entry is to be kept in
the catalog (the default is 365).
g. Select an option from the Storage System menu on which the file is
contained.
h. Click Save.
3. To remove a catalog entry, open the catalog, select the entry, and click Remove
Selected.
4. Click Done when you are finished.
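A hypothetical model of one catalog record, mirroring the fields in the add-entry procedure above (the field names are illustrative, not a Guardium API):

```python
from dataclasses import dataclass

# Illustrative model of a catalog entry as described in the procedure:
# file name, host, path, credentials, storage system, and a retention
# period that defaults to 365 days.
@dataclass
class CatalogEntry:
    file_name: str
    host_name: str
    path: str
    user_name: str
    storage_system: str
    retention_days: int = 365  # default retention noted in the procedure
```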
Exporting a catalog
Procedure
1. Click Manage > Data Management > Catalog Export.
2. Select a definition type from the Type dropdown list. The Definitions to
Export list is populated with definitions of the selected type.
3. Select all of the definitions of this type that you want to export and click
Export. Depending on your browser security settings, you might see a message
that asks whether you want to save the file or open it.
4. Choose a location to save the exported file.
Importing a catalog
Procedure
1. Click Manage > Data Management > Catalog Import.
2. Click Browse to locate and select the file.
3. Click Upload. You are notified when the operation completes and the
definitions that are contained in the file are displayed. Repeat to upload more
files.
4. Click Import to import the uploaded files or click Remove without Importing
to remove the uploaded files without importing the contents.
Value-added: Best Practices. Protect your data from loss. Make your data readily
accessible for auditing purposes.
Use the System Backup function to define a backup operation that can be run on
demand or on a scheduled basis.
System backups are used to back up and store all the necessary data and
configuration values to restore a server in case of hardware corruption.
There are two archive operations available on the Administration Console, in the
Data Management section of the menu:
v Data Archive backs up the data that has been captured by the Guardium system,
for a given time period. When configuring Data Archive, a purge operation can
also be configured. Typically, data is archived at the end of the day on which it
is captured, which ensures that in the event of a catastrophe, only the data of
that day is lost. The purging of data depends on the application and is highly
variable, depending on business and auditing requirements. In most cases data
can be kept on the machines for more than six months.
In an aggregation environment, data can be archived from the collector, from the
aggregator, or from both locations. Most commonly, the data is archived only once,
and the location from where it is archived varies depending on the customer's
requirements.
Data backup
Data retention
The data backup and archive files serve two purposes: disaster recovery, and
historical investigation or auditing.
The following suggestions can be modified based on your corporate data retention
policy. For example, some organizations are mandated to keep all backups for 18
months.
Note: If you have stand-alone collectors, the daily archives should be kept
according to your data-retention policy.
Storage capacity
The following are only estimates/ranges of backup and archive file sizes for
auxiliary storage capacity planning purposes.
The actual sizes vary depending on (1) the volume and granularity of the database
activity that is logged on the Guardium collectors, and (2) the retention period of
the backup files.
Daily Archives
Monthly System Backups – assuming a 50% full database on a Dell R610 or IBM
xSeries 3550 M4 (600 GB Disks)
Note: The backup file is compressed at roughly a 1:8 ratio.
Collector: 7 – 10 GB
Aggregator: 16 – 20 GB
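Applying the rough 1:8 ratio from the note above, backup size can be estimated from the amount of data on disk. This is illustrative arithmetic only; actual sizes vary as stated earlier.

```python
def estimated_backup_gb(data_gb: float, compression_ratio: float = 8.0) -> float:
    """Rough backup-file size, assuming the ~1:8 compression noted above."""
    return data_gb / compression_ratio
```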
Results Archives
This control is primarily achieved in the policy rules, and via the inspection engine
configuration.
Scheduling
The following tables provide a summary of the key schedules to be configured on
your Guardium systems. Following the tables is a brief explanation of each
process.
Use the Aggregation/Archive log to record the time and status of these processes
to assist with adjusting your scheduling times.
The following table lists a schedule of tasks for a Guardium system that is
deployed as a collector.
Function                               Schedule
Data export (to the Aggregators)       Daily*: 12:30 AM
Data Archive and Purge                 Daily: 01:30 AM AND Purge for 15 days
Audit/Workflow jobs                    Daily: 03:00 AM (if standalone)
CSV/CEF export to the SCP/FTP Server   Daily: 05:00 AM, if configured in the Audit jobs AND after the audit jobs complete
Host name Aliasing                     Daily: 10:00 PM
Policy Reinstallation                  Daily: 11:00 PM
System Backups                         Monthly: First Sunday of each Month at 6:00 AM

The following table lists the schedule for a Guardium system that is deployed
as an aggregator.

Function                               Schedule
Data Archive and Purge                 Daily: 12:30 AM AND Purge for 30 days
Data Import (from the Collectors)      Daily: 1:15 AM
Audit/Workflow jobs                    Daily: 03:30 AM
CSV/CEF export to the SCP/FTP Server   Daily: 05:15 AM, if configured in the Audit jobs AND after the audit jobs complete
Hostname Aliasing                      Daily: 10:00 PM
System Backups                         Monthly: First Sunday of each Month at 7:00 AM
Note: Do not schedule tasks before 12:15 AM, to avoid conflicts with the
internal start-of-day processing on each Guardium system.
The daily Data Archive should be set to Archive data older than 1-Day and Ignore
data older than 2-days. The first run archives all data in the database and
subsequent processes will only archive yesterday's data.
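On a steady daily schedule, the "older than 1 day / ignore older than 2 days" window selects exactly yesterday's data. A sketch of that date filter (not Guardium's implementation):

```python
from datetime import date, timedelta

def days_to_archive(today: date, logged_days: list) -> list:
    """Keep days that are at least 1 day old but not yet 2 days old."""
    return [d for d in logged_days
            if timedelta(days=1) <= (today - d) < timedelta(days=2)]
```

With data logged daily, only the previous day survives the filter, matching the behavior described above.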
The amount of data kept online is constrained by the size of the database on
each Guardium system, so the Purge process, together with the Daily Archive,
manages how much data is kept online. Guardium recommends keeping the minimum
amount of data necessary, to avoid filling up the database and to help
database performance.
Guardium recommends retaining 15 days of data on a collector and 30 days on
an aggregator. The actual length, however, depends on how much data is
recorded (for example, the number of S-TAPs, policy rules, and collectors).
The previous day’s logged activities are exported daily (a push process) from the
collectors to their assigned aggregators for aggregated-reporting. This activity is the
counterpart to the Data Import on the aggregator.
Note: For convenience, purge can be configured on either the Archive or Export
setup screens.
The Data Import process is scheduled only on an aggregator. It imports and
processes the previous day’s data exported from the collectors.
Monthly Backups
As noted previously, the system backups are full backups and used for disaster
recovery. Here is an example of the monthly schedule for the first Sunday of each
month starting at 6:00 AM.
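The first-Sunday-of-the-month rule used by this schedule can be computed with the standard library, for example:

```python
import calendar

def first_sunday(year: int, month: int) -> int:
    """Day of the month of the first Sunday, e.g. for a monthly backup schedule."""
    for week in calendar.monthcalendar(year, month):
        if week[calendar.SUNDAY] != 0:   # weeks are Monday-first; 0 pads edges
            return week[calendar.SUNDAY]
```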
CEF/CSV files that are created by workflow processes can also be written to
syslog. When that happens, those files are not available to be exported by the
means described here. Those files should be accessed from syslog by other means.
To define a default separator, open the Global Profile by clicking Setup > Tools
and Views > Global Profile.
To enter a label to be included in all file names, go to Tools > Audit Process
Builder.
Note:
The syslog maximum message size is 4000; CSV results that exceed this limit
are truncated.
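The truncation behavior can be modeled as a simple cut at the limit (treating the limit as a character count for illustration; this is a sketch, not the product's code):

```python
SYSLOG_MAX = 4000  # maximum syslog message size noted above

def truncate_for_syslog(csv_row: str, limit: int = SYSLOG_MAX) -> str:
    """Rows longer than the limit are cut off; shorter rows pass unchanged."""
    return csv_row[:limit]
```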
Set the encoding to UTF-8 regardless of which application is used to read
.CSV files. Excel defaults to a different character set and can corrupt the
.CSV files. When using Excel, import the .CSV file and select UTF-8 encoding
instead of opening the file directly through its file association.
Export/Import Definitions
If you have multiple systems with identical or similar requirements, and are not
using Central Management, you can define the components that you need on one
system and export those definitions to other systems, provided those systems are
on the same software release level.
You can export one type of definition (reports, for example) at a time. Each
element that is exported can cause other referenced definitions to be exported as
well. For example, a report is always based on a query, and it can also reference
other items, such as IP address groups or time periods. All referenced definitions
(except for security roles) are exported along with the report definition. However,
only one copy of a definition is exported if that definition is referenced in multiple
exported items. An export of policies or queries exports only the groups that are
referenced by the exported policies or queries. Previously an export of policies or
queries would export all groups.
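The current behavior — exporting only the groups referenced by the exported policies or queries, rather than all groups — amounts to collecting group references from the exported definitions. A sketch under assumed data shapes (dicts with a "groups" list; not a Guardium API):

```python
def groups_to_export(definitions: list, all_groups: set) -> set:
    """Export only groups actually referenced by the exported definitions."""
    referenced = set()
    for d in definitions:
        referenced.update(d.get("groups", []))
    return referenced & all_groups   # only groups that exist on this system
```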
Export/Import Definitions
Export and Import Definitions are used to save and then restore functional
data from a given Guardium system. For example, this function enables
you to create a report on one Guardium system and then import that same
report onto another server with the same Guardium installed version.
Note: This function is not the same as a full backup of the server. Backups
should still be defined and run on a scheduled or manual basis.
Export Definitions - Are used to save and share defined functional values
such as Reports/Queries, CAS data, Classifier Data, and so on. The export
types are saved onto your PC as a .sql file type.
Import Definitions - This function is used to import the exported
definitions onto servers that use the SAME Guardium Software version.
For example, if you export definitions from a Guardium V10 system, then
you can import those definitions only onto another V10 system.
Note:
v When you export graphical reports, the presentation parameter settings (colors,
fonts, titles, and so on) are not exported. When imported, these reports use the
default presentation parameter settings for the importing system.
v Subscribed groups are not exported. When you export definitions that reference
subscribed groups, the user must ensure that all referenced subscribed groups
are installed on the importing appliance (or Central Manager in a federated
environment).
v The logs of Export/Import Definitions have the same retention period as the
monitored database activity logs.
v Comments are not included in export.
v When audit process definitions of scheduled runs (including schedule time) are
exported to another system, the ACTIVE check box in Audit Process Builder is
not checked (INACTIVE).
v Schedule Start Time of an audit process defined on one appliance and exported
to another (unrelated) appliance - In the case that the original schedule start
time is defined, it is retained. If the original schedule start time is not defined
(empty), then the imported schedule start time is set to the time it was imported.
v When you export a datasource that uses an open source driver, the driver is
not included in the export. Upload the open source driver to the new system
before importing the datasource definition that was created with it;
otherwise, the DataDirect driver is substituted for the open source driver
when the definition is imported.
v Large, complex imports can take a very long time and can exceed the length
of the user's session. If the session times out, the import continues to run
in the background until it completes.
v When you export the definition of classifier policies, any custom evaluation
classes associated with the policies are not exported with the definition.
For the imported policies to work, the custom evaluation classes must be
uploaded separately.
v Exporting and importing definitions between different languages does not
work. For example, exporting a file from a Guardium system set to Simplified
Chinese and importing it into an English-language Guardium system is not
successful.
Optim Designer can convert data values for various purposes and through various
means. In the core Optim runtime (z/OS and Distributed) this is achieved through
the invocation of data privacy functions that are declared within column maps. In
Optim Privacy this is specified, by the user, as the application of a data privacy
policy on an attribute, referenced by an entity within a data access plan.
Note: XACML imports from previous versions of Guardium are not supported.
To Import an XACML file from another Guardium system or Optim Privacy, open
the Definitions Import by clicking Manage > Data Management > Import.
Importing Groups
When you import a group that already exists, members may be added, but no
members will be deleted.
Importing Aliases
When you import aliases, new aliases may be added, but no aliases will be
deleted.
When a definition is created, the user who creates it is saved as the owner of that
definition. The significance of this is that if no security roles are assigned to that
definition, only the owner and the admin user have access to it.
In addition, imported user definitions are disabled. This means that imported users
can receive email notifications that are sent from the importing system, but they
are not able to log in to that system, unless and until the administrator enables that
account.
Duplicate Group and User Implications
If a group that is referenced by an exported definition exists on the
importing system, the definition of that group from the exporting system is
not imported. This can create confusion if the group is not used for the same
purposes on both systems.
If a user definition exists on the importing system, it may not be for the same
person that is defined on the exporting system. For example, assume that on the
exporting system the user jdoe with the email address john_doe@aaa.com is a
recipient of output from an exported alert. Assume also that on the importing
system, the jdoe user already exists for a person with the email address
jane_doe@zzz.com. The exported user definition is not imported, and when the
imported alert is triggered, email is sent to the jane_doe@zzz.com address. In
either case, when security roles or user definitions are not imported, check the
definitions on both systems to see if there are differences. If so, make the
appropriate adjustments to those definitions.
Role
Security Assessment
User
Users database mapping
Users database permission
Users Hierarchy
Export Definitions
1. Open the Definitions Export pane by clicking Manage > Aggregation &
Archive > Export.
2. Select an option from the Type menu. The Definitions to Export menu will be
populated with definitions of the selected type.
3. Select all of the definitions of this type to be exported.
Note: Do not export a Policy definition whose name contains one or more
quote characters. That definition can be exported, but it cannot be imported. To
export such a definition, make a clone of it, naming the clone without using
any quote characters, and export the clone.
4. Click Export. Depending on your browser security settings, you may receive a
warning message asking if you want to save the file or to open it using an
editor.
5. Save the exported file in an appropriate location.
Import Definitions
1. Open the Definitions Import pane by clicking Manage > Aggregation &
Archive > Import.
2. Click Browse to locate and select the file.
3. Click Upload. You are notified when the operation completes and the
definitions contained in the file are displayed. Repeat to upload additional files.
4. Use the Fully synchronize group members checkbox to set the behavior of
how to add new group members imported directly or via other datasets such
as queries or policies. If not checked, new members that are in the import are
added, but members not in the import are not removed. If checked, then group
members not in the import are removed. Use the Set as default button next to
the checkbox to save the checkbox setting.
5. Click Import this set of Definitions to import a set of definitions, or click
Remove this set of Definitions without Importing to remove the uploaded file
without importing the definitions.
6. You will be prompted to confirm either action.
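The semantics of the Fully synchronize group members checkbox described above reduce to a set operation; this sketch is illustrative only:

```python
def merge_group_members(existing: set, imported: set, fully_synchronize: bool) -> set:
    """Model of the 'Fully synchronize group members' checkbox:
    unchecked -> imported members are added, existing ones are kept;
    checked   -> members absent from the import are removed."""
    return set(imported) if fully_synchronize else set(existing) | set(imported)
```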
Distributed Interface
Use this configuration screen to define the Distributed Interface and upload the
Protocol Buffer (.proto) file to the DIST_INT database.
From this database, Query Domain metadata is built automatically. After the
metadata is built, the user can go to the Custom Domain Builder to modify or
clone the data and build custom reports. The distributed interface data uses
protocol buffers. Protocol buffers are a flexible, efficient, and automated mechanism
for serializing structured data.
For Universal Feed type 3, upload the protocol definition file for configuration of
DIST_INT database by clicking Manage > Aggregation & Archive > Distributed
Interface.
Note: Click Maintenance to manage the table engine type and table index. The
table engine types (InnoDB and MyISAM) appear for all universal feed tables
because the data stored in the Guardium internal database is MySQL-based.
enum Data_type {
    DOUBLE = 1;
    LONG = 2;
    INT = 3;
    FLOAT = 4;
    DATE = 5;
    BOOLEAN = 6; // convention is to store it as 0 and 1 in the double_value
    STRING = 7; // stored in string_value
}
optional Data_type dataType = 5;
optional string unit = 6; // unit for the value
}
message AssetRelationEvent {
    optional AssetRelationID unique_key__ = 1;
    required string relationshipType = 2;
    repeated RelationshipProperty property = 3;
    optional bool deleted = 4;
}
message RelationshipProperty {
    optional RelationPropertyID unique_key__ = 1;
    optional string value = 2;
}
message RuleEvent {
    optional string ruleName = 1;
    optional bool enabled = 2;
}
// --- Metadata --- All unique identifiers must be defined here
message Identifier {
    optional InfoPropertyID infoPropertyId = 1;
    optional MetricPropertyID metricPropertyId = 2;
    optional AssetID assetId = 3;
    optional AssetRelationID assetRelationId = 4;
    optional RelationPropertyID relationshipPropertyId = 5;
}
Note: You cannot remove a class that is in use by some other component (the
installed policy, for example).
No key file is needed if an S-TAP has been installed on the SQL Server and
configured to handle encryption. This is the recommended and most common way
of configuring an S-TAP agent for MS SQL Server. To determine if an S-TAP is
configured to handle encrypted MS SQL Server traffic:
1. Open the S-TAP Control by clicking Manage > Activity Monitoring > S-TAP
Control.
2. Expand the Details pane for the S-TAP agent on the MS SQL Server host.
3. Verify that the SQL Server TAP Decrypted property has been set to either SSL
Only or Kerberos and SSL.
4. If the SQL Server TAP Decrypted property has been set to None, we recommend
changing that setting to either SSL Only or Kerberos and SSL.
Note: After changing the SQL Server TAP Decrypted property, you must restart
the S-TAP and the MSSQL Monitor service for the change to take effect.
If for some reason you are not permitted to change the SQL Server TAP
Decrypted setting, use this procedure to upload a key file from the server.
If no S-TAP has been installed, or if it has been installed but is not configured to
handle encrypted SQL Server traffic, a key file is required to monitor SQL Server
traffic under the following conditions:
v If the server is configured using the force protocol encryption option.
v If the server in a SQL Server 2005 environment uses encrypted login sessions
with SQL Server mixed authentication.
Since a single Guardium system may be monitoring multiple SQL Server instances,
you may need to upload multiple key files. To upload a key file to the Guardium
system:
1. Click Setup > Tools & Views > Upload Key File.
2. Click Browse to locate the key file you want to upload.
Note: The key file name must be the fully qualified domain name of the SQL
Server. The key file cannot be renamed; it must be created with that name.
3. Click Upload Key File. You will be informed of the results of the operation.
How to install an appliance certificate to avoid a browser SSL
certificate challenge
Use IBM Security Guardium CLI commands to create a certificate signing request
(CSR), and to install server, certificate authority (CA), or trusted path certificates on
your Guardium system.
See Certificate CLI Commands for more information on all the certificate
commands.
Note: One prerequisite is that you must provide a public certificate from the
CA you will be using to sign your certificates (Verisign, Thawte, Geotrust,
GoDaddy, Comodo, within your company, and so on).
Note: Guardium does not provide CA services and will not ship systems with
different certificates than the one installed by default. A customer that wants their
own certificate will need to contact a third-party CA.
Procedure
1. Have available the public certificate from the CA (certificate authority)
you will be using to sign your certificates (from Verisign, Thawte,
Geotrust, GoDaddy, Comodo, in-house, and so on).
2. Log into the CLI on the individual Guardium system you wish to have a
signed certificate on.
Before executing the command, obtain the appropriate certificate (in PEM
format) from your CA, and copy the certificate, including the Begin and End
lines, to your clipboard.
3. Enter the command, store trusted certificate. The following prompt will be
displayed:
What is a one-word alias we can use to uniquely identify this certificate?
Enter a one-word name for the certificate and press Enter.
The following instructions will be displayed:
Please paste your CA certificate, in PEM format. Include the BEGIN and END lines, and then press CTRL-D.
Paste the PEM-format certificate to the command line, and then press CTRL-D.
You will be informed of the success or failure of the store operation.
Now the CA you will sign with is set as trusted on the Guardium system.
4. Next, from the CLI command prompt, type: csr.
Fill in the requested information. If the CN (common name) of the certificate is
not set to the hostname.domain of the box, certificate errors from the browser
will result.
There are no parameters, but you will be prompted to supply the
organizational unit (OU), country code (C), and so forth. Be sure to enter this
information correctly. The last prompt is as follows:
What encryption algorithm should be used (1=DSA or 2=RSA)?
Note: Express Security does not run on Aggregators or Central Managers.
Select Datasources
Open the Express Security Setup by clicking Setup > Tools and Views > Express
Security.
Select the datasources you want to use from the Available Datasources menu, and
move them over to the Selected Datasources menu by using the chevron buttons.
Datasources can be modified directly from this page by adding them to the
Selected Datasources menu and double-clicking them.
These policy choices define the exclusions per groups (users and servers). The
default choice is “no exclusions”, however, the more users and servers that are
monitored, the more processing and data collection that takes place.
The granularity of the policy is chosen by marking the check box for the policy,
and then choosing an option from the menu.
There are two radio buttons for Merge common access requests:
v Yes, maintaining counts - Merge common access requests and maintain counts
(this is the default and is also known as “Audit only”)
v No - log full detail
Note: Click the Groups icon to modify members of selected groups. Groups must
be defined by the admin user.
Alerting Options
An alert is a message that indicates that an exception or policy rule violation was
detected. These choices specify how to handle exceptions or policy rule violations.
They also define how to transmit the message.
Each rule in a policy defines a conditional action. The condition that is tested can
be a simple test, for example, it might check for any access from a client IP address
that does not belong to an Authorized Client IPs group. Or the condition tested
can be a complex test that considers multiple message and session attributes
(database user, source program, command type, time of day, etc.), and it can be
sensitive to the number of times the condition is met within a specified timeframe.
See Policies.
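A rule of the simple kind described — a group-membership test combined with an occurrence threshold within a timeframe — can be sketched as follows (the names and thresholds are illustrative, not Guardium's rule engine):

```python
from datetime import datetime, timedelta

def rule_fires(client_ip: str, authorized_ips: set,
               prior_hits: list, now: datetime,
               threshold: int = 3, window_minutes: int = 60) -> bool:
    """Fire when an unauthorized client IP is seen at least `threshold`
    times (counting this event) within the time window."""
    if client_ip in authorized_ips:
        return False                      # access from an authorized client
    window = timedelta(minutes=window_minutes)
    recent = [t for t in prior_hits if now - t <= window]
    return len(recent) + 1 >= threshold   # include the current event
```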
Selecting a policy in this section of the menu screen takes all the rules from the
particular policy and appends them to the rules that Express Security Setup has
collected in the other sections of the total menu screen. The user does not have to
select additional policies in the Add Policy rules section of this total menu screen.
The Express Security menu choices list rules from predefined or customized
policies. The following examples are Guardium predefined policies:
v Copy all rules from policy
– Allow all
– Basel II
– Data privacy
– Data privacy - PII
– HIPAA PCI
– PCI Oracle EBS
– PCI SAP
– Privileged users monitoring (black list)
– Privileged users monitoring (white list)
– SOX
– SOX Oracle EBS
– Vulnerability and threats management
Assessments
The security assessment function scans the database infrastructure for
vulnerabilities and provides evaluation of database and data security health, with
real time and historical measurements. For further information on this procedure,
see Vulnerability Assessment.
v MySQL
v Netezza®
v Oracle
v PostgreSQL
v Sybase
v Teradata
Auditing Of
This section of the Express Security menu screen includes selecting additional
policies that result in a selective audit policy.
To completely control the client traffic that is logged, a policy can be defined as a
selective audit trail policy. In this type of policy, audit-only rules and an optional
pattern identify all of the client traffic to be logged. For further information see
Using_Selective_Audit_Trail in Policies.
The Express Security menu choices are as follows; use the available drop-down
menus to see the policy group choices for each item:
v Privileged users
v Data definition language (DDL) commands
v Administrative commands
v Data manipulation language (DML)
v SELECT commands
v EXECUTE commands
Get sign-off from: access role, admin role, user-defined role, accessmgr or admin
user.
After checking off selections, click Install to install and save the policy choices, or
Save to save the choices without installing the policy choices.
A Done message appears when the choices are successfully saved. When the
installation finishes, another menu screen appears that defines the schedule
on which the audit runs. The choices are day, week, or month, and each
requires specific times.
The details of an Installed Policy can also be seen by clicking Setup > Tools and
Views > Policy Installation.
Clicking the Revert button reopens the scheduling page so that the schedule can
be removed from this process.
Note: A comment field is available after the Express Security Setup is saved.
GRC Heatmap
This high-level management report shows a snapshot of the current state of the
Guardium system in terms of three areas that matter most: Governance, Risk, and
Compliance (GRC). Open the GRC Heatmap by clicking Setup > Tools and Views
> GRC Heatmap.
The GRC Heatmap allows you to quickly check on the most pertinent security
areas of your environment. There are 16 focus areas organized by Governance,
Risk, and Compliance, and color coded based on the level of activity for each. Each
area has a title and short description for what it reports on. Double-clicking on the
area produces a drill-down tabular report with full details.
Compliance has two rows - the first for the database environment and the second
for the individual unit (for example, whether data is being backed up or not).
Risk Management is the process by which an organization sets the risk tolerance,
identifies potential risks and prioritizes the tolerance for risk based on the
organization’s business objectives. Compliance is the process that records and
monitors the policies, procedures and controls needed to enable compliance with
legislative or industry mandates as well as internal policies.
Table 17. Speedometer Views of GRC Heatmap

Governance: Active audit processes pending | Processes with results | Pending to-do list items | Open incidents

Risk: Unpatched databases | Critical tests failed | Access violations | Classification violations

Compliance: Policy installed? | Non-assessed data sources | Unmonitored servers | Inactive S-TAPs
Timeframe: three months

Compliance (self): Data archiving performed? | Results archiving performed? | Data purged? | Backups performed?
Color-coding: Green >0, Red =0
Data used: number of successful data archives, results archives, data purges, or backups performed
Timeframe: one month (one week for backups performed)
Self Monitoring
The Guardium solution monitors itself to minimize disruptions and correct
problems automatically whenever possible.
more detailed effort to provide higher levels of granularity. A specific query
builder has been created (VA Test Tracking) to report on tests that are available
for security assessments.
v Alerts - In addition to building reports, a user can define an alert against those
reports through defined thresholds--indicating an exception or policy rule
violation. These alerts can either be real-time or determined through historical
analysis. These alerts can then trigger notification to users through SMTP, SNMP,
syslog, or a custom Java™ class.
v Self-Monitoring Utility - Guardium has implemented an internal self-monitoring
daemon (an always-running service utility) on collectors and aggregators that
wakes up every 5 minutes and performs a system scan, checking components for
optimal configuration and operational effectiveness, and repairing them when
necessary. For example, if the utility finds the web server down, it first
validates a complete shutdown of the service, restarts the service, and then
alerts an administrative user.
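The probe-then-repair behavior described above can be sketched generically. This is an illustrative shell sketch, not the appliance's actual implementation; the check_component function and its probe/repair arguments are invented for demonstration.

```shell
#!/bin/sh
# Generic probe-then-repair pattern, as described for the self-monitoring
# utility: if a component fails its health probe, run a repair action and
# report the result. Illustrative only.
check_component() {  # usage: check_component NAME PROBE_CMD REPAIR_CMD
  name=$1; probe=$2; repair=$3
  if eval "$probe" >/dev/null 2>&1; then
    echo ok
  else
    echo "WARN: $name failed its probe; attempting repair" >&2
    eval "$repair"
    echo repaired
  fi
}

check_component "web service" true  ':'   # probe succeeds; prints: ok
check_component "web service" false ':'   # probe fails; prints: repaired
```

In the real utility, the probe would be something like a check that the web service responds on its port, and the repair a validated shutdown followed by a restart.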
Components Monitored
Table 19. Components Monitored
Components Monitored
System
Disk space (% full)
See the System Monitor for more information - Manage > System View > System Monitor
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use
CPU Load
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor. Refer to the System Monitor for more information.
Open the System Monitor by clicking Manage > System View > System Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use
Memory Usage
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor. Refer to the System View for more information.
Open the System Monitor by clicking Manage > System View > System Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Open the System Monitor by clicking Manage > System View > System Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Login
domain and Guardium Users Login entity to create alerts
Self-Monitoring: Is in use
CPU Usage
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Memory Usage
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use
Identify bottle-necks
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Overload & delays (Queues)
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use
Lost requests
Report: Dropped Requests - click Manage > Reports > Activity Monitoring > Dropped
Requests.
Alert: You can use the Queries and Correlation Alerts, utilizing the Exceptions domain and
Exceptions entity to create alerts
Self-Monitoring: Is in use
Monitored Data
Database types currently monitored
Report: See Daily Monitor > Databases by Type, or see Predefined admin Reports for the
report Databases by Type for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Auto-discovery domain
and Host Configuration entity to create alerts
Packets rates
Report: Select Guardium Monitor > Buffer Usage Monitor
Requests rates
Report: Select Guardium Monitor > Buffer Usage Monitor, or see Predefined admin
Reports for the report Request Rate for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Inspection Engine
Report: See S-TAP Reports
Policy Changes & Policy Installations
Alert: See Viewing an Audit Process Definition for alert: Policy Changes Alert - alert once a
day on policy related changes
Failed Logins
Report: Select Guardium Monitor > Logins to Guardium, or see Predefined admin Reports
for the report Logins to Guardium or the report Admin User Logins for more information
Alert: See Viewing an Audit Process Definition for the alert Failed Logins To Guardium -
alerts if there are more than 5 failed logins in the last 11 minutes. Or select Tools >
Report Building > drop-down Report Title: Guardium Logins; see Reports for additional
information
Creation/Deletion of Users/Roles
Report: Select Guardium Monitor > User Activity Audit Trail, or see Predefined admin
Reports for the report User Activity Audit Trail for more information
Alert: See Viewing an Audit Process Definition for alert: Guardium - Add/Remove Users -
alert on any Addition or Removal of Guardium User
Permissions monitoring
Aggregation / Archive
Activity Log
Report: See Reporting on Aggregation and Archiving Activity
Alert: See Viewing an Audit Process Definition for alert: Aggregation/Archive Errors - alert
on any aggregation/archive error, runs once a day
Resolution -- Success/failure
Report: See Reporting on Aggregation and Archiving Activity
Alert: See Viewing an Audit Process Definition for alert: Aggregation/Archive Errors - alert
on any aggregation/archive error, runs once a day
CPU Usage
Report: You can use Reports, utilizing the Sniffer Buffer domain and Sniffer Buffer Usage
entity to build a report
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Memory Usage
Report: You can use Reports, utilizing the Sniffer Buffer domain and Sniffer Buffer Usage
entity to build a report
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Queries Performance
Report: You can use Reports, utilizing the Access domain and Full SQL entity to build a
report
S-TAP
Status: up/down/synchronizing
Report: See S-TAP Reports, or see Predefined admin Reports for the report S-TAP Status
Monitor for more information
Alert: See Viewing an Audit Process Definition for the alert Inactive S-TAPs Since -
alerts if there are inactive S-TAPs
Number of S-TAPs inactive in the last hour (based on the Inactive S-TAP Since report)
Report: See Quick Start > GRC Heatmap > Compliance > Inactive S-TAP
When defining a custom query, go to the upload page and click Check/Repair to create the
custom table in the CUSTOM database; otherwise, saving the query will not validate it.
This table loads automatically from all remote sources. A user cannot select which
remote sources are used - it pulls from all of them.
Based on this custom table and custom domain, there are two reports:
Enterprise S-TAP view shows, from the Central Manager, information on an active S-TAP
on a collector and/or managed unit. (If there are duplicates for the same S-TAP engine,
one active and one inactive, the report uses only the active one.)
Detailed Enterprise S-TAP view shows, from the Central Manager, information on all active
and passive S-TAPs on all collectors and/or managed units.
If the Enterprise S-TAP view and Detailed Enterprise S-TAP view look the same, it is
because only one S-TAP on one managed unit is being displayed. The Detailed Enterprise
S-TAP view would look different if more S-TAPs and more managed units were involved.
These two reports can be chosen from the TAP Monitor tab of a standalone system, but
they will display no information.
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP -
alert on any activity related to inspection engine and S-TAP configuration
CAS
Status: up/down
Report: See CAS Status
Template changes
Report: CAS Templates - click Manage > Change Monitoring > CAS Templates.
Alert: See Viewing an Audit Process Definition for alert: CAS Template Changes - alert on
any CAS Template configuration
CAS Event
Report: You can use Reports, utilizing the CAS Host History domain and Host Event entity
to build a report
Guardium nanny process
The Guardium nanny is an internal process that monitors the system's critical
resources and alerts when potential problems are emerging. Nanny alerts go to
syslog, can be forwarded and sent as emails to the administrator, and in some
cases trigger remedial actions.
The nanny watches key components and critical resources within the Guardium
system—guaranteeing their availability and reliability. These resources and
components include:
v Web service monitoring - service port (default 8443) not responding or tomcat
service is not up
– syslog message
– mail admin
– will issue restarts of the web service
v Inspection Engine activity - snif overloaded, not responding, or failure
– syslog message
– mail admin
– mail guardium support (optional)
– will try to fix the problem by restarting the snif under certain conditions
– will try to respawn the snif if the process dies
v Diskspace utilization - alerts when > 75% on the critical partitions
– syslog message
– alert admin
– will perform preventive action by cleaning temporary files when over 95%
v Failed login (ssh) to the appliance - checks for ssh daemon's messages and alerts
on failed ssh login attempts
– mail admin (it's already in syslog)
v Monitor internal database (TURBINE) - verify service is up, status, and capacity
utilization monitoring
– syslog message
– mail admin
– restart service
v File System utilization - every five minutes, Nanny.pl checks file system at /var,
warning alert when > 75% in the /var directory, critical alert and services
stopped when >90% in /var directory
– syslog message
– alert admin
– Admin clean-up required, using CLI commands: show filesystem usage, clear
filesystem dir, and restart stopped_services
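The /var thresholds in the last bullet can be expressed as a simple check. This is a minimal sketch with an invented helper name (check_var_usage); the real Nanny.pl is internal to the appliance and is not user-configurable like this.

```shell
#!/bin/sh
# Thresholds from the description above: warning when /var is more than 75%
# full, critical (services stopped) when more than 90% full. Illustrative only.
check_var_usage() {  # usage: check_var_usage PERCENT_FULL
  if [ "$1" -gt 90 ]; then
    echo "CRITICAL: /var at $1% (services would be stopped)"
  elif [ "$1" -gt 75 ]; then
    echo "WARNING: /var at $1%"
  else
    echo "OK: /var at $1%"
  fi
}

# In practice the percentage would come from something like:
#   df -P /var | awk 'NR==2 {sub(/%/, ""); print $5}'
check_var_usage 80   # prints: WARNING: /var at 80%
check_var_usage 95   # prints: CRITICAL: /var at 95% (services would be stopped)
```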
These alerts notify users of issues that may affect system performance, such as
CPU utilization, database disk space, inactive S-TAPs, and no-traffic situations.
The Sniffer Buffer Usage domain is the basis for most of the following alerts.
Create a query using the Sniffer Buffer Usage domain with the columns and fields
as shown; there are no conditions.
High CPU Utilization
Using the Enterprise Buffer Usage domain, create an alert to monitor system CPU
utilization. Here is an example of a query for CPU utilization which exceeds 75%.
Note: The Sniffer buffer usage domain is populated once a minute, so there are
1440 entries in a 24-hour period.
Database Disk Space Alerts
Use the Query Builder to build two reports (they are similar) and two alerts - one
for the collector and the other for the aggregator, since the database size is fixed
on the collector but dynamic on the aggregator (up to the size of the /var partition).
1. Set up a new alert in the Alert Builder. Open the Alert Builder by clicking
Protect > Database Intrusion Detection > Alert Builder.
Collector Disk Space Alert
Repeat the previous steps to create an alert for monitoring disk space on the
collectors.
1. Create a Query.
2. Use the Alert Builder to set up a new alert.
For S-TAPs configured with a primary and secondary collector, if the S-TAP cannot
communicate with the primary (for example, due to network issues), it will fail
over to the secondary. If the former primary collector cannot ping the S-TAP, it
then generates an inactive S-TAP alert.
No Traffic Alerts
This alert checks for traffic from an active inspection engine, from which the
collector previously received traffic, AND for traffic that is processed by the policy.
If both conditions are not satisfied within 48 hours, an alert will be generated.
As a general rule, avoid invoking ad-hoc queries/reports on the collector with time
spans > 1 hour. Large/long running queries should be invoked on the aggregator
and are best scheduled using the Audit Process.
The following two reports should be scheduled, from the Central Manager, to run
weekly on each collector.
Using the Sniffer Buffer Usage domain, create a report with the following fields:
This report displays the key parameters for ALL STAPs and inspection engines for
a given collector. The report cannot be modified but can be run on each collector,
or from the Central Manager pointing to each collector in turn, or scheduled via
the Audit process on each collector.
When querying, a value of -1 (minus one) indicates a NULL in the database. The
table at the end of this section lists the available SNMP OIDs.
SNMP Examples
From a Unix session, you can display SQL Guard SNMP information using the
snmpget or snmpwalk commands. (Use snmpget -h or snmpwalk -h to display
command syntax.) Various UI-based software packages are available for displaying
SNMP information. Those alternatives are not described here.
Table 20. SNMP Examples
SNMP Examples
Disk space used and available:
> snmpget -v 2c -c guardiumsnmp a1.corp.com UCD-SNMP-MIB::dskAvail.1
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 1043856
> snmpget -v 2c -c guardiumsnmp a1.corp.com UCD-SNMP-MIB::dskUsed.1
UCD-SNMP-MIB::dskUsed.1 = INTEGER: 914856
> snmpwalk -v 2c -c guardiumsnmp a1.corp.com ssCpuRawNice
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 11
Note: Adding the RawUser, RawSystem, and RawNice numbers provides a good
approximation of total CPU usage.
> snmpwalk -v 2c -c guardiumsnmp a1.corp.com ssCpuRawIdle
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 26734332
.1.3.6.1.4.1.2021.9.1.7.1 Disk space available in / directory UCD-SNMP-MIB::dskAvail.1
.1.3.6.1.4.1.2021.9.1.7.2 Disk space available in /var directory UCD-SNMP-MIB::dskAvail.2
.1.3.6.1.4.1.2021.9.1.8.1 Disk space used in / directory UCD-SNMP-MIB::dskUsed.1
.1.3.6.1.4.1.2021.9.1.8.2 Disk space used in /var directory UCD-SNMP-MIB::dskUsed.2
.1.3.6.1.2.1.25.2.3.1.5.1 Total memory available HOST-RESOURCES-MIB::hrStorageSize.1
.1.3.6.1.2.1.25.2.3.1.6.1 Memory in use HOST-RESOURCES-MIB::hrStorageUsed.1
.1.3.6.1.4.1.2021.8.1.101.1 Open monitored session count UCD-SNMP-MIB::extOutput.1
.1.3.6.1.4.1.2021.8.1.101.2 Requests logged by the current sniffer process (set to zero on each restart) UCD-SNMP-MIB::extOutput.2
.1.3.6.1.4.1.2021.8.1.101.3 UCD-SNMP-MIB::extOutput.3
.1.3.6.1.4.1.2021.8.1.101.4 Last construct timestamp UCD-SNMP-MIB::extOutput.4
.1.3.6.1.4.1.2021.8.1.101.5 Memory used by the sniffer process UCD-SNMP-MIB::extOutput.5
.1.3.6.1.4.1.2021.8.1.101.7 Packets in on ETH1 / out on ETH2; usually only one number (inbound) when a SPAN port or TAP is used UCD-SNMP-MIB::extOutput.7
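The earlier note suggests approximating total CPU usage by summing the RawUser, RawSystem, and RawNice counters. Because these are cumulative tick counters, a utilization percentage comes from the difference between two samples. A sketch with made-up figures (the snmpget command in the comment matches the examples above):

```shell
#!/bin/sh
# Each figure would in practice come from a command such as:
#   snmpget -Oqv -v 2c -c guardiumsnmp a1.corp.com UCD-SNMP-MIB::ssCpuRawUser.0
# (and likewise for ssCpuRawSystem.0, ssCpuRawNice.0, and ssCpuRawIdle.0).
# The numbers below are illustrative only.
busy1=1000; idle1=9000   # sample 1: RawUser+RawSystem+RawNice, and RawIdle
busy2=1400; idle2=9600   # sample 2, taken some interval later

# Utilization over the interval = delta busy / (delta busy + delta idle)
pct=$(( 100 * (busy2 - busy1) / ( (busy2 - busy1) + (idle2 - idle1) ) ))
echo "Approximate CPU utilization: ${pct}%"   # prints: Approximate CPU utilization: 40%
```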
Open the Running Query Monitor by clicking Manage > Activity Monitoring >
Running Query Monitor.
We do not recommend setting the Query Timeout higher than the default setting
(180 seconds) for an extended time. If you set this limit higher, it increases the
chances of overloading the system with ad-hoc reporting activity.
Groups
Using groups makes it easy to create and manage classifier, policy, and query
definitions, as well as to roll out updates to your S-TAPs and GIM clients. Rather
than repeatedly defining a set of data objects for an access policy, put the
objects into a group to manage them easily.
Groups Overview
Group together similar data objects and use them in creating query, policy, and
classification definitions. Use one of the many predefined groups, or create your
own group using the Group Builder.
There are many places where groups are practical to use. By grouping together
similar data objects, you can use the whole set of objects in policies, classifications,
queries, and reports, rather than having to select multiple data objects individually.
If you need to make changes to a query or policy, rather than applying those
changes to each individual object, you can apply those changes to the group.
S-TAPs and GIM also use groups to make it easier to roll out updates across
managed servers.
Group Builder
The Group Builder allows you to create a new group or modify an existing group
from the user interface.
The Group Filter screen allows you to easily sort through groups based on
application type, group type, description or category.
Types of groups
The field Group Type refers to the type of data that will be grouped together. For
example, Server IP expects data arranged as an IP address and Users expects to see
names of users on the application.
Tuple groups
A tuple group allows multiple attributes to be combined to form a single
composite group member. An ordered set of three values is called a 3-tuple; an
n-tuple has n value attributes. Tuples simplify the specification of conditions
for reporting and policy rules.
Predefined groups
There are a number of predefined groups that are included with Guardium. Use
the Group Filter and Group Type menu to browse the list of groups and find the
one that best suits your needs.
Group types DB User/DB Password are by default only available to admin users.
Modify the group roles if you want to change this default setting.
In some cases you may want to define a set of groups so that each member
belongs to only one group. For example, suppose that for reporting purposes you
need to group database users into one of two groups: employees or consultants.
You would define each of those groups with the same sub-group type
(Employee-Status, for example). When sub-groups are used, the system will not
allow you to add a member to a sub-group if that member has already been added
to another group with the same sub-group type.
Wildcards in members
Group members can include wildcard (%) characters, which take effect when the
group is used in a query condition or policy rule.
Table 22. Wildcards in members
Member   Matches                  Does NOT Match
aaa%     aaa, aaazzz              zzzaaa, aaz
%bbb     bbb, zzbbb               bb, bbbzzz
%ccc%    ccc, ccczz, zzzccczzz    cc
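The semantics of the % wildcard can be illustrated by translating it to the shell's * glob. This is a hedged sketch (the matches helper is invented for illustration); note that, unlike Guardium's LIKE comparison, shell pattern matching is case sensitive.

```shell
#!/bin/sh
# Translate Guardium's % wildcard to the shell's * and test a value against
# the pattern with case. Illustrative only; shell matching is case sensitive.
matches() {  # usage: matches PATTERN VALUE -> prints yes or no
  pat=$(printf '%s' "$1" | sed 's/%/*/g')
  case "$2" in
    $pat) echo yes ;;
    *)    echo no ;;
  esac
}

matches 'aaa%'  aaazzz      # prints: yes
matches 'aaa%'  zzzaaa      # prints: no
matches '%bbb'  bbbzzz      # prints: no
matches '%ccc%' zzzccczzz   # prints: yes
```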
Queries
Queries use conditional operators with groups. Here are examples of each
conditional operator:
v IN GROUP - If the value matches any member of the selected group, the
condition is true. IN ALIASES GROUP works on a group of the same type as IN
GROUP, but assumes that the members of that group are aliases. Note that the IN
GROUP and IN ALIASES GROUP operators expect the group to contain actual values
or aliases, respectively. The Query Builder looks for records with database
values matching the alias values in the group.
v NOT IN GROUP - If the value does not match any member of the selected group,
the condition is true. NOT IN ALIASES GROUP works on a group of the same type
as NOT IN GROUP, but assumes that the members of that group are aliases.
v IN DYNAMIC GROUP - If the value matches any member of a group that will be
named as a run-time parameter, the condition is true. IN DYNAMIC ALIASES GROUP
works on a group of the same type as IN DYNAMIC GROUP, but assumes that the
members of that group are aliases.
v NOT IN DYNAMIC GROUP - If the value does not match any member of a group that
will be named as a run-time parameter, the condition is true. NOT IN DYNAMIC
ALIASES GROUP works on a group of the same type as NOT IN DYNAMIC GROUP, but
assumes that the members of that group are aliases.
Note: The group may contain either aliases or actual values, according to the
operator used; IN GROUP and IN ALIASES GROUP cannot be used at the same time.
v LIKE GROUP - If the value is like any member of the selected group, the
condition is true. This condition enables wildcard (%) characters in the group
member names.
Note: A like member value uses one or more wildcard (%) characters, and
matches all or part of the value. For a like comparison, alphabetic characters are
not case sensitive. For example, %tea% would match tea, TeA, tEam, or steam.
When creating a rule as part of a policy, groups simplify the process of specifying
the parameters you want.
Anywhere there is a Group drop-down menu on the rule definition pane you can
select a group.
Further, if you want to create or modify a group on the fly, click the Groups icon
to open a Group Definition window and make your desired changes.
For example: if you want to capture activity occurring on your production servers,
rather than typing in full IP addresses each time, you could create a group
Production Servers and use that.
Procedure
1. Login to your Guardium system, and open the Policy Builder by clicking
Setup > Policy Builder.
2. Create a new policy by clicking the icon to open the Policy Definition
window.
3. Fill out the policy definition, click Apply to save the policy, and then click
Edit Rules to start adding rules to the policy.
4. Enter a rule description, category, classification, and severity to begin.
5. Specify where to look. From the Server IP row, select the group (Public) PCI
Authorized Server IPs. The rule will apply to all activity from all PCI servers.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder.
2. Click Next to bypass the filter and create a new group.
3. In the Create New Group panel, select an option from the Application Type
menu to determine which application you will use the group with.
4. Enter a unique Group Description for the new group - do not include
apostrophe characters in this field.
5. Select a Group Type Description to choose which type of data you are
grouping.
6. Enter a Category, which is an optional label that you can filter by and use to
group items (that the filter has isolated) of policy violations and reports.
7. Enter a Classification, which is another optional label that you can filter by
and use to group items for policy violations and reporting.
8. Select Hierarchical to create a group of groups; the admin user has access
to it and can pass it along to users of the groups in the hierarchy.
9. Click Add to add the group.
Modifying a group
Make modifications to your group, such as adding a member or changing the
category of the group. Exercise caution when modifying or deleting a group, as
changes made could possibly affect other users or policies.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder.
2. Use the Group Filter to find the group you want to modify, or leave the filter
empty and click Next to look at the complete list of groups.
Select a group from the Group Members list, enter the new category name into the
Category field and click Modify Category to save changes.
Procedure
If you have a new member you want to add to a group, enter the member's name
into the Create & add a new Member named field and click Add.
Note: When adding to a group of objects, valid member names may be an
object_name, a schema.object_name, a wildcard form such as %object_name, or a
combination of these.
The new member is now added to the Group Members list.
Predefined Groups
This section details the predefined groups in Guardium.
The following table describes the predefined groups that are included with your
Guardium system. To view the list of all groups, open the Group Builder by
clicking Setup > Group Builder. Select SQL_APP_NAME from the Applications
menu, and click Next. From the next screen, manage members from Selected
Groups. The term Group Type refers to expectations on the type of data designated
by the label. For example, the group type Server IP expects data arranged as an IP
address (192.168.1.0) and the group type Users expects to see names of users of the
application.
Predefined groups of group type DB User/DB Password are allowed only to users
with the role of admin. Users can, if preferred, add other roles or even make
the groups available to all roles.
Table 23. Predefined Groups
SQL_APP_NAME GROUP_DESCRIPTION MEMBERS
DB2 zOS Groups zOS Audit Dynamic SQL Group Type for DB2 commands
DB2 zOS Groups zOS Audit Query Group Type for DB2 commands
DB2 zOS Groups zOS Audit Updates Group Type for DB2 commands
DB2 zOS Groups zOS Audit Deletes Group Type for DB2 commands
DB2 zOS Groups zOS Audit Inserts Group Type for DB2 commands
DB2 zOS Groups zOS Audit Utilities Group Type for DB2 commands
DB2 zOS Groups zOS Audit Object Group Type for DB2 commands
Maintenance
DB2 zOS Groups zOS Audit User Group Type for DB2 commands
Maintenance
DB2 zOS Groups zOS Audit User Group Type for DB2 commands
Authorization Changes
DB2 zOS Groups zOS Audit DB2 Commands Group Type for DB2 commands
DB2 zOS Groups zOS Audit Plan/ Package Group Type for DB2 commands
Maintenance
IMS™ zOS Groups zOS IMS Audit Query Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Updates Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Deletes Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Inserts Group Type for IMS commands
IMS zOS Groups zOS IMS Audit DB Group Type for IMS commands
Commands
Policy Builder Cardholder Objects Group Type, Objects
Policy Builder Financial Objects Group Type, Objects
Policy Builder PHI Objects Group Type, Objects
Policy Builder Authorized Client IPs Group Type, Client IP
Policy Builder Production Users Group Type, Users
Policy Builder PII Objects Group Type, Objects
Policy Builder Production Servers Group Type, Server IP
Policy Builder Financial Servers Group Type, Server IP
Policy Builder Functional Users Group Type, Users
Policy Builder Sharepoint Servers Group Type, Server IP
Security Assessment Builder DB2 Database Version+Patches Used for (specific) database version and patch level tests.
Informix Database Version+Patches
MySql Database Version+Patches
Netezza Version+Patches
Oracle Database Version+Patches
Postgress Version+Patches
Sybase Database Version+Patches
Teradata PDE Version+Patches
Teradata TDBMS Version+Patches
Teradata TDGSS Version+Patches
Teradata TGTW Version+Patches
Public Administration Objects Privileged Objects, objects that only
DBA or Sys Accounts should access.
These accounts are locked for "public"
by default.
Public Administrative Commands Privileged Commands, privileged
Commands, should be executed only by
DBAs. Examples: GRANT, BACKUP,
DDL commands
Public Administrative Programs Database utilities (clients) that come
with the database, usually reside on the
database server, and could be used by the
server itself
Public ALTER Commands Examples, alter database, alter
procedure, alter profile, alter session,
alter user
Public Application Privileged Public privileged commands that should
Commands be revoked from "public", but not
revoked since they are used by the
application
Public Application Privileged Application Privileged Objects, public
Procedures privileged procedures that should be
revoked from "public" but not revoked
since they are used by the application
Public Application Schema Users Application Users, database user used
by the application to maintain/user the
application tables
Public Archive Candidates Group Type is Objects
Public Authorized Source Group Type is Source Programs
Programs
Public Authorized Users Group Type is Users
Public Connection Profiling List Group Type is Client IP/Src App/DB
User/Server IP/SVC. Name
DW Select Accessed
Objects/Fields
Public EBS App Servers Group Type is Client IP
Public EBS DB Servers Group Type is Server IP
Public EXECUTE Commands Examples, call, execute, execute function
Public GRANT Commands Examples, grant, grant objectives, grant
system privileges
Public Guardium Audit Categories Guardium patches,
for Detailed Reporting TURBINE_USER_GROUP_ROLE
Public ICM App Servers Group Type is Client IP
Public ICM DB Servers Group Type is Server IP
Public ImportLDAPUser Group Type is Objects
Public ImportLDAPUser_bindValues Group Type is Objects
Public Inspection Engine Entities Examples, adminconsole_sniffer,
software_tap_db_client,
software_tap_db_server
Public Java Commands Examples, alter java, create java, drop
java
Public KILL Commands Example, kill
Public Masked_SP_Executions_MS_SQL_SERVER For MS SQL Server, a group that
includes a collection of stored
procedure (SP) names. If an included
procedure is executed, then everything
will be masked, even if in quotes.
Predefined as empty.
Public Masked_SP_Executions_Sybase For Sybase, a group that includes a
collection of stored procedure (SP)
names. If an included procedure is
executed, then everything will be
masked, even if in quotes.
Predefined as empty.
Public MongoDB Skip Commands Group Type is Commands
168 Administration
Table 23. Predefined Groups (continued)
SQL_APP_NAME GROUP_DESCRIPTION MEMBERS
Public MS-SQL Replication Group Type is Objects
Procedures
Public MS-SQL Security System Group Type is Objects
Procedures
Public MS-SQL System Procedures Group Type is Objects
Public Oracle EBS HRMS Sensitive Group Type is Objects
Objects
Public Oracle EBS-PCI Group Type is Objects
Public Oracle EBS-SOX Group Type is Objects
Public Oracle Predefined Users Group Type is Users
Public Peer Association Commands Commands dealing with
links/replications of data, examples,
links, log shipping, replications,
snapshots
Public Peer Association Procedures Peer Association Objects, procedures
dealing with links/replications of data
Replay - Include in Compare
Audit Process Builder Predefined as empty.
Baseline Builder Predefined as empty.
Classifier Predefined as empty.
Express Security Predefined as empty.
Populating groups
After creating a group or finding the one you want to work with, populate the
group with members. Use the Group Builder to add members to a group
manually, or populate the group through one of several automated import methods.
Configure Guardium with your LDAP server, and then import on demand, or
schedule an import in the future.
Note:
If you are scheduling an import, consider any other imports scheduled for the
same time, because a new import can affect the behavior of existing scheduled imports.
Procedure
Configure your LDAP server with your Guardium system. Open the Group
Builder by clicking Setup > Group Builder, and fill out the required information.
1. For LDAP Host Name, enter the IP address or host name for the LDAP server
to be accessed.
2. For Port, enter the port number for connecting to the LDAP server.
3. Select the LDAP server type from the Server Type menu.
4. Check the Use SSL Connection check box if Guardium is to connect to your
LDAP server using an SSL (secure socket layer) connection.
5. For Base DN, specify the node in the tree at which to begin the search. For
example, a company tree might begin like this: DC=encore,DC=corp,DC=root
6. For Attribute to Import, enter the attribute that will be used to import users
(for example: cn). Each attribute has a name and belongs to an objectClass.
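The Base DN in step 5 is a comma-separated list of relative distinguished names (RDNs). As a minimal illustration, not part of the Guardium product, the example DN can be split into its components like this:

```python
# Illustrative sketch only: split an LDAP Base DN such as
# "DC=encore,DC=corp,DC=root" into (attribute, value) pairs.
# This mirrors the DN syntax, not any Guardium internals.

def parse_base_dn(dn):
    """Return the DN's RDNs as a list of (attribute, value) tuples."""
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

components = parse_base_dn("DC=encore,DC=corp,DC=root")
print(components)  # [('DC', 'encore'), ('DC', 'corp'), ('DC', 'root')]
```

Note that real DN parsing (RFC 4514) also handles escaped commas; this sketch ignores escaping for clarity.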
What to do next
172 Administration
v To run the import on demand, click Run Once Now. After the task completes,
the set of members satisfying your selection criteria will be displayed in the
LDAP Query Results panel.
Note:
When you import on demand, you have the opportunity to accept or reject each
entry returned from the LDAP server.
When you schedule an LDAP import, all of the LDAP entries that satisfy your
search criteria will be imported.
Verify that members have been added to a group by selecting the group in the
Group Builder, then clicking Modify, and looking at the group's membership.
For larger groups, it may be easier to verify members by using the Guardium
Group Details report (Reports > Guardium Group Details).
Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With a group selected, click the Populate From Query button to open the
Populate Group From Query Set Up panel.
3. From the Query menu, select the query to be run.
a. Depending on the type of group being populated, different fields will
appear. For most group types, the Fetch Member From Column menu will
appear.
b. For paired attribute groups (Object/Command, Object/Field, or Client
IP/DB User), two menus will appear: Choose Column for Attribute 1 and
Choose Column for Attribute 2.
c. Select the column (or columns) to be used to populate the group, and any
additional parameters for the query. The run-time parameters for the query
will then be added to the pane.
4. Select the Clear existing group members before importing box to delete
existing group content before importing new members.
The Group Builder can automatically populate command or object group types
in two ways:
v By analyzing stored procedure source code. To use this option, Guardium must
access the database on which the stored procedures have been defined, and the
stored procedures must not be stored in encrypted format.
v By analyzing stored procedures in database traffic that has been monitored and
logged by Guardium. To use this option, the Guardium appliance must be
inspecting the appropriate database streams, and logging the information (as
opposed to using ignore session or skip logging actions), and the analysis task
must run while the data is still on the unit (as opposed to, for example, after an
archive/purge operation).
There are two groups involved when populating a group from stored procedures:
v The receiving group is the one to which members will be added.
v The starting group is the one that will be analyzed. This group must be an existing
commands or objects group. The search-and-add process is recursive. For
example, if the stored procedure named prox_one is added to the receiving
group, and prox_one is referenced in prox_two, prox_two will also be added to
the receiving group.
Note: Wildcards are not supported in the group members field for stored
procedures.
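The recursive search-and-add behavior described above can be sketched as a closure computation over a reference graph. The graph and names below are hypothetical, echoing the prox_one/prox_two example; this is an illustration of the semantics, not product code:

```python
# Sketch of the recursive search-and-add process: any procedure that
# references a member is added, and then procedures that reference the
# added procedure are added too, and so on.

def build_receiving_group(starting_members, referenced_by):
    """Return the recursive closure of procedures referencing the starting members.

    referenced_by maps a name to the procedures that reference it.
    """
    receiving = set()
    frontier = list(starting_members)
    while frontier:
        member = frontier.pop()
        for proc in referenced_by.get(member, ()):
            if proc not in receiving:
                receiving.add(proc)
                frontier.append(proc)  # callers of callers are added as well
    return receiving

# prox_one references an object in the starting group; prox_two references prox_one
refs = {"orders_table": ["prox_one"], "prox_one": ["prox_two"]}
print(sorted(build_receiving_group({"orders_table"}, refs)))  # ['prox_one', 'prox_two']
```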
Procedure
1. Open the Group Builder by clicking Setup > Group Builder.
2. Choose a starting group to analyze that is either a commands or objects group
type.
3. With the starting group selected, click Auto Generated Calling Prox. You will
be presented with five options:
a. Using DB Sources: Populate a group by analyzing the stored procedure
definitions from one or more databases.
b. Using Database Dependencies: Populate a group of objects or a group of
qualified objects by analyzing Functions, Java classes, Packages, Procedures,
Synonyms, Tables, Triggers and/or Views.
c. Using Reverse Dependencies: Populate a group by computing a set of
objects used when starting from a set of objects.
Note: The Using Reverse Dependencies option is only available for Oracle.
174 Administration
d. Using Observed Procedures: Populate a group by analyzing the CREATE
PROCEDURE and ALTER PROCEDURE commands as they are observed in the
database traffic.
e. Generate Selected Object: Populate a group by reverse analysis of observed
stored procedures. Starting from a set of stored procedures, compute all the
tables that these procedures use (directly or indirectly).
Note: The Generate Selected Object option can only be used with object
group type.
Guardium will analyze the stored procedure source code on one or more database
servers. Select a group and then run the Auto Generated Calling Prox process to
scan your stored procedures. This process will check the selected group to see if
any of the objects in that group can be accessed or if any of the commands in that
group can be executed. Any matches will be added to a new group. To populate a
group using database sources:
Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
Note: This option can only be used with commands or objects group types.
2. With the group selected, click Auto Generated Calling Prox, and select the
Using DB Sources option. This opens the Analyze Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The
selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters. Some fields only apply to certain
databases.
v For Sybase, MS SQL Server, and Informix, enter a database name to restrict
the operation to that database. If it is blank, all stored procedures in the
master database will be analyzed.
v For MySQL, Oracle or DB2 only, enter a schema name to restrict the
operation to databases owned by that schema. For MySQL only, the Schema
Owner is in the form user_name@host, where host can be a specific IP or it
can be a % to specify all hosts. To get all hosts, enter the schema name
followed by %.
v For MySQL, Oracle or DB2 only, enter a stored procedure name in Object
Name. Wildcard characters may be used. For example, if only interested in
the procedures beginning with the letters ABC, enter ABC% in the Object
Name box.
5. In the Source Detail Configuration section, do one of the following:
When specifying the group type, keep in mind that only Object or Qualified Object
group types work with this option. A qualified object requires five value attributes:
server IP, instance, DB name, owner and object. This is also called a 5-tuple object.
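As a small illustration of the 5-tuple, a qualified object can be modeled as a five-field record; the field values below are made up:

```python
# Illustration only: a qualified object is identified by exactly five
# value attributes, as described above. The sample values are hypothetical.
from typing import NamedTuple

class QualifiedObject(NamedTuple):
    server_ip: str
    instance: str
    db_name: str
    owner: str
    object_name: str

qo = QualifiedObject("192.0.2.10", "ORCL1", "SALES", "APPUSER", "ORDERS")
print(len(qo))  # 5 -- hence "5-tuple object"
```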
Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With the objects or qualified objects group selected, click Auto Generated
Calling Prox, and select the Using Database Dependencies option. This opens
the Analyze Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The
selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters.
5. In the Source Detail Configuration section, do one of the following:
v Add members to an existing group by checking the Append box, and then
selecting a group from the Existing Group Name menu.
v Add members to a new group by entering the new group name in New
Group Name.
Note: Do not include apostrophe characters in a group name, and make sure
that the new group is fully qualified (includes five value attributes: server IP,
instance, DB name, owner and object).
6. Select Flatten namespace to create member names using wildcard characters, so
that the group can be used for LIKE GROUP comparisons. For example, if
sp_1 is discovered, the member %sp_1% will be added to the group, and in a
LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, etc. would all
match.
7. In the Include Types section, select database dependencies: Functions, Java
classes, Packages, Procedures, Synonyms, Tables, Triggers and/or Views.
8. Click Analyze Database to populate the group. You will be informed of the
results.
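The flatten-namespace behavior in step 6 can be sketched by translating a SQL LIKE pattern into a regular expression. This is an illustration of the matching semantics only (% matches any run of characters, _ matches any single character), not product code:

```python
# Sketch of LIKE GROUP matching against a flattened member such as %sp_1%.
import re

def like_match(pattern, value):
    """Emulate SQL LIKE: % = any characters, _ = any single character."""
    regex = "".join(".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
                    for ch in pattern)
    return re.fullmatch(regex, value) is not None

member = "%sp_1%"  # flattened form of the discovered procedure sp_1
for value in ["sp_101", "sp_102", "sss_sp_103"]:
    print(value, like_match(member, value))  # all True
print(like_match(member, "other"))           # False
```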
Generate Selected Object populates the group through reverse analysis of observed
stored procedures.
These options from the Group auto-populate menu compute a set of objects used
when starting from a set of objects. For example, starting from a set of stored
procedures, compute all the tables that these procedures use (directly or indirectly).
Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
The Generate Selected Object option is a part of the Auto Generated Calling Prox
functionality that populates an objects group type through reverse analysis of
observed stored procedures.
Guardium will populate the group by inspecting all changes or additions to stored
procedures. This keeps the mapping information up-to-date through continuous
analysis of changes to stored procedures.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from
the list of all groups.
2. With the starting group selected, click Auto Generated Calling Prox, and
select the Generate selected object option. This opens the Analyze Observed
Stored Procedures panel.
3. To edit an existing configuration, select it from the Source Details menu. To
create a new configuration, click New.
4. In the Access Information section, select all of the database servers to be
analyzed. You can choose any combination of the check-boxes.
5. In the Source Detail Configuration section, enter a name, and choose an
option from the Verb menu.
6. Do one of the following:
v Add members to an existing group by checking the Append box, and then
selecting a group from the Existing Group Name menu.
v Add members to a new group by entering the new group name in New
Group Name.
Security Roles
Security roles are used to grant access to data (groups, queries, reports, etc.) and to
grant access to applications (Group Builder, Report Builder, Policy Builder, CAS,
Security Assessments, etc).
By default, when a component is initially defined, only the owner (the person who
defined it) and the admin user (who has special privileges) are allowed to access
and modify that component.
You can allow other users to access the components you define by assigning
security roles. For example, if you assign a security role named DBA to an audit
process, all users assigned the DBA role will be able to access that audit process.
Note: In order to configure LDAP user import, accessmgr user must have the
privilege to run the Group Builder. In certain situations, when changes are made to
the role privilege, accessmgr's privilege to Group Builder can be taken away. This
results in an inability to successfully save or run the LDAP user import. Go to the
access management portal and select Role Permissions. Choose the Group Builder
application and make sure that there is a checkmark in the all roles box or a
checkmark in the accessmgr box.
Notifications
Use the Alerter and Alert Builder to create notifications. When email or other
notifications are required for alerting actions, follow this procedure for each type of
notification to be defined.
Alerter configuration
1. Before you choose alerting actions, you must configure the email SMTP
settings in the Alerter.
2. Open the Alerter by clicking Protect > Database Intrusion Detection > Alerter.
3. Fill out the SMTP and/or SNMP information.
4. After filling out each section, click Test Connection, and verify that the
connection is working. You will receive a message stating the connection is
unreachable if the connection is not working.
5. Click Apply to save the configuration.
6. At a minimum, IP Address/Host name, port, and return email address must be
specified.
7. Select Mail from the Notification Type menu. If the Severity of the message is
HIGH, the Urgent flag is set.
8. Select a user (which can be an individual or group) from the Alert Receiver
list. Additional receivers for real-time email notification are Invoker (the user
that initiated the actual SQL command that caused the trigger of the policy)
and Owner (the owner/s of the database). The Invoker and Owner are
identified by retrieving user IDs (IP-based) configured by using the Guardium
APIs.
9. Click Add.
Build an alert
1. After configuring the Alerter, open the Alert Builder by clicking Protect >
Database Intrusion Detection > Alert Builder.
2. Fill out the information in the Settings, Alert Definition, Alert Threshold, and
Notification sections and click Apply.
3. Choose who will receive the notifications by clicking Add Receiver... and
choosing a user.
How to create a real-time alert
Send a real-time alert to the database administrator whenever there are more than
three failed logins for the same user within five minutes.
Prerequisites
Configure SMTP in the Alerter. Open the Alerter by clicking Protect > Database
Intrusion Detection > Alerter, and then fill out the SMTP information.
Note: Policy violations can also be seen as a report in Incident Management. See
Policies for complete information.
Procedure
1. Create a policy.
a. Open the Policy Builder by clicking Setup > Tools and Views > Policy
Builder.
b. Click New, or modify an existing policy by selecting the policy from the
Policy Finder and clicking Modify.
c. Fill out the required information and click Apply to save the policy.
2. Add rules to the policy.
a. After saving the policy, click Edit Rules to see the existing policy rules.
b. Click Add Rules... and then you are presented with five rule options.
c. Choose Add Exception Rule and fill out the required information.
The Exception Rule Definition screen begins with the following items:
i. Check the Cont. to next rule check box to continue testing rules once this
rule is satisfied and its action is triggered. If this is not selected, no
additional rules will be tested when this rule is satisfied.
j. Check the Rec. Vals. check box to indicate that when the rule action is
triggered, the complete SQL statement causing that event will be logged and
available in the policy violation report. If not marked, the SQL String
attribute will be empty.
3. Add an action when the rule is triggered.
a. From the Actions section of the Exception Rule Definition screen, click Add
Action.
b. Select an option from the Action menu and click Apply. For this example,
choose ALERT PER MATCH to get a notification every time the rule is
enacted.
c. Select an option from the Notification Type menu. You must configure the
Alerter for mail or SNMP notification types.
d. Add an alert receiver, and click Apply to save the action.
4. Install the policy.
a. Click Setup > Tools and Views > Policy Installation.
b. Find the policy from the Policy Installer menu, select an installation action,
and click Modify Schedule or Run Once Now. Your policy is now
installed. Your alert receiver will receive real-time notifications when the
policy rules are enacted.
Predefined Alerts
Table describing the predefined alerts found in the Alert Builder.
Guardium comes with a set of predefined alerts that can be found in the Alert
Builder. Open the Alert Builder by clicking Protect > Database Intrusion Detection
> Alert Builder. When you open the Alert Builder, you are presented with a list of
all existing alerts in the Alert Finder. Select an alert from the finder and click
Modify to edit it.
In the Modify Alert screen, modify any part of the alert, such as receivers or
threshold.
Table 24. Predefined Alerts (continued)
Alert Description
No Traffic Alerts when there is no traffic collected from a specific
database server from which the Guardium system was
collecting traffic at some point during the last 48 hours. The
alert will trigger when there is no traffic within the period
defined in the accumulation interval. (By contrast, the regular
no-traffic alert triggers per server IP if there was no traffic
during the alert interval but there was traffic in the previous
48 hours.)
Policy Changes Alert Alert once a day if there have been any security policy
changes.
Scheduled Job Exceptions Alert every 10 minutes on any scheduled job exception
(including assessment jobs).
Scheduling
The general purpose scheduler is used to schedule many different types of tasks
(archiving, aggregation, workflow automation, etc.).
Depending on the type of task being performed, not all of the features described
here may be available - for example, the schedules for some types of tasks can be
paused, while others cannot be (they can only be stopped or started).
Note: Be aware of scheduling anomalies that can occur when scheduling tasks
during Daylight Savings Time.
Pause a Schedule
Note: Not all types of scheduled tasks provide a pause option.
1. Click Pause.
2. Confirm the action.
Remove a Schedule
After a schedule has been defined, a Remove button appears in the Schedule
Definition panel.
1. Click Define Schedule or Modify Schedule to open the Schedule Definition
panel.
2. Click the Delete button.
Aliases
Create synonyms for a data value or object to be used in reports or queries.
Aliases Overview
An alias is used to display a meaningful or user-friendly name for a data value.
Note: Alias changes made on the Central Manager or on managed units will not be
available on other systems until either the GUI is restarted on those systems or an
alias change is made through their GUI.
IP-to-Hostname Aliasing
One of the more common applications of aliases is to use them as synonyms for IP
addresses. Use this tool to schedule the discovery of client and server IPs and
generate aliases for them.
1. Open the IP-to-Hostname Aliasing tool by clicking Protect > Database
Intrusion Detection > IP-to-Hostname Aliasing.
2. Check the Generate Hostname Aliases for Client and Server IPs (when
available) check box.
3. Check the Update existing Hostname Aliases if rediscovered check box if you
want the tool to continually look for and update hostname aliases.
4. Click Apply to save your configuration, then schedule the operation.
v Click Run Once Now to start the tool immediately.
v Click Define Schedule... to schedule the tool in the future.
v Click Pause to pause the generation of client and server IP aliases.
Alias Builder
Use this method to manually create an alias.
1. Open the Alias Builder by clicking Setup > Tools and Views > Alias Builder.
2. Select the attribute type for which you want to define aliases.
3. Filter your search on that attribute type using the Value and Alias fields and
click Search.
4. If any results match your search, they will display in the value and alias table.
Click Apply for the search results, or add a new alias by specifying a Value
and Alias name, then clicking Add.
Use this method to create an alias for a group on the fly while creating or
populating a group.
1. Open the Group Builder by clicking Setup > Group Builder. Select any group
from the list, and click Modify.
2. Click Aliases... to open the Alias Quick Definition window. Type in an alias for
any group(s), and save the alias by clicking Apply.
Use these GuardAPI commands to create, update and delete alias functions:
v grdapi create_alias
v grdapi update_alias
v grdapi delete_alias
There are two tools that are used to populate date fields: a calendar tool to select
an exact date, and a relative date picker to select a date that is relative to the
current time (now -1 day, for example). In addition, exact or relative dates can be
entered manually.
Be aware that when selecting or entering dates, the date on the system on which
you are running your browser may not be the same as the date on the Guardium
appliance to which you are connected.
Timestamps in Queries
Including a Timestamp attribute value in a query will produce a row for every
value of the Timestamp. This may produce an excessive amount of output. To get
around this, use the count aggregator when including the Timestamp in a query,
and then drill down on a report row, to view the individual Timestamp values for
the items included in that row only, in a drill-down report. See Aggregate Fields in
Queries.
Tip: If your report displays times that are all the same when you expect them to be
different, you have probably included a Timestamp attribute from an entity too
high in the entity hierarchy for the level of detail you want on the report.
Note: The default time for a date selected using the calendar is always 00:00:00
(the start of the day). To specify any other time of day, type over this value,
entering the desired time in 24-hour format: hh:mm:ss, where hh is the hour of
the day (0-23), and mm and ss are minutes and seconds respectively (both
0-59).
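The hh:mm:ss rule in the note (hours 0-23, minutes and seconds 0-59) can be sketched as a small validator. The function name is hypothetical; this is an illustration, not part of the product:

```python
# Sketch of the time-of-day validation rule described in the note.
import re

def is_valid_time_of_day(text):
    """True if text is hh:mm:ss with hh in 0-23 and mm, ss in 0-59."""
    m = re.fullmatch(r"(\d{1,2}):(\d{1,2}):(\d{1,2})", text)
    if not m:
        return False
    hh, mm, ss = (int(g) for g in m.groups())
    return hh <= 23 and mm <= 59 and ss <= 59

print(is_valid_time_of_day("00:00:00"))  # True: the calendar default (start of day)
print(is_valid_time_of_day("24:00:00"))  # False: hours run 0-23
```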
Rather than specify an exact date, it is often more convenient to specify dates
relative to either the current date (now) or some other date (the first Monday, for
example). For example, to always include information from the previous seven
days in a query, it’s more convenient to define relative dates (e.g., start = now
minus seven days and end = now). The Relative Date Picker tool can be used to
select a relative date for many types of tasks.
1. Click the Relative Date Picker button next to any field where a relative date is
allowed. This opens the Relative Date Picker window.
2. Select Now, Start, or End from the list. Regardless of your choice, the display
changes to provide for additional selections.
3. From the middle list, select this, last, or previous, which is relative to the unit
(day, week, month, or day of the week selected in the next list) as follows:
v This is the current unit
v Last is the current unit minus one
v Previous is current unit minus two
4. Select the day, week, month, or a specific day: Monday-Friday.
5. Click the Accept button when you are done. The relative date will be inserted
into the field next to the Relative Date Picker button that was clicked.
To enter a relative date manually, use one of the following formats. The keywords are
not case sensitive, but each component must be separated from the next by one or
more spaces.
There are three general formats you can use to enter a relative date:
v NOW minus a number of hours, days, weeks, or months
v The Start or End of the current, last or previous day, week, or month
v The Past or Previous day of the week (Sunday, Monday, Tuesday, etc.)
Relative to NOW
1. Click in the field where you want to enter the relative date.
2. Enter the keyword NOW.
3. Enter a negative integer specifying the relative number of hours, days, weeks,
or months (no space is allowed between the minus sign and the integer).
4. Enter a keyword for the units used: HOUR, DAY, WEEK, or MONTH. Be aware
that the plural (hours, days, etc.) is not allowed. Example: now -14 day
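The now -14 day example can be resolved to an absolute timestamp as sketched below. The implementation is illustrative only: it mirrors the keyword rules above (case-insensitive, singular units, no space after the minus sign) and omits the MONTH unit, which needs calendar arithmetic rather than a fixed offset:

```python
# Sketch: resolve a manually entered relative date such as "now -14 day".
import re
from datetime import datetime, timedelta

UNITS = {"hour": "hours", "day": "days", "week": "weeks"}  # MONTH omitted for brevity

def resolve_relative(expr, now=None):
    """Turn 'now -N unit' into an absolute datetime."""
    now = now or datetime.now()
    m = re.fullmatch(r"now\s+(-\d+)\s+(hour|day|week)", expr.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported relative-date expression: " + expr)
    amount, unit = int(m.group(1)), m.group(2).lower()
    return now + timedelta(**{UNITS[unit]: amount})

base = datetime(2024, 3, 15, 12, 0, 0)
print(resolve_relative("now -14 day", now=base))  # 2024-03-01 12:00:00
```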
Time Periods
Use the Time Period Builder to create time periods that can be used for policy
rules and query conditions.
When monitoring database activity, use time periods to specify when you want to
monitor. Use the Time Period Builder to create new time periods or modify
existing ones.
Time Periods
Policy rules and query conditions can test for events that occur (or not) during
user-defined time periods.
Comments
Comments apply to definitions and to workflow process results.
Comments can be added or viewed in several places throughout the UI. You can
add a comment to a group or alias for reference purposes, or add a comment to
report to ease auditing requirements. For example, an auditor may want to know
why a configuration change was made on a certain date. Use a comment to easily
reference the reason why the change was made.
Comments apply to definitions (groups, aliases, reports, policies), and to workflow
process results. You can add multiple comments to a component, and you can add
comments to comments, but you cannot modify or delete existing comments.
Report Comments
View a report of all user comments by clicking Comply > Reports > User
Comments.
v The Local Comments entity is used in a Central Manager environment only.
Local comments remain local to the system on which they were defined, and are
not stored on the Central Manager.
v The Comments entity contains comments that are stored on the Central
Manager.
Use this topic to provide visibility and control over patch installation, status and
history.
This how-to topic uses a combination of commands from the CLI and choices from
the GUI to help you install the latest Guardium patch.
Follow these steps from the Guardium system that is designated and configured as
the Central Manager:
1. Back up the system profile, using the CLI command store backup profile.
Procedure
1. Back up the system profile
Using an SSH client, log in to the IBM Security Guardium Central Manager as
the CLI user.
Enter the following command: store backup profile
The following dialog will appear.
Do you want to setup for automatic recovery? (Y/n) Enter the patch backup destination host: Enter
Other related CLI commands for this step
CLI>show backup profile patch backup flag is 1 patch backup automatic recovery flag is 1 patch bac
Use this CLI command if the patch installation failed, the patch revert failed, or
the automatic restore failed or is disabled.
This procedure gets the pre-patch backup file and restores it on the system.
If the pre-patch backup file is currently located on the system, enter the file
name.
Otherwise, the pre-patch backup profile information is used to get the file.
2. Install the patch(es) to the Central Manager
Enter the following command:
CLI>store system patch install [sys | ftp | scp | cd ] <date><time>
The ftp and scp options copy a compressed patch file from a network location
to the Guardium system.
Note that a compressed patch file may contain multiple patches, but only one
patch can be installed at a time. To install more than one patch, choose all the
patches that need to be installed, separated by commas. Internally the CLI
submits requests for each patch on the list (in the order specified by the user)
with the first patch taking the request time provided by the user and each
subsequent patch three minutes after the previous one. In addition, CLI will
check to see if the specified patch(es) are already requested and will not allow
duplicate requests.
The option (sys) is for use when installing a second or subsequent patch from a
compressed file that has been copied to the Guardium system by using this
command previously.
The option (cd) is for use in installing the patch from a DVD disk. To display a
complete list of applied patches, see the Installed Patches report on the
Guardium Monitor tab of the administrator portal. There is also an Available
Patches report on this same Guardium Monitor tab.
Syntax
store system patch install <type> <date> <time>
<type> is the installation type, sys | scp | ftp | cd
<date> and <time> are the patch installation request time, date is formatted as
YYYY-mm-dd, and time is formatted as hh:mm:ss
If no date and time is entered or if “now” is entered, the installation request
time is NOW.
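A sketch combining two details from this section: the request time uses the YYYY-mm-dd and hh:mm:ss formats, and when several patches are requested at once each subsequent patch is scheduled three minutes after the previous one. The patch names and helper function are hypothetical; this only illustrates the scheduling arithmetic:

```python
# Illustration of the request-time format and the three-minute spacing
# between patches submitted in one comma-separated request.
from datetime import datetime, timedelta

def patch_schedule(patches, date_str, time_str):
    """Return (patch, request_time) pairs, spaced three minutes apart."""
    start = datetime.strptime(date_str + " " + time_str, "%Y-%m-%d %H:%M:%S")
    return [(p, start + timedelta(minutes=3 * i)) for i, p in enumerate(patches)]

for patch, when in patch_schedule(["p100", "p101", "p102"], "2024-03-15", "22:00:00"):
    print(patch, when)  # p100 at 22:00, p101 at 22:03, p102 at 22:06
```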
Table 25. Parameters
Name Description
sys
Use this option to apply a second or subsequent patch from a patch file that has
been copied to the IBM Guardium system by a previous store system patch
execution.
Full path to the patch, including name (file name may use wildcard *):
_______________
Enter the scp/ftp port if you need to use a special port, else just press Enter key
to continue:
Leave the terminal open and do not answer any questions until the transfer is
complete.
The backup profile is not set for saving the backup file when patch installation
failed.
If you want to save the backup file, please answer NO to the question and run
CLI command store backup profile to set up the parameters.
1. (name of file)
Install item 1
Patch has been submitted, and will be installed according to the request time,
please check installed patches report or CLI (show system patch installed).
Note: Any operation that generates a file, that the fileserver will access, should
finish before the fileserver is started (so that the file is available for the
fileserver).
Example of fileserver
To start the file server, enter the fileserver command: CLI> fileserver
Starting the file server. You can find it at http://(name of unit)
Press ENTER to stop the file server.
Open the fileserver in a browser window, and do one of the following:
v To upload a patch, click Upload a patch and follow the directions.
v To download log data, click Sqlguard logs, go to the file you want, right-click
on it, and download as you would any other file.
When you are done, return to the CLI session and press Enter to terminate the
session.
3. Using the UI, move the patch(es) from Central Manager to managed units
Central Patch Management
a. Click Setup > Tools and Views > Patch Distribution.
The Patch Distribution button opens a new screen that displays an available
patch list with dependencies and allows you to select a patch and install it
on all selected units. The list of available patches is constructed by
evaluating the currently installed patches on each of the selected units
against the dependency list of the available patches. Patches that are
available but not installable (because a dependent patch is missing) are
grayed out in the list and cannot be selected. Patch installation is a
single selection: only one patch can be installed at a time. Once a patch
is selected and the Install button is pushed, a command is sent to all
selected units to install that patch; the installation happens in the
background.
196 Administration
b. Click on the Central Management link under Central Management.
c. Click on Patch Distribution.
Support Maintenance
The Support Maintenance feature is password protected and can be used only as
directed by Technical Support. Contact Technical Support if you require more
information.
This solution uses Google’s protocol buffers (.protobuf) as the wire format between
BIG-IP ASM and the Guardium system.
Information about configuring the integration between BIG-IP ASM and Guardium
real-time database activity monitoring is provided at the F5 website:
http://www.f5.com/pdf/deployment-guides/ibm-guardium-asm-dg.pdf.
Once the BigInsights events are in the IBM Guardium repository, other Guardium
features will be available (for example, workflow to email and track report signoff,
alerting, reporting, etc.)
Note: Guardium does not intercept Hadoop HDFS traffic when clients are local
and the setting dfs.client.read.shortcircuit is set to true.
v Exceptions, such as authorization failures
v Hive/HBase queries using the Thrift protocol (Cloudera Hadoop only) – alter,
count, create, drop, get, put, list, etc.
v Oozie jobs (IBM BigInsights only)
Because Hadoop does not write exceptions to its logs, there is no way to send
exceptions to Guardium. If you require exception reporting, you must use an
S-TAP. There is no support for monitoring Hive queries, although you can see the
underlying MapReduce or HDFS messages from Hive. Additionally, if you require
the names of the HBase tables being created, you must use an S-TAP.
Logging events are sent over a socket connection. Port 16015 is used for this socket
connection (16016 is the default Guardium port).
Configuration on BigInsights
1. Stop the services. The stop scripts are in $BIGINSIGHTS_HOME/bin:
stop-all.sh stops all BigInsights services; alternatively, run stop.sh hadoop oozie.
2. Change the following properties file:
Open $BIGINSIGHTS_HOME/hdm/components/guardiumproxy/conf/guardium-proxy.properties
and change the settings. The default is:
guardiumproxy.enable=no
guardiumproxy.host=<namenode>
guardiumproxy.port=16016
guardium.server=
In order to enable the proxy, change it to:
guardiumproxy.enable=yes
guardiumproxy.host=<namenode>
guardiumproxy.port=16015
guardium.server=<Guardium_server_IP>
3. Setting up the log4j.properties files:
HDFS, MapReduce, HRPC
$BIGINSIGHTS_HOME/hadoop-conf/log4j.properties
$BIGINSIGHTS_HOME/hadoop-conf-staging/log4j.properties
GUARDIUM PROXY INTEGRATION - Setup for HDFS, MapReduce and Hadoop RPC
#Set up the following lines:
#Set RemoteHost to cluster node (main node, the one from which you installed BigInsights)
#When changing the Port for cluster-intern communication with GuardiumProxy,
also change it in $BIGINSIGHTS_HOME/conf/guardiumproxy.properties (main node)
log4j.appender.GuardiumProxyAppender=org.apache.log4j.net.SocketAppender
log4j.appender.GuardiumProxyAppender.RemoteHost=<namenode>
log4j.appender.GuardiumProxyAppender.Port=16015
log4j.appender.GuardiumProxyAppender.Threshold=INFO
#MapReduce audit log Guardium integration: Uncomment to enable.
log4j.logger.org.apache.hadoop.mapred.AuditLogger=INFO, GuardiumProxyAppender
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
#Hadoop RPC audit log Guardium integration: Uncomment to enable.
log4j.logger.SecurityLogger=INFO, GuardiumProxyAppender
log4j.additivity.SecurityLogger=false
#GUARDIUM PROXY INTEGRATION - End of Setup
Oozie
$BIGINSIGHTS_HOME/oozie/conf/oozie-log4j.properties
$BIGINSIGHTS_HOME/hdm/components/oozie/conf/oozie-log4j.properties
#GUARDIUM PROXY INTEGRATION - Setup for Oozie
#Set up following lines
#Set RemoteHost to cluster node (main node, the one from which you installed BI)
#Note: When changing the Port for cluster-intern communication with GuardiumProxy,
also change it in $BIGINSIGHTS_HOME/conf/guardiumproxy.properties (main node)
log4j.appender.GuardiumProxyAppender=org.apache.log4j.net.SocketAppender
log4j.appender.GuardiumProxyAppender.RemoteHost=<namenode>
log4j.appender.GuardiumProxyAppender.Port=16015
log4j.appender.GuardiumProxyAppender.Threshold=INFO
#Oozie audit log Guardium integration: Switch (un)comment between lines to
enable GuardiumProxyAppender for Oozie
#log4j.logger.oozieaudit=INFO, oozieaudit (make sure this line is COMMENTED OUT)
log4j.logger.oozieaudit=INFO, oozieaudit, GuardiumProxyAppender (this line should be UNCOMMENTED)
#GUARDIUM PROXY INTEGRATION - End of Setup
4. Update files in all the nodes
In $BIGINSIGHTS_HOME/bin, run syncconf.sh
5. Restart the services. The start scripts are in $BIGINSIGHTS_HOME/bin:
start.sh and start-all.sh start the GuardiumProxy if it is enabled in the
properties file; stop.sh and stop-all.sh stop the GuardiumProxy.
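Under the assumption that the properties file starts from the documented defaults in step 2, the enable change can be sketched as a small shell script. The file path, the server IP 192.0.2.10, and the use of sed are illustrative; the real file lives under $BIGINSIGHTS_HOME as described above.

```shell
#!/bin/sh
# Sketch only: apply the Guardium proxy changes from step 2.
CONF="${TMPDIR:-/tmp}/guardium-proxy.properties"   # stand-in path for the demo

# Start from the documented defaults:
cat > "$CONF" <<'EOF'
guardiumproxy.enable=no
guardiumproxy.host=<namenode>
guardiumproxy.port=16016
guardium.server=
EOF

# Flip the three values the procedure calls for:
sed -i \
  -e 's/^guardiumproxy\.enable=.*/guardiumproxy.enable=yes/' \
  -e 's/^guardiumproxy\.port=.*/guardiumproxy.port=16015/' \
  -e 's/^guardium\.server=.*/guardium.server=192.0.2.10/' \
  "$CONF"

cat "$CONF"
```

After running the sketch, the file matches the "enabled" example shown in step 2, with 192.0.2.10 standing in for the Guardium server IP.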
For Cloudera Hadoop, in order to capture HBase inserts, S-TAPs must be
installed on the HBase region servers.
Guardium can monitor the network traffic generated by applications that use
Hadoop subprojects HDFS, MapReduce, HBase, and Hive.
Table 26. Guardium-supported Hadoop subprojects and protocols (continued)
Hadoop subprojects    Communication protocol    Interface    Examples
Terms
HDFS is a distributed file system and the primary storage system used by Hadoop
applications.
Hive is a data warehouse system for Hadoop for ad-hoc queries and analysis of
larger datasets.
The objective of this interface is to use Guardium auditing capabilities for OPTIM
activities. The auditing capabilities include: reporting tools (user-defined queries
and reports); audit processes (workflow automation that enables assigning a task
to a role/user/group, user-defined status-flow process, escalation, export, etc.);
and Threshold Alerts.
Enabling OPTIM auditing requires enabling it in OPTIM; the steps required in
Guardium are: (1) link the user to the Optim Audit role; (2) add the predefined
reports to the appropriate pane; (3) enable the sniffer; and (4) set the policy
action to Log Data With Values.
This interface includes an optim-audit role, a default layout (psml file) for the
optim-audit role, and seven predefined reports.
Note: When the optim-audit role and user are created, only one tab, OPTIM Audit,
is displayed. Similar to other roles with custom layouts that customers can
generate, this role layout is meant to be used alone (the optim-audit user has
no interest in the other user role tabs). Because the user role is required,
layout merging is turned off when the user has the optim-audit role, so that the
user sees only the items of OPTIM interest. Other roles that work in this same
way are "review-only" and "inv".
Note: After creating and saving the optim-audit role, click the Generate Layout
selection within the User Browser menu and click Reset to get the layout
associated with the role. Do this again if changing roles within the User Browser.
This Guardium SIEM (Security Information and Event Management) integration can
be done in one of the following ways:
v Syslog forwarding (the most common method for alerts and events)
v Using the CLI command store remotelog to specify the Syslog forwarding
facility/priority and host (destination).
v Using Guardium templates for ArcSight, Envision, and QRadar
v SCP/FTP (CSV or CEF Files sent to an external repository and the SIEM system
must upload and parse from this external repository.)
Note: The SIEM system must also enable remote logging so that it listens for
the correct facility/priority, which is defined within syslog.
SIEM technology provides real-time analysis of security alerts that are generated
by network hardware and applications. It helps companies to respond to network
attacks faster and to organize the massive amounts of log data that is generated
daily. SIEM solutions are log-based correlation engines.
SIEM solutions are primarily focused on detection and security, but not on
auditing. They assemble data from other logs and analyze it at a high level. They
correlate much more data such as IP addresses and routers but have little database
visibility. They do not have forensics-quality, digitally signed audit monitoring
capabilities, so they can be used for immediate information, but not historical proof.
Security information and event management (SIEM) users are faced with the
challenge of importing raw logs that are generated by internal DBMS utilities. The
performance of DBMS logging utilities, the unfiltered information that they
produce, and the lack of necessary granular information create challenges.
You can change the default message template, specify the parameters for syslog
forwarding, and create the CSV or CEF file to export.
Note: CEF is only used for ArcSight. The other SIEM products have a different
format and do not use CEF.
In order for the SIEM product to recognize the information that is being sent, the
message template must be changed through the Global Profile. This formatting
agreement between the SIEM solution and Guardium allows SIEM products to
parse incoming messages and update their own databases with the new event data.
1. To open the Global Profile, click Setup > Tools and Views > Global Profile.
2. Click Edit next to the named template.
The following are examples of facility: all, auth, authpriv, cron, daemon, ftp, kern,
local0, local1, local2, local3, local4, local5, local6, local7, lpr, mail, mark, news,
security, Syslog, user, uucp. The following are examples of priority: alert, all, crit,
debug, emerg, err, info, notice, warning.
Entity Audit Trail, and Privacy Set task output can be exported to CSV
(comma-separated values) files. Additionally, CSV file output can be written to
Syslog. If the remote Syslog capability is used, the output CSV file is forwarded to
the remote Syslog locations.
Each record in the CSV or CEF files represents a row on the report. Contact
Guardium Support for a tool that permits the reformatting of CSV files before
export.
To send Syslog messages and export reports to CSV files, complete the following
steps.
Note: Do not zip the file within the audit process definition so that the SIEM
vendor can parse it correctly.
1. To open the Audit Process Finder, click Comply > Tools and Views > Audit
Process Builder.
2. Click the icon to add a process, or select an existing process from the
drop-down list.
CSV/CEF files can also be exported on a schedule to the SIEM host. Modify or
add an audit task.
1. Click Comply > Tools and Views > Audit Process Builder to open the Audit
Process Finder and modify or add an audit task.
2. Choose Export CSV file or Export CEF file.
Note: ACCESS reports can be saved and forwarded in CEF or LEEF format but
other reports, such as Guardium Logins, Aggregation Activity Log, and CAS
events cannot be mapped to CEF or LEEF.
3. Uncheck Write to Syslog; otherwise, Syslog messages are generated
instead of a file.
4. Open the CSV/CEF Export menu by clicking Manage > Data Management >
Results Export (Files).
5. Select either the SCP or FTP protocol. Then, enter the Host, Directory,
Username, Port, and SCP/FTP password. Click Apply to save the changes
or Revert to clear the fields.
6. Click the Modify Schedule button to schedule the exports of CSVs regularly.
7. Select the Start Time, Restart frequency, Repeat frequency, Schedule by
Day/Week or Month, Schedule Start Time. Check the box to automatically run
dependent jobs. Then, click Save.
To have a policy alert that is routed to Syslog, exception rules, access rules, and
extrusion rules must be modified to trigger notifications to be sent to Syslog. This
action can be accomplished by going to the Policy Builder. Policy rules can be sent
as email or sent to Syslog and forwarded.
1. To open the Policy Builder, click Setup > Tools and Views > Policy Builder.
2. Select the policy and click Edit Rule.
3. Click Add Rule... > Add Exception Rule.
4. Enter the Description, Category, Classification, and select a Severity level
from the drop-down list.
For every policy rule violation logged during the reporting period, the Policy
Violations report provides the Timestamp from the Policy Rule Violation entity,
Access Rule Description, Client IP, Server IP, DB User Name, Full SQL String from
the Policy Rule Violation entity, Severity Description, and a count of violations for
that row. With this report, users can group violations and create incidents, set the
severity of each violation, and assign incidents to users.
Both IBM Guardium and InfoSphere Discovery have the capability to identify and
classify sensitive data, such as Social Security Numbers or credit card numbers.
Note: In IBM Guardium, the Classification process is an ongoing process that runs
periodically. In InfoSphere Discovery, Classification is part of the Discovery process
that usually runs once.
3. Click the Customize icon on the Report Result screen and specify the search
criteria to filter the classification results data to transfer to Discovery.
4. Run the report and click the Download All Records icon.
5. Save as CSV and import this file to Discovery according to the InfoSphere
Discovery instructions.
6. Import to Guardium - Import Classification Data from InfoSphere Discovery
to IBM Guardium
7. Export the classification data as CSV from InfoSphere Discovery based on
InfoSphere Discovery instructions.
8. As an admin user in the Guardium application, go to Tools > Report Building
> Custom Tables, select ClassificationDataImport, and click the Upload
Data button.
9. In the Upload Data screen, click Add Datasource, click the New button, and
define the CSV file imported from Discovery as a new datasource (Database
Type = Text).
Note: Alternatively you can load the data directly from Discovery database if
you know how to access the Discovery database and Classification results
data.
10. After defining the CSV as a datasource, click the Add button in the
Datasource list screen.
11. In the Upload Data screen, click Verify Datasource and then Apply.
12. Click the Run Once Now button to load the data from the CSV.
13. Go to Report Builder, select the Classification Data Import report, and click
Add to Pane to add it to your portal; then navigate to the report.
14. Access the report, click Customize to set the From/To dates, and run
the report.
CEF Mapping
The CEF standard from ArcSight defines a set of required fields, and a set of
optional fields.
The latter are called extensions in the CEF standard. Data is mapped to these fields
from Guardium configuration information and reports. Note that not all Guardium
fields map to a CEF field, so there may not be a one-to-one relationship between
the rows of a printed report and the CEF file produced for that report. Also note
that this facility is intended to map data from data access domains (Data Access,
Exceptions, and Policy Violations, for example), and not from Guardium
self-monitoring domains (Aggregation/Archive, Audit Process, Guardium Logins,
etc.).
Note: Analyzed Client IP has a map for CEF source. If the query used for the CEF
does NOT contain the Client IP but contains the analyzed client IP, the analyzed
client IP will be used for the source. If both are included in the query, Client IP
takes precedence.
Table 28. Required CEF Fields Mapping (continued)
CEF Field       Guardium Mapping
Device Product  Guardium
Device Version  Guardium software version number
Signature ID    ReportID
Name            Report Title
Severity        Numeric severity code in the range 0-10, with 10 being the most important event. If not reset in the report, 0 (zero, which translates to Info for Guardium).
The CEF extension fields are optional, and will be present only when the mapping
applies. For example, if the report does not contain an access rule description, the
act field (the first extension field) will not be present. For more detailed
information about the Guardium entities and attributes, see the appropriate entity
reference topic.
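As a sketch of how the required-field mapping comes together, the snippet below prints one CEF record in the standard CEF header order (CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extensions). The ReportID 20000, the report title, and all extension values are invented for the illustration, not taken from a real Guardium report.

```shell
# Sketch: one CEF record assembled per the required-field mapping above.
# Header fields are pipe-separated; extension key=value pairs are
# space-separated. All values here are placeholders.
vendor="IBM"; product="Guardium"; version="9.0"
sig_id="20000"            # Signature ID <- ReportID (invented)
name="Policy Violations"  # Name <- Report Title (invented)
sev="7"                   # Severity, numeric 0-10
ext="act=Log Full Details dst=10.1.1.9 duser=appuser"
printf 'CEF:0|%s|%s|%s|%s|%s|%s|%s\n' \
  "$vendor" "$product" "$version" "$sig_id" "$name" "$sev" "$ext"
```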
Table 29. CEF Mapping, Guardium Version 8.2
CEF Field  Entity                 Attribute
severity   Policy Rule Violation  Severity
act        Policy Rule Violation  Access Rule Description
app        Client/Server          DB Protocol
app        Exception              Database Protocol
dst        Client/Server          Server IP
dst        Exception              Destination Address
dhost      Client/Server          Server Host Name
dpt        Session                Server Port
dpt        Exception              Destination Port
dproc      Client/Server          Source Program
duid       Client/Server          OS User
duser      Client/Server          DB User Name
duser      Exception              User Name
end        Exception              Exception Timestamp
end        Policy Rule Violation  Timestamp
end        Access Period          Period End
end        Session                Session End
msg        Exception              Exception Description
msg        Message Text           Message Text
msg        Message Text           Message Subject
src        Client/Server          Client IP
src        Client/Server          Analyzed Client IP
src        Exception              Source Address
shost      Client/Server          Client Host Name
Table 30. CEF Mapping, Guardium Version 9.0 (continued)
CEF Field  Entity                 Attribute
src        Exception              Source Address
shost      Client/Server          Client Host Name
smac       Client/Server          Client MAC
spt        Session                Client Port
spt        Exception              Source Port
start      Exception              Exception Timestamp
start      Policy Rule Violation  Timestamp
start      Access Period          Period Start
start      Session                Session Start
proto      Client/Server          Network Protocol
request    FULL SQL               Full Sql
request    SQL                    Sql
cs1        Session                Uid Chain
cs2        Session                Uid Chain Compressed
For more information about CEF, search the web for Common Event Format: Event
Interoperability Standard, or visit the ArcSight website: www.arcsight.com.
LEEF Mapping
Log Event Extended Format (LEEF) from QRadar
The LEEF format consists of an optional syslog header, a LEEF header, and a
collection of attributes describing the event.
Syslog_Header(optional) LEEF_Header|Event_Attributes
The LEEF header is pipe ('|') separated, and the attributes are tab separated.
A predefined set of keys is defined and should be used when possible. The LEEF
format is extensible and allows additional key-value pairs to be added to the
event log.
Example:
Jan 18 11:07:53 192.168.1.1 LEEF:1.0|QRadar|QRM|1.0|NEW_PORT_DISCOVERD|src=172.5.6.67 dst=172.50.123.
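A record like the one above can be assembled as sketched below. The header values (vendor, product, version, event ID) and every attribute value are made up for the illustration; the point is only the pipe-separated header and tab-separated attributes.

```shell
# Sketch: build a LEEF 1.0 record. Header fields are pipe-separated;
# the event attributes are tab-separated key=value pairs (values invented).
TAB="$(printf '\t')"
header="LEEF:1.0|IBM|Guardium|9.0|SESSION_START"
attrs="src=10.1.1.8${TAB}dst=10.1.1.9${TAB}sev=5${TAB}usrName=appuser"
rec="$header|$attrs"
printf '%s\n' "$rec"
```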
Character Encoding
UTF8
Predefined Attributes
Table 32. Predefined Attributes
Key Name        Data Type             Max Length  Description
Cat             string                            Event category
devTime         date                              Time the device or application emitted the event
devTimeFormat   string                            Defined by the Java SimpleDateFormat. This is only required if using a customized date format. See the Date Formats section for further details.
proto           integer                           Transport protocol
sev             integer (1-10)                    Severity of this event
src             IPv4 or IPv6 address              Source address
dst             IPv4 or IPv6 address              Destination address
VSrc            IPv4 or IPv6 address              Virtual source address
srcPort         integer                           Source Port. The valid port numbers are between 0 and 65535.
dstPort         integer                           Destination Port. The valid port numbers are between 0 and 65535.
srcPreNat       IPv4 or IPv6 address              Source address for the message before Network Address Translation (NAT) occurred
dstPreNat       IPv4 or IPv6 address              Destination address for the message before Network Address Translation (NAT) occurred
srcPostNat      IPv4 or IPv6 address              Source address for the message after Network Address Translation (NAT) occurred
dstPostNat      IPv4 or IPv6 address              Destination address for the message after Network Address Translation (NAT) occurred
usrName         string                255         User name associated with the event
srcMAC          MAC address                       Six colon-separated hexadecimal numbers. Example: 1:2D:67:BF:1A:71
dstMAC          MAC address                       Six colon-separated hexadecimal numbers. Example: 11:2D:67:BF:1A:71
srcPreNATPort   integer                           Source Port. The valid port numbers are between 0 and 65535.
dstPreNATPort   integer                           Destination Port. The valid port numbers are between 0 and 65535.
srcPostNATPort  integer                           Source Port. The valid port numbers are between 0 and 65535.
dstPostNATPort  integer                           Destination Port. The valid port numbers are between 0 and 65535.
identSRC        IPv4 or IPv6 address
identHostName   string                255         Host name associated with the event. Typically, this parameter is only associated with identity events.
identNetBios    string                255         NetBIOS name associated with the event. Typically, this parameter is only associated with identity events.
identGrpName    string                255         Group name associated with the event. Typically, this parameter is only associated with identity events.
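The colon-separated MAC format in Table 32 can be checked with a simple pattern. The regular expression below is my own sketch of the format as described (one or two hexadecimal digits per group, which matches the 1:2D:... example), not something defined by LEEF itself.

```shell
# Sketch: validate the srcMAC/dstMAC format from Table 32 -- six
# colon-separated hexadecimal groups of one or two digits each.
mac_ok() {
  printf '%s' "$1" | grep -Eq '^([0-9A-Fa-f]{1,2}:){5}[0-9A-Fa-f]{1,2}$'
}
mac_ok "1:2D:67:BF:1A:71"  && echo "valid"    # single-digit first group is allowed
mac_ok "11:2D:67:BF:1A"    || echo "invalid"  # only five groups, rejected
```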
Custom Attributes
Custom attributes may be used for viewing in the QRadar Event Viewer by
creating custom properties.
Date Formats
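As an illustrative sketch only, a devTime value with a matching devTimeFormat (a Java SimpleDateFormat pattern, as noted in Table 32) could look like this. The pattern "MMM dd yyyy HH:mm:ss" is my own example, not a format mandated by QRadar.

```shell
# Sketch: emit a LEEF devTime plus the SimpleDateFormat pattern that parses it.
# Java "MMM dd yyyy HH:mm:ss" corresponds to "%b %d %Y %H:%M:%S" for GNU date(1).
devTime=$(LC_ALL=C date -u -d @1700000000 '+%b %d %Y %H:%M:%S')
printf 'devTime=%s\tdevTimeFormat=MMM dd yyyy HH:mm:ss\n' "$devTime"
```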
Chapter 6. Troubleshooting problems
To isolate and resolve problems with your IBM products, you can use the
troubleshooting and support information. This information contains instructions for
using the problem-determination resources that are provided with your IBM
products, including IBM Guardium.
The first step in the troubleshooting process is to describe the problem completely.
Problem descriptions help you and the IBM technical-support representative know
where to start to find the cause of the problem. This step includes asking yourself
basic questions:
v What are the symptoms of the problem?
v Where does the problem occur?
v When does the problem occur?
v Under which conditions does the problem occur?
v Can the problem be reproduced?
The answers to these questions typically lead to a good description of the problem,
which can then lead you to a problem resolution.
What is the problem? This question might seem straightforward; however, you can
break it down into several more-focused questions that create a more descriptive
picture of the problem. These questions can include:
v Who, or what, is reporting the problem?
v What are the error codes and messages?
v How does the system fail? For example, is it a loop, hang, crash, performance
degradation, or incorrect result?
The following questions help you to focus on where the problem occurs to isolate
the problem layer:
v Is the problem specific to one platform or operating system, or is it common
across multiple platforms or operating systems?
v Is the current environment and configuration supported?
v Do all users have the problem?
v (For multi-site installations.) Do all sites have the problem?
If one layer reports the problem, the problem does not necessarily originate in that
layer. Part of identifying where a problem originates is understanding the
environment in which it exists. Take some time to completely describe the problem
environment, including the operating system and version, all corresponding
software and versions, and hardware information. Confirm that you are running
within an environment that is a supported configuration; many problems can be
traced back to incompatible levels of software that are not intended to run together
or have not been fully tested together.
Responding to these types of questions can give you a frame of reference in which
to investigate the problem.
Knowing which systems and applications are running at the time that a problem
occurs is an important part of troubleshooting. These questions about your
environment can help you to identify the root cause of the problem:
v Does the problem always occur when the same task is being performed?
v Does a certain sequence of events need to happen for the problem to occur?
v Do any other applications fail at the same time?
Answering these types of questions can help you explain the environment in
which the problem occurs and correlate any dependencies. Remember that just
because multiple problems might have occurred around the same time, the
problems are not necessarily related.
However, problems that you can reproduce can have a disadvantage. If the
problem is of significant business impact, you do not want it to reoccur. If possible,
recreate the problem in a test or development environment, which typically offers
you more flexibility and control during your investigation.
v Can the problem be re-created on a test system?
v Are multiple users or applications encountering the same type of problem?
v Can the problem be re-created by running a single command, a set of
commands, or a particular application?
Procedure
To search knowledge bases for information that you need, use one or more of the
following approaches:
v Find the content that you need by using the IBM Support Portal.
The IBM Support Portal is a unified, centralized view of all technical support
tools and information for all IBM systems, software, and services. The IBM
Support Portal lets you access the IBM electronic support portfolio from one
place. You can tailor the pages to focus on the information and resources that
you need for problem prevention and faster problem resolution. Familiarize
yourself with the IBM Support Portal by viewing the demo videos
(https://www.ibm.com/blogs/SPNA/entry/the_ibm_support_portal_videos)
about this tool. These videos introduce you to the IBM Support Portal, explore
troubleshooting and other resources, and demonstrate how you can tailor the
page by moving, adding, and deleting portlets.
v Search for content about Guardium by using one of the following additional
technical resources:
– Guardium technotes and Authorized Program Analysis Reports (APARs -
problem reports)
– Guardium Support website
– IBM support communities (forums and newsgroups)
v Search for content by using the IBM masthead search. You can use the IBM
masthead search by typing your search string into the Search field.
v Search for content by using any external search engine, such as Google, Yahoo,
or Bing. If you use an external search engine, your results are more likely to
include information that is outside the ibm.com® domain. However, sometimes
you can find useful problem-solving information about IBM products in
newsgroups, forums, and blogs that are not on ibm.com.
Tip: Include “IBM” and the name of the product in your search if you are
looking for information about an IBM product.
Procedure
2. Gather diagnostic information.
3. Submit the problem to IBM Support in one of the following ways:
v Online through the IBM Support Portal: You can open, update, and view all
of your service requests from the Service Request portlet on the Service
Request page.
v By phone: For the phone number to call in your region, see the Directory of
worldwide contacts web page.
Results
If the problem that you submit is for a software defect or for missing or inaccurate
documentation, IBM Support creates an Authorized Program Analysis Report
(APAR). The APAR describes the problem in detail. Whenever possible, IBM
Support provides a workaround that you can implement until the APAR is
resolved and a fix is delivered. IBM publishes resolved APARs on the IBM Support
website daily, so that other users who experience the same problem can benefit
from the same resolution.
Use the support must_gather commands, which can be run through the CLI, to
generate specific information about the state of any Guardium system. This
information can also be collected through the Guardium GUI.
This information can be uploaded from the Guardium system and sent to IBM
Support whenever a Problem Management Report (PMR) is logged.
The must_gather commands can be run at any time by the user through the CLI.
Complete the following steps.
1. Open a PuTTY session (or similar) to the appropriate collector, aggregator, or
Central Manager.
2. Log in as user cli.
3. Depending on the type of issue, paste the relevant must_gather commands into
the CLI prompt. More than one must_gather command might be needed to
diagnose the problem. The commands are listed and described in the following
list.
v support must_gather agg_issues (aggregation process)
v support must_gather alert_issues (alerts)
v support must_gather app_issues (application)
v support must_gather app_masking_issues (application masking)
v support must_gather audit_issues (audit process)
v support must_gather backup_issues (backup process)
v support must_gather cm_issues (Central Manager)
v support must_gather datamining_issues (data mining)
v support must_gather miss_dbuser_prog_issues (system database user)
v support must_gather network_issues (network architecture)
v support must_gather ocr_issues
v support must_gather patch_install_issues (patch installation and
upgrades)
v support must_gather purge_issues (purge process)
v support must_gather scheduler_issues (scheduler function)
v support must_gather sniffer_issues (sniffer function)
v support must_gather system_db_info (Guardium system database or
operating space performance)
v support must_gather user_interface_issues (user interface)
The output is written to the must_gather directory with a file name such as the
following example:
must_gather/system_logs/.tgz
4. Send the resulting output to IBM Support.
By using fileserver <ip address>, you can upload the .tgz files and send them to
IBM Support.
Send the file through email or upload to ECUREP by using the standard data
upload. Specify the PMR number and file to upload.
The guard_diag script produces statistics on the server that help Guardium with
diagnostics.
Explanation of guard_diag:
General Overview:
The script prompts for the location if it cannot automatically determine where
S-TAP is installed. The run time is about 1.5 minutes, and if no output directory
is specified, the script places the generated .tar file in /tmp. When the script
is run with logging enabled from the GUI, the .tar file is placed in /var/tmp.
Known Issues:
v tusc is not installed on all HP-UX operating systems, so tracing the S-TAP PID
does not work.
v gzip is not always installed on the system. The fallback is compress (final
extension of .tar.Z); failing that, the .tar file is placed in the output
directory.
v Topas output on AIX is best interpreted in the terminal because it contains
control codes that make it mostly unintelligible when it is opened in an editor.
v The non-root S-TAP has a number of issues concerning the diagnostics script.
v In Linux, /var/log/messages is readable only by root.
v Some Solaris operating systems might not be configured correctly, which causes
netstat to print an error.
v The path for the non-root user is rather basic, and as a result, some commands
might not run at all. Notably, this known issue happens on HP-UX with gzip.
Platforms Supported:
v Linux
v HP-UX
v AIX
v Solaris
v tasks.txt
v system.txt
v evtlog.txt or evtlog2008.txt
v reg.txt
Notes:
1. This diag script can be run with any S-TAP version.
2. Rename the diag script to diag.bat and place it under the directory where
S-TAP was installed. Then, you can run it manually. It generates text files
with diagnostic information.
3. Submit the results to Guardium L3 Support or Research & Development.
Procedure
Before you begin
Ensure that your IBM technical-support representative provided you with the
preferred server to use for downloading the files and the exact directory and file
names to access.
Procedure
Procedure
To subscribe to Support updates:
1. Subscribe to the Guardium RSS feeds.
2. Subscribe to My Notifications by going to the IBM Support Portal and click My
Notifications in the Notifications portlet.
3. Sign in using your IBM ID and password, and click Submit.
4. Identify what and how you want to receive updates.
a. Click the Subscribe tab.
b. Select the appropriate software brand or type of hardware.
c. Select one or more products by name and click Continue.
d. Select your preferences for how to receive updates, whether by email, online
in a designated folder, or as an RSS or Atom feed.
e. Select the types of documentation updates that you want to receive, for
example, new information about product downloads and discussion group
comments.
f. Click Submit.
Results
Until you modify your RSS feeds and My Notifications preferences, you receive
notifications of updates that you have requested. You can modify your preferences
when needed (for example, if you stop using one product and begin using another
product).
Related Information
User Interface
Cannot view SVG graphics in Internet Explorer 9
If you cannot view SVG graphics in IE9, switch to Standard mode.
Symptoms
When you open the IBM Security Guardium GUI with Internet Explorer 9 (IE9),
the SVG graphics do not display. The IE9 status window displays the following
message:
alt="SVG Plugin Required"
Causes
With the SVG Viewer, you can view items like the Access Maps and Current Status
Monitor. However, IE9 is in Document mode and not in Standard mode. In
Document mode, the SVG viewer is not automatically loaded by the browser.
Environment
Note: If Standard mode is not available as a choice, then IE9 is already in the
Standard mode. In such an event, contact Guardium Technical Support.
Symptoms
When you add an inspection engine, the new settings remain for a few minutes
and then disappear.
Causes
There is an error in one or more parameter values with either the new inspection
engine or a different inspection engine in the S-TAP configuration file
guard_tap.ini.
Environment
The Guardium collector user interface is affected.
Symptoms
When you refresh the IBM Security Guardium GUI from the system main page,
you receive the following error:
HTTP Status 403-
type Status report
message
description Access to the specified resource () has been forbidden
Environment
You can disable this feature by using the following CLI command: store gui
csrf_status off
Note: If you turn off CSRF protection, the security level of the Guardium system is
reduced.
You can check the status by running this CLI command: show gui csrf_status
Java.lang.IllegalStateException
If you receive a java.lang.IllegalStateException error, clean up the Java servlets.
Symptoms
Causes
The error is raised when a method is invoked and the Java VM is in a state that is
inconsistent with the method. There might also be corrupted Java servlets that are
caused by deadlocks.
Environment
Wait a few minutes and retry. If the error persists, restart the GUI by logging in as
user cli and executing the command restart GUI.
To clean up the Java servlets, run the command support clean servlets.
If the problem is not resolved, please collect the following tomcat logs and contact
IBM Security Guardium Technical Support.
tomcat_log/localhost.<date_stamp>.log
tomcat_log/catalina.<date_stamp>.log
Pages are not loading correctly
If pages do not load correctly, restart the GUI or use a different browser.
Symptoms
You might see a blank screen or other errors. The problem appears to happen with
certain browsers on specific systems but not with others.
Causes
The cause might be restricted to a localized browser or there is a Java virtual
machine issue.
Environment
The collector, aggregator, and central manager are affected.
Policies
Query does not appear in the co-relation alert definition
If the query does not appear in the co-relation alert definition, check the count
field and sort by time stamp.
Symptoms
You created an access query for creating a co-relation alert. However, in the
co-relation alert definition, this query does not appear in the drop-down list.
Causes
The co-relation alert search in the report is based on the time stamp.
Environment
The collector and aggregator are affected.
Symptoms
Rules with a value in the policy Command field do not trigger as expected.
Causes
The cause is a misconfiguration in the command field. The Guardium parser does
not consider the command modifiers to be a part of a command.
Environment
Guardium Collectors. The command field in the policy rule is also affected when it
is used with wildcard (%).
ADMIN OPTION and TO PUBLIC do not match and cannot trigger a rule because the
Guardium parser does not recognize them as a part of a command. Generally, the
parser does not consider command modifiers to be part of a command. Instead,
create a report to inspect the traffic that the policy monitors and include the SQL
Verb field from the Command entity in that report. Anything that is listed in the
SQL Verb field is recognized by the parser and can be used in the Command field
of a policy rule. Several commands can be added to a group and the group can be
used in the rule instead of a single command. In this case, each group member
must match an entry in SQL Verb. Guardium includes several such command
groups that you can use or clone.
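The advice above boils down to matching on the SQL verb rather than on modifiers. This toy function (illustrative only, not the Guardium parser) extracts the leading verb the way the SQL Verb field would show it, which is why ADMIN OPTION and TO PUBLIC never match on their own:

```shell
sql_verb() {
  # Uppercase the first token of the statement; trailing modifiers such as
  # WITH ADMIN OPTION or TO PUBLIC are ignored, mirroring why only the verb
  # (for example GRANT) is usable in the Command field of a policy rule.
  printf '%s' "$1" | awk '{ print toupper($1) }'
}
sql_verb 'grant select on t1 to public'
```

Both `GRANT ... TO PUBLIC` and `GRANT ... WITH ADMIN OPTION` reduce to the same verb, GRANT, which is what the parser recognizes.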
Symptoms
The redact function causes an overly masked result or an ORA-03106 error in
Oracle traffic.
Causes
The redact function in the Guardium policy rule is doing a pattern match with the
result set. It has a feature to replace the matched string with the user specified
character.
Environment
Guardium collectors are affected.
Symptoms
SSH sessions and automated CRON jobs that log in to your Oracle database
through SQLPLUS and RMAN with /as sysdba show as failed logins.
Causes
Oracle responds to these logins with the following error on such attempts, even if
it is not shown on the screen.
ORA-01017: invalid username/password; logon denied.
This error triggers the failed login alert. For example, if the database user
WRONGLOGIN is a member of the DBA group and logs in with sqlplus WRONGLOGIN
as sysdba, the database authentication of WRONGLOGIN fails. This failure triggers
the ORA-01017 error alert, which is reflected in the Guardium log. However,
users with sysdba privileges can connect to the database without database
authentication, so the session is allowed to continue. Both events are captured
and recorded.
Environment
Guardium collectors are affected.
This rule skips the failed login alerts that are caused by the ORA-01017 error,
but the logins are still logged. To filter the failed login alerts out of the
reports, add these conditions to the end of the conditions list:
AND
(
client IP<>server IP OR
src prg <> SQLPLUS OR
db user NOT IN group of trusted OR
os user NOT IN group of oracle DBAs OR
net protocol <>BEQUEATH (if this is local BEQUEATH, not TCP )
)
Symptoms
The Guardium internal database is filling up and most of the data is in the
GDM_POLICY_VIOLATIONS_LOG table.
Causes
A change to the policy can cause a policy violation rule to be triggered frequently.
You might find that most of the data is stored in the
GDM_POLICY_VIOLATIONS_LOG table.
Environment
The Guardium collector is affected.
Reports
Cannot modify the receiver table for an Audit Process after it has
been executed at least once
If you cannot modify the receiver table for an audit process, clone the audit
process and replace the original.
Symptoms
After an audit process runs at least once, you can neither remove nor add a
receiver. You also cannot modify the following properties for a receiver.
v Action Req.
v Cont.
v Appv. if Empty
Causes
After an Audit Process runs at least once, the receiver table is locked and you
cannot modify most of the properties.
Environment
All Guardium configurations (collector, aggregator, central manager) are affected.
Symptoms
You can view reports in the GUI. However, when you export the report to PDF, the
characters are not correct or missing. The characters appear as question marks or
other symbols in the PDF report.
Causes
The default font in Guardium PDF exports does not show multi-byte characters
correctly. For example, Greek, Cyrillic, and Chinese characters do not display
correctly.
Environment
The collector, aggregator, and central manager are affected.
2. Run the command store pdf-config multilanguage_support
3. Select 2 Multi-language.
Symptoms
The file system is filling up and approaching 100%.
Causes
Alerts and reports are sent to the syslog and can fill up the file system.
Environment
The collector or aggregator might be affected.
Symptoms
When you view an Audit report (in .csv format) in Microsoft Excel, you notice that
certain rows are filled with unexpected characters. The characters might look
similar to what you find in the full SQL column. The problem is not seen in .pdf
reports or in GUI reports.
Causes
Microsoft Excel limits the contents of a cell to 32,767 characters. If your
captured SQL is longer than this limit, it spills over onto the next row.
Environment
The Collector, Aggregator, and Central Manager are affected.
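A quick pre-check can identify the offending rows before you open the export in Excel. This is a hypothetical helper, not a Guardium tool, and it assumes a simple comma-separated file (it does not handle quoted fields that contain commas):

```shell
LIMIT=32767   # Excel's per-cell character limit
check_csv() {
  # Report every field in the CSV that exceeds the Excel cell limit,
  # so you know in advance which rows will spill onto the next row.
  awk -F',' -v limit="$LIMIT" '{
    for (i = 1; i <= NF; i++)
      if (length($i) > limit)
        printf "row %d field %d exceeds %d chars\n", NR, i, limit
  }' "$1"
}
```

Run it as `check_csv audit_export.csv`; any reported row contains a full SQL value that Excel cannot hold in one cell.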
Causes
While Guardium is decrypting the traffic, the IP address is initially recorded as
0.0.0.0 because the sniffer does not know what the actual IP address is. After the
decryption is completed, a separate thread repopulates the session tables with the
correct IP address.
Environment
Any database that encrypts the database traffic is affected.
Symptoms
When you run a report in Guardium, you receive the following error message:
Request was interrupted or quota exceeded.
Causes
The error message Request was interrupted or quota exceeded appears when an
interactive report does not complete within the 3-minute time limit. The
underlying cause is generally the size of the report.
Environment
The collector and aggregator are affected.
Symptoms
Rules with a value in the policy Command field do not trigger as expected.
Causes
The cause is a misconfiguration in the command field. The Guardium parser does
not consider the command modifiers to be a part of a command.
Environment
Guardium Collectors. The command field in the policy rule is also affected when it
is used with wildcard (%).
GRANT
GRANT%
ADMIN OPTION and TO PUBLIC do not match and cannot trigger a rule because the
Guardium parser does not recognize them as a part of a command. Generally, the
parser does not consider command modifiers to be part of a command. Instead,
create a report to inspect the traffic that the policy monitors and include the SQL
Verb field from the Command entity in that report. Anything that is listed in the
SQL Verb field is recognized by the parser and can be used in the Command field
of a policy rule. Several commands can be added to a group and the group can be
used in the rule instead of a single command. In this case, each group member
must match an entry in SQL Verb. Guardium includes several such command
groups that you can use or clone.
Symptoms
You receive the same message in the Scheduled Jobs Exceptions report at regular
short intervals, typically every 5 minutes. This interval is the same as the polling
interval that anomaly detection runs on.
An example of the Scheduled Jobs Exceptions report might look like the following.
Timestamp: 2013-12-05 15:51:22.0
Exception Description: java.lang.NumberFormatException: empty String
Count of Exceptions: 1
Causes
One of the active alerts is causing the error.
Environment
Guardium collectors and the Aggregator are affected.
If you find the alert that is causing the problem and need assistance to understand
or stop the error, contact IBM Guardium Technical Support and provide the
following items:
Symptoms
You receive the following message: Merge required, delay executing Process.
You might receive several of these messages over a short period.
Causes
The audit process requires the merge process to finish before it can run.
Environment
The aggregator is affected.
Symptoms
When you view records from the monitored Teradata Database in Guardium
reports, the database user name field does not show up as expected. The user
name is truncated or missing.
Causes
The Teradata Database is not enabled to return the full user name.
Environment
Any Guardium collector that captures data from the Teradata database is affected.
Note: This setup returns the user name in unencrypted form. If encryption is
enabled, the system returns an error message.
Unexpected results in Guardium reports with embedded
commands
If you receive unexpected results in Guardium reports, configure your policy rules
to handle depth by using tuples.
Symptoms
You see results in your reports that you do not expect or that you believe should
be filtered out by the policy. Conversely, you do not capture statements that you
expect to capture.
Causes
The SQL usually has several objects and commands that are embedded in the
statement. The policy or report definition is not configured to deal with objects or
commands at different depths.
Environment
Guardium collectors are affected.
Note: Tuple supports the use of one slash and a wildcard character (%). It does not
support the use of a double slash.
Symptoms
Guardium CAS works with older Java versions but not with Java 1.7.
Causes
msvcr100.dll is missing from <GUARDIUM STAP directory>\cas\bin\
Environment
Guardium CAS on Windows is affected.
Note: This is only needed for Java version 1.7. For older versions of Java, this step
is not needed.
Symptoms
Some members of a test exception group appear in the details field when you run
a vulnerability assessment. The group contains members with a backslash character
and a REGEX tag such as (R)US\John Doe.
Causes
Special characters can trigger errors when Guardium parses the exception group.
Environment
Guardium collectors are affected.
The REGEX tag (R) is used to trigger a regular expression search of the details
field to remove any string that matches the regular expression. A backslash or any
other character that has a meaning in a regular expression needs a backslash
escape sequence to avoid parsing errors. If you do not use the (R) tag, the group
member must exactly match the entire line in the details field for Guardium to
make a match. To pass the vulnerability test, the details field of the test must be
empty.
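The escaping rule above can be automated. A minimal sketch, assuming you prepare group members in a script before adding them (the helper name is hypothetical): escape the backslash first, then the other common regex metacharacters, so that a member such as US\John Doe parses cleanly under the (R) tag.

```shell
escape_regex() {
  # Escape backslashes first, then other characters that are special in
  # regular expressions, so the group member survives the (R) REGEX parse.
  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/[].[^$*()+?|]/\\&/g'
}
escape_regex 'US\John Doe'
```

The escaped form, US\\John Doe, matches the literal string in the details field instead of triggering a parsing error.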
Symptoms
After you upgrade S-TAP using the Guardium Installation Manager (GIM), you
cannot configure the database path parameters in the Inspection Engine in
Guardium even though the installation results for the module show as successful.
Causes
K-TAP is not properly upgraded if the new S-TAP is installed as a fresh module.
Because the old K-TAP module is not removed, there is a protocol mismatch
between the old K-TAP module and the new S-TAP.
Environment
S-TAP installed in UNIX and Linux such as AIX, HP-UX, Linux, and Solaris.
The modules log file lists the old K-TAP. For example: ktap_24276 338760 0
Symptoms
Guardium fails to recognize the network device VMXNET x during the installation
on VMware. You receive the error eth0: unknown interface: No such device
when you install Guardium on VMware as a guest. The error message appears
after you restart the system.
Causes
VMXNET x virtual network adapter requires a specific driver that is only
contained in VMware tools and no operating system has the driver. Guardium is
running on Linux and the installer does not have a driver for VMXNET x.
Environment
The Guardium system is affected.
Symptoms
After a hardware repair such as replacing the system board on the Guardium
appliance, the network connectivity is lost. The following error message occurs for
each network interface when the appliance is rebooted.
rtnetlink answers: no such device
Causes
After you replace the system board, the MAC address will change. This change
causes a disparity between the actual MAC address and what is stored in the
interface configuration files.
Environment
Any Guardium appliance (collector, aggregator, or central manager) on which the
system board has been replaced and all Guardium versions are impacted.
If the problem is still not resolved, contact Guardium Support for manual
intervention.
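For background, on a standard RHEL-style Linux system the manual intervention amounts to replacing the stale MAC address stored in the interface configuration file with the one the new system board reports. This is an illustrative sketch only; on a Guardium appliance, let Guardium Support perform the change, and treat the file path as an assumption:

```shell
update_hwaddr() {
  # $1 = ifcfg file (e.g. /etc/sysconfig/network-scripts/ifcfg-eth0),
  # $2 = the MAC address the NIC currently reports.
  # Rewrites the stored HWADDR line so it matches the real hardware again.
  sed -i "s/^HWADDR=.*/HWADDR=$2/" "$1"
}
```

After the stored address matches the hardware again, the "rtnetlink answers: no such device" errors stop at the next reboot.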
Symptoms
You implemented a new Guardium system as a virtual machine and performed all
the required initial network configuration. However, you cannot ping the system
using the IP address and the system is not accessible in the network.
Causes
The MAC address assigned to the virtual machine by the virtual environment does
not match the MAC address in Guardium.
Environment
The collector, aggregator, and central manager are affected.
2. Run the command show network macs to show the MAC address stored in the
Guardium configuration.
3. From the administration utility for your virtual environment, check the MAC
address for the virtual machine.
a. Open the VMWare Workstation.
b. Right-click the virtual machine and select Settings or Properties to open the
Virtual Machine Settings.
c. Select Network Adapter under Hardware.
d. Click Advanced to open the Network Adapter Advanced Settings.
e. Compare the MAC address from steps 2 and 3.
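When comparing the addresses in steps 2 and 3, case and separator style often differ between tools. A small hypothetical helper normalizes both before comparing:

```shell
same_mac() {
  # Lowercase both addresses and strip ':' and '-' separators so that
  # 00:0C:29:AB:CD:EF and 00-0c-29-ab-cd-ef compare as equal.
  a=$(printf '%s' "$1" | tr 'A-F' 'a-f' | tr -d ':-')
  b=$(printf '%s' "$2" | tr 'A-F' 'a-f' | tr -d ':-')
  [ "$a" = "$b" ]
}
same_mac '00:0C:29:AB:CD:EF' '00-0c-29-ab-cd-ef' && echo match
```

If the normalized addresses differ, the virtual environment assigned a MAC address that does not match the one in the Guardium configuration.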
SSLv3 is enabled
If you receive a warning that SSLv3 is enabled, disable SSLv3 to prevent the
POODLE exploit.
Symptoms
You receive the following warning: SSLv3 is enabled.
Causes
SSLv3 contains a protocol vulnerability known as Padding Oracle On Downgraded
Legacy Encryption (POODLE). If SSLv3 is enabled on your system, this
vulnerability allows attackers to force an SSL/TLS fallback to SSLv3, break the
encryption, and intercept network traffic in plaintext. The vulnerability is detailed
in the National Vulnerability Database as CVE-2014-3566.
This topic describes how to check the status of SSLv3 and disable it if necessary.
Access Management
Cannot log in to Guardium except as admin or accessmgr
If you cannot log in to the Guardium GUI except admin or accessmgr, check the
authentication configuration settings.
Symptoms
You are unable to log in to Guardium with any user except admin or accessmgr.
You see an invalid user name or password error despite using the correct user and
password as defined by accessmgr. You receive the following error message.
Invalid user name and/or password. Please reenter your credentials..
Causes
The authentication setting is not configured as local.
Environment
The collector, aggregator, and central manager are affected.
Symptoms
You lost the Guardium accessmgr password and cannot log in to the GUI. The
account is also locked after successive failed attempts.
Causes
Guardium prohibits multiple failed login attempts.
Environment
The collector, aggregator, and central manager are affected.
You can use <N> or random, where <N> is a number in the range 10000000 -
99999999 and random automatically generates a number in that range. Open a
PMR with IBM Guardium support and send the following output.
G10.ibm.com> support reset-password accessmgr random
Password for accessmgr account have been successfully reset using keyword:<passkey>
Please provide these number to Guardium Customer Service to receive actual account password.
ok
After you receive the new password, unlock the account.
1. Unlock the account with the following command: unlock accessmgr.
2. Log in as accessmgr and edit the accessmgr details to enter a temporary
password.
3. Log in again with the temporary password.
4. When you are prompted, enter a new password.
Aggregation
Cannot convert Guardium collector to aggregator
If you cannot convert a Guardium collector to a Central Manager aggregator,
reinstall Guardium and select aggregator during installation.
Symptoms
You try to convert a Guardium collector to an aggregator with the command store
unit type manager aggregator.
However, the following command shows that the unit type is still listed as
manager.
> show unit type
Manager
Causes
A collector cannot be converted to an aggregator with a CLI command.
Environment
Guardium collectors are affected.
Symptoms
You attempt to save new settings for the data export and get the error when you
click Apply to save the configuration:
Please correct the following errors and try again:
A test data file could not be sent to this host with the parameters given. Please confirm the host
Causes
Guardium attempts to log in with scp to the target host with the user and
password that are specified in the Data Export configuration. Then, Guardium
attempts to copy a test file to the target directory. The shared secret on this system
does not match the Shared Secret on the aggregator you are trying to set this
system to export to.
Environment
The Guardium configurations: collector and aggregator are affected.
For the file transfer operation, specify a user, host, and full path name for the
backup keys file. The user that you specify must have the authority to write
to the specified directory.
v On the collector, run this command to restore the shared secret key:
aggregator restore keys file <user@host:/path/filename>
3. Reset the shared secret for both appliances to be the same.
Note: If you change the shared secret for the aggregator, you need to reset the
shared secret for all other Guardium systems that export to it.
Symptoms
You set a report to run on the aggregator as part of an audit process with time
parameters, for example, Start of Last Day and End of Last Day. When you look
at the results of that report, the first time stamps are always at a set time
after 00.00, for example, 02.00. Additionally, the last time stamps are always
at a set time before 23.59, for example, 21.59. However, when you run the report
interactively, the time stamps are shown as expected.
Causes
The collector and aggregator time zones might not be set the same.
Environment
The aggregator is affected.
Verify that the time is correct on the appliance with the following commands.
show system clock datetime
store system clock datetime
The datetime can also be synchronized by using an NTP server with the following
commands.
show system ntp all
store system ntp state
store system ntp server
Symptoms
When you restore the configuration of an aggregator or the Central Manager, you
receive one or both of these messages.
ERROR 1031 (HY000) at line 1: Table storage engine for ’GUARD_USER_ACTIVITY_AUDIT’ doesn’t have th
ERROR 1031 (HY000) at line 1: Table storage engine for ’AGGREGATOR_ACTIVITY_LOG’ doesn’t have this
Causes
This error condition can occur if there is a temporary mismatch in the internal
databases.
Environment
The collector and aggregator are affected.
Central Management
A user is disabled in a Guardium managed unit, but shows as
enabled on Central Manager
If a user is disabled in a Guardium managed unit but shows as enabled on Central
Manager, run the Portal User Sync.
Symptoms
A user is disabled in the managed unit. The user's account is then re-enabled in
the Central Manager, but the user still shows as disabled in the managed unit
even though the account shows as enabled in the Central Manager.
Causes
The user's account in the Central Manager is not synchronized with the managed
unit.
Environment
A combination of the Central Manager, collector, or aggregator might be affected.
If the user's account between the managed unit and the Central Manager is still
not synchronized, contact the IBM Guardium Technical Support for assistance.
Symptoms
The Central Manager might not immediately recognize the new version of an
upgraded aggregator or collector it manages. Pushing a patch from the Central
Manager, which requires the new version, can result in an error that shows the
unit is still at the previous version.
The managed unit's old version still displays in the Central Management view of
the GUI. The unit responds to pings in that view, which implies good
communication between the Central Manager and managed units.
Causes
The GUI needs to be refreshed to pull the new version information.
Environment
The Guardium Central Manager is affected.
Symptoms
Import fails and you receive the following message in agg_progress.log.
* 05/20 04:00:01 --- Import cannot start
(guard_agg|turbine_backup.sh|restore_from_file.pl already running)
* 05/20 20:00:46 --- Merge cannot start - aggregation still active
Causes
There is a conflict with the Central Manager portal user sync.
Environment
The aggregator is affected.
Scheduled policy installation fails on managed units
If the scheduled policy installation fails on managed units, adjust the policy
installation schedules for managed units.
Symptoms
The scheduled policy installation fails on managed units. For example, on Monday,
collectors 1, 3, and 4 fail to install the policy. On Tuesday, collectors 1, 5,
and 6 fail to install the policy. The scheduled jobs exceptions report indicates
an error, but the managed units fail intermittently.
Causes
All of the managed units cannot be scheduled to install a policy at the same
time; stagger the installation schedules.
Environment
The managed units version 8.2, 9, and 10 are affected.
Symptoms
Selecting a certain custom group in the Central Management view of the
Guardium GUI displays an error instead of the managed units in the group.
org.apache.torque.TorqueException: Failed to select one and only one row.
After the exception appears, it shows for any group or view under the Central
Management tab. The exception even appears for groups that were previously
working until you log out of the GUI and log back in.
Causes
This torque exception might occur if one of the managed units in the group was
unregistered from the managed unit instead of the Central Manager.
Environment
Guardium Central Manager is affected.
Symptoms
The operating system fails when you install or upgrade Guardium S-TAP on AIX
6.1. The AIX crash memory dump shows the following stack trace.
Symptom Information:
Crash Location: [0000000000473260] execvex_common+1880
Component: COMP Exception Type: 131
Stack Trace:
[0000000000473260] execvex_common+1880
[000000000047744C] execve+A8
[F1000000C083E84C] my_execve+424
Causes
This is a known issue in AIX version 6.1 that causes a system crash in the
execvex_common code path.
Environment
To apply the Fix Pack AIX 6.1 6100-08-04 and resolve the problem, see
http://www-01.ibm.com/support/docview.wss?uid=isg1IV50179
Symptoms
After you configure DB2 COMM_EXIT_LIST to use Guardium libguard and restart the
DB2 server, you get the following error in the DB2 diag log.
2013-06-28-11.41.12.306169-300 E870950E486 LEVEL: Severe
PID : 15764 TID : 139905833363200 PROC : db2sysc 0
INSTANCE: db2001 NODE : 000
APPHDL : 0-16
HOSTNAME: dbhost1
EDUID : 54 EDUNAME: db2agent () 0
FUNCTION: DB2 UDB, DRDA Communication Manager, sqljcCommexitLogMessage,
probe:234
DATA #1 : String with size, 91 bytes
WARNING: Shmem_access /.guard_writer0 failed Error opening shared memory area errno=2 err=8
Causes
The following message indicates that the Guardium library was unable to create
the shared memory device that it requires.
Shmem_access /.guard_writer0 failed
Error opening shared memory area
errno=2
err=8
The DB2 instance owner must be added as an authorized user using the guardctl
command.
Environment
Guardium collectors that use DB2 Exit (Version 10) Integration with S-TAP are
affected.
Resolving the problem
The DB2 instance owner must be added as an authorized user by using the
guardctl command.
1. Stop the DB2 instance.
2. Authorize the DB2 instance owner.
3. Start the DB2 instance.
If the Guardium Installation Manager (GIM) is not installed, authorize the DB2
instance owner with the following command: <guardium_installdir>/bin/
guardctl authorize-user <db2 instance owner>
For example, if the DB2 instance owner is db2001 and GIM is installed under
/usr/local/gim, the command is /usr/local/gim/modules/ATAP/current/
files/bin/guardctl authorize-user db2001.
Symptoms
Guardium S-TAP does not collect shared memory traffic from Informix.
Causes
The inspection engine is not correctly configured.
Environment
Any S-TAP collection from any Informix system can be affected.
For Linux servers, A-TAP must be configured to collect any shared memory traffic.
Set the value to the same value as the --db-info parameter in the A-TAP
configuration before you activate A-TAP.
Symptoms
You observe a high CPU or I/O usage by the Guardium S-TAP process.
Causes
Environment
S-TAP installed in AIX
Symptoms
You encounter issues in Guardium relating to missing information from the login
packet such as database user name, source program, or database name.
Causes
Login packets might miss information when the session is too short.
Environment
The Guardium collector is affected.
Collect the S-TAP debug trace on the database server where the Guardium S-TAP
is installed and the slon trace on the collector.
Refer to the Technotes in the Related URL section for details on collecting each of
these traces.
1. Run both traces at the same time.
2. Generate a new database session that re-creates the issue while both traces are
running. Login packets are only sent when the database connection is open.
3. Add session start, client port, and server port to your existing report. Refresh
the report after you re-create the issue with the new connection.
4. Confirm that the traces are running during the session by checking the session
start.
5. Leave the session open for at least 5 minutes to allow the sniffer to analyze the
login packets.
6. Send the session with the missing fields. State the application name you used
to generate the session, database name, DB user you connected as, type of
connection, SQL statement, and any other pertinent details.
7. Collect the S-TAP debug trace file from the database server, the slon trace
from the Guardium collector, and the current sniffer must_gather output.
Symptoms
A message similar to the following is reported one or more times in Guardium
system log (messages) or Alerts:
Nanny process error condition. The nanny process killed the sniffer. VmData was number and was ove
Causes
The sniffer memory usage reached over 90% of the available memory and the
nanny process has restarted it, which is expected behavior of the product.
Environment
Guardium collector
Causes
The sniffer starts with six threads by default. When the number of threads
exceeds this limit, the sniffer cannot connect to the UNIX S-TAP because of
undefined behavior.
Environment
UNIX S-TAP is affected.
Symptoms
The S-TAP cannot start and issues the following messages:
mmap: Not enough space
Can’t initialize: Can’t mmap buffer file /tmp/stapbuf/192.168.100.107.0.buf
Error Initializing: Stap cannot initialize SQLGuard queue
Causes
The S-TAP is unable to allocate enough memory to map the buffer file.
Symptoms
The S-TAP process does not automatically start on Linux even though the
/etc/inittab file shows a correct U-TAP entry.
Causes
Various Linux distributions, such as Red Hat 6, deprecated the traditional init
daemon that uses the /etc/inittab file and replaced it with an init process
called Upstart. Upstart uses the /etc/event.d and /etc/init directories for the
automated start, stop, and respawn of processes such as U-TAP.
The S-TAP installer now checks for the existence of the /etc/event.d directory. If it
exists, then entries in /etc/init are created for use by Upstart. If it does not exist,
then entries in /etc/inittab are created for use by the traditional init daemon.
If /etc/event.d is missing for any reason on a system with Upstart, the inittab file
is populated instead. The S-TAP process does not start or respawn when needed.
Environment
S-TAP running on Linux is affected.
Resolving the problem
Check for the existence of the /etc/event.d/ directory.
If the /etc/event.d/ directory does not exist, complete the following steps to
resolve the situation.
1. Uninstall the existing S-TAP installation.
2. As the root user, create the /etc/event.d directory (mkdir /etc/event.d).
3. Install the S-TAP.
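The installer's decision described above reduces to a single directory check. A minimal sketch of that logic (the function and its root parameter are illustrative, added only so the check can be exercised against an arbitrary path):

```shell
init_style() {
  # $1 = filesystem root to inspect (use / on a real server).
  # Mirrors the S-TAP installer's check: if /etc/event.d exists, Upstart
  # entries go under /etc/init; otherwise /etc/inittab is used.
  if [ -d "$1/etc/event.d" ]; then
    echo upstart
  else
    echo inittab
  fi
}
```

On an Upstart system where /etc/event.d is missing, this check wrongly selects inittab, which is exactly why creating the directory before reinstalling fixes the respawn problem.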
Symptoms
You see the following message in the S-TAP event log: LOG_ERR: Not FIPS 140-2
compliant - use_tls=0 failover_tls=1.
Causes
FIPS 140-2 is a U.S. government security standard for cryptographic modules. If
you see this message, it indicates that the S-TAP configuration does not meet
government requirements.
Note: This message does not indicate that there is an error with the S-TAP.
Environment
Guardium S-TAP is affected.
Any other combination turns off FIPS mode and results in an error message.
You can change the configuration by using one of the following methods.
1. Click Manage > Activity Monitoring > S-TAP Control.
2. Modify the details section for the relevant S-TAP and use the TLS check boxes.
3. Restart the S-TAP.
You can also edit the guard_tap.ini file on the DB server directly and restart the
S-TAP.
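The flagged setting combination can be spotted in guard_tap.ini with a small check like the following. This is a sketch that assumes one key=value pair per line with no surrounding spaces; it is not the S-TAP's own validation logic, and check_fips_tls is a hypothetical helper.

```shell
# Sketch: detect the non-compliant use_tls=0 / failover_tls=1 combination
# in a guard_tap.ini file.
check_fips_tls() {   # $1 = path to guard_tap.ini
    use_tls=$(sed -n 's/^use_tls=//p' "$1")
    failover_tls=$(sed -n 's/^failover_tls=//p' "$1")
    if [ "$use_tls" = "0" ] && [ "$failover_tls" = "1" ]; then
        echo "Not FIPS 140-2 compliant - use_tls=0 failover_tls=1"
    else
        echo "ok"
    fi
}
```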
Symptoms
The K-TAP kernel module is still present after the uninstallation of S-TAP on a
Solaris server.
Causes
The server did not restart properly to remove the K-TAP kernel module on Solaris
servers.
Symptoms
UNIX S-TAP reads only the first 16 port_range definitions in the inspection engine
settings.
Causes
By design, K-TAP can read only 16 port_range definitions.
Environment
UNIX S-TAP that uses K-TAP and defines more than 16 inspection engines is
affected.
The following example defines listening ports 50000 - 50020 as target ports to be
monitored.
[DB_0]
port_range_end=50020
port_range_start=50000
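Because K-TAP silently ignores definitions past the sixteenth, it can be useful to count them before deploying a configuration. The following sketch assumes each inspection engine contributes one port_range_start line; check_port_ranges is a hypothetical helper, not a product utility.

```shell
# Sketch: count port_range definitions in a guard_tap.ini and warn when
# there are more than the 16 that K-TAP reads.
check_port_ranges() {   # $1 = path to guard_tap.ini
    n=$(grep -c '^port_range_start=' "$1")
    if [ "$n" -gt 16 ]; then
        echo "warning: $n port ranges defined; K-TAP reads only the first 16"
    else
        echo "ok: $n port ranges"
    fi
}
```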
Windows S-TAP service crashes on startup with error ID 1000
If the S-TAP crashes with error ID 1000, check the SOFTWARE_TAP_IP parameter
in the guard_tap.ini configuration file.
Symptoms
The S-TAP on a Windows server does not start. The Windows event log shows
errors from Guardium S-TAP with event ID 1000.
Log Name: Application
Source: Application Error
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
Description:
Faulting application name: guardium_stapr.exe, version: 9.0.0.0
Exception code: 0x40000015
Causes
Environment
Symptoms
z/OS S-TAP fails to show active on the Guardium system after you start it for the
first time. The policy is correctly configured with a DB2 or IMS Collection Profile
and installed. The z/OS S-TAP is properly configured to use port 16022. All
messages on the mainframe indicate connectivity.
Causes
If the collector has not been actively used as a collector since it was built and
configured, the sniffer appears to time out on port 16022.
Environment
z/OS is affected.
Symptoms
When you attempt to install the Guardium Installation Manager (GIM) on RHEL6,
you see the following error message:
cp: cannot stat `/usr/local/GIM/modules/central_logger.log': No such file or directory
Installation failed
Causes
Various Linux distributions, such as Red Hat Enterprise Linux 6, deprecated the
traditional init daemon that uses the /etc/inittab file. They replaced it with an
init process called Upstart. Upstart uses the /etc/event.d and /etc/init
directories for the automated start, stop, and respawn of processes.
Environment
The Guardium Installation Manager (GIM) is affected.
Symptoms
After you successfully installed the Guardium Installation Manager (GIM) on
Windows, you notice that the service is not running.
Causes
GIM is a 32-bit application. On 64-bit Windows, GIM might be installed in
Program Files instead of Program Files (x86).
Environment
GIM is affected.
Symptoms
You receive an error similar to the following when you run the S-TAP installer to
install Guardium S-TAP on UNIX or Linux.
./guard-stap-v81_r26808_1-aix-6.1-aix-powerpc.sh
Verifying archive integrity...Error in checksums: 2082112805 is
different from 3728267449
Causes
The installer file is corrupted. The file became corrupted when the file was
transferred to the database server or when the product was downloaded.
Environment
S-TAP on UNIX or Linux is affected.
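One way to catch this corruption before running the installer is to compare the file's cksum value against the one published for the download. The helper below is an illustrative sketch; verify_installer is a hypothetical name, and the expected value must come from your download source.

```shell
# Sketch: compare an installer file's CRC (as printed by cksum) against an
# expected value before running it.
verify_installer() {   # $1 = installer file, $2 = expected cksum value
    actual=$(cksum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "ok"
    else
        echo "corrupt: got $actual, expected $2"
    fi
}
```

If the check reports corruption, transfer the installer again in binary mode and re-verify before retrying the installation.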
Symptoms
The S-TAP installation fails with the following error message.
A directory called ’guardium’ containing Guardium software needs to be created under a path provid
Enter the path prefix [/usr/local]? /opt/guardium
Directory /opt/guardium/guardium/guard_stap does not exist, would you like to create it [Y/n]? Y
Run STAP as root, or as user ’guardium’ [R/u]? R
Please be patient... This might take more than a minute.
Copying installation files...
cp: illegal option -- f
UX:vxfs cp: INFO: V-3-21462: Usage: cp [-i] [-p] f1 f2
cp [-i] [-p] f1 ... fn d1
cp [-i] [-p] [-r|-R] [-e { force | ignore | warn}] d1 d2
Causes
The path to /usr/bin/cp is different from what the installer expects.
Environment
The UNIX/Linux database server is affected.
If which cp returns a value other than /usr/bin/cp, run the command export
PATH=/usr/sbin:/usr/bin:$PATH.
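The resolution can be expressed as a small helper that classifies the result of which cp. This is an illustrative sketch, not a product utility; check_cp_path is a hypothetical name.

```shell
# Sketch: given the path that `which cp` returned, print "ok" if it is the
# expected /usr/bin/cp, or the PATH fix from the resolution above.
check_cp_path() {   # $1 = output of `which cp`
    if [ "$1" = "/usr/bin/cp" ]; then
        echo "ok"
    else
        echo 'run: export PATH=/usr/sbin:/usr/bin:$PATH'
    fi
}
```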
Symptoms
When you install a new patch it does not complete. The status column in the CLI
command show system patch installed shows one of the following messages.
STEP: Setting “java” off
STEP: Setting “amei” off
STEP: Setting “sqlw” off
Environment
The Collector, Aggregator, and Central Manager are affected.
When you attempt to install S-TAP, you receive the following error message.
Tap_controller::init failed Opening pseudo device /dev/guard_ktap No such file or directory
Causes
There are many possible reasons why the K-TAP device creation can fail. The
following are the most common causes.
v You did not use the module files, including the K-TAP module for the Linux
kernel.
v You did not specify the Flex Loading option to load the K-TAP module from the
module files.
v A previous K-TAP module from an old installation is still running or installed.
Environment
All Linux and UNIX operating systems in which the IBM Guardium S-TAP
product can be installed are affected.
4. Verify that the S-TAP process is not running with the command ps -ef | grep
guard_stap.
5. Uninstall the S-TAP.
6. Confirm that the S-TAP directory is gone.
7. Check whether a K-TAP module is still running from an old installation. Use
the appropriate command for your operating system.
Linux : lsmod | grep ktap
Solaris : modinfo | grep tap
HP-UX : lsdev | grep tap
AIX : genkex | grep tap
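Step 7 can be scripted by dispatching on the operating system name. The sketch below maps the usual uname -s strings (an assumption on my part) to the commands listed above; ktap_check_cmd is a hypothetical helper.

```shell
# Sketch: pick the K-TAP module-listing command for the current platform,
# based on the output of `uname -s`.
ktap_check_cmd() {   # $1 = output of `uname -s`
    case "$1" in
        Linux) echo 'lsmod | grep ktap' ;;
        SunOS) echo 'modinfo | grep tap' ;;
        HP-UX) echo 'lsdev | grep tap' ;;
        AIX)   echo 'genkex | grep tap' ;;
        *)     echo 'unsupported' ;;
    esac
}
```

For example, `eval "$(ktap_check_cmd "$(uname -s)")"` would run the appropriate check on the current host.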
Symptoms
When you install the Guardium appliance in VMWare, you receive the following
error:
Error Partitioning
Could not allocate requested partitions:
Partitioning failed: Could not allocate partitions as primary partitions.
Not enough space left to create partition for /boot.
Causes
When you install the Guardium system with VMWare, if you select Typical,
VMWare uses configuration parameters that are predefined for the OS type in
VMWare. These configuration parameters might not be suitable for this installation.
Environment
All Guardium configurations (collector, aggregator, central manager) are affected.
Symptoms
Patch installation in Guardium fails with the error patch.reg: No such file or
directory.
Causes
The following cases can cause the patch installation to fail.
v The patch was not downloaded in binary mode and corrupted the file.
v The compressed file itself was uploaded to the Guardium system.
v The patch was received from Guardium support and has the PMR number
prefixed to the file name.
v The patch was uploaded to the Guardium system from a Windows FTP server.
Environment
The collector, aggregator, and central manager are affected.
If the compressed file itself was uploaded to the Guardium system, extract the
compressed file and upload only the patch.
If there is a PMR number prefixed to the file name, remove the number and then
upload the patch to the Guardium system.
If the patch is uploaded from a Windows FTP server, specify the exact file name
with the correct case.
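Removing the PMR prefix can be scripted. The sketch below assumes the prefix is digits (possibly with dots or commas) followed by a hyphen, which may not match every support file name; strip_pmr_prefix and the file names in the test are invented for illustration.

```shell
# Sketch: strip an assumed leading PMR number (digits, dots, or commas
# followed by a hyphen) from a patch file name before upload.
strip_pmr_prefix() {   # $1 = patch file name
    echo "$1" | sed 's/^[0-9][0-9.,]*-//'
}
```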
Chapter 1. S-TAPs and other agents
The Guardium S-TAP is a lightweight software agent that is installed on a
database server or file server system. The S-TAP monitors database or file traffic
and forwards information about that traffic to a Guardium system. Other agents,
including K-TAP and A-TAP, perform complementary functions.
Installing S-TAPs
You can install S-TAPs on servers with databases or file systems that you want to
monitor. There are several options for installing an S-TAP.
When you install S-TAP on a database server, you must provide the IP address or
fully qualified host name of the Guardium system that will receive data from the
S-TAP. After the S-TAP has connected to the Guardium system, all of the remaining
S-TAP configuration parameters can be set from the Administration Console on the
Guardium system.
Note: During the installation, the S-TAP installer checks whether the K-TAP is
available for the kernel version. If the K-TAP cannot be installed or does not start
up, the installer asks whether you want to continue the installation.
The installation directory for the S-TAP must be empty or not exist. You cannot
install an S-TAP into a directory that already contains any files.
Before installing an S-TAP, check the System Requirements for IBM® Security
Guardium version 10.0 to make sure that your database and operating system
versions are supported.
There are two major tasks you need to perform to install and start using an S-TAP:
1. Install the S-TAP on a database server.
2. Configure the S-TAP to monitor the appropriate traffic.
Installation methods
The recommended method for installing S-TAP and other agents on your database
servers is the Guardium Installation Manager (GIM). GIM enables you to install,
upgrade, and manage agents on individual servers or groups of servers. GIM also
monitors processes that were installed under its control. You can use GIM to
modify parameters and perform other management tasks. See the Guardium
Installation Manager section for details about GIM.
In some cases, you might prefer to install the S-TAP locally. You can do so by
using an interactive installer, or by using the command line. These methods are
described in this section.
When you install an S-TAP on a UNIX server, the installation program checks
whether the guardium group exists. If the group does not exist, the installation
1
program creates it. If you use certain components or features, such as A-TAP or
DB2 Exit, you must add users to this group to ensure proper functioning. These
requirements are described in the relevant sections of this information.
The following tables list database components that must be installed, at a certain
release or patch level, or configured to support S-TAP.
Table 1. Database components
Component Prerequisite
CAS under Windows If CAS will monitor the MS SQL Server event log, the dumpel.exe
program from the Microsoft Windows Resource Kit must be
installed on the database server. Check if this program exists in
the c:\Program Files\Resource Kit\ directory. If not, you can
download it from Microsoft.
S-TAP, all UNIX If the Tee monitoring method and the Hunter component are
used, Perl 5.8.0 or later is required.
S-TAP on Red Hat Make version 3.81 or later. To view your version of the make
Linux V4 utility, issue this command: make -v
Oracle ASO, SSL (all) AIX®: LDR_PRELOAD support is required. For AIX 5.3,
technology level 5 or later is required; AIX 6 and 7 include this support.
Note: During installation or upgrade with Java 1.6.0, the JVM might generate an
error that indicates it is unable to locate a DLL: The dynamic link library
MSVCR71.dll could not be found in the specified path. To remedy this error, use
one of two workarounds: 1) use a different JVM release, if one is available on the
system, or 2) download the DLL from Microsoft and place it in the Windows
system directory.
Table 2. Requirements per platform (HP-UX, Solaris, AIX, and Linux)
Files that must exist (all platforms): tar, awk, grep, tr; /tmp or perl
S-TAP program files: AIX: 90 MB; HP-UX: 360 MB; Linux: 176 MB; Solaris: 243 MB;
Windows: 138 MB
CAS program files, including Java: AIX: 309 MB; HP-UX: 630 MB; Linux: 405 MB;
Solaris: 390 MB; Windows: 277 MB
Perl (UNIX only): If the Tee data collection mechanism and its optional Hunter
component are used, Perl 5.8.0 is required. If it has not been installed
previously, you must obtain and install it yourself. For space requirements or to
download Perl, see perl.org.
The installation process for each component creates a log file. Locations include
/var/tmp and the component installation directory.
For example, to check that port 16018 (the port Guardium uses for TLS) is
reachable at IP address 192.168.3.104, you would enter the following command:
nmap -p 16018 192.168.3.104
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port State Service
16018/tcp open unknown
Note: If you are using Windows 2012 R2, make sure that Microsoft .NET 3.5
(which includes 2.0) is available on the server. Microsoft .NET 3.5 is not loaded
by default, but it is required. For further information, search for the keywords
"Microsoft .NET Framework 3.5 Deployment Considerations" or see
https://technet.microsoft.com/en-us/library/dn482066.aspx
Note: The log file for Windows S-TAP installation is viewable from this location,
c:\IBM Windows S-TAP.ctl
Note: If using Load Balancer options, review the Windows S-TAP installation
information in “SQLGuard parameters” on page 110.
Note: A Windows S-TAP will not stay connected to more than three Guardium
systems that participate in mirroring traffic.
Note: Windows S-TAP has limited support for IPV6 tunneled over IPV4. The IPV6
traffic is generated by LHMON using the IPV4 addresses of the ISATAP tunnel.
You might not want the S-TAP to perform this automatic discovery. If you want to
prevent it, you must configure the installation before you begin.
See Installing GIM on the Database Server (Windows) and Guardium Installation
Manager (GIM) - GUI for additional information on installing and using GIM to
install Guardium components in a Windows environment.
Table 6. Parameters applicable to all .NET installers
Parameter Description
-UNATTENDED Install silently. (Does not require value)
-INSTALLPATH This is the install directory. Default install path is C:\Program
Files\IBM\Windows S-TAP
-UNINSTALL Uninstall
-CUSTOMER To change customer name
-COMPANY To change company name
-SERVICEUSER To specify a user to run the service under
-SERVICEPASSWORD
The password for the user
These parameters are mostly off by default, except for the DB2-related ones;
setting them to "ON" turns them on. DB2SHMEM does not turn off if DB2 is
installed on the database server; this might also be the behavior for other
databases in this list. If the value is not "ON", it defaults to "OFF".
Table 8. Other S-TAP Parameters
Parameter Description
-TAPHOST This is the local/client IP
-APPLIANCE This is the SQLGUARD IP. To set up multiple appliances, specify
this parameter multiple times, each time with a new value.
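For example, a silent install command line that repeats -APPLIANCE can be assembled as follows. This sketch only builds and prints the command string; the setup.exe name, the helper name build_install_cmd, and the IP addresses are illustrative, not a tested invocation of the real installer.

```shell
# Sketch: assemble a silent-install command line from the parameters in
# Tables 6 and 8, repeating -APPLIANCE once per Guardium system.
build_install_cmd() {   # $1 = TAPHOST IP, remaining args = appliance IPs
    taphost="$1"; shift
    cmd="setup.exe -UNATTENDED -TAPHOST $taphost"
    for ip in "$@"; do
        cmd="$cmd -APPLIANCE $ip"
    done
    echo "$cmd"
}
```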
Note: Certain files from previous releases will not be fully removed until
the next scheduled reboot.
Remove Previous Windows S-TAP
This procedure removes the installed S-TAP while making sure that the
configuration file is saved for future use. If you want to uninstall the product
without saving the configuration, start with Step 4.
1. Log on to the database server system using a system administrator
account.
2. Copy the current S-TAP configuration file to a safe location (a
non-Guardium directory). Look for this file in "C:\Program Files
(x86)\IBM\Windows S-TAP\Bin\guard_tap.ini"
3. From the Add/Remove Programs control panel, remove
GUARDIUM_STAP.
Note: You can also remove IBM Security Guardium Windows S-TAP through the
silent uninstall option of the setup program: setup -UNINSTALL
To install UNIX S-TAP, run the appropriate installation script, as detailed in the
following steps. If any stage of the installation fails, undo all of the steps up to that
point. Do not leave S-TAP partially installed.
1. Log on to the database server system using the root account.
2. Some companies require the use of native installers to register packages on the
system, or to perform other housekeeping functions. If this is a requirement
for you, see Installing an S-TAP with a native installer before continuing with
the next step.
3. Copy the appropriate S-TAP installer script from the Guardium Installation
DVD (or network), to the local disk on the database server. The installer script
name identifies the database server operating system. See full list of UNIX
Installer Files to select the correct file.
4. For Linux only: The S-TAP installer includes modules for the different Linux
kernels. If the module for your kernel is not included in the modules that are
built into the S-TAP installer, copy the module file to the system via FTP or
SSH, and then use the --modules option to specify the K-TAP module,
including its full path.
For example, assuming that the modules are in the /tmp directory:
./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh --
--modules /tmp/modules-guard-8.0.xx_r20992_1.tgz
Or, for a non-interactive installation:
./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh --
--modules /tmp/modules-guard-8.0.xx_r20992_1.tgz --ni --tls 1 -k
--dir /usr/local --tapip 19.12.144.102 --sqlguardip 19.12.148.109
5. For any modules needed that are not supported in the current distribution,
obtain via FTP the modules-<stap version>.tgz file and copy it to the /tmp/
folder on the destination server.
6. Decide how you will run the installer:
v Non-interactive mode is recommended for larger S-TAP deployments (10 or
more servers). To use this mode, skip the remainder of this procedure and
go to Install UNIX S-TAP from the command line.
v Interactive mode is recommended for smaller deployments (fewer than 10
servers). Continue with this procedure for interactive mode.
7. Run the installer and respond to the legal notification and other prompts, as
directed by the installer. We suggest that you accept all of the supplied
defaults.
The following is a description of each parameter. Variables are shown enclosed in
angle brackets: < >.
guard-stap-setup -- [--modules <linux modules file>] [--ni]
[--tls <0|1>] [-k|-t] [--dir <dir> | --tapip <tapip> | --sqlguardip <sqlguardip>
| --tapfile <file> | --ktap_allow_module_combos]
[--presets <presets-file> | <preset-option-list>...]
If there are multiple lines containing the same hostname, the first one is
used.
Loader Flexibility
Loader Flexibility aids in the installation of currently built modules when an exact
match between module and kernel version does not exist.
v Loader flexibility is enabled only if it is explicitly requested at installation time.
v When you use the non-interactive installer, pass the
--ktap_allow_module_combos option.
v If you install interactively, answer y to the question that is posed after you edit
the guard_tap.ini file (and set ktap_installed=1). The loader flexibility default
is disabled. This means that the K-TAP is disabled if the booted kernel is
not directly supported or tested as working with another module.
v If you want to switch from not allowing the loader to try module combinations,
you need root access on the database machine to perform the instructions that
are printed in /var/log/messages when no module was detected as available
for the running kernel.
v When you perform a K-TAP live update, whatever was specified to the question
of whether to try module combinations (implicitly or explicitly) is
applied to the K-TAP that is installed as part of the update. The same procedure
applies for switching from not allowing module combinations, as printed in
/var/log/messages.
You can supply all of the parameters needed to install Unix CAS from the
command line.
The following is a description of each parameter. Variables are shown enclosed in
angle brackets: < >.
usage: guard-cas-setup -- install --java-home <JAVA_HOME> --install-path <INSTALL_PATH> --stap-conf <
Depending on the install or uninstall scenario, you might need to start and stop
CAS from the command line. One scenario is not supplying the --stap-conf
path to the guard_tap.ini file (an optional parameter), which results in CAS
not starting. Use the following methods when you need to start or stop CAS:
1. Log on to the database server system using the root account.
2. For Red Hat Enterprise Linux 6
a. Stop / Start CAS using the stop cas or start cas commands.
3. All others:
a. Comment out (if stopping CAS) or remove comment (if starting CAS) the
cas agent entry in the /etc/inittab file. In a default installation, this
statement should look like this:
cas:<nnnn>::respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/
b. Save the /etc/inittab file.
c. Run the init q command
4. To validate whether CAS is running, issue the ps -fe | grep cas
command.
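For the non-Upstart case (step 3), commenting the cas entry in and out can be scripted. The sketch below edits whatever file you pass it; toggle_cas_entry is a hypothetical helper, and on a real server you would target /etc/inittab as root and then run init q.

```shell
# Sketch: comment out (stop) or restore (start) the cas entry in an
# inittab-style file, as in step 3a of the procedure above.
toggle_cas_entry() {   # $1 = inittab file, $2 = stop|start
    if [ "$2" = "stop" ]; then
        sed 's/^cas:/#cas:/' "$1" > "$1.tmp"
    else
        sed 's/^#cas:/cas:/' "$1" > "$1.tmp"
    fi
    mv "$1.tmp" "$1"
}
```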
Usage:
./guard-stap-update <full_path_Guard-Installer.sh> <existing Guard-Install-Dir> [<Linux-Kernel-Module
Place the latest installer along with the guard-stap-update utility in the database
server folder /var/tmp and specify the install directory for the existing S-TAP of
As an example:
guard-ktap-update-doberman_r19987_1-sunos-5.9-solaris-sparc.sh 2>&1 | tee /tmp/output.save.txt
Perform this procedure before installing a new version of S-TAP if you want to
save the old configuration file. For an upgrade, we recommend that you use the
Upgrade Procedure Utility, as previously described.
Note: If you have installed A-TAP, you must deactivate it before attempting any
upgrade/install operations; see the description of the A-TAP deactivation
command, in the Configure A-TAP topic.
If you are removing a previous version of S-TAP that used K-TAP, you will need to
reboot the database server. If K-TAP has been installed, you will have a device file
named: /dev/guard_ktap.
1. Log on to the database server system using the root account.
2. If un-installing version 6.0 or later of S-TAP
a. For Red Hat Enterprise Linux 6
1) Stop S-TAP using the stop utap command.
b. All others:
1) Remove the utap agent entry in the /etc/inittab file (regardless of
whether or not it has been commented). In a default installation, this
statement should look like this: utap:<nnnn>:respawn:/usr/local/
guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/
guard_tap.ini
2) Save the /etc/inittab file.
3) Run the init q command
c. You can then run ps -ef | grep stap to verify that S-TAP is no longer
running.
3. Copy the S-TAP configuration file to a safe location (a non-Guardium
directory). By default, the full path name is: /usr/local/guardium/guard_stap/
guard_tap.ini.
You can use this file later if you have to re-install this version of the software,
or you can refer to it when configuring an updated version of S-TAP. Never use
an older configuration file directly with a newer version of the software: newer
properties might be missing, and the defaults that are taken might result in
unexpected behavior when you start S-TAP.
4. Run the uninstall script. For example, if the default directory has been used:
[root@yourserver ~]# /usr/local/guardium/guard_stap/uninstall
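Steps 3 and 4 can be combined into a small backup helper run before the uninstall script. backup_tap_ini is a hypothetical name; the default S-TAP directory is the documented /usr/local/guardium/guard_stap, and both paths are parameters so you can adapt them.

```shell
# Sketch of step 3: copy guard_tap.ini to a safe, non-Guardium location
# before running the uninstall script.
backup_tap_ini() {   # $1 = S-TAP directory, $2 = backup directory
    cp "$1/guard_tap.ini" "$2/guard_tap.ini.bak" \
        && echo "saved to $2/guard_tap.ini.bak"
}
```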
A-TAP Installation
A-TAP installs as a part of S-TAP installation.
Note: A-TAP depends on K-TAP, so make sure that K-TAP is installed as well. In
particular, the ktap_installed parameter must be set to 1 in the guard_tap.ini file.
Note: For the installation procedure on Solaris Zones, see Procedure to Make
A-TAP Work Under Solaris Zones. The guardctl utility does NOT automatically
add the database user to the guardium group. This matches the behavior of
guard_tap.ini encryption=1 A-TAP-based activation; that is, the database OS user
is never added automatically to the guardium group.
Note: The database must be stopped when either a user is being added to the
guardium group or when activating A-TAP using the guardctl utility.
Note: The database must be restarted after performing an upgrade for modules
that include ATAP.
Note: The guardium group can be removed on most operating systems with
groupdel guardium. However, after removal, only the guard_ktap_loader
parameter can correctly re-create it and change the K-TAP device permissions.
A-TAP Uninstallation/Upgrade
The is-active command returns true if there is at least one active instance and
false otherwise.
To check that port 16018 (the port Guardium uses for TLS) is reachable at IP
address 192.168.3.104, you would enter the following command:
> nmap -p 16018 192.168.3.104
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port State Service
16018/tcp open unknown
Initial setup
Both versions will be in the Guardium installation directory in the lib directory. On
Linux servers, the 64-bit version will be found in lib64.
Library names
Linux/Solaris/HP-UX
libguard_db2_exit_32.so
libguard_db2_exit_64.so
AIX
libguard_db2_exit_32.a
libguard_db2_exit_64.a
As the db2 user, run db2level. The output is similar to the following:
DB21085I Instance db2inst1 uses 64 bits and DB2 code release SQL09070.
mkdir $DB2_HOME/sqllib/security/plugin/commexit
OR
mkdir $DB2_HOME/sqllib/security64/plugin/commexit.
(This is done only the first time the library is installed, as the directory does not
exist)
NOTE: If the copy fails with an error that ends in Text file busy, remove the
file first and then copy again.
Change the owner of the commexit directory and the library files in that directory.
For example:
root@buzzard:~# su - db2inst2
Oracle Corporation SunOS 5.11 snv_151a November 2010
db2inst2@buzzard:/export/home/db2inst2$ id
uid=109(db2inst2) gid=102(db2iadm1) groups=102(db2iadm1),101(dasadm1)
When the S-TAP is installed, it creates the guardium group. You must add the DB2
user to this group before starting the database with the exit library loaded. This
requirement increases the security of shared memory regions that are created by
the S-TAP. You can use the guardctl command to add the user. For example, if the
DB2 user is named db2inst2: guardctl authorize-user db2inst2
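Before restarting the database, you can confirm that the group membership took effect. The sketch below uses the standard id -Gn listing; user_in_group is a hypothetical helper, and the db2inst2/guardium names come from the example above.

```shell
# Sketch: check whether a user belongs to a given group, e.g.
# user_in_group db2inst2 guardium before loading the exit library.
user_in_group() {   # $1 = user, $2 = group
    id -Gn "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}
```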
Set up DB2
OR
Restart DB2
~/sqllib/db2dump/db2diag.log
Set up Zones/WPARs
Start the S-TAP. (On WPARs, you must manually copy or add the utap server
entry in the inittab file.)
Log Level
When the S-TAP log level is 10, debug information is logged into both the S-TAP
log and the db2_exit log (db2diag.log).
When the S-TAP log level is 11, debug information is logged only into the
db2_exit log (db2diag.log).
Informix EXIT is similar to DB2 EXIT. It supports firewall and UID chain.
A native installer ensures that S-TAP is registered in the operating system asset
repository. This registration is not required by Guardium for the installation of the
Use the following command to generate the AIX native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate native installer file (.bff file) from the Guardium S-TAP
Installation DVD, for your version of AIX.
2. Enter the following command on a clean server (no previous S-TAP installation)
to extract the shell installer for AIX, substituting the appropriate .bff file
name:
installp -aX -d/var/tmp/<filename> SqlGuardInstaller
Example:
installp -aX -d/var/tmp/guard-stap-guard-8.0.00rc1_r20934_1-aix-5.2-aix-powerpc.bff SqlGuardIns
The shell installer that is extracted, named guardium, is under /usr/local.
3. Continue with Step 3 of the installation procedure, running the generated
installation script rather than the default installation script for the operating
system version.
4. Return to Step 5 of the UNIX Installation procedure.
Use the following command to remove AIX S-TAP using the native installer:
/usr/lib/instl/sm_inst installp_cmd -u -f 'filename'
Example
/usr/lib/instl/sm_inst installp_cmd -u -f 'SqlGuardInstaller'
To remove HPUX S-TAP using the native installer, use the following command:
swremove @<hostname>:/var/spool/sw
Use the following command to generate the Linux native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate Linux native installer file (.rpm file) on the Guardium
S-TAP Installation DVD, for your version of Linux.
2. Enter the rpm command, supplying the selected file name where filename is the
native installer file:
rpm -ivh <filename> --ignorearch
The shell installer is extracted under /usr/local/guardium
3. Continue with Step 3 of the installation procedure, running the extracted shell
installer script rather than the default installation script for the operating
system version.
4. Return to Step 5 of the UNIX Installation procedure.
To remove S-TAP using the native installer, use the following command (selecting
filename):
rpm -e <filename>
Use the following command to generate the Solaris native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate native installer file (.pkg file) on the Guardium S-TAP
Installation DVD, for your version of Solaris:
2. Enter the pkgadd command to run the installer using the selected file:
pkgadd -d <filename>.pkg
The shell installer is extracted under /usr/local/guardium
3. Continue with Step 3 of the installation procedure, running the extracted shell
installer script rather than the default installation script for the operating
system version.
4. Return to Step 5 of the UNIX Installation procedure.
When you install an S-TAP on a Linux system, the installation process checks the
Linux kernel to determine whether a K-TAP has been created to work with that
kernel. If the installation process does not find a matching K-TAP, it will attempt to
build one to match your Linux kernel.
Most of the K-TAP code is independent of the kernel. The installer for version 9.1
provides a new layer of code, which enables the kernel-independent code to
interact with your kernel. This new layer is delivered as proprietary source code.
The installer builds the complete K-TAP by compiling this proprietary source code
against your Linux kernel. This produces a K-TAP specific to your Linux
distribution.
This process requires that the standard kernel development utilities, provided with
the Linux distribution, are present on the database server where the K-TAP is to be
built. The development package must be an exact match for the kernel. The gcc
compiler and version 3.81 or later of the make utility are also required.
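These prerequisites can be checked with a short shell script before starting the installation. This is an illustrative sketch, not part of the Guardium installer; the `/lib/modules/<kernel>/build` path is the usual location of the kernel development headers on common distributions, and the `version_ge` helper is our own.

```shell
#!/bin/sh
# Sketch: verify K-TAP build prerequisites on a Linux database server.
# version_ge A B succeeds when version A >= version B (uses GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r)
# The kernel development package normally provides /lib/modules/<kernel>/build.
if [ -d "/lib/modules/$kernel/build" ]; then
  echo "kernel headers for $kernel: found"
else
  echo "kernel headers for $kernel: MISSING"
fi

command -v gcc >/dev/null 2>&1 && echo "gcc: found" || echo "gcc: MISSING"

# Extract the version number from the first line of `make --version`.
make_ver=$(make --version 2>/dev/null | sed -n '1s/.* //p')
if [ -n "$make_ver" ] && version_ge "$make_ver" 3.81; then
  echo "make $make_ver: OK (>= 3.81)"
else
  echo "make: MISSING or older than 3.81"
fi
```

If any check reports MISSING, install the matching development package before re-running the S-TAP installer.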
If you have several systems running the same Linux distribution, you can build a
K-TAP on one system and copy it to the others. For example, you might build a
K-TAP on a test system and then copy it to one or more production database
servers after testing. If you use the Guardium Installation Manager (GIM) to install
the S-TAP, GIM can automatically copy the bundle containing the new K-TAP to a
Guardium system from which you can distribute it to other database servers.
When the installer attempts to build a K-TAP module, you see messages issued by
guard-ktap-loader. These messages can include:
v The build is being attempted
v The build has completed
v The K-TAP has been loaded
v The build cannot be attempted because the kernel development package is not
found
“Copying a new K-TAP module to other systems”
When you build a new K-TAP module for a Linux database server, you can
copy that module to other database servers that run the same Linux
distribution.
“Copying a K-TAP module by using GIM” on page 182
If you build a custom K-TAP module for a Linux database server, you can use
GIM to copy that module to other Linux database servers.
If you use the Guardium Installation Manager (GIM) to manage agents on your
database servers, see the GIM section of this information for the procedure to use.
Procedure
1. Change directory to /usr/local/guardium/guard_stap/ktap/current/ and run
./guard_ktap_append_modules to add the locally built modules to modules.tgz.
2. Copy the updated modules.tgz file to the target server.
3. Log in to the target server and change directory to /usr/local/guardium/
guard_stap/ktap/current/.
4. Run the K-TAP loader with the retry parameter and the full path to the
updated modules.tgz file. For example:
guard_ktap_loader retry /tmp/modules-9.0.0_r55927_v90_1.tgz
5. Restart the S-TAP to connect it to the new K-TAP module.
Results
The custom K-TAP module is ready to use on the target system. Repeat this
procedure for each matching Linux system to which you want to deploy the
K-TAP module.
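The procedure above can be summarized as a short shell script. This is a dry-run sketch only: the target host name is a placeholder, each command is printed rather than executed, and the mechanism for copying files between servers (scp/ssh here) is up to you.

```shell
#!/bin/sh
# Dry-run sketch of copying a locally built K-TAP module to another server.
# TARGET is a hypothetical host; MODULES matches the file produced in step 1.
TARGET=db-prod-01
KTAP_DIR=/usr/local/guardium/guard_stap/ktap/current
MODULES="$KTAP_DIR/modules.tgz"

run() { echo "+ $*"; }   # print each command instead of executing it

run cd "$KTAP_DIR"
run ./guard_ktap_append_modules            # fold locally built modules into modules.tgz
run scp "$MODULES" "$TARGET:$KTAP_DIR/"    # copy the updated archive to the target
run ssh "$TARGET" "cd $KTAP_DIR && ./guard_ktap_loader retry $MODULES"
# Finally, restart the S-TAP on the target so it connects to the new K-TAP module.
```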
“Building a K-TAP on Linux” on page 25
There are hundreds of Linux distributions available, and the list is growing.
This means that there might not be a K-TAP already available for your Linux
distribution. If the correct K-TAP is not available, the S-TAP installation process
can build it for you.
“Copying a K-TAP module by using GIM” on page 182
If you build a custom K-TAP module for a Linux database server, you can use
GIM to copy that module to other Linux database servers.
Note: To use CAS over SSL in a FIPS-compliant environment, you must install
IBM Java on the server where the CAS agent runs.
Use one of the following techniques to locate the java command directory.
1. Enter the which java command. For example:
[root@yourserver ~]# which java
/usr/local/j2sdk1.4.2_03/bin/java
2. If the which java command returns a symbolic link, use the ls -ld
<symbolic_link> command to determine the real Java directory name.
3. If the which java command returns the message command not found, Java may
be installed, but it has not been included in the PATH variable. In this case, use
the find command to locate the Java directory; for example:
[root@yourserver ~]# find . -name java
./usr/bin/java
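Step 2 can be illustrated with `readlink -f`, which resolves a chain of symbolic links to the real path. The example below uses a throwaway link under a temporary directory in place of a real Java installation; the paths are purely illustrative.

```shell
#!/bin/sh
# Illustration of resolving a symbolic link to the real path (step 2).
# A throwaway link stands in for a real java symlink; paths are examples only.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/jdk/bin"
touch "$tmpdir/jdk/bin/java"
ln -s "$tmpdir/jdk/bin/java" "$tmpdir/java"

ls -ld "$tmpdir/java"          # shows the link and its target
readlink -f "$tmpdir/java"     # prints the resolved real path
```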
Guardium stored procedures are placed in the SharePoint database on the host
system. Guardium software code runs as an extra thread in the SQL Server
process, on the host system. This code is the GuardSp TAP agent (SharePoint
Agent).
The only dependency for GuardSp TAP is a SharePoint SQL Server database on the
host and the Guardium system.
SharePoint logging/monitoring/auditing
SharePoint native auditing must be set up and activated. To enable SharePoint
native auditing, use a web browser to navigate to the Central Administration pane
of SharePoint 2007 or SharePoint 2010. Choose Site Actions and specify the events
to be audited and logged.
Guardium does not monitor everything with a SharePoint TAP, only what is
logged in SharePoint itself.
Look for errors and messages in the SharePoint application event log to verify that
SharePoint is configured correctly.
Known Limitations
v Server-IP and Client-IP will list as the virtual IP of the cluster.
v Server-hostname will be the virtual hostname of the cluster, not the hostnames
of the individual nodes.
Note: The SQL server user used to perform the installation must have
sysadmin privilege AND must have been granted UNSAFE ASSEMBLY
permission by another sysadmin user. A SQL user cannot grant UNSAFE
ASSEMBLY permission to itself, even if it is a sysadmin user.
2. Enter the Name listed on the certificate of the Guardium system. (The default
is gmachine if a custom certificate is not used on the system.)
3. Specify the IP address of the Guardium system.
4. Use the default value of the Buffer Size.
5. Specify the SSL Port and the non-SSL port of the Guardium system. (Leave at
default unless using custom ports.)
6. Check Use Secure Socket Layer if using an SSL port.
7. Choose SQL Table Logging if Microsoft SharePoint itself is monitoring the
activity. Guardium is not collecting information with this choice.
8. Choose SQL Guard Logging if Guardium is monitoring the SharePoint
activity. This choice does not place records in Microsoft SharePoint audit logs.
9. Choose Both for monitoring to be visible on both the Guardium system and
the Microsoft SharePoint audit logs.
10. Click Next to continue with configuration.
This screen is where you log on to the database, set the authentication and
specify the SharePoint version.
11. Choose your authentication method, supply username and password if
necessary, then click Login.
12. Choose Microsoft SharePoint installation from Sharepoint database.
13. Choose Microsoft SharePoint version from Sharepoint version.
14. Click Finish.
Audit Events for SharePoint are now logged into the Guardium database as SQL
statements.
Access S-TAP reports (Administrator portal, TAP monitor tab) to see the SharePoint
events. See S-TAP Reports for more information.
The SharePoint TAP has been modified to fix the permission problems of different
application pools in different content databases.
New component: SharePoint TAP service. The short name for this service is
SpTapService, and this name is used in the application event log.
This service monitors SharePoint content databases. When a new database
appears on the server, it automatically installs the SharePoint TAP into that content
database within a minute of the database's creation.
Note: This background service must use Windows Authentication, as SQL Server
Authentication is inherently less secure.
A: Think of them as DLLs and leave them alone. They are used by the installer to
set the modified stored-procedures within SharePoint. (2007 going to SharePoint
2007, 2010 to 2010)
Q3. What is the CPU and memory usage of this SharePoint agent?
Q4. What are the pre-requisites for SharePoint monitoring? For example, ports
required to be opened, SharePoint agent to be installed, etc.
A: Have SharePoint 2007 or 2010 installed. Have the Guardium SharePoint TAP
agent installed. Allow TCP traffic from the database system to port 16016 (or 16018
if using SSL) on the Guardium system. No Windows S-TAP is needed.
Q6: How do we verify that SharePoint traffic is reaching the Guardium collector?
A: The easiest way to see that SharePoint traffic is reaching the collector is to use
an SQL or deeper report and to see what SQLs come through from the SharePoint
database server.
Q7: The report Detailed sessions list should show SharePoint traffic, but it is not.
What might cause this problem with the SharePoint TAP?
A: For the Detailed session list report, as long as you have set the server IP, the
port, and permissions correctly, you should see the SharePoint sessions.
A: SharePoint traffic is subject to policy rules, but the only actions that can be
taken on the SharePoint traffic are logging actions, alerting actions, and
Ignore-session (at the Guardium system level) actions. Ignore actions are not
recommended for use with SharePoint traffic.
Q9: Why did the Sharepoint TAP installation not work because the SQL user did
not have UNSAFE ASSEMBLY permission?
A: The SQL server user used to perform the Sharepoint TAP installation must have
sysadmin privilege AND must have been granted UNSAFE ASSEMBLY permission
by another sysadmin user. An SQL user cannot grant UNSAFE ASSEMBLY
permission to itself, even if it is a sysadmin user.
Note: When a customer performs a live update, the files of the older version
remain on the system until the next reboot. Some of the files of the older
version remain even after the server is rebooted.
Windows S-TAP
There is a need to distinguish between a fresh Windows installation and a
live upgrade.
All database instances must be restarted (not rebooted) when installing
from scratch (fresh installation). Database instance restarts are not required
after live upgrade.
The Windows server does not need to be rebooted, unless upgrading from
V7.0. The reason for this is that upgrading from V7.0 requires a full
uninstall of the S-TAP software.
However, if proxy driver files are updated, a system reboot is required.
Examples of proxy driver files: LhmonProxy.sys/NpProxy.sys. For each
release, check the release document to see if proxy drivers are updated.
A reboot is required if upgrading from Guardium V7.0. This is true for all
installation/upgrade methods - GIM, Interactive, or Batch.
UNIX/Linux S-TAP
No reboot
S-TAP/KTAP may be upgraded without a reboot when using the
guard-stap-update utility. This utility can be used from V8.0 versions and
up.
If the system is being "upgraded" from a non-GIM version to the same
GIM version, the system doesn't need to be rebooted.
Overview
Load balancing automatically allocates managed units to S-TAP agents when new
S-TAPs are installed and during fail-over when a managed unit is unavailable. The
load balancing application also dynamically re-balances loaded or busy managed
units by relocating S-TAP agents to less-loaded managed units.
Important: When using the enterprise load balancing application, the Guardium
system assumes control over the allocation of managed units to S-TAP agents. This
is an automated and dynamic process: you will see S-TAPs change association
based on the relative load of available managed units. Use the Load Balancer
Events report to review all load balancing activity.
Prerequisites
The enterprise load balancer runs on a Central Manager, listens on port 8443, and
uses Transport Layer Security (TLS). No new firewall or additional system setup is
required.
How it works
The enterprise load balancing application works by collecting and maintaining
up-to-date load information from all its managed units.
It uses the load information from managed units to create a load map. This load
map provides the data that directs load balancing and managed unit allocation
activities. Use the GuardAPI command grdapi get_load_balancer_load_map to
view the current load map at any time.
Load information is only collected from managed units that are online and
configured with the parameter LOAD_BALANCER_ENABLED=1. Setting
LOAD_BALANCER_ENABLED=0 disables load balancing and prevents that managed unit
from being dynamically allocated to S-TAP agents during load balancing activities.
Load collection errors from specific managed units are recorded in the Load
Balancer Events report but do not interfere with the overall load collection and
load balancing processes. However, failure to collect load information from a
managed unit excludes that managed unit from participation in load balancing
processes.
Note: When the S-TAP is installed interactively with the Load balancer options for
a specified S-TAP group and a specified Managed Unit (MU) group and with the
client IP being set to anything other than the default value, an incorrect MU may
be allocated by the load balancer and the host name in the S-TAP group will be
Procedure
v When using GIM to install S-TAP use the following parameters in GIM's module
parameter screen:
Table 11. Parameters for using the GIM S-TAP installer
Parameter Description
STAP_LOAD_BALANCER_IP Required. This option specifies the IP
address of the Central Manager this S-TAP
should use for load balancing.
STAP_INITIAL_BALANCER_TAP_GROUP Optional. This option specifies the S-TAP
group this S-TAP will belong to.
STAP_INITIAL_BALANCER_MU_GROUP Optional. This option specifies the managed
unit group the app-group will be associated
with. An application group must also be
specified to use this parameter.
STAP_LOAD_BALANCER_NUM_MUS Optional. This option specifies the number
of managed units the load balancer should
allocate for this S-TAP.
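Taken together, the parameters above might be filled in as follows on GIM's module parameter screen. The IP address, group names, and count are placeholders for this sketch.

```
STAP_LOAD_BALANCER_IP=10.0.0.15            # Central Manager used for load balancing
STAP_INITIAL_BALANCER_TAP_GROUP=NA_STAPS   # optional: S-TAP group for this S-TAP
STAP_INITIAL_BALANCER_MU_GROUP=NA_MUS      # optional: managed unit group to associate
STAP_LOAD_BALANCER_NUM_MUS=2               # optional: managed units to allocate
```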
Load balancing creates associations between S-TAP groups and groups of managed
units such that S-TAPs within a group are allowed to be reallocated to the
most-available managed unit within a group. This task introduces you to the
process of establishing associations between S-TAP groups and managed unit
groups for the purposes of enterprise load balancing.
Procedure
1. On a Central Manager, navigate to Manage > Central Management >
Enterprise Load Balancer > Associate S-TAPs and Managed Units.
2. If necessary, create a new S-TAP group.
a. Click the icon to open the Create New S-TAP Group dialog.
b. Provide a name in the Group Name field. For example, North American
S-TAPs.
c. Add group members by selecting from existing host names or adding new
members using the Group Member field. S-TAPs indicated with an icon are
included with the new S-TAP group.
d. Click Create New Group to create the S-TAP group.
3. Associate the S-TAP group with a group of managed units.
a. Select the S-TAP group you want to associate. For example, North American
S-TAPS.
b. Click Associate Managed Units to open the Associate Managed Unit Group
dialog.
c. If necessary, create a new group of managed units.
1) Click the icon to open the Create New Managed Unit Group dialog.
2) Provide a name in the Group Name field. For example, North American
MUs.
3) Add group members by selecting from existing Managed Unit IP
addresses.
4) Click Create New Group to create the new group of managed units.
d. Select the group(s) of managed units to associate with the S-TAP group. For
example, North American MUs.
e. Click Apply.
4. Click Save to complete the association between an S-TAP group and a group of
managed units.
The enterprise load balancing application uses the load information from managed
units to create a load map. This load map provides the data that directs load
balancing and managed unit allocation activities.
Procedure
APPLIANCE_RESOURCE_INFO={NUM_PROCESSORS=2,CPU_SPEED=2660,CPU_CACHE=24576,CPU_CORES=2,CACHE_READ_RA
}
{
MU=gct1.domain.com
MU_QUEUE_SIZE(MB)=25.0
MU_TIMES_REBALANED=0
MU_EFFECTIVE_MAX_USED_QUEUE(%)=0.0
MU_MAX_LOAD_CONTIB_BY_STAP_TO_MAX_USED_QUEUE(MB)=0.0
MU_ADJUSTED_STAP_CONTRIB_IN_MB=0.0
MU_BASE_MAX_USED_QUEUE_IN_MB=0.0
IS_REBALANCABLE=true
INSTALLED_POLICIES=GATF_STAP_Policy_Ignore|GATF_STAP_Policy_Firewall|GATF_STAP_Policy_SCRUB|
APPLIANCE_RESOURCE_INFO={NUM_PROCESSORS=24,CPU_SPEED=2601,CPU_CACHE=15360,CPU_CORES=6,CACHE_RE
STAP_LIST=
{
STAP_IP=01_gct1.domain.com,
STAP_HOST=01_gctl.domain.com,
CONNECTED_TO_MU=gct1.domain.com,
PARTICIPATES_IN_LOAD_BALANCING=false,
STAP_CONTIBUTION_TO_LOAD_IN_MB=0.0
}
{
STAP_IP=02_gct1.domain.com,
STAP_HOST=02_gct1.domain.com,
CONNECTED_TO_MU=gct1.domain.com,
PARTICIPATES_IN_LOAD_BALANCING=true,
STAP_CONTIBUTION_TO_LOAD_IN_MB=0.0
}
{
STAP_IP=03_gctl.domain.com,
STAP_HOST=03_gctl.domain.com,
CONNECTED_TO_MU=gct1.domain.com,
PARTICIPATES_IN_LOAD_BALANCING=true,
STAP_CONTIBUTION_TO_LOAD_IN_MB=0.0
}
}
03_gctl.domain.com->gct1.domain.com
02_gctl.domain.com->gct1.domain.com
01_gctl.domain.com->gct1.domain.com
ok
The Load Balancer Events report shows all load balancing events and activities,
including successful associations between S-TAP agents and managed units,
changes in managed unit load, and failed associations.
To view the report, navigate to Manage > Reports > Activity Monitoring > Load
Balancer Events.
If ENABLE_DYNAMIC_LOAD_COLLECTION is set to 0, the load balancer
collects the load from all the managed units at the interval specified by
STATIC_LOAD_COLLECTION_INTERVAL.
LOAD_BALANCER_ENABLED 1 (0 or 1) Controls the load balancer
feature.
v 1 enables the feature
v 0 disables the feature
S-TAP monitors database traffic and forwards information about that traffic to a
Guardium system.
S-TAP Overview
v S-TAP can monitor database traffic that is local to that system. This is important
because local connections can provide back door access to the database, and all
such access needs to be monitored and audited.
v S-TAP can be used to monitor any network traffic that is visible from the
database server on which it is installed. It can thus act as a collector on remote
network segments, where it is not practical to install a Guardium system.
v S-TAP can be installed remotely from the command line on both Windows and
UNIX servers, as well as through the Guardium Installation Manager.
Upgrades can be configured to be applied at the next server reboot. Under
Linux, S-TAP takes care of upgrading S-TAP kernel components at boot time,
adjusting to kernel upgrades in Linux environments.
Note: If S-TAP is installed both on the application side (see previous note) and on
the database server, additional precautions should be taken so as to not monitor
duplicate traffic.
When a failover of S-TAP occurs, session information can also be sent over to the
current active Guardium host. See Edit the S-TAP Configuration File for more
information on setting tap_failover_session_size (0 disables the feature) and
tap_failover_session_quiesce.
Restartability
When wait_for_db_exec is greater than 0 and S-TAP restarts, either from a
system reboot or from user-initiated S-TAP stop/start commands, S-TAP polls all
databases that are configured to be monitored and begins monitoring them when
they become available. A configuration anomaly (on either the database side or the
S-TAP side) that limits S-TAP's ability to monitor one database does not prevent
S-TAP from monitoring other databases with valid configurations. Instead, S-TAP
starts successfully, monitors all valid configurations, and continues to poll the
other databases until they become available, then starts monitoring them as well.
Use the existing alerts and reports to monitor and report on any failed statuses.
For Oracle, after relinking, BEQ traffic is not logged for 15 minutes; this is the
time it takes for S-TAP to check whether an Oracle device node has been changed.
Proxy Firewall
While S-TAP is normally deployed on a database server, a K-TAP based firewall
can be deployed to a proxy server. By setting the parameter app_server=1 and
utilizing S-GATE, you can monitor traffic that originates from the proxy server. See
Edit the S-TAP Configuration File and S-GATE Actions (Blocking Actions) in the
Policies help topic for more information on setting app_server and using S-GATE
within Policies.
To define secondary hosts for an S-TAP, see Define Secondary Guardium Hosts for
an S-TAP, under Configure S-TAPs from the GUI.
Note: Guardium does not provide Certificate Authority (CA) services and will not
ship systems with different certificates than the one installed by default. A
customer that wants their own certificate will need to contact a third party CA
(such as VeriSign or Entrust).
Note: In addition to ensuring that the S-TAP feed to a collector is encrypted, the
S-TAP client can also be configured to authenticate the Guardium system it is
trying to talk to. This way, in addition to ensuring that the traffic is encrypted, it is
ensuring that the S-TAP is not feeding information to a non-authorized server.
S-TAP Setup
Note: The same certificate/key pair can be installed on several machines. The
customer does not have to buy N certificates for N machines.
3. guardium_crl_path
If this path points to a PEM-encoded file with Certificate Revocation List from
the CA, any Guardium system certificate that has been revoked will be rejected.
The Guardium CRL is provided in the S-TAP installation (or GIM) and is
updated via software patches and upgrades.
In addition, a customer can manually install a CRL provided by the CA
(Guardium or third party).
Since Guardium systems are not assumed to have internet access, no web-based
CRL servers are queried automatically.
There are four CLI commands related to system key and certificate management:
v show certificate sniffer
This command will print the system certificate in a text format, followed by the
Base64 encoded PEM form encoding. The text format only serves the purpose of
viewing the certificate details (in particular the CN and the Signer/Serial that
can be filtered by the S-TAP). The PEM encoded part between ---BEGIN
CERTIFICATE--- and ---END CERTIFICATE--- is the one that should be used to
backup/store/email the certificate to other machines and parties (BEGIN and
END delimiters should always be included together with the Base64 encoded
part).
v store certificate sniffer <console | import>
This command enables a user to set the system certificate used by the Guardium
system (in communication with S-TAP). The certificate can either be pasted from
the console or imported via one of the standard import protocols. The certificate
format should be PEM and should include the BEGIN and END
delimiters. This certificate needs to be signed by a CA whose self-signed
certificate is available to S-TAP software through the guardium_ca_path.
v store certificate keystore <console | import>
Note: Only once both the certificate and the matching key are available on the
Guardium system can S-TAP successfully perform Guardium system
authentication.
v create csr sniffer
This command can be used to create a Certificate Signing Request in PEM
format. The command will internally generate the 2048-bit key and issue a set of
questions to the user to fill out the CSR form (Country, State/Province,
Locality/City, Organization and Organizational Unit). Finally the user needs to
provide the Common Name. As a rule, the common name should include only
letters, digits, underscores and dots. It should be a unique identifier for a
particular installation and include the company name, department, cluster or
Guardium system specific name. However, the instructions from the external CA
override those recommendations. For example:
GCluster1DataCenterGuardiumIBM - which stands for GCluster1 in the
DataCenter at Guardium, an IBM company
SqlGuard1DataCenterGuardiumIBM - which stands for SqlGuard1 machine
system (might have a failover too)
Provide a valid email when asked, so that you can be contacted by support
personnel.
You can leave the Challenge Password and Optional Company Name blank.
Finally the Certificate Signing Request will be displayed in the readable and
PEM encoded forms.
You should verify the details and send the PEM encoded part (between ---BEGIN
CERTIFICATE REQUEST--- and ---END CERTIFICATE REQUEST---, inclusively) to the
CA for signing.
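One way to double-check the details is to inspect the PEM text with the openssl utility on a workstation; openssl is not part of the Guardium CLI, and the CSR below is a throwaway generated on the spot purely to illustrate the inspection step.

```shell
#!/bin/sh
# Illustration: inspect a CSR's subject before sending it to the CA.
# The key and CSR are throwaway files created only for this example.
dir=$(mktemp -d)
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/C=US/O=Guardium/CN=GCluster1DataCenterGuardiumIBM" \
  -keyout "$dir/key.pem" -out "$dir/req.pem" 2>/dev/null
# Print the subject so the Common Name can be verified before submission.
openssl req -in "$dir/req.pem" -noout -subject
```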
Note: At this point, the system has a new, internally generated key that does
not correspond to the system certificate previously installed. This ensures
that S-TAPs will not feed information while the certificate is being submitted
for signing. If you need to ensure continuous operation and S-TAP feed, you will
need to disable the Guardium system authentication on the S-TAP side during
this period.
Once the CSR has been verified, the CA will issue the signed certificate in PEM
format. You need to install this certificate by using the store system certificate
command.
At this point the new certificate and the internally generated key (during the
create csr sniffer command) will be matching and ready to use for Guardium
system authentication by S-TAPs.
Ensure that all certificate-related parameters in the S-TAP configuration file are
correct.
If you need to install the same key/certificate on more than one Guardium
system, you can use the show system certificate | key command to export and
back them up.
Be extra careful when storing the key (which is encrypted by a user-supplied
password) on an external computer or device. Use non-trivial passwords when
prompted by the show system key command.
Procedure
1. Log out of Guardium.
2. From an SSH client window, log in to the Guardium system command line
interface (CLI), as the cli user.
ssh –l cli 192.168.2.16
See CLI Overview for more information on using the Guardium CLI.
3. Enter the following two commands:
store unit type stap
restart inspection-core
See Configuration and Control CLI Commands and Inspection Engine CLI
Commands respectively for more detailed information on these two commands.
4. Enter the quit command to log out of the Guardium CLI.
S-TAP Certification
Use this function to block unauthorized S-TAPs from connecting to the Guardium
system.
If there is a check mark in the S-TAP Approval Needed box, then S-TAPs cannot
connect until they are specifically approved.
The function S-TAP Approval Needed can be controlled by using the CLI
command store stap approval or by the GuardAPI command, grdapi
store_stap_approval.
If you use the CLI command store stap approval ON | OFF, the new
configuration takes effect after you run the restart inspection-core command.
Approve S-TAPs
1. Place a check mark in the box for S-TAP Approval Needed.
2. Specify the approved S-TAP clients.
Note:
Before you begin: You need to know who, what, and where is acting as your
CA. If the CA is sending you a whole certificate to install, you need two files: the
private key in PKCS#8 (password-protected) format, and the public key in PEM
format. The generated certificate needs to be based on a 2048-bit RSA key.
The CA will sign the certificate and send you back a public key.
The public key the CA sends you back will look something like:
Have this file handy for either copying the contents of or importing to the
Guardium system.
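For reference, a password-protected PKCS#8 private key can be produced from a plain RSA key with openssl. This happens on the CA or workstation side, not on the Guardium system; the password and file names below are placeholders.

```shell
#!/bin/sh
# Illustration: convert a plain RSA key to password-protected PKCS#8 form.
dir=$(mktemp -d)
openssl genrsa -out "$dir/plain.key" 2048 2>/dev/null
openssl pkcs8 -topk8 -in "$dir/plain.key" -out "$dir/key.p8" \
  -passout pass:example-password       # placeholder password
head -n 1 "$dir/key.p8"                # shows the PKCS#8 header line
```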
If you choose console, copy from -----BEGIN CERTIFICATE----- all the way to
-----END CERTIFICATE----- (including those delimiters) and paste the text into the
CLI when prompted. If you choose import, tell the Guardium system where to
import the file from.
It will ask you to confirm that you want to store the certificate, and when you
confirm, it does.
You need to restart the inspection-core for the new certificate to take effect.
You will receive a pair of files from your CA (plus the public cert for your CA),
which is your certificate.
The next will be the public cert specific to this Guardium system; it will look
like:
The last will be a private key (encrypted with PKCS#8) and will look like:
With import, the Guardium system takes the saved file; with console, you copy
and paste the contents of the file into your console interface.
It will ask for the password that the file was saved with. Either you provided this
to the CA for creation of the certificate, or more likely, they provided you with a
password when they sent your files.
It will display the information on the cert and then ask you to confirm storing the
cert.
You need to restart the inspection-core for the new certificate to take effect.
First, take note of what you have assigned as the CA and the CN of the certificate.
If you don't remember, use the show system certificate cli command to display
the values.
You need the CN of the cert installed on the Guardium system and the public-key
for the CA that signed the certificate on the Guardium system. You also might
want a Certificate Revocation list signed by the same CA that signed the Guardium
system cert, but it's not necessary.
The parameters in guard_tap.ini that you are concerned with look the same on
UNIX and Windows.
The only functional difference is that on Windows, if you do not want to use a
value for a parameter, simply omit it from guard_tap.ini instead of setting the
parameter to NULL. (This is pertinent to the CRL path in particular, or if you
want to turn off certificate authentication and go back to TLS.)
Set guardium_ca_path=[path-to-CA.pem]
Once those parameters are set, change tls=1 and restart the S-TAP.
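Put together, the relevant guard_tap.ini entries might look like the sketch below. The file paths are placeholders, and guardium_crl_path is optional (on Windows, omit the line entirely rather than setting it to NULL).

```
tls=1
guardium_ca_path=/var/guardium/ca.pem     # placeholder path to the CA public cert
guardium_crl_path=/var/guardium/crl.pem   # optional CRL; omit if unused
```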
You can configure any S-TAP to create multiple threads to increase the throughput
of data. If the S-TAP configuration file defines more than one Guardium system, a
thread can be created for each Guardium system. This feature is activated by
setting the value of the participate_in_load_balancing parameter to 4. When this
value is set to 4, the S-TAP creates extra threads, matching the number of
Guardium systems, and the K-TAP creates a similar number of buffers. The K-TAP
alternates between the buffers, placing entire packets in each buffer. Each S-TAP
thread reads from a different K-TAP buffer, and sends traffic data to a single
Guardium system.
In this configuration, no single Guardium system receives all the data from
the S-TAP. The distribution is similar to that used when
participate_in_load_balancing is set to 1. However, when a Guardium system
becomes unavailable, no failover is provided. Data that was being sent to
that Guardium system is lost until the system becomes available or the
configuration is changed.
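As a sketch, the thread-per-system mode described above could be selected as
follows. The sqlguard_ip parameter name appears elsewhere in this chapter;
the section names and IP addresses here are illustrative only:

```
participate_in_load_balancing=4
[SQLGUARD_0]
sqlguard_ip=10.10.9.240
[SQLGUARD_1]
sqlguard_ip=10.10.9.241
```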
UNIX S-TAP
Install and configure UNIX S-TAP.
A UNIX S-TAP is a userspace daemon that collects data from various sources
and sends it to the Guardium system for analysis and logging. It collects
traffic from: K-TAP, a kernel module that performs interception in the
kernel; A-TAP, userspace libraries that are loaded when a database starts,
to collect traffic; and EXIT libraries, which send traffic directly from the
database server to S-TAP.
Note: Because of the complexity and diversity of environments, there are
notes that, if not read carefully and followed, might cause installations or
upgrades to fail or work improperly. While not all-inclusive, the following
sections are listed to aid the reader in pinpointing areas that require
careful and special attention.
v Live vs. Non-Live K-TAP upgrade (Solaris, AIX, HP-UX)
v UID Chains (Solaris Zones, AIX WPAR, Solaris 8/9, Solaris 11 SPARC)
v Before Installing S-TAP on a UNIX Host (Solaris Zones)
v Maintain UNIX S-TAP with GIM (IBM DB2 pureScale®)
v Install UNIX S-TAP (Linux, AIX)
v Upgrade Procedure Utility (SUSE 11, HP-UX)
v Remove Previous UNIX S-TAP (Manual) (HP-UX, AIX WPAR)
v A-TAP Installation (Solaris Zones)
v A-TAP Configuration (Oracle, DB2)
v A-TAP DB Instance Activation (Solaris Zones)
v A-TAP Configuration Pitfalls and Mistakes (Oracle, DB2, Informix)
v A-TAP Procedure to help ensure A-TAP works with Solaris Zones/AIX WPARs
(Solaris Zones, AIX WPAR, Solaris 10/11)
Tip: The PCAP uses the client IP/mask values for all local inspection
engines to determine what to monitor and report. If the PCAP is installed
with an S-TAP with multiple inspection engines, and those inspection
engines have different client IP/mask values, the PCAP captures traffic
from all clients that are defined in all inspection engines. This can result in
more information being processed and sent to the Guardium system than
you intend.
K-TAP
K-TAP is the recommended mechanism to collect local and network traffic
on a UNIX database server. Unlike the Tee, with K-TAP you do not need to
change how database clients connect to the server. K-TAP is a kernel
module that is installed into the operating system. After it is installed, it
can be enabled or disabled by using a configuration file setting. When
enabled, it observes access to a database server by hooking the
mechanisms used to communicate between the database client and server.
When K-TAP is disabled, the Tee can be used to monitor local traffic.
K-TAP and Tee are almost always mutually exclusive - to monitor local
access you either use K-TAP or the Tee.
At installation time, you will choose whether or not to load the K-TAP
kernel module to the server operating system. This is the only way to load
that module. If you do not load K-TAP initially, and decide later that you
want to use it (instead of the Tee), you will need to remove S-TAP, and
then re-install it.
Note: To use the Hunter, version 5.8.0 or later of Perl must be installed in
the /usr/bin/ directory.
K-TAP upgrades - live vs. non-live
Note: On AIX only, K-TAP will fail to load during an S-TAP installation or
upgrade if the ODMDIR environment variable is not defined. ODMDIR is the
Object Data Manager Directory. ODM is a database of system and device
configuration information integrated into the OS. It is intended for storing
system information, software information, and device information. All ODM
commands use the ODMDIR environment variable, which is set in the file
/etc/environment. The default value of ODMDIR is /etc/objrepos.
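Before starting an AIX install or upgrade, it is worth checking the variable
first. This is a minimal sketch; the /etc/objrepos default comes from the
note above:

```shell
# Report whether ODMDIR is defined; K-TAP fails to load on AIX without it.
if [ -z "${ODMDIR:-}" ]; then
  echo "ODMDIR not set; the default would be /etc/objrepos"
else
  echo "ODMDIR=$ODMDIR"
fi
```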
K-TAP and UID Chains
UID chain is a mechanism that allows S-TAP (by way of K-TAP) to track the
chain of users that occurred prior to a database connection. For example, a
user may have changed users several times before connecting to the database;
perhaps the user ran ssh informix@barbet, su - db2inst1, su -, su - oracle9,
before finally running sqlplus scott/tiger@onora1. With UID chains, Guardium
can trace each process back to the process that called it, and so on, back
to the original (offending) user.
Note:
v For Solaris Zones, we may have the user ids instead of user names in
the UID Chain.
v For Solaris Zones and AIX WPAR, db2bp_path in the guard_tap.ini file
should point to the full path of the db2bp executable, the full path of the
relevant db2bp as seen from the global zone/wpar.
v No UID Chains for Inter-process Communication (IPC) on Solaris 8/9.
v UID chains are not detected for Hadoop databases.
v For any database, the UID chain is not logged for a session that is
very short.
v Setting hunter_trace is required for TCP/IP connections on UNIX
S-TAP; set it as follows:
– The hunter_trace parameter can be set to 0 or 1 to disable or enable
UID chains.
– For regular installations, setting hunter_trace=1 enables uid_chain
for local TCP/IP connections.
– For app-server connections, you need to set hunter_trace=2.
– For Solaris zones and AIX WPARs, you need to set hunter_trace=3 to
capture zone/WPAR connections.
v Local TCP is not supported for UID chain on Linux for DB2. In addition,
DB2 exit requires a specific version of the database to support UID
chains.
v When running as user, UID chain does not work for DB2 Shared
Memory (SHM) with S-TAP.
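As a sketch, the hunter_trace settings above translate into a single
guard_tap.ini line; choose the value that matches your environment (a
regular installation is shown here):

```
hunter_trace=1
```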
Purging of UID Chain Records
UID Chain Records older than 2 hours are purged when the regular
inference process runs. Also, records older than one day are purged on a
nightly basis.
Discovery Agent
Note: The Discovery Agent reports its findings back to the primary S-TAP
target, NOT to the system listed as GIM_URL or secondary S-TAP target.
See Installing GIM on the Database Server (UNIX) and Guardium Installation
Manager (GIM) - GUI for additional information on installing and using GIM to
install Guardium components in a UNIX environment.
Note: If A-TAP is being used, A-TAP must first be disabled on the database
server before performing a GIM-based S-TAP upgrade or uninstall.
While GIM has been provided for ease of installation and management of
Guardium components, there are still environments that may benefit from a more
manual approach or fine-tuning of the installation at a more granular level. The
following section is provided for those environments.
Note: Clicking the Select All button will only select the clients on
the current page being viewed.
4. Click Next to open the Common Modules panel.
5. Select the Module for S-TAP
6. Click the Next button to open the Module Parameters panel.
7. Select the client that will be the target for the action (stopping S-TAP).
8. Change the STAP_ENABLED parameter to 0 (zero).
9. Click Apply to Clients to apply to the targeted clients.
10. Click the Install/Update button to schedule the update to the targeted
clients. This update can be scheduled for NOW or some time in the
future.
On the database host itself, you can stop S-TAP (and all other GIM
modules except GIM itself) by stopping GIM's supervisor service with the
command: stop gsvr_<release number>. Use initctl list to get the status
of the services.
Non-GIM Installation
1. Log on to the database server system by using the root account.
2. For all non-Red Hat Enterprise Linux 6
a. Open the /etc/inittab file for editing.
b. Locate and comment the following two statements in the
/etc/inittab file, by inserting a comment character (: for AIX, # for
all others) at the start of each statement:
Depending on the method of S-TAP installation, you can start S-TAP by:
GIM Installation
Use GIM to start S-TAP without ever having to log into the database
server. Complete the following steps to change the STAP_ENABLED parameter
and schedule the change on the database server.
1. Click Manage > Install Management > Setup by Client to open the
Client Search Criteria
2. Perform a filtered search of registered clients or click Search to
perform an unfiltered search of all registered clients.
3. Select the clients that will be the target for the action (starting S-TAP)
v If there are more than 20 clients, then the list of clients will be split
onto additional pages.
Note: Clicking Select All will only select the clients on the current
page being viewed.
4. Click Next to open the Common Modules panel.
5. Select the Module for S-TAP.
6. Click Next to open the Module Parameters panel.
Note: These processes are not used in the default configuration and
must not be started if you are using the K-Tap monitoring
mechanism.
#utee:2345:respawn:/usr/local/guardium/guard_stap/guard_tee /usr/local/guardium/guar
#hsof:2345:respawn:/usr/local/guardium/guard_stap/guard_hnt
d. Run the init q command to restart the S-TAP processes.
3. For Red Hat Enterprise Linux 6
a. List the currently running agents by using the operating system
command initctl list. The output lists the agents, as in the
following example:
gim_33264 start/running, process 910
gsvr_33264 start/running, process 2552
b. Stop each of the agents that might be running by using the stop
<agent> command, where agent is the first entry in a line of the
output from step a. See the following example.
stop gim_33264
stop gsvr_33264
stop guard_utap
Use stop guard_utap to stop the S-TAP or stop guard_tee to stop
the TEE mechanism of the S-TAP agent.
4. Run ps -ef | grep stap to verify that S-TAP is running.
5. From the administrator portal of the Guardium system to which this
S-TAP reports, verify that the Status light in the S-TAP control panel is
green.
If the administrator portal is not available, you can display the S-TAP version
number from the UNIX command line of the database server, by running the
guard_stap binary with the -version or --version argument.
To check the UNIX S-TAP version, assuming S-TAP has been installed in the
default installation directory, enter the following command:
-bash-3.2# <guardium_base>/modules/STAP/current/guard_stap --version
or
-bash-3.2# <guardium_base>/guard_stap/guard_stap --version
STAP-doberman_r20511_1-20100728_0514
Note: If there are multiple DB2 instances that are configured for a single WPAR in
guard_tap.ini file and they have the same db2_shmem_size, then the
db2_fix_pack_adjustment and db2_shmem_client_position configured in the first
DB2 section for that WPAR will be returned. So in cases where there are multiple
DB2 instances running on the WPAR:
1. If all DB2 instances have the same db2_shmem_size, db2_fix_pack_adjustment,
and db2_shmem_client_position, the packets from all instances will be collected
even if only one instance is configured.
Note: The theory behind this computation is based on the IBM DB2 Universal
Database Administration Guide: Performance manual. The following diagram
shows the DB2 shared memory layout.
ATAP and KTAP rely on the size for identification of the Application/Agent shared
memory segments. These segments are then tapped for C2S and S2C packets.
In theory, the segment size is equal to the sum of the ASLHEAPSZ and
RQRIOBLK parameters; in practice, DB2 allocates much larger segments. In
most cases, the size is equal to (ASLHEAPSZ + 1) * 2 pages, or
(ASLHEAPSZ + 1) * 8192 bytes. The exact size can be determined by
observing the shared memory segments in the system before and after a new
DB2 local connection is created.
The following sequence of commands helps you to determine the shared memory
segment size.
ipcs command parameters and output format differ from platform to platform. The
following script is based on the AIX version.
ipcs -ma | sort -n -2 +3 > /tmp/before.txt
db2 connect to <some_existing_database>
ipcs -ma | sort -n -2 +3 > /tmp/after.txt
db2 terminate
diff /tmp/before.txt /tmp/after.txt | awk '{if ($10 == 2) print $11}'
It is always a good idea to verify the result. It is equal or at least close to the
output of the following command:
db2 get database manager configuration | awk '/ASLHEAPSZ/{print ($9 + 1) * 8192}'
The output contains several columns beyond those shown here, but they do not
affect this procedure. Find the line that contains the process ID that was
identified in step 2 and also has a value of 2 under NATTCH. The DB2
shared-memory segment size is the value in the SEGSZ column. In this
example, it is 131072.
4. Tip: if the list returned in step 3 is too long, you can filter it by using the
process ID. In this case, you would enter ipcs -ma | grep 5309370. The results
do not contain the column headers, but you can look at the previous results to
see the column headers and identify the correct line and column. In this
example, it is the last line.
m 131072014 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 1342177280 5309370
m 763363344 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 268435456 5309370
m 227541013 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 163905536 5309370
m 106353238 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 2 131072 5309370
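The verification arithmetic above can be checked in isolation. This is a
sketch only; ASLHEAPSZ=15 is a hypothetical value chosen to match the
131072-byte segment in the example, and on a real system the value comes
from db2 get database manager configuration:

```shell
# Hypothetical ASLHEAPSZ (in 4 KB pages), for illustration only.
ASLHEAPSZ=15
# Expected DB2 shared-memory segment size: (ASLHEAPSZ + 1) * 8192 bytes.
SEGSZ=$(( (ASLHEAPSZ + 1) * 8192 ))
echo "$SEGSZ"   # prints 131072
```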
Overview
Note: S-TAP Discovery is not supported on AIX 5.3 because static libraries
are needed on that platform.
Note: To avoid cases where S-TAP Discovery cannot open the Informix
database, it is recommended to start the databases by using the full path to
the executable.
The parameters of the S-TAP Discover application are described in the following
table.
Table 13. Parameters
Parameter Description
tap_ip The value of this parameter determines the name of the
S-TAP that the S-TAP Discovery application uses in its
results. This parameter does not affect the inspection
engines that the S-TAP Discovery creates. It is used for
associating discovered instances with an S-TAP host.
sqlguard_ip This parameter determines where to send the results of
the S-TAP Discovery application.
discovery_interval This parameter specifies how long the S-TAP waits in
between runs of the S-TAP Discovery application. The
unit is in hours. Specifying 0 disables S-TAP Discovery
from running automatically. The default is 24 hours.
discovery_dbs This parameter is a colon (':') separated list of database
types for S-TAP Discovery to look for. The default is
"oracle:db2:informix:mysql:postgres:sybase:hadoop".
discovery_debug This parameter determines the level of logging for S-TAP
Discovery. The default value of 0 logs only errors. A value
of 1 logs both errors and debug statements.
discovery_ora_alt_locations This parameter specifies alternative locations to look for
listener.ora files in a comma (',') separated list.
discovery_port This parameter defines which port S-TAP Discovery uses
when it connects to the Guardium system. The default
port number is 8443.
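For example, a guard_tap.ini might carry discovery settings like the
following sketch. All parameter names come from the table above; the host
names and the reduced database list are illustrative:

```
tap_ip=dbserver01.example.com
sqlguard_ip=g1.example.com
discovery_interval=24
discovery_dbs=oracle:db2
discovery_port=8443
```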
If a scheduled discovery is running, and a new request comes in from the user
interface for running discovery, the new request is ignored.
S-TAP Discovery can be run manually, but this is not suggested. The main
reason to run it manually is for debugging purposes.
To send results to the Guardium system, run <absolute path to
guard_discovery binary>/guard_discovery <path to guard_tap.ini>/
guard_tap.ini. To also print the output, add the --print-output argument:
<absolute path to guard_discovery binary>/guard_discovery
<path to guard_tap.ini>/guard_tap.ini --print-output
A-TAP
The A-TAP monitors communication between internal components of the database
server.
Some traffic can only be tapped at the database server application level. This may
be required because the DBMS uses its own encryption, or because of other
internal database implementation details. For these cases, the A-TAP
(application-level tapping) mechanism monitors communication between internal
components of the database server. A-TAP uses K-TAP as a proxy to pass data to
S-TAP, and it must be configured separately for each database environment.
A-TAP can be controlled from the guard_tap.ini parameter file, by the guardctl
utility, or on some platforms A-TAP can also be activated from S-TAP
configuration.
The guardctl utility provides commands that facilitate different aspects of A-TAP
installation, activation, deactivation, uninstallation and upgrade.
To use the guardctl utility, you must log in as root, since it requires superuser
privileges.
Note: The guardctl utility requires version 3 or greater of bash. Enter bash
--version at the command prompt to display the current version.
Syntax
<guardium_base>/xxx/guardctl [<name>=value>] [<name>=<value> ...] [command]
Commands
v help - default command, prints the list of supported commands, parameters and
their default values
1. If the Oracle Listener and all Oracle instances are not running under the
same user, all users must belong to the same (shared) group in order to
capture Oracle TCP traffic. In addition, on HP-UX, the
HP-2005-security-patch is required.
2. The DB2 shared-memory-related parameters should be determined at
installation time by using the procedure described in DB2 Linux S-TAP
Configuration Parameters.
store-conf command
Use the store-conf command to name and store the configuration of an
instance of the database for future use. These stored configurations may
later be used for A-TAP activation and deactivation.
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [<name>=<value> ...] store-conf
The value specified for instance (db_instance parameter) can be used later
to reference this configuration in other guardctl commands.
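As a sketch, a stored configuration for a hypothetical Oracle instance named
ora1 could be created as follows. <guardium_base>/xxx is the
installation-specific path used in the syntax above, and db_type is one of
the name=value parameters shown in the zone installation steps later in this
chapter:

```
<guardium_base>/xxx/guardctl db_instance=ora1 db_type=oracle store-conf
```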
Installing
1. Install STAP/KTAP on the master/global Zone/WPAR by the normal method
2. For Solaris Zones:
v For each sub-zone where Oracle is installed, make sure the Guardium
device is mapped:
zoneadm -z <zonename> halt
zonecfg -z <zonename>
<zonename>> add device
<zonename>device> set match=/dev/ktap_xxx (for Solaris 10)
<zonename>device> set match=/dev/guard_ktap (for Solaris 11 on v8.2 and later)
<zonename>device> end
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
v With multiple KTAP devices, repeat the steps for each KTAP device by using
the name, ktap_xxxx (Solaris 10) or guard_ktap_x (Solaris 11).
3. Copy the entire A-TAP installation directory to a sub-Zone/sub-WPAR:
v On the master/global Zone/WPAR:(assuming Guardium software is installed
on the master Zone/WPAR under /usr/local/guardium, and there exists a
writable directory /usr/local with enough free space on the
sub-Zone/sub-WPAR)
cd /usr/local; tar -cvf - guardium | ssh root@subzonehost 'cd /usr/local && tar -xvf -'
Note: For GIM installations, the installation path on the master/global
and sub-Zones/sub-WPARs must be identical. For non-GIM installs, the paths
may be different, although this is not recommended.
4. Copy the A-TAP libraries to each sub-Zone/sub-WPAR:
v If an A-TAP is to be activated on the master Zone/WPAR, activate it
normally using guardctl. (Activation must be done using guardctl; it cannot
be done by setting encryption=1 in the guard_tap.ini file).
v If A-TAP will not be used on the master Zone/WPAR, use guardctl to
prepare the libraries for use. On the master Zone/WPAR:
/usr/local/guardium/bin/guardctl --db_instance=<instance-name> --db_type=<database-type> --d
5. Activate A-TAP normally for database instances by using guardctl on each
desired sub-Zone/sub-WPAR:
Note: A-TAP (guardctl) activation may complain and issue warnings about the
following:
v errors installing libraries under /usr/lib (since that directory belongs to the
global/master zone)
v not being able to change the guard_tap.ini to monitor oracle-guard instead
of oracle.
v not being able to restart stap (since it is running only on the master zone)
Uninstalling
1. On every sub-Zone/sub-WPAR with A-TAP installed/active:
v Deactivate (and deinstrument if necessary, i.e. for Oracle on AIX) all A-TAPs
using guardctl
v Manually remove (rm -rf) the installation directory (usually
/usr/local/guardium)
v Manually remove the ATAP libraries:
find /usr/lib -type f -name 'libguard-*.so' | xargs rm -f
Note: Removing the libraries may give errors; these can be ignored.
2. On the master/global Zone/WPAR:
v Uninstall STAP/KTAP using the normal method
v Remove the libraries:
Oracle patches may invoke relink and will replace the Oracle executable, causing
the A-TAP to stop functioning.
However, in case A-TAP was not properly deactivated prior to Oracle patch
installation, DO NOT try to deactivate it after patch installation. Instead follow
these steps:
1. Check if A-TAP IS OK.
grep guardium $ORACLE_HOME/bin/oracle >& /dev/null && echo "ATAP IS OK"
a. If ATAP IS OK is displayed, the A-TAP is still active and there is no need to
do anything.
b. If ATAP IS OK is NOT displayed, remove $ORACLE_HOME/bin/oracle-guard
and activate the A-TAP.
Several problems may occur that have to do with user and group permissions.
v For 'BEQUEATH' access from a user other than the one that installed the
database, the permissions have to be set manually:
– Add the user running sqlplus to the group 'guardium'.
– Open the read permissions ('chmod a+rx') on the following two directories:
/usr/local/guardium/xxx/etc/guard
/usr/local/guardium/xxx/etc/guard/executor
– Make sure that the SUID and SGID bits are on ${ORACLE_HOME}/bin/oracle.
If not, run the command chmod ug+s ${ORACLE_HOME}/bin/oracle
For example
root@ub10u4x64t:~#
72 S-TAP and other agents
Activate and deactivate your A-TAPs
A-TAP Database Instance Activation
Use the activate command to activate an A-TAP. The A-TAP must be activated for
each DB instance to be monitored on the server. Note the following:
v A-TAP cannot be activated or deactivated while the DB instance is up and
running.
v A-TAP activation relies on the stored configuration for the given instance.
v A-TAP parameters may also be specified on the command line. Command line
parameters override the stored ones.
v Operating system users for the DB instances have to be completely logged off
from the system during DB instance activation.
v A-TAP has to be deactivated prior to any upgrade of the Database server.
v For Oracle on AIX, the instrument command must be used before the A-TAP
is activated, whether activation is done by using the activate command or
by setting encryption=1 in the .ini file.
v Enabling encryption in the inspection engine is only supported on AIX, HP-UX,
and Solaris. It is not supported in Linux, WPAR, or zones environments. Enable
encryption using encryption=1 in the guard_tap.ini file or from the S-TAP
Control > Edit S-TAP Configuration screen in the Guardium user interface.
v In a GIM installation, every zone has to be populated with libguard-* as well
(see Solaris Zones 2.)
v For a multi-instance configuration where a single executable is used for all of the
instances, guardctl activate should only be done once as it will be effective for
all instances.
v For Solaris Zones and WPARs, to make A-TAP work on zone architectures,
the file system /usr/local on the sub-zone system has to be readable and
writable.
Instrument command
To instrument an Oracle executable (needed on AIX), use the following
syntax:
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ <name1=value1> ... <nameN=valueN> ] instrument
activate command
A-TAP activation can be done either from the guard_tap.ini (via
encryption=1) on Solaris (not on Solaris zones) and HP-UX only, or by
issuing the following command:
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ <name1=value1> ... <nameN=valueN> ] activate
Note: Command line parameters (if specified) supersede those stored for the
given instance. The parameters are stored for future use, overwriting
previously specified ones.
Note: After the instrument command has been issued for Oracle, the
monitoring has to be activated as well.
On the S-TAP control screen of the Guardium system, check the 'Encryption'
checkbox in the inspection engine definition screen. Note that A-TAP
activation through 'Encryption=1' is not possible within a Solaris subzone.
To enable A-TAP by using this method on supported platforms, follow these
steps:
v Upon installation of S-TAP on the host, the Oracle OS user has to be added to
the Guardium group. The group is created by S-TAP install script. Some
platforms require the user to be completely logged off in order for this change to
take effect.
– On Solaris, the user has to be completely logged off from the system.
– No process should be running in the system under this user id.
– In order to verify this, use the following command (assuming the user is
Oracle):
ps -efU oracle
– If the output is empty, use the following command to add the user to the
group:
usermod -G dba,guardium oracle
– Note that if the user belongs to groups other than dba, those groups
should be listed as well. Current group membership can be verified by
using the following command:
id -a oracle
– Once the user is added to the Guardium group, the encrypted traffic should
be logged for this user.
v On Solaris zone architecture, the following extra steps are to be taken:
– On the local zone with the Oracle instance, create new user group named
guardium with GID equal to the GID of group guardium on the global zone.
– On the local zone with the Oracle instance, create the /var/guard directory
like this:
mkdir -p /var/guard
chown root:guardium /var/guard
chmod ug+wx /var/guard
– On the local zone with the Oracle instance, add the Oracle OS user to
the guardium group.
– On global zone, edit guard_tap.ini file. Prepend the global zone path to local
zone /var/guard directory to atap_exec_location parameter. Use ':' (colon) as
a separator.
atap_exec_location=/data/zones/oracle10/root/var/guard:/var/guard
Deactivate A-TAPs
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ --force-action=yes ] deactivate
If the optional --force-action parameter is specified and its value is set to yes,
forced deactivation will be attempted. In particular, it will try to deactivate a DB
instance even if it is running or the OS user is logged in. This can be beneficial to
use if a normal deactivate attempt is unsuccessful. The --force-action parameter
must precede the deactivate command, as shown in the example, or an error will
be issued.
deactivate-all Command
Use the deactivate-all command to deactivate A-TAP for all database instances
on the server.
Syntax
<guardium_base>/xxx/guardctl [ --force-action=yes ] deactivate-all
Note: The --force-action parameter may be specified if any of the instances
fail to deactivate after a normal deactivate-all is attempted.
Tee
The Tee is a non-kernel-based data collection mechanism that can be used as an
alternative to K-TAP.
This topic does not apply if the K-TAP mechanism will be used to monitor local
connections. The Tee is a non-kernel-based data collection mechanism that can be
used as an alternative to K-TAP, and as such, requires the clients to explicitly
connect to the Tee.
Do not perform this procedure until the S-TAP has been installed on the DB2
server, and you are ready to start collecting data. For the local DB2 clients to use
the Tee, you will create a database alias named tee, and the clients will change
their login sequence to log into tee (instead of the DB2 server).
1. Log on to the database server system using an administrative account.
2. Locate the entry in the /etc/services file for the node name that clients use to
connect to the database. Each entry in this file is in the following format:
node_name port_number/protocol [aliases]
For example:
db2inst1 50000/tcp # DB2 connection service port
Note: Record the node name (db2inst1, in this example) and the port number
(50000). When you configure the inspection engine, this is the port number
you will specify as the Tee Real Port.
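The node-name/port lookup in step 2 can also be scripted. This is a sketch
against a sample file whose contents mirror the example entry above; on a
real server you would read /etc/services itself:

```shell
# Build a sample services file matching the example entry above.
printf 'db2inst1 50000/tcp # DB2 connection service port\n' > /tmp/services.sample
# Print the port number for the db2inst1 node entry (the Tee Real Port).
awk '$1 == "db2inst1" { split($2, a, "/"); print a[1] }' /tmp/services.sample
# prints 50000
```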
3. Select an unused port number in the range of 1025-65535 for use by the S-TAP.
Search the /etc/services file for the selected port number to be certain that it
is not used. When you configure the inspection engine, this is the port
number you will specify as the Tee Listen Port.
4. Enter the db2 command to start the db2 command-line interface. To execute
this command, you may need to add the command to the $PATH, or switch
users to a db2 user on the system.
5. Enter the list node directory command to list all nodes defined. A very simple
example:
db2 => list node directory
Node Directory
Number of entries in the directory = 2
Node 1 entry:
Node name = GACCTEST
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = merlin
Service name = 50000
Node 2 entry:
Node name = LOCGOOSE
Comment =
Directory entry type = LOCAL
Protocol = LOCAL
Instance name = db2inst1
Do not log out of the database server system yet. After configuring an
inspection engine, you will enter one or more SQL commands using the DB2
command-line SQL utility to verify the alias connection.
13. When you are ready to start collecting data, define a DB2 inspection engine to
listen on the selected Tee Listen Port (12344 in the example), and forward
messages to the Tee Real Port (50000 in the example). Be sure to set all other
properties required for a DB2 inspection engine, as described elsewhere.
14. Use the DB2 command-line to verify that the database connection through the
local tee process works correctly. Log in to the database from the command
line using a command like the following (where sample is the database (or
some tee catalog name), db2inst1 is user name, passwd is the password, and
tee is the database alias):
$ db2 connect to sample user db2inst1 using passwd
Database Connection Information
Database server = DB2/LINUXX8664 9.7.0
SQL authorization ID = DB2INST1
Local database alias = SAMPLE
15. Enter a command that you know will create an SQL exception (for example,
select * from my_mistake), and then quit the session.
16. Log in to a user portal on the Guardium system, and navigate to the Reports
& Alerts - Report Templates - Exceptions tab, and select the SQL Errors report.
You should be able to locate your SQL error near the beginning of the report,
and thus verify that the tee is seeing the DB2 traffic.
17. Now modify all client logins to log into the tee alias (instead of the DB2
server)
Do not perform this procedure until the S-TAP has been installed on the Informix
server, and you are ready to start collecting data. For the local Informix clients to
use the Tee, you will create an staptcp service name in the /etc/services file, create
an stap_sqlhosts file, and modify several environment variables such that local
Informix clients will connect to the Tee Listen Port instead of to the Informix
server.
For example:
Note: Pay attention to the port number (1400, in the example). When you
configure the inspection engine, this is the port number you will specify as the
Tee Real Port.
11. Select an unused port number in the range of 1025-65535 for use by the S-TAP.
Search the services file for the selected port number, to be certain that it is not
used. In our example, we will use 12344. When you configure the inspection
engine, this is the port number you will specify as the Tee Listen Port.
12. Add a line to the services file for S-TAP listening port, staptcp in the example:
staptcp 12344/tcp
13. Save the services file.
14. Set the environment variable INFORMIXSQLHOSTS, to specify the full path
name for the cloned version of the sqlhosts file that you created earlier. For
example:
setenv INFORMIXSQLHOSTS $INFORMIXDIR/etc/stap_sqlhosts
15. When you are ready to start collecting data, define an Informix inspection
engine to listen on the selected Tee Listen Port (12344 in the example), and
forward messages to the Tee Real Port (1400 in the example).
This topic does not apply if the K-TAP mechanism will be used to monitor local
connections. The Tee is a non-kernel-based data collection mechanism that can
be used as an alternative to K-TAP, and as such, requires the clients to
explicitly connect to the Tee.
Do not perform this procedure until the S-TAP has been installed on the Oracle
server, and you are ready to start collecting data. Use the following procedure
to modify the tnsnames.ora file, which maps service aliases to ports. Do
not change this file until the S-TAP has been installed and you are ready to start
collecting data.
1. Make a backup copy of the tnsnames.ora file, which is located in the
$ORACLE_HOME/network/admin directory.
2. Open the tnsnames.ora file for editing in a text editor program.
3. Locate the entry in this file for the service alias used to access the database.
An entry named EAGLE10 on the EAGLE host is illustrated here:
EAGLE10 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eagle)(PORT = 1521)))
(CONNECT_DATA = (SERVICE_NAME = GUARD10))
)
Where scott is the database user name, tiger is the password, and LOCALTEE
identifies the service.
10. Enter an invalid SQL command to create an SQL exception that will be easy to
find. For example: select * from my_mistake
11. Log in to a user portal on the Guardium system, and navigate to the Reports
& Alerts - Report Templates - Exceptions tab, and select the SQL Errors report.
You should be able to locate your SQL error near the beginning of the report,
and thus verify that the tee is seeing the local Oracle traffic.
12. Reopen the tnsnames.ora file and replace the database service port number
with the selected number. Continuing our example, the EAGLE10 entry would
be updated as follows:
EAGLE10 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eagle)(PORT = 12344)))
(CONNECT_DATA = (SERVICE_NAME = GUARD10))
)
13. Save the tnsnames.ora file. All local clients connecting to EAGLE10 will now
connect to port 12344 (the Tee Listen Port) instead of the actual database port
(the Tee Real Port).
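The edit in step 12 amounts to replacing the PORT value in the alias's ADDRESS clause. A minimal sketch of that substitution, using the EAGLE10 example from the text (the alias name and port numbers are taken from that example; a real tnsnames.ora should be backed up first, as step 1 instructs):

```python
import re

def retarget_port(tns_text, old_port, new_port):
    """Replace (PORT = old_port) with (PORT = new_port) in a tnsnames entry."""
    return re.sub(rf"\(PORT = {old_port}\)", f"(PORT = {new_port})", tns_text)

entry = (
    "EAGLE10 =\n"
    "  (DESCRIPTION =\n"
    "    (ADDRESS_LIST =\n"
    "      (ADDRESS = (PROTOCOL = TCP)(HOST = eagle)(PORT = 1521)))\n"
    "    (CONNECT_DATA = (SERVICE_NAME = GUARD10))\n"
    "  )\n"
)
# Point local clients at the Tee Listen Port instead of the database port.
updated = retarget_port(entry, 1521, 12344)
```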
Follow these steps to modify the local interface file, which maps servers to ports.
Do not change this file until the S-TAP has been installed and you are ready to
start collecting data.
Use the following steps to switch from Tee to K-Tap without an uninstall or
reinstall. This condition may exist after an unsuccessful loading of K-Tap.
1. Disable S-TAP. See Stop UNIX S-TAP for more information.
2. Comment the guard_tee and guard_hnt lines out of inittab, or make the
equivalent change for Red Hat 6, which does not use inittab.
3. Run 'init q' (or the Red Hat equivalent); alternatively, just kill the tee and
hunter jobs.
4. Edit guard_tap.ini and change ktap_installed to 1 and tee_installed to 0.
5. Run the 'guard_ktap_loader install' command.
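Step 4 above can be sketched as a small edit of guard_tap.ini. The key names (ktap_installed, tee_installed) come from the text; the simple key=value line handling is an assumption about the file format, so treat this as illustrative only:

```python
# Sketch: flip ktap_installed/tee_installed in a guard_tap.ini-style
# key=value file (step 4 of the Tee-to-K-Tap switch).

def set_flags(ini_text, updates):
    """Rewrite the value of each key named in updates, leaving others alone."""
    out = []
    for line in ini_text.splitlines():
        key = line.split("=", 1)[0].strip().lower()
        if key in updates:
            line = f"{key}={updates[key]}"
        out.append(line)
    return "\n".join(out) + "\n"

ini = "tap_ip=10.0.0.5\nktap_installed=0\ntee_installed=1\n"
updated = set_flags(ini, {"ktap_installed": 1, "tee_installed": 0})
```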
Windows S-TAP
Use this section for Windows S-TAP configuration information.
Note: When Windows S-TAP encounters a fatal error during startup that
is due to configuration problems (an unknown local IP address, more than
one primary SQL-Guard defined, and so on), it logs the reason to the
Windows event log. In some cases, an exit after a failure may cause a crash
and another logged event. This crash should not cause any concern if it is
preceded by the event explaining the reason for the failure.
Under normal conditions, the Kerberos names will never be seen on the Guardium
system. In heavy volume situations, if names have not yet been resolved by the
time messages must be sent to the Guardium system, traffic with Kerberos names
can either be sent as is (with the Kerberos names), or dropped (your choice).
Perform this step from the Administrator GUI after installing the S-TAP agent on
the database server system.
1. Log on to the active Guardium host for the S-TAP just installed. (The active
host is the only host from which you can modify an S-TAP configuration.)
2. Click Manage > Activity Monitoring > S-TAP Control to open the S-TAP
Control panel.
3. Locate the database server on which the S-TAP was installed, in the S-TAP
Host column, and click Edit S-TAP Configuration to open the S-TAP
Configuration panel.
4. Expand the S-TAP Control Details pane.
5. Check the MSSQL Encryption box.
6. When Kerberos authentication is used, Kerberos Credentials Mapping
controls how S-TAP obtains the database user names. If either Sync option
(below) is selected, S-TAP will not forward messages to the Guardium system
until it resolves the real database user name, so in high-message-volume
situations some messages may be lost. When the Async option is used, all
messages will be forwarded to the Guardium system, but initial sessions for
users with new Kerberos tickets will have strings of hexadecimal characters
in the database username field until S-TAP resolves the actual database user
name.
At Startup, Sync During startup processing, S-TAP obtains all authenticated
users from the domain controller. This can be time consuming. After all users
have been obtained and tabled, S-TAP starts sending data to the Guardium
system. When it encounters a message from a user it does not recognize, it
obtains that database user name as described for On Demand, Sync, below.
On Demand, Sync When S-TAP encounters a Kerberos message for an
unrecognized user, S-TAP fetches the user name from the domain controller. It
does not forward any traffic from that user to the Guardium system until it
has the actual database user name.
On Demand, Async Like the above option, except that messages are not held
while waiting to obtain the database user name.
For more information, see MSSQL Encryption and Kerberos Credentials
Mapping in the S-TAP Control - Details table in the "Configure S-TAP from
the GUI" on page 90 help topic.
Note: To monitor all clients, enter 1.1.1.1 and 0.0.0.0 in the Client IP and
Mask fields.
14. Click Apply to save the inspection engine definition.
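The Client IP and Mask note above follows from how an IP/mask pair selects clients: a client matches when the client address ANDed with the mask equals the configured address ANDed with the mask, so a mask of 0.0.0.0 matches every client regardless of the IP entered. The bitwise interpretation below is an assumption consistent with that note, shown as a small sketch:

```python
import ipaddress

def matches(client, configured_ip, mask):
    """True if client falls within the configured IP/mask selection."""
    c = int(ipaddress.IPv4Address(client))
    i = int(ipaddress.IPv4Address(configured_ip))
    m = int(ipaddress.IPv4Address(mask))
    return (c & m) == (i & m)

# Mask 0.0.0.0 matches all clients, whatever IP is configured.
all_clients = matches("192.0.2.7", "1.1.1.1", "0.0.0.0")
# A full mask selects exactly one address (local traffic only here).
local_only = matches("192.0.2.7", "127.0.0.1", "255.255.255.255")
```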
In most cases the installation program takes care of finding the JAVA_HOME
value. This value is placed in the CAS configuration file.
If for any reason (for example, you install a new Java version after installing the
Guardium CAS product) you need to change the location of JAVA_HOME, use
the following procedure.
1. Locate and open the CAS configuration file for editing. Its full path name is:
<installation directory>/case/conf/wrapper.conf
2. Locate the following entry: wrapper.java.command=<value>
3. Replace <value> with the JAVA_HOME directory.
4. Save the file.
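Steps 2 and 3 can be sketched as a one-line substitution in wrapper.conf. The key name wrapper.java.command comes from the text; the sample paths are illustrative assumptions only:

```python
# Sketch: point wrapper.java.command in the CAS wrapper.conf at a new
# JAVA_HOME location (steps 2-3 of the procedure).

def set_java_command(conf_text, java_home):
    """Rewrite the wrapper.java.command entry, leaving other lines alone."""
    out = []
    for line in conf_text.splitlines():
        if line.startswith("wrapper.java.command="):
            line = f"wrapper.java.command={java_home}"
        out.append(line)
    return "\n".join(out) + "\n"

conf = "wrapper.java.command=/usr/java6\nwrapper.java.initmemory=16\n"
updated = set_java_command(conf, "/usr/java8")
```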
CAS is a 32-bit Java application, so it would not normally have access to the 64-bit
software configuration parameters. CAS has been enhanced to detect a 64-bit
environment and handle the partitioned Registry. CAS interest in the Registry is to
retrieve values of Registry keys to detect changes or to compare against
recommended values.
This occurs when the LhmonProxy driver is loaded AFTER the NetBT driver by
the operating system during the boot process. To determine the relative boot order
of the drivers, you need the following information from the registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip Tag
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NetBT Tag
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LhmonProxy Tag
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GroupOrderList
PNP_TDI
To determine if NetBT is loaded before LhmonProxy, find their Tag values in the
PNP_TDI list. If the LhmonProxy Tag number comes after the NetBT Tag value
in the list (even if LhmonProxy's Tag value is smaller), then LhmonProxy loads
after NetBT.
For example, let's say the Tcpip Tag is 4, NetBT Tag is 6, LhmonProxy Tag is 7, and
the PNP_TDI list looks like:
03 00 00 00 04 00 00 00 06 00 00 00 07 00 00 00
LhmonProxy (07 00 00 00) is after NetBT (06 00 00 00) in the list, so the
LhmonProxy driver starts after the NetBT driver.
The solution to the problem is to force the system to load LhmonProxy after Tcpip
but before NetBT by editing the PNP_TDI entry. The solution for the previous
example would have the PNP_TDI entry look like:
03 00 00 00 04 00 00 00 07 00 00 00 06 00 00 00
Be careful when editing the PNP_TDI entry to ensure that you put the proper
number of 0s after the tag value (3 pairs of zeros). Each number in the entry is in
hexadecimal, so tag 10 would look like 0A 00 00 00.
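The PNP_TDI value is a sequence of little-endian 32-bit tag numbers, so the ordering check above can be sketched as follows, using the Tag values from the example (NetBT=6, LhmonProxy=7):

```python
# Sketch: decode a PNP_TDI byte list and test relative driver boot order.

def parse_pnp_tdi(hex_string):
    """Turn '03 00 00 00 04 ...' into a list of tag numbers."""
    data = bytes(int(b, 16) for b in hex_string.split())
    return [int.from_bytes(data[i:i + 4], "little")
            for i in range(0, len(data), 4)]

def loads_after(tags, first_tag, second_tag):
    """True if second_tag appears after first_tag in the boot order."""
    return tags.index(second_tag) > tags.index(first_tag)

# The problem case from the text: LhmonProxy (7) after NetBT (6).
tags = parse_pnp_tdi("03 00 00 00 04 00 00 00 06 00 00 00 07 00 00 00")
problem = loads_after(tags, 6, 7)

# The corrected ordering: LhmonProxy moved before NetBT.
fixed = parse_pnp_tdi("03 00 00 00 04 00 00 00 07 00 00 00 06 00 00 00")
```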
Collected DRDA traffic can be sent to Optim Query Capture Replay with a
microseconds timestamp, since OQCR requires a granularity of 1 microsecond. Use
the CLI command store unit type sink to switch from a granularity of 1
millisecond to 1 microsecond.
HIGH_RESOLUTION_TIMER
0 (default) - Send time stamps in milliseconds (Guardium Version 7.0 and Version
8.0 behavior)
1 - Send time stamps in microseconds, but use milliseconds system timer (to
reduce system performance hit - multiply milliseconds by 1000)
2 - Send time stamps in microseconds, use high resolution windows timer (most
accurate)
For values 1 and 2, the S-TAP will indicate to Sniffer that microseconds are sent, by
setting the reserved byte in PacketData to 1.
The S-TAP will send the same time stamp values to all connected Guardium
systems.
Wfpmonitor is the new S-TAP TCP driver, replacing lhmonproxy and lhmon.
The named pipes driver has been redesigned and is now split into a proxy and
a monitor, NmpProxy and NmpMonitor, replacing Nptrc. This splits the
functionality into basic OS handling (NmpProxy) and Guardium logic
(NmpMonitor).
To make changes to the S-TAP configuration, you must be logged into the
Guardium system that is the active host for the S-TAP. You can only edit an S-TAP
configuration from its active host. Some configuration changes require that the
S-TAP agent be restarted manually.
Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.
If there is no Local Taps section, you must first configure your Guardium system
to manage S-TAP agents. Refer to “Configuring the Guardium system to manage
S-TAPs” on page 47 for more information.
Locate the S-TAP to be configured in the S-TAP Host column by looking for its IP
address or the symbolic host name of the database server on which it is installed.
Each S-TAP has its own controls which are detailed in the following table.
Control Description
Delete Click Delete to remove an S-TAP.
Debug Level:
Make your desired changes, and verify that the Status Indicator is green. If the
Status Indicator is not green, the Guardium system and S-TAP are not connected.
1. Verify that the Status indicator is green. If it is not, the Guardium system and
S-TAP are not connected.
2. Click the (Edit) button for that S-TAP. If the Edit button is not active, this
Guardium system is not the active host for this S-TAP. You must log on to the
active host for this S-TAP to make any changes.
3. Expand and make modifications to any of the following sections of the S-TAP
configuration. Typically, the only additional task at this point is to define one or
more inspection engines. (An inspection engine identifies a set of database
connections to monitor.) Click any of these sections for a detailed description of
its use.
v S-TAP Control - Details
v S-TAP Control - Hunter
v S-TAP Control - Change Auditing
v S-TAP Control - Application Server User Identification
The details section of the S-TAP Control panel applies to basic configuration
settings for the S-TAP agent.
Control Description
Version The S-TAP version installed
Devices Always blank for a Windows server.
0 = Disabled.
When used, the Hunter component can be configured to report and optionally kill
any rogue connections that it discovers on the database server. A rogue connection
is any connection that bypasses the TEE mechanism.
oracle:pipe
Sleep Time Maximum number of seconds between the
randomized starting time of the Hunter’s
rogue process search routine. The start time
is random to increase the difficulty of
defeating it by running in fixed time slots or
intervals. The recommended value for sleep
time is anywhere between 60 and 300.
DBs A comma-separated list of the database
types to be reported:
v DB2
v Informix
v Oracle
v Sybase
v PostgreSQL
v Teradata
The Change Auditing pane of the S-TAP Control panel applies to the CAS
(Configuration Auditing System) agent only. The CAS product is an optional
component unrelated to S-TAP, but all Guardium components installed on the
database server share a single configuration file.
Control Description
Session Timeout Number of minutes for a timeout. Default is
1800.
Ports Application server ports. Use commas to
separate entries, or hyphens for inclusive
ranges. The default is 8080.
Login Pattern Pattern used to identify a user login.
Username Prefix Start of user name in the Post/Get data.
Username Postfix End of user name in the Post/Get data.
Session Pattern Pattern used to identify a new session.
Session Prefix Start of session ID in the Post/Get data.
Session Postfix End of session ID in the Post/Get data.
Session ID Pattern Pattern used to identify an existing session.
Session ID Prefix Start of session ID in the Post/Get data.
Session ID Postfix End of session ID in the Post/Get data.
This pane lists all Guardium systems defined as hosts for the S-TAP. In many cases
only a single Guardium system will be defined as the host for an S-TAP.
Additional hosts can be defined to provide a fail over and load balancing
capability. Guardium S-TAP hosts are referred to using three terms:
In the S-TAP Configuration panel, the Guardium Host pane contains the controls
described here. Note that the buttons shown are available only in the S-TAP
Configuration panel (and not in the S-TAP Control panel):
Control Description
Active A check mark in this column indicates the
active host for this S-TAP.
Guardium Host Identifies a Guardium system by using
either the IP address or the symbolic host
name.
Delete Click to delete the associated host. This
control does not appear on the active host
row.
Down Click to move the associated host one
position down in the list.
Up Click to move the associated host up one
row in the list.
Check Set primary. Move this host to the beginning
of the list, designating it as the primary host.
Before defining a secondary host, be sure that you understand how secondary
hosts are used. See Secondary Guardium hosts for S-TAP agents in the overview of
“S-TAP administration guide” on page 42.
Note: If you have changed the primary host, and you want the S-TAP to begin
using the new primary host immediately, and this is a Windows server, you will
need to restart the GUARD_STAP service. Restarting the service is not required on
UNIX servers.
Note: Do not configure an S-TAP inspection engine to monitor network traffic that
is also monitored directly by a Guardium system that is hosting the S-TAP, or by
another S-TAP reporting to the same Guardium system. If that happens, the
Guardium system will receive duplicate information, will not be able to reconstruct
sessions, and will ignore that traffic.
Control Description
Protocol The type of database server being monitored
(Cassandra, CouchDB, DB2, DB2 Exit,
exclude IE, FTP, GreenPlumDB, Hadoop,
HTTP, ISERIES, Informix, KERBEROS,
MongoDB, MS SQL, Mysql, Named Pipes,
Netezza, Oracle, PostgreSQL, SAP Hana,
Sybase, Teradata, or Windows File Share).
Port Range The range of ports monitored for this
database server. There is usually only a
single port in the range. For a Kerberos
inspection engine, this value should always
display as 88-88. If a range is used, do not
include extra ports in the range, as this may
result in excessive resource consumption
while the S-TAP attempts to analyze
unwanted traffic.
TEE Listen Port Real Port Not used for Windows. Under UNIX,
replaced by the KTAP DB Real Port when
the K-Tap monitoring mechanism is used.
Required when the TEE monitoring
mechanism is used. The Listen Port is the
port on which S-TAP listens for and accepts
local database traffic. The Real Port is the
port to which S-TAP forwards traffic.
For example:
v Oracle: /home/oracle10/prod/10.2.0/
db_1/bin/oracle
v Informix: /INFORMIXTMP/.inf.sqlexec
Applies to all Informix platforms except
Linux. For Informix on Linux, for example:
/home/informix11/bin/oninit
v MYSQL: mysql
v PostgreSQL: POSTGRES.EXE, PG_CTL.EXE
v Teradata: GTWGATEWAY.EXE
v For all other database types, enter NULL
Encryption Activate ASO encrypted traffic for Oracle
(versions 9, 10 and 11) and Sybase on Solaris
or HPUX.
Named Pipes Windows only. Specifies the name of the
named pipe used by MS SQL Server for
local access. If a named pipe is used, but
nothing is specified here, S-TAP attempts to
retrieve the named pipe name from the
registry.
Instance Name The database instance name is required for:
v MS SQL Server 2005 using encryption, or
MS SQL Server using Kerberos
Authentication (MSSQLSERVER is the
default)
v Oracle using database encryption (there is
no default)
DB2 Shared Memory The following three fields apply only when
DB2 is selected as the database type. If
shared memory connections are monitored,
the following three parameters must be set.
Adjustment Default is 20
Client Position Default is 61440
Size Default is 131072
Identifier Identifier is an optional field that can be
used to distinguish inspection engines from
one another. If you do not provide a value
for this field, Guardium will auto populate
the field with a unique name using the
database type and GUI display sequence
number.
Note: For Informix versions 7 or 11, the Informix version must be set for the
inspection engine through the use of the API (create_stap_inspection_engine) or
through editing the guard_tap.ini file (informix_version parameter).
Use this function to ignore all database responses at the S-TAP level, without
sending anything to the Guardium system.
Use this function to more easily ignore unwanted responses from the
database, without loading the network.
Database types can be listed comma-separated, or ALL can be specified to
ignore responses from all database types, as in the following examples. The
default is none.
If it is set to all, responses from all databases are ignored.
DB_IGNORE_RESPONSE=MSSQL,SYBASE,DB2
DB_IGNORE_RESPONSE=all
DB_IGNORE_RESPONSE=none
The following are valid as database types: ALL, CIFS, FTP, DB2_EXIT, PGSQL,
MSSQL_NP, MSSQL, MYSQL, TRD, SYBASE, INFORMIX, DB2, ORACLE,
KERBEROS.
To add CIFS/FTP inspection, use fixed ports for CIFS or FTP. FTP always uses port
21, CIFS uses port 139 or port 445.
Configuring inspection engines for FTP traffic is easy. For net inspection, simply
select Protocol FTP, enter port 21, and enter the IPs/Masks as you normally would.
FTP Sniffing is the ability to sniff FTP traffic between a client and server as if it
were database traffic. With FTP, any machine (UNIX or Windows) can be a
client, and any system can be a server, as long as there is a valid user to log in
with. Note that there is no local FTP. However, FTP can be sniffed by either
network inspection or by network S-TAP sniffing. FTP traffic typically appears
on port 21. In GDM_CONSTRUCT, FTP traffic appears as "_FTP" followed by
the raw FTP command that was sent (note that the raw FTP command is
different from the actual FTP data that was sent).
CIFS Sniffing (or Windows File Share Sniffing) is the ability to sniff the sharing of
Windows files between a client and server as if it were database traffic. Sharing
of directories and files in Windows is based on the SMB (Samba) protocol,
which the Guardium system sniffs and translates into CIFS. Use the smbclient
utility to sniff not only Windows File Share traffic but also UNIX connections to
Windows shared folders. Note that there is no local CIFS. However, CIFS can be
sniffed by either network inspection or by network Windows S-TAP sniffing.
Also note that there is no such thing as a CIFS server: any Windows machine
can either share files or access shared files, so any Windows machine can be a
client or a server. CIFS traffic typically appears on either port 139 or port 445.
In GDM_CONSTRUCT, CIFS traffic appears as "_CIFS" followed by the CIFS
command that was sent.
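Because CIFS and FTP use fixed ports, classifying this traffic reduces to a port lookup, as the text notes (FTP on 21, CIFS on 139 or 445). A minimal sketch:

```python
# Sketch: map a fixed port to the sniffed protocol, per the text.
def classify(port):
    """Return 'FTP' or 'CIFS' for their fixed ports, else None."""
    if port == 21:
        return "FTP"
    if port in (139, 445):
        return "CIFS"
    return None

proto = classify(445)
```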
After changing an S-TAP configuration, you may notice its status light in the
S-TAP Control panel turn yellow. A yellow light means that there is a mismatch
between the configuration on the Guardium system and the configuration on the
S-TAP. A temporary yellow light is acceptable, as it takes some time for the S-TAP
to receive and approve the new configuration. If the yellow light persists, it usually
means that the S-TAP did not accept the new configuration and reverted to the last
known good configuration.
When an error has occurred, you can review the errors by opening Reports >
Real-time Guardium Operational Reports > S-TAP Events. In most cases the
event log will contain error messages indicating what was wrong with the new
configuration. See Viewing the S-TAP Events Panel for a description of error
messages.
The DB2-specific S-TAP and A-Tap parameters apply only when all of the
following conditions are met:
v The DB2 server is running under Linux.
v The K-Tap monitoring mechanism is installed.
v Clients connect to DB2 using shared memory.
The DB2-specific S-TAP parameters are set on the Inspection Engine definition
panel.
Set the Position parameter value according to the shared memory size used by
db2bp, as follows:
If you do not know the shared memory size used by db2bp, you can use the
following procedure to find it.
The following table summarizes the required parameters used both for S-TAP and
A-Tap when configured to monitor DB2 shared memory on Linux.
Note: The theory behind this computation is based on the DB2 Administration
Guide: Performance document.
If you have installed your S-TAP by using the Guardium Installation Manager
(GIM), you can update some parameters through the GIM UI or CLI. If you cannot
use any of these methods to update the parameters, you can edit the configuration
file on the data server.
The following tables provide a detailed description of the S-TAP parameters. They
indicate which parameters can be updated through the Guardium UI and by using
GIM.
If it is necessary to modify the configuration file from the database server, follow
the procedure outlined. The file contains comments that explain many of the
parameters.
1. Log on to the database server system using the root account.
2. Stop the S-TAP:
3. Make a backup copy of the configuration file: guard_tap.ini. It is located in
one of the following directories, depending on the server operating system
type:
v Windows: \Windows\System32
v Unix: /usr/local/guardium/guard_stap
4. Open the configuration file in a text editor.
5. Edit the file as necessary.
6. Save the file.
7. Restart the S-TAP and verify that your change has been incorporated.
SQLGuard parameters
These parameters describe a Guardium system to which this S-TAP can connect.
Table 18. S-TAP configuration parameters for a Guardium system
Default
Parameter VersionGUI value Description
sqlguard_ip NULL IP address or hostname of the Guardium
system that will act as a host for the S-TAP
primary 1 Indicates if the server is a primary server:
Windows: 0=NO, 1=YES (1). UNIX: 1=Primary,
2=Secondary, 3=tertiary, etc. If
participate_in_load_balancing=1, there must be
at least one primary server. If
participate_in_load_balancing=0, there must be
exactly one primary server.
General parameters
These parameters define basic properties of the S-TAP running on a Windows
server and the server on which it is installed, and do not fall into any of the other
categories.
These parameters are stored in the [VERSION] section of the S-TAP properties file.
Table 19. S-TAP configuration parameters in the [VERSION] section
Default
Parameter VersionGUI value Description
stap_client_build Read only. The build version of the installed
S-TAP
protocol_version Read only. The version of the Guardium system
These parameters are stored in the [TAP] section of the S-TAP properties file.
Table 20. S-TAP configuration parameters in the [TAP] section
Default
Parameter Version
GUI GIMvalue Description
tap_type Read only. STAP for UNIX, WTAP for
Windows
tap_version Read only. The version of S-TAP installed on
the server
tap_ip IP address or hostname for the database
server system on which S-TAP is installed
all_can_control Yes 0 0=S-TAP can be controlled only from the
primary Guardium system. 1=S-TAP can be
controlled from any Guardium system.
These parameters are stored in the Database section of the S-TAP properties file,
with the name of a data repository. There can be multiple sections in a properties
file, each describing one inspection engine used by this S-TAP.
Firewall parameters
These parameters affect the behavior of the S-TAP with respect to the firewall.
Table 23. S-TAP configuration parameters for firewall
Default
Parameter VersionGUI value Description
firewall_installed 0 Firewall feature enabled. 1=yes, 0=no.
firewall_timeout 10 Time in seconds to wait for a verdict from the
Guardium system. If the timeout is reached, the
firewall_fail_close value determines whether to
block or allow the connection. The value can be
any integer value.
firewall_fail_close 0 If the verdict does not come back from the
Guardium system before firewall_timeout has
passed, then if firewall_fail_close=0 the
connection will go through; if firewall_fail_close=1
the connection will be blocked.
firewall_default_state 0 Determines what triggers the start of firewall
mode. 0=firewall mode starts when an event
triggers a rule in the installed policy;
1=firewall mode is enabled regardless of a
triggering event (default 0). Setting 1 forces
the watch (or enabling) of the firewall
regardless of any rule, but specific actions
(DROP, and so on) still happen only when
triggered by a rule.
9.0
firewall_force_watch NULL When the firewall feature is enabled and
firewall_default_state is 0, the session will be
watched automatically when its client IP
matches a list of IP/MASK values. The list
itself is separated with commas, for example,
1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
9.0
firewall_force_unwatch NULL When the firewall feature is enabled and
firewall_default_state is 1, the session will be
unwatched automatically when its client IP
matches a list of IP/MASK values. The list
itself is separated with commas, for example,
1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
Note: All of these parameters are deprecated for use on Windows servers and
should not be modified. They are listed here because you might see them in your
configuration file.
Table 24. S-TAP configuration parameters for application servers
Default
Parameter VersionGUI value Description
appserver_installed 0 Deprecated (for Windows only). 0 is default,
S-TAP acts as normal. 1=S-TAP is set in 'client
mode', switches S2C and C2S packets to reflect
S-TAP being installed on client, not db server.
Also, if 1, checks to see if the other appserver_*
parameters are filled in, and if so, examines http
packets on the supplied port to grab session
information about the end-user of the
java-application that resides on the client system.
appserver_ports Yes 8080 Deprecated (for Windows only).
Comma-separated list of ports on which the java
application is accessed via web browser. For
example, if the URL to a certain estore is
http://woodpecker:8888/estore, then 8888
would be the value supplied for this parameter.
appserver_login_pattern Yes Deprecated (for Windows only).
Comma-separated list of strings specifying the
login pattern passed to the application. This is
the pattern that the java application is passed
that indicates a login of a user.
Yes
appserver_username_prefix Deprecated (for Windows only).
Comma-separated list of strings specifying the
prefix to the username for a given session. This
is the pattern the java application uses to
indicate the username of the given session.
Yes
appserver_username_postfix Deprecated (for Windows only).
Comma-separated list of strings specifying the
postfix to the username for a given session. This
is the pattern (or character) used by the java
application to indicate the end of the value for
the given variable that indicates the username.
Yes
appserver_session_pattern Deprecated (for Windows only).
Comma-separated list of strings that specify the
start of an end-user session using a particular
database session.
appserver_session_prefixYes Deprecated (for Windows only).
Comma-separated list of strings specifying
where the session id starts
Yes
appserver_session_postfix Deprecated (for Windows only).
Comma-separated list of strings specifying
where the session id ends.
Yes
appserver_usersess_pattern Deprecated (for Windows only).
Comma-separated list of strings specifying the
identifier for marking which end-session a given
connection is continuing with.
Debug parameters
These parameters affect the behavior of S-TAP debugging.
These parameters are stored in the [TAP] section of the S-TAP properties file:
Table 26. More S-TAP configuration parameters for debugging
Default
Parameter VersionGUI value Description
debug_file_name Location of the S-TAP debug file. The default
location is c:/guardium/stap.txt
debug_max_file_size 200
debuglevel 0 Level of debug messages to store. Leave at 0
unless directed by IBM Support.
0 Only critical error information
1 All previous messages plus repeatable
critical error information
2 Not used
3 All messages from level 1, plus brief
information about packets sent to a
Guardium system
4 All messages from level 3, plus local
sniffing log
5 All messages from level 4, plus network
sniffing log
6 All messages from level 5, plus
heartbeat receiving log
7 All messages from level 6, plus
miscellaneous debugging information
Driver parameters
These parameters affect the behavior of several drivers with which the S-TAP
interacts.
Table 28. S-TAP configuration parameters for drivers
Default
Parameter VersionGUI value Description
lhmon_driver_installed 1 LHMON can be used for both local and
network TCP traffic. S-TAP on Windows uses
the lhmon driver for local traffic. Use 1 to
turn local traffic sniffing on, 0 to turn it off.
lhmon_driver_level 0 Advanced. Used for thread prioritization.
lhmon_for_network 1 Uses lhmon instead of winpcap for sniffing
network traffic if set to 1
lhmon_log_size 1 Advanced
nptrc_log_size 2 Advanced
shstrc_log_size 4 Advanced
ora_driver_installed 1 Set to 1 for sniffing Oracle ASO and SSL
traffic
ora_driver_level Yes 0 Advanced. Used for thread prioritization.
named_pipes_driver_installed 1 Set to 1 for local named pipes sniffing
named_pipes_driver_level Yes 0 Advanced. Used for thread prioritization.
shared_memory_driver_installed 0 Deprecated
shared_memory_driver_level Yes 0 Advanced. Used for thread prioritization.
krb_mssql_driver_installed 2 Set to 1 for sniffing MSSQL SSL traffic and
Kerberos tickets. Set to 2 to collect only
decrypted MSSQL traffic without Kerberos
tickets, which saves the time spent collecting
the domain user names when starting the
program. Note that this parameter is always
set to 0 after installation.
krb_mssql_driver_level 0
krb_mssql_driver_nonblocking 0 1=get domain user names from the domain
controller in a separate thread. In this case
the first packet with the new user does not
resolve the user SID into domain user name.
krb_mssql_driver_user_collect_time 30 Time limit for collecting SIDs. When the old
method is used for pre-collecting the
SID/username map
(KRB_MSSQL_DRIVER_INSTALLED=1) from
the domain controller, the TAP property
KRB_MSSQL_DRIVER_USER_COLLECT_TIME
can be used to limit the time spent
communicating with the domain controller at
S-TAP start-up (default is 30 seconds).
SQLGuard parameters
These parameters describe a Guardium system to which this S-TAP can connect.
Table 29. S-TAP configuration parameters for a Guardium system
Default
Parameter Version
GUI GIMvalue Description
sqlguard_ip NULL IP address or hostname of the Guardium
system that will act as a host for the S-TAP
sqlguard_port 16016 Read only. Port used for S-TAP to connect to
Guardium system
General parameters
These parameters define basic properties of the S-TAP running on a UNIX server
and the server on which it is installed, and do not fall into any of the other
categories.
These parameters are stored in the [VERSION] section of the S-TAP properties file.
Table 30. S-TAP configuration parameters in the [VERSION] section
Default
Parameter Version
GUI GIMvalue Description
stap_client_build Yes The build version of the installed S-TAP
protocol_version The version of the Guardium system
These parameters are stored in the [TAP] section of the S-TAP properties file.
Table 31. S-TAP configuration parameters in the [TAP] section
Parameter | Version | GUI | GIM | Default value | Description
tap_type S-TAP for UNIX, W-TAP for Windows
tap_version The version of S-TAP that is installed on
the server
tap_ip IP address or hostname for the database
server system on which S-TAP is installed
devices Which interfaces to listen on. Use
ifconfig to find the correct interface.
all_can_control Yes 0 0=S-TAP can be controlled only from the
primary Guardium system. 1=S-TAP can
be controlled from any Guardium system.
These parameters are stored in the database section of the S-TAP properties file,
with the name of a data repository. There can be multiple sections in a properties
file, each describing one inspection engine used by this S-TAP.
Table 32. S-TAP configuration parameters for an inspection engine on UNIX
Parameter | Version | GUI | GIM | Default value | Description
db_type Yes The type of data repository being monitored
port_range_start Yes Starting port range specific to the database
port_range_end Yes Ending port range specific to the database
networks Yes Identifies the clients to be monitored, using a
list of addresses in IP address/mask format:
n.n.n.n/m.m.m.m. There is no default. To
select all clients, omit the list of addresses. To
select local traffic only, use
127.0.0.1/255.255.255.255. If an improper IP
address/mask is entered, the S-TAP will not
start.
tee_listen_port Yes 12344 Not used for Windows. Under Unix, replaced
by the KTAP DB Real Port when the K-Tap
monitoring mechanism is used. Required
when the TEE monitoring mechanism is
used. The Listen Port is the port on which
S-TAP listens for and accepts local database
traffic. The Real Port is the port onto which
S-TAP forwards traffic.
connect_to_ip Yes 127.0.0.1 IP address for S-TAP to use to connect to the
database. When Tee is enabled, this
parameter will be the IP address for S-TAP to
use to connect to the database. Some
databases accept local connection on
127.0.0.1, while others accept local connection
only on the 'real' IP of the machine and not
on the default (127.0.0.1). When K-TAP is
enabled, this parameter will be used for
Solaris zones and AIX WPARs and it should
be the zone IP address in order to capture
traffic.
exclude_networks Yes A list of client IP addresses and
corresponding masks to specify which clients
to exclude. This option allows you to
configure the S-TAP to monitor all clients,
except for a certain client or subnet (or a
collection of these). When editing the list, to
create an additional Exclude Client IP/Mask
entry, click the Add button. To delete the last
Exclude Client IP/Mask entry, click the
Delete button.
real_db_port Yes 4100 Not used for Windows. Under Unix, used
only when the K-Tap monitoring mechanism
is used. Identifies the database port to be
monitored by the K-Tap mechanism.
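Taken together, the inspection-engine parameters above map to one database section per monitored repository. A minimal sketch for a single Oracle engine monitored through K-Tap (the section name, ports, and addresses are illustrative):

```ini
; one section per inspection engine; the section name is illustrative
[DB_0]
db_type=ORACLE
port_range_start=1521
port_range_end=1521
; monitor local traffic only
networks=127.0.0.1/255.255.255.255
; IP that S-TAP uses to connect to the database
connect_to_ip=127.0.0.1
; database port monitored by the K-Tap mechanism
real_db_port=1521
```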
Firewall parameters
These parameters affect the behavior of the S-TAP with respect to the firewall.
Table 34. S-TAP configuration parameters for firewall
Parameter | Version | GUI | GIM | Default value | Description
firewall_installed Yes 0 Firewall feature enabled. 1=yes, 0=no.
firewall_timeout Yes 10 Time in seconds to wait for a verdict from the
Guardium system. If the wait times out, the
firewall_fail_close value determines whether
the connection is blocked or allowed. The
value can be any integer.
firewall_fail_close Yes 0 If no verdict comes back from the Guardium
system before firewall_timeout expires, then
if firewall_fail_close=0 the connection goes
through; if firewall_fail_close=1 the
connection is blocked.
firewall_default_state Yes 0 What triggers the start of firewall mode.
0=firewall mode starts when an event
triggers a rule in the installed policy.
1=firewall mode is enabled regardless of any
triggering event. This flag forces the
watching (enabling) of the firewall
regardless of any rule, but specific actions
(DROP and so on) still happen only when
triggered by a rule.
firewall_force_watch 9.0 Yes NULL When the firewall feature is enabled and
firewall_default_state is 0, the session will
be watched automatically when its client IP
matches a list of IP/MASK values. The list
itself is separated with commas, for example,
1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
firewall_force_unwatch 9.0 Yes NULL When the firewall feature is enabled and
firewall_default_state is 1, the session will
be unwatched automatically when its client IP
matches a list of IP/MASK values. The list
itself is separated with commas, for example,
1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
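A sketch combining the firewall parameters above into the [TAP] section of guard_tap.ini (values are illustrative; the force-watch list reuses the IP/MASK example from the table):

```ini
[TAP]
; enable the firewall feature
firewall_installed=1
; wait up to 10 seconds for a verdict from the Guardium system
firewall_timeout=10
; on timeout, block the connection (fail closed)
firewall_fail_close=1
; 0 = enter firewall mode only when a policy rule is triggered
firewall_default_state=0
; always watch sessions from these clients (9.0 and later)
firewall_force_watch=1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
```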
Debug parameters
These parameters affect the behavior of S-TAP debugging.
These parameters are stored in the [TAP] section of the S-TAP properties file:
Table 38. More S-TAP configuration parameters for debugging
Parameter | Version | GUI | GIM | Default value | Description
debug_file_name Location of the S-TAP debug file. The
default location is c:/guardium/stap.txt
debug_max_file_size 200
K-TAP parameters
These parameters affect the behavior of the K-TAP.
Table 39. K-TAP configuration parameters
Parameter | Version | GUI | GIM | Default value | Description
ktap_installed Yes 0 Is Kernel Monitor module installed: 0=NO,
1=YES. ktap_installed and tee_installed are
mutually exclusive; only one can be enabled.
ktap_request_timeout 8.0 5 The timeout, in seconds, for the ioctl reply.
K-TAP sends an ioctl to S-TAP to request
information and waits for the reply from
S-TAP. Any value is allowed.
ktap_dbgev_ev_list 8.0 0 Enables the K-TAP trace log, either through
the GUI or through the guard_tap.ini file:
0=disable, 1=enable. The trace log is located
in the /var/tmp directory.
ktap_dbgev_func_name 8.0 all List of functions to log in the K-TAP trace
log. all=log all functions, or specify a
particular function, such as accept, to log
only calls to that function. If you specify a
function that is not relevant to the K-TAP
trace log, nothing is logged.
ktap_fast_tcp_verdict 8.0 0 For TCP connections, K-TAP sends an ioctl
to S-TAP to confirm that the session is a
database connection configured in an
inspection engine, by checking IPs. When
ktap_fast_tcp_verdict is set to 1, K-TAP does
not send the request to S-TAP as long as the
session's ports are in the configured range.
Valid values are 0 and 1 (default 0).
ktap_fast_file_verdict 8.0 1 For TLI connections, K-TAP sends an ioctl
to S-TAP to confirm that the session is a
database connection configured in an
inspection engine, by checking ports and
IPs. When ktap_fast_file_verdict is set to 1,
K-TAP does not send the request to S-TAP
as long as the session's ports are in the
configured range. Valid values are 0 and 1
(default 1).
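The K-TAP parameters above can be sketched as guard_tap.ini entries like the following (illustrative values; remember that ktap_installed and tee_installed are mutually exclusive):

```ini
[TAP]
; use the kernel monitor; TEE must then stay off
ktap_installed=1
tee_installed=0
; seconds to wait for the ioctl reply
ktap_request_timeout=5
; enable the K-TAP trace log under /var/tmp
ktap_dbgev_ev_list=1
; log only accept calls in the trace log
ktap_dbgev_func_name=accept
; skip per-session verdict requests when ports are in range
ktap_fast_tcp_verdict=1
ktap_fast_file_verdict=1
```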
For these database types, when the S-TAP starts it must have access to the
database home. If your environment uses a clustering scheme in which multiple
nodes share a single disk that is mounted on the active node, but not on the
passive node, the database home is not available on the passive node until failover
occurs.
Before setting this property to a positive value, be sure to set all other necessary
configuration properties and test that the S-TAP starts and collects data correctly.
This property can be set only by editing the configuration file, and not from the
Guardium administrator console.
The list of inspection engines shows whether they have been verified. If an
inspection engine is unverified, you can submit it for verification immediately, or
add it to the existing verification schedule. Verification is supported for these
database types:
v DB2
v Greenplum
v Informix
v MSSQL
v MySQL
v Netezza
v Oracle
v PostgreSQL
v Sybase
v Teradata (advanced verification only)
If you check the box next to a database of an unsupported type, a message is
displayed saying that the type is not supported for verification.
For both types of verification requests, the results are displayed in a new dialog
that provides information about the tests that were performed and recommended
actions for tests that failed.
By default, the system waits five seconds before displaying verification results. If
your network latency is high, this might not be enough time to receive the
expected response from the database server. If you need to allow more time, you
can use the store stap network_latency CLI command to change the period.
Related topics:
v “Viewing S-TAP verification results”
v “Configuring the S-TAP verification schedule” on page 137
v “Troubleshooting S-TAP problems” on page 138
Before connecting to the database, the verification process checks whether the
sniffer process is running on the Guardium system. The sniffer is responsible for
communicating with each S-TAP and processing the data that is received. If the
sniffer is not running, responses from the S-TAP will not be recognized.
Next the verification process checks whether it can connect to the selected
inspection engine on the database server. It expects to receive a response that
indicates a failed login. If a different response is received, you might have to
investigate further.
Some error messages from individual databases do not indicate a specific problem.
For example, on several supported databases, the error code returned for a wrong
port can also mean that the database itself is not started.
The results of the verification process are displayed in a dialog. Failed checks are
shown first, with recommendations for next steps. Checks that succeeded are
shown in a collapsed section at the end of the list. In some situations, it might be
useful to review the successful checks in order to choose among possible next
steps.
Related topics:
v “S-TAP Status Monitor” on page 135
v “Troubleshooting S-TAP problems” on page 138
v “Configuring the S-TAP verification schedule”
Note: Use the following command to check the port availability: nmap -p
port guardium_hostname_or_ip
– Windows: UDP Port 8075 and TCP Port 9500, or TLS Port 9501 for encrypted
connections.
Note:
- Use the following command to check the port availability: netstat -an
- Verify that any Windows firewall is either turned off or that it is allowing
traffic through those ports.
v Verify that the sqlguard_ip parameter is set to the correct
guardium_hostname_or_ip for the Guardium system that you are connecting to.
1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP
Control.
If the S-TAP shows green status but no data is being processed, check the status of
the A-TAP.
Related topics:
v “S-TAP Status Monitor” on page 135
v “Viewing S-TAP verification results” on page 136
v “Monitoring S-TAP behavior”
By default, the S-TAP monitor is disabled. This is an advanced function, for use by
knowledgeable users. To enable it, uncomment the guard_monitor line in the
/etc/inittab file; on Solaris systems, use the svcadm command to activate it. Before
you activate the monitor, choose the options and thresholds that you want to use.
The monitor is controlled by using the guard_monitor.ini file. This file contains
comments showing the meaning of each parameter. Default thresholds are
provided for each function. For example, you might want to monitor CPU usage,
and set one threshold (75%) for gathering diagnostic information and a higher
threshold (85%) at which the S-TAP should be killed. You would set auto_diag=1
to enable gathering of diagnostic information, and diag_high_cpu_level=7500 to
gather diagnostic information when CPU usage reaches 75%. Then set
auto_kill_on_cpu_enable=1 to enable automatic killing of the S-TAP process, and
set auto_kill_on_cpu_level=8500 to kill the process when CPU usage reaches 85%.
But you do not want to keep killing the S-TAP process repeatedly, so you can set a
limit on that as well. You can limit how many times the process can be killed
within one hour by setting kill_num_in_hour=5. Then specify what should happen
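The CPU example above might be expressed in guard_monitor.ini as follows (a sketch using only the parameters named in the text; the shipped file contains other entries and comments):

```ini
; gather diagnostic information when CPU usage reaches 75%
auto_diag=1
diag_high_cpu_level=7500
; kill the S-TAP process when CPU usage reaches 85%
auto_kill_on_cpu_enable=1
auto_kill_on_cpu_level=8500
; never kill the process more than 5 times within one hour
kill_num_in_hour=5
```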
To open the S-TAP Events panel for any S-TAP listed in the control panel:
1. Click Reports > Real-Time Guardium Operational Reports > S-TAP Events to
open S-TAP Events.
Column Description
Event Type Success, Error Type, and so on
Event Description Short description of the event
Timestamp Date and time the event occurred
Note: If no messages display in the S-TAP Events panel, the production of event
messages may have been disabled in the configuration file for that S-TAP. If this is
the case, you may be able to locate S-TAP event messages on the host system in
the Event Log (for Windows) or the syslog file (for UNIX/Linux).
S-TAP reports
By default, the reports that are described in this topic appear in the Reports panel.
You can define new queries or reports on the Rogue Connections domain, and you
can create alerts that are based on exceptions that are created by S-TAPs, but other
domains that are used by S-TAP reports are system-private and cannot be accessed
by users.
System View
S-TAP Status Monitor - For each S-TAP reporting to this Guardium system, this
report identifies the S-TAP Host, S-TAP Version, DB Server Type, Status (active or
inactive), Last Response Received (date and time), Primary Host Name, and
true/false indicators for: KTAP, TEE, MS SQL Server Shared Memory, DB2 Shared
Memory, Local TCP monitoring, Named Pipes Usage, and Encryption.
Note: The DB2 shared memory driver has been superseded by the DB2 Tap
feature.
Tap Monitor
Rogue Connections - This report is available only when the Hunter option is
enabled on UNIX servers. The Hunter option is only used when the Tee
monitoring method is used. This report lists all local processes that have
circumvented S-TAP to connect to the database.
Primary Guardium Host Change Log - Log of primary host changes for S-TAPs.
The primary host is the Guardium system to which the S-TAP sends data. Each
line of the report lists the S-TAP Host, Guardium Host Name, Period Start, and
Period End.
S-TAP Status - Displays status information about each inspection engine that is
defined on each S-TAP Host. This report has no From and To date parameters,
since it is reporting current status. Each row of the report lists the S-TAP Host, DB
Server Type, Status, Last Response, Primary Host Name, Yes/No indicators for the
following attributes: K-TAP Installed, TEE Installed, Shared Memory Driver
Installed, DB2 Shared Memory Driver Installed, LHMON Driver Installed, Named
Pipes Driver Installed, and App Server Installed. In addition, it lists the Hunter
DBS.
Inactive S-TAPs Since - Lists all inactive S-TAPs that are defined on the system. It
has a single runtime parameter: QUERY_FROM_DATE, which is set to now -1
hour by default. Use this parameter to control how you want to define inactive.
This report contains the same columns of data as the S-TAP Status report, with the
addition of a count for each row of the report.
Message Description
Message: Cant read inifile .../guard_tap.ini: Cannot resolve hostname xxx for the IP
address parameter sqlguard_ip in section SQLGUARD_x. Reverting to
.../guard_tap.ini.bak
Description: The S-TAP configuration file (guard_tap.ini) has errors, which is most
likely to happen when it has been edited manually. When this happens, S-TAP
attempts to restart from the last known good backup file (if one is available).
Message: bind: Address already in use [DB server name or IP] Cant bind listening
socket for tee: Address already in use
Description: A port that an S-TAP TEE is trying to use is already in use. For
example, if you configure a TEE to listen on port 4100, and Sybase is already
listening on that port, you will receive this message.
Message: connect: Network is unreachable
Description: The standard message received when trying to reach a host that is not
accessible. In most cases this means that the Guardium system is not answering
ping requests.
Message: Delayed server connection error: Connection refused
Description: The Guardium system is refusing a connection request from this
S-TAP. That Guardium system either has no inspection engine running (not likely),
or it is not configured to accept S-TAP connections (check the unit_type setting for
that Guardium system).
Message: Deleting connection on unknown pid:n
Description: Not an error message; disregard.
Message: Got a connection from a remote machine, ignoring
Description: S-TAP has received a connection request (to a TEE port) from an
application at a remote host, and is ignoring that request. The Tee should be used
only for local connections.
Message: Got new configuration
Description: The Guardium administrator has updated the configuration while
logged into the Guardium system, and the updated configuration file has been
received by the S-TAP.
S-TAP appendix
This section details moving from one Informix version to another.
For SUSE 32-bit Linux, when there are multiple Informix versions installed, the
following step-by-step procedure can be used to move from one Informix version
to another; cleaning up semaphores and shared memory segments to help ensure a
clean start of the Informix database. The following steps assume an A-TAP
activation.
1. Using ipcs command, get the list of all semaphores and shared memory
segments that are created by Informix database. This might be tricky (since
they may show up belonging to user root), but the rule of thumb is that there
should be three shared memory segments with permissions 0660 and one with
0666 and four semaphore arrays with permissions 0660 and one with 0666, all
belonging to root.
2. Stop Informix database.
3. Make sure that all semaphores and shared memory segments created by
Informix database are gone.
4. (On Linux only) If the Informix instance was activated in A-TAP, deactivate it.
5. Make the changes to /etc/passwd (point the user Informix home directory to
the correct location).
6. Make sure that the installation directory is correct in the S-TAP inspection
engine for the new instance of Informix. Make the changes if needed, restart
S-TAP.
7. (On Linux only) Activate Informix in A-TAP (make sure to specify the correct
version).
8. Start the new instance.
To create and modify IMS definitions using the Guardium system interface, an
S-TAP must already be installed on the IMS system and the agent address space
(AUIASTC) must have a preestablished connection to the Guardium system. If the
agent has not successfully connected and you need help establishing a connection,
refer to “Installing IBM Guardium S-TAP for IMS on z/OS” in the IBM Guardium
S-TAP for z/OS User's Guide.
Once defined, IMS Definitions are sent to the S-TAP along with any additional
policies according to the agent's policy pushdown settings. Policies for IMS on
z/OS S-TAPs must be associated with an IMS Definition in order to be included
during policy pushdown. For more information about configuring pushdown
events, refer to the "Policy pushdown” topic in the IBM Guardium S-TAP for z/OS
User's Guide.
For step-by-step support while configuring IMS Definitions, refer to “Creating and
modifying IMS definitions” in the IBM Guardium S-TAP for z/OS User's Guide.
You can use information gathered by the Guardium DB2 for i S-TAP to create
activity reports, help you meet auditing requirements, and generate alerts of
unauthorized activity. Detailed auditing information includes:
v Session start and end times
v TCP/IP address and port
v Object names (for example, tables or views)
v Users
v SQLSTATEs
v Job and Job numbers
v SQL statements and variables
v Client special register values
v Interface information, such as ODBC, ToolboxJDBC, Native JDBC, .NET, and so
on
Note: i S-TAP TLS support and load balancing are supported only for IBM i 7.1
and 7.2.
Administrators configure the S-TAP by using the same APIs and UI (S-TAP
Control) as for other UNIX S-TAPs. When the GUI or API is used to make a change to
the S-TAP configuration, the Guardium sniffer sends a message to the S-TAP,
which backs up the old .ini file, saves the configuration to the new .ini file and
then restarts itself.
Administrators can set up encrypted communication between the S-TAP and the
appliance using the S-TAP configuration controls as well as set up various load
balancing options.
Using S-TAP failover and load balancing
The failover and load balancing options for the i S-TAP are similar to what
exists for UNIX S-TAPs. Use the participate_in_load_balancing parameter
to determine whether to use failover or load balancing behavior, and use
the SQLGuard sections of your S-TAP to set up primary, secondary, and
tertiary Guardium hosts.
One difference is that there is no need for participate_in_load_balancing=3;
because of the way the i S-TAP communication is architected, complete
session information is available on each message. This means that even
before the enhancements delivered in this patch, you could have used
hardware balancing (such as F5) with participate_in_load_balancing=1 and
a virtual IP address in the primary SQLGuard section of the configuration
file.
In a failover configuration, the S-TAP is configured to register with
multiple collectors, but only send traffic to one collector at a time
(participate_in_load_balancing=0). The S-TAP in this configuration sends
all its traffic to one collector unless it encounters connectivity issues to that
collector that triggers a failover to a secondary collector.
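A hedged sketch of a failover configuration for the i S-TAP, following the UNIX-style SQLGuard sections described above (host names are placeholders, and the section placement is illustrative):

```ini
[TAP]
; 0 = failover: send traffic to one collector at a time
participate_in_load_balancing=0

[SQLGUARD_0]
; primary collector, or a hardware balancer's virtual IP
; when participate_in_load_balancing=1
sqlguard_ip=collector1.example.com

[SQLGUARD_1]
; secondary collector, used if the primary becomes unreachable
sqlguard_ip=collector2.example.com
```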
Monitoring strategy
Make your monitoring and auditing effective and efficient by developing a
strategy that recognizes and fulfills your regulatory and other requirements.
Audit journal
You can configure the system audit journal to capture only those entries
that concern objects of interest or users of interest. By default, entries of
these types are sent from the S-TAP to the Guardium system:
Ignoring data after it has been sent over the network is inefficient. Wherever
possible, filter out information that you do not need before it is queued for the
S-TAP.
The DB2 for i S-TAP requires Portable Application Solutions Environment (PASE),
which is automatically started and stopped as needed when a user starts and stops
the DB2 for i S-TAP from the IBM Guardium user interface.
You must know the IP address of the Guardium system to which this S-TAP will
connect.
When you download the S-TAP, be sure to filter for the IBM i platform, to ensure
that you download the correct package.
You can use 5250 emulator software to connect to the IBM i system remotely.
Procedure
1. On the IBM i server, enter this command to open the PASE shell: call qp2term.
2. In the PASE shell environment, create a temporary directory to hold the S-TAP
installation script, such as /tmp.
Results
What to do next
To validate the successful installation and start of the audit process, log in to the
IBM Guardium web console as an administrator, navigate to the System View tab,
and check the status of the S-TAP.
You must know the log-in credentials for the IBM i system.
Procedure
1. Click Setup > Tools and Views > Datasource Definitions to open the
Datasource Builder. Select Custom Domain from the Application Selection box.
Click Next.
2. In the Datasource Finder, click New, which opens the Datasource Builder.
Results
Using the data that you have entered, the update_istap_config API performs these
tasks:
v Creates the message queue that will be used to send entries from the S-TAP to
the Guardium system and starts a global database monitor using a view with an
INSTEAD OF trigger, which sends the entries to the message queue.
v Starts PASE and the S-TAP.
v Receives journal entries from QAUDJRN and adds them to the message queue.
Note: For DB2 on z/OS, within reports, the source program as defined within the
Client/Server Entity will be the concatenation of requestor server name and
correlation id.
Note: For DB2 on z/OS, to use DB2 Unicode Database and show multi-byte
characters properly within reports, the user should change the DB2 parameter
UIFCIDS from No to Yes.
S-TAP for VSAM on z/OS is a tool that collects and correlates data access
information from records to produce a comprehensive view of business activity for
auditors. S-TAP provides the following features and functions:
v Data collection - S-TAP can collect and correlate many different types of
information:
– Access to VSAM data sets and security violations as recorded by SMF.
– Data set operations performed against VSAM data sets such as deletes or
renames.
It is assumed the S-TAP for z/OS client is installed and configured to capture
traffic.
For additional information, see the following User Guides. Copies are available
through the IBM Information Management Software for z/OS Solutions
Information Center (http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/
index.jsp?topic=/com.ibm.db2tools.adhz.doc.ug/adhucon_bkoverview.htm). These
User Guides contain information about Guardium S-TAP, providing an overview
of its functions as well as tasks for planning, installing, configuring, and using
Guardium.
Note: The current versions of these guides can be found in the Guardium
Infocenter, http://publib.boulder.ibm.com/infocenter/igsec/v1/index.jsp
Policy push down (only for use with DB2 and VSAM)
When the DB2 S-TAP for z/OS connects to the Guardium system or when the
policy is installed, the installed policy will be sent from the Guardium system to
the Mainframe S-TAP.
Once the user installs the policy, this will trigger a policy push down to the Agent
on the Mainframe collection profile. This applies only to access rules with DB Type
of DB2 Collection Profile or VSAM Collection Profile.
The fields used by the DB2 S-TAP for z/OS policy pushdown are: DB Type, Service
Name, Server IP (S-TAP IP for z/OS), DB User, OS User, Net Protocol, App. User,
(DB2) Client Info, Command, and Object. All other policy fields are ignored.
The fields used by the VSAM S-TAP for z/OS policy pushdown are: DB Type and
Object. DB Type must be set to VSAM COLLECTION PROFILE for the policy to be
picked up by the VSAM S-TAP. When setting up the Access Rule Definition, enter
the value in uppercase for every field in order for the rule to be processed
properly by the VSAM S-TAP.
The Object is the data set name to act upon. If NOT is selected, it will exclude all
actions matching the data set name, otherwise it will include all actions matching
the data set name. Wildcards are accepted for the Object field: ? matches a
single character, while % matches any number of characters.
The underlying protocol for the connection between the z/OS Mainframe and the
Guardium UI is protobuf.
Fields Used
v DB Type: DB2 Collection Profile
v Service Name: DB2 sub-system ID, for the DB2 sub-system this rule applies to
v DB User: AuthID
v OS User: original auth id
v Net Protocol - connection types available for this feature:
– TSO:1 = TSO FOREGROUND AND BACKGROUND
– CALL::2 = DB2 CALL ATTACH
– BATCH:3 = DL/I BATCH
– CICS:4 = CICS ATTACH
– BMP:5 = IMS ATTACH BMP
or
PROG=<prog name>
is accepted.
Also accepted is use of NOT, for example,
PLAN= not <plan name>
PROG= not <prog name>
v Object: The object name.
v Command: The command name.
v DB2 Client Info: For access rules only. For z/OS only, a CLIENT INFO field (and
CLIENT_INFO_GROUP_ID) will be visible if DB_TYPE is DB2 COLLECTION
Profile. The type of information that can be placed in this field is USER=x;
WKSTN=y; APPL=z.
v Time Period: FROM_HOUR, TO_HOUR: Hours and minutes are valid values.
v All other fields are ignored.
Note: There can be multiple values for each field, by selection and group. Clicking
the Group Builder icon opens the Group Builder.
Note: The relationship between rules is OR. For example, Rule1 with
NET_PROTOCOL of TSO and OS_USER of User1 and Rule2 with
NET_PROTOCOL of CICS and OS_USER of User2 means TSO connection type and
User1 original Auth ID OR CICS connection type and User2 original Auth ID are
going to be collected.
If a report has an S-TAP HOST column, then double-clicking a row produces a
Collection Profile menu item. Clicking the menu item opens a Collection Profile
Summary popup window that shows the POLICY field of the
SOFTWARE_TAP_PROPERTY record corresponding to that row.
If the row is not a policy client (check to see that the S-TAP value does not end
with :POLICY), then a warning message will be shown instead of the popup
window.
Further Mapping Information
Target: Concatenation of schema and table, separated by a period, in the
format x.y, where x is the schema value and y is the table value. If there is
no period, the value is the schema. The target becomes part of
Read/Change events: read, change, or % is concatenated with the target,
separated by a /, for example, read/x.y
Concatenation of target and Read/Changes events is put to OBJECT field.
For example, if Value_group field of OBJECT has members read/%.%,
change/%.%, %/%.% then Reads of all targets, Changes of all targets, Read
and Change of all targets will be turned on.
General - Failed AuthID changes, All failed Authorizations, Successful
AuthID changes, Grant and Revokes, DB2 Utilities, DB2 Commands.
COMMAND group with members like: All Failed Authorizations/Set
Current Sqlid/Failed AuthId Changes/Grant and Revokes/IBM DB2
Utilities/DB2 Commands/
APP_USER_NAME and DB2_CLIENT_INFO contain concatenated fields.
The NOT checkbox is hidden for these two fields in the GUI. For these rule
fields, inverted is always false. If the user needs to express inversion, the
user must explicitly put a NOT ahead of the value for the field to be
inverted, for example,
USER=NOT X; WKSTN=y; APPL=NOT z
means User and APPL can be any value and workstation is not y.
Use the Definitions screen to maintain one or more servers from which the z/OS
files are retrieved.
1. Select Guardium for z/OS from the Configuration panel in the Administration
Console.
2. Click the New button to create a new Guardium for z/OS Interface, or select an
existing interface from the Guardium for z/OS Interface Definition Finder to
delete, modify or comment on the selected interface.
3. In the Server IP box, enter the IP address from which the Guardium for z/OS
interface retrieves files.
4. In the Server Name box, enter the server name from which the Guardium
z/OS interface retrieves files.
5. In the Directory box, enter the directory where the z/OS audit files are
located.
Note: This Guardium for z/OS Interface Definition menu screen has tool tips for
certain menu choices. Move the cursor over a menu choice (such as Directory), and
a short description will appear.
Note: When editing an existing definition, if the password field is left empty, the
old password is retained. When adding a new record, user name and password
must be specified.
Along with authenticating users and restricting role-based access privileges to data,
even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have
the privileges required to perform their duties. This is also known as database user
rights attestation reporting.
Custom database entitlement reports have been created to save configuration time
and facilitate the uploading and reporting of data from DB2 on z/OS.
The predefined entitlement reports for DB2 on z/OS are listed as follows. They
appear as domain names in the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections:
v DB2 zOS Object Privileges Granted To GRANTEE
v DB2 zOS Database Resource Granted To GRANTEE
v DB2 zOS Schema Privileges Granted To GRANTEE
v DB2 zOS Database Privileges Granted To GRANTEE
v DB2 zOS System Privileges Granted To GRANTEE
v DB2 zOS Object Privileges Granted To PUBLIC On Object Type Table View
Package Routine Sequence And Plan
v DB2 zOS Executable Object Privileges Granted To PUBLIC (Object type: Package,
Routine and Plan)
v DB2 zOS Database Resource Granted To PUBLIC
v DB2 zOS Schema Privileges Granted To PUBLIC
v DB2 zOS Database Privileges Granted To PUBLIC
v DB2 zOS System Privileges Granted To PUBLIC
v DB2 zOS Object Privileges Granted To GRANTEE With GRANT OPTION
(Object type: Table, View, Package, Routine, Sequence and Plan)
v DB2 zOS Database Resource Granted To GRANTEE With GRANT OPTION
v DB2 zOS Schema Privileges Granted To GRANTEE With GRANT OPTION
v DB2 zOS Database Privileges Granted To GRANTEE With GRANT OPTION
v DB2 zOS System Privileges Granted To GRANTEE With GRANT OPTION
The processing of SQLCODE lists is quite different from how collection rules are
processed by the Guardium S-TAP.
For collection rules, all rules are evaluated until any rule determines that the event
can be collected.
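The rule-evaluation behavior described above can be sketched as follows. This is an illustrative model only; the rule structure and function names are invented for the example and are not Guardium's actual implementation.

```python
# Sketch: evaluate collection rules until any rule determines that
# the event can be collected (first match wins).
def should_collect(event, rules):
    """Return True as soon as any rule matches the event."""
    for rule in rules:
        if rule(event):
            return True
    return False

# Hypothetical rules, for illustration only.
rules = [
    lambda e: e.get("verb") == "DROP",
    lambda e: e.get("user") == "DBADMIN",
]

print(should_collect({"verb": "SELECT", "user": "DBADMIN"}, rules))  # True
print(should_collect({"verb": "SELECT", "user": "APPUSER"}, rules))  # False
```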
The GIM component includes a GIM server, which is installed as part of the
Guardium system, and a GIM client, which must be installed on servers that host
databases or file systems that you want to monitor. The GIM client is a set of Perl
scripts that run on each managed server. After you install the GIM client, it works
with the GIM server to perform these tasks:
v Check for updates to installed software
v Transfer and install new software
v Uninstall software
v Update software parameters
v Monitor and stop processes that run on the database server
For example, you can use GIM to install your S-TAP modules and keep them
up-to-date.
The GIM client uses port 8444 to communicate with the GIM server.
You can use the GIM server through the Guardium user interface or through the
command-line interface (CLI).
The software modules that you can deploy by using GIM are packaged as GIM
bundles. A bundle is a file of type gim that contains software that can be deployed
by using GIM.
If you upgrade to Version 10.0 from V9.0 GPU patch 50 or later, there is no change
in how you can view information about GIM clients. If you upgrade from an older
version, these restrictions apply: After you upgrade your Central Manager, you can
still view information about GIM clients that are assigned to other Guardium
systems, but you can no longer do provisioning to those GIM clients from the
Central Manager. After you upgrade all your Guardium systems, you can view
each GIM client only from the Guardium system that is its GIM server.
To manage large numbers of GIM installations, you can create groups of GIM
clients. Then, you can use the groups to install, update, and manage software
bundles.
The GIM client monitors the processes that you install by using GIM. It checks the
heartbeat of each process once each minute, and passes status changes for the
processes to the GIM server. The status of each process is displayed on the Process
Monitoring panel. Changes are reflected within three minutes. Changes to the
status of the GIM client itself are reflected according to the interval at which the
client polls the server and delivers its “alive message”.
Note: When you perform a system backup and restore from a server that has GIM
defined to another server, you must configure GIM failover to the restore server.
This GIM configuration applies to both a Backup Central Manager and a system
backup and restore.
Overview
The following process (also called GIM Auto-Discovery) allows you to remotely
connect to a pre-installed and inactive GIM agent and make it connect to a
collector without accessing the database server.
1. An inactive GIM client runs in listener mode and waits for a connection from
any collector.
2. From the collector's graphical user interface (GUI) or the GuardAPI, you can
send the IP address of any collector to the inactive GIM client.
3. The inactive GIM client accepts the collector's IP address and connects to it.
You can define your own certificates, shared secret, and port number. To use other
certificates, specify the certificate/key full path name in the installation parameters:
--key_file and --cert_file. Load the certificates to the collector key store with
the GuardAPI command store certificate gim.
To set a shared secret other than the default one, use the GuardAPI command
grdapi gim_set_global_param paramName=gim_listener_default_shared_secret
paramValue=<password>. The format should be a string. The shared secret must be
identical on the database server and collector.
Note: Do not specify the unencrypted shared secret in the command line.
To use a port other than the default one, specify the port in the installation
parameter --listener_port. Set the GIM global parameter
gim_listener_default_port with the new port in the GIM Global Parameters.
Note: The default or user defined port must be enabled in the firewall.
Parameters
Note: The following parameters must exist in the file system or the installation
fails:
v ca_file
v key_file
v cert_file
This value is encrypted and stored in the database. The value must be identical to
the unencrypted value of the shared secret if you install the GIM agent on the
database server.
To set up a new default server mode GIM port, use the following GuardAPI
command:
grdapi gim_set_global_param paramName=gim_listener_default_port paramValue=<port number>
This value must be identical to the unencrypted value of the shared secret if you
install the GIM agent on the database server.
Note: If you use a different port or shared secret, you must specify the shared
secret or port every time you connect the collector IP/hostname to the server mode
GIM agent.
Note: You must enter an IP address / host name or select a server group, but the
GIM listener port and GIM listener password are optional. When you install the
GIM client in listener mode, the settings of the shared secret and certificates cannot
be changed unless you reinstall the GIM client.
Note:
v Wildcard characters are enabled. For example: to select all addresses
beginning with 192.168.2, use 192.168.2.*.
v Specify a range of ports by putting a dash between the first and last port
numbers in the range. For example: 4100-4102.
v After you add a scan, modify the host or port by typing over it. Click Apply
to save the modification.
v If you have a dual stack configuration, you will need to set up a scan for
both the IPV4 and the IPV6 addresses.
v To remove a scan, click the Delete this task icon for the scan. If a task has
scan results dependent upon it, the scan cannot be deleted.
6. When you finish adding scans, click Apply, and either run the job now or
schedule it to run in the future.
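The wildcard and port-range notation described in the note above can be modeled as follows. The helper names are invented for illustration; Guardium's actual matching logic is not published.

```python
import fnmatch

def host_matches(pattern, address):
    """Match an address against a wildcard pattern such as 192.168.2.*."""
    return fnmatch.fnmatch(address, pattern)

def expand_port_range(spec):
    """Expand '4100-4102' into [4100, 4101, 4102]; a single port yields a one-item list."""
    if "-" in spec:
        first, last = spec.split("-")
        return list(range(int(first), int(last) + 1))
    return [int(spec)]

print(host_matches("192.168.2.*", "192.168.2.17"))  # True
print(expand_port_range("4100-4102"))               # [4100, 4101, 4102]
```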
Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Run the setup.exe file to start the wizard that installs the GIM client. The
setup.exe file is located in the gim_client folder.
3. Follow the installation wizard and answer its questions.
What to do next
You can view the results of the installation in the log file at
c:\guardiumstaplog.txt.
Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Open a command prompt and navigate to the gim_client folder under the
folder where you placed the installer.
3. Enter this command:
setup.exe /s /z"--host=g10.guardium.com --path=c:\\program files\\guardium\\GIM"
The --host= parameter is optional. If you omit it, the GIM client is installed in
listener mode.
What to do next
You can view the results of the installation in the log file at
c:\guardiumstaplog.txt.
The GIM client requires Perl version 5.8.x or 5.10.x to be installed. Verify that the
following packages are installed:
v IPC-Run3
v Win32-DriveInfo
Beginning with Guardium 9.1, you can install and use the GIM client in a Solaris
slave zone or an AIX workload partition (WPAR). This enables you to use the GIM
client to install an S-TAP in a slave zone or WPAR. When you install an S-TAP in a
slave zone or WPAR, the K-TAP is disabled, regardless of the setting of the
ktap_enabled parameter. You can also use the GIM client to install the
Configuration Auditing System (CAS) agent in a slave zone or WPAR. You cannot
install the discovery bundle in a slave zone or WPAR; the discovery agent running
on the global zone can collect information from other zones. The process for
installing the GIM client in a Solaris slave zone or an AIX workload partition is the
same as the process for installing in the master zone. The installation can take a
few seconds longer than installing in the master zone. If you install the GIM client
on a Solaris system with master and slave zones, you must install the client in the
same location on the master and slave zones. This location cannot be a shared
directory.
On Solaris, the GIM client and supervisor in each slave zone are controlled by the
GIM supervisor process that runs in the master zone. If the supervisor process on
the master zone is shut down, all GIM processes on the slave zones are shut down
as well.
Procedure
1. Place the GIM client installer on the database server in any folder.
2. Run the installer:
./<installer_name> [-- --dir <install_dir> --sqlguardip <guardium_machine_ip> --tapip <db_server_ip_address>]
Where sqlguardip is optional. If you omit this parameter, the GIM client is
installed in listener mode.
3. On Red Hat Linux, version 6 or later, run these commands to verify that the
files have been added:
ls -la /etc/init/gim*
ls -la /etc/gsvr*
On Solaris, version 10 or later, run this command:
ls /lib/svc/method/guard_g*
On all other platforms, run these commands to verify that the following new
entries were added to /etc/inittab:
gim:2345:respawn:<perl dir>/perl <modules install dir>/GIM/<ver>/gim_client.pl
gsvr:2345:respawn:<modules install dir>/perl <modules install dir>/SUPERVISOR/<ver>/guard_supervisor
Where modules install dir is the directory where all GIM modules are
installed, for example, /usr/local/guardium/modules.
What to do next
Procedure
1. Upload the latest available BUNDLE-GIM.gim file to the Guardium system.
2. Use the GIM GUI to schedule the installation of the new BUNDLE-GIM.gim file.
3. Monitor the installation process by clicking the i icon and pressing Refresh.
When the installation completes successfully, the INSTALLED status is
displayed.
You can create a group of GIM clients and use it to roll out updates to those
managed servers.
Procedure
1. Click Setup > Tools and Views > Group Builder. In the Group Builder, create
a new group. For the Group Type Description choose Client Hostname. The
new group is added to the list of existing groups.
2. Choose the new group in the Modify Existing Groups list and add members to
the group. You can add them manually or populate the list from a query. To
populate the list from a query, click Populate from Query and note these
requirements:
a. For Query select a report name that begins with GIM.
b. For Fetch Member from Column, select GIM Client Name.
c. In each Enter (Like) field, enter a value to be matched, or % if this field is
not used to identify clients.
d. Save the group and run or schedule the query.
Results
You can use the group in the GIM Setup by Client screen to work with this set of
clients as a group rather than individually.
Users may also interact with GIM through the CLI. See “GIM - CLI” on page 179
for information on installing and upgrading modules with GIM using CLI.
You can use the GUI of the Guardium Installation Manager (GIM) for these tasks:
v Process Monitoring
v Upload Module Package
v Configure, Install, or Update Modules (by client)
v Configure, Install, or Update Modules (by module)
v Rollback Mechanism
Note: If A-TAP is being used, A-TAP must first be disabled on the database server
before performing a GIM-based S-TAP upgrade or uninstall.
Note: GIM does not support the installation of native S-TAP installers (rpm, deb,
bff, and so on).
Note: The first installation of modules on a specific client by using the GIM utility
must be in the form of a BUNDLE. Future upgrades of specific modules that are
part of the installed bundle can be performed either as single modules or as
bundles.
Process Monitoring
Supervisor
The GIM Supervisor is a process whose main purpose is to supervise and monitor
Guardium processes. Specifically, it is responsible for starting and stopping all
Guardium processes, making sure that they are running at all times, and restarting
them if they fail.
Note: For Guardium V9.0, on Solaris 5.10/5.11, GIM and SUPERVISOR are now
SMF services. They are not inittab entries anymore.
GIM
The GIM process is the GIM client process, which is responsible for duties such as
registering with the GIM server and initiating requests to check for software updates.
You can use this option to configure/install a module for any number of clients
from packages already loaded.
The simplest, safest, and quickest way to install or uninstall modules is by using
bundles. Using bundles guarantees automatic dependency and order resolution.
If you have already created groups of clients, you can use a group to specify the
clients to be the target for the specified action. Otherwise use these steps to select a
list of clients.
1. Click Manage > Install Management > Setup by Client to open the Client
Search Criteria panel.
2. Click the Search button to perform a filtered search and display the Clients
panel.
3. Select the clients that will be the target for the specified action.
v If there are more than 20 clients, the list of clients is split across
additional pages.
Note: Clicking the Select All button selects only the clients on the
page currently being viewed.
4. From the Clients panel, the following actions can be taken:
v Configure/install common parameters
v Configure/install module
v Reset Clients - By clicking Reset Clients, you can disassociate modules from
selected clients and remove the client definition from the Guardium system
database. Note: Resetting a client does NOT trigger module removal on the
database server.
v View installation state of this client - By clicking on the information icon you
can open up the Installation Status panel and view the installation status of a
client. This panel displays all modules on the client which are installed or
scheduled for update or uninstall. From this panel, you can use the Edit this
module icon to configure parameters for each module individually.
Starting from modules enables you to configure and install a module for any
number of clients. Any required packages must be loaded beforehand.
Note: Clicking the Select All button selects only the clients on the
page currently being viewed.
5. From the Clients panel, the following actions can be taken:
v Configure/install common parameters
v Configure/install module
v Reset Clients - By clicking the Reset Clients button you can disassociate
modules from selected clients and remove the client definition from the
Guardium system database. Note: Resetting a client does NOT trigger
module removal on the database server.
v View installation state of this client - By clicking on the information icon you
can open up the Installation Status panel and view the installation status of a
client. This panel displays all modules on the client which are installed or
scheduled for update. From this panel, the Edit this module icon can be
used to configure parameters for each module individually.
Note: The Generate Grdapi button at the front of the client line under the
Client Module Parameter section enables you to view the list of grdapi
commands that reflect the changes that you have made to the module, such as
assigning, installing, uninstalling, scheduling, and updating the module.
These grdapi commands are provided so that you can take the set of commands
and apply them to other clients in a script if you would like to reproduce the
changes.
Note: The View installation state of this client button, also at the front of the
client line under the Client Module Parameter section provides a view into the
current installation status for the module.
Note: When installing K-TAP as part of BUNDLE-STAP, the K-TAP status is set to
INSTALLED even if the actual K-TAP module was missing for this specific
platform. However, a message indicating that the K-TAP module was missing
is shown in the GIM-EVENTS report.
Note: You should check the GIM-EVENTS report after installing bundles on the
DB servers.
6. Click Back to go back to the Clients panel.
Here is the list of options; each can be set either to 0 (not installed) or to 1 (installed).
v MSSQLSharedMemory
v DB2SharedMemory
v CAS
v NamedPipes
v Lhmon
v LhmonForNetwork
v START: this parameter controls whether the S-TAP is started after installation.
v INSTALL_DIR: specifies where to install the software.
v QUIET: controls the switches that are passed to the Windows installer; do not
change it. It is used to debug installation issues.
v DBALIAS: an alias for the database server machine; you can use the machine host
name. It is not related to the actual database installed on the server.
If you are installing an S-TAP and you do not want it to automatically discover
MSSQL databases, type START=0 in the WINSTAP_CMD_LINE column to prevent
the S-TAP from starting when it is installed. You can also specify this parameter for
a single database server by using the GIM API:
grdapi gim_update_client_params clientIP=xx.xx.xx.xx paramName=WINSTAP_CMD_LINE paramValue="START=0"
The installation directory for the S-TAP must be empty or must not exist. You
cannot install an S-TAP into a directory that already contains any files. For
installation on 64-bit machines you must specify the 32-bit program files folder
(for example, C:/program files (x86)/guardium/stap and NOT C:/program
files/guardium/stap).
Configure/install module
1. If configuring, installing, or updating:
a. by client
1) Click Next to display the Common Modules panel, which shows a list of
all available common modules and bundles that can be installed on the
selected clients.
2) Select a module or bundle to configure/install for the selected clients.
Note: The configuration for a module and all of its dependencies can be saved
to the database only all at once, and they can be installed only as a bundle.
This means that they cannot be individually saved or scheduled for
installation. For example, if, in the middle of a scheduled installation, the
process fails for one of the modules on one of the clients, all installations
before that failure are rolled back.
Note: The Generate Grdapi button at the front of the client line under the
Client Module Parameter section allows the user to view the list of grdapi
commands that reflect the changes the user has made to the module such as
assigning, installing, uninstalling, scheduling, and updating of the module.
These grdapi commands are provided to the user so they can take the set of
commands and apply them to other clients in a script if they would like to
reproduce.
Note: The open Property content button appears in front of every writable
property and opens a window that simplifies the editing of a long field.
Note: The View installation state of this client button, also at the front of the
client line under the Client Module Parameter section provides a view into the
current installation status for the module.
Note: When installing K-TAP as part of BUNDLE-STAP, the K-TAP status is set
to INSTALLED even if the actual K-TAP module was missing for this specific
platform. However, a message indicating that the K-TAP module was missing
is shown in the GIM-EVENTS report.
Note: Always check the GIM-EVENTS report after installing bundles on the DB
servers.
Note: When uninstalling modules, GIM uninstalls only the selected module,
not its dependencies.
Rollback Mechanism
The purpose of GIM's rollback mechanism is to handle errors during installation
and recover modules to their prior state. The rollback mechanism supports the
following recovery scenarios:
1. Live Upgrade Recovery
For Bundles
Note: When the status is 'IP-PR', the command for rebooting the DB server differs
per OS (any other way of rebooting the system will keep the pending modules in a
pending state):
Linux : shutdown -r
SuSe : reboot
HP : shutdown -r
Solaris : shutdown -i [6|0] (Note : '0' can be used only if shutdown is done from the terminal s
AIX : reboot
Tru64 : reboot
You can change the GIM server that manages one or more GIM clients. You might
want to make this change in order to balance the load among your GIM servers, or
to make it easier to distribute GIM packages. To reassign a group of GIM clients to
a different GIM server, follow these steps:
1. Click Manage > Install Management > Setup by Module to change the GIM
server for a GIM client.
2. Select a GIM bundle that is installed on the clients that you want to reassign.
Click Next.
3. Select the clients to be changed. You can click Select All or select clients
individually. Click Next.
4. Click Select All.
5. For the GIM_URL parameter, enter the hostname or IP address of the GIM server
(Guardium system) to which you want to reassign the selected GIM clients.
Click Apply to Selected.
6. On the same panel click Apply to Clients, then click Install/Update and
schedule the update.
After the update has been processed, the GIM client will be managed by the new
GIM server.
The following examples are presented only to cover some of the more common
scenarios. For more information and a complete list of all supported CLI
commands refer to GuardAPI GIM Functions.
v Loading module packages
v Upgrade or Scratch install using bundles
v Uninstall a module/bundle
v Installation Status
v Querying modules state
Before modules can be installed on a DB server, they must be loaded into the
Central Manager GIM database. If a Central Manager is not part of the
architecture, packages must be loaded onto each Guardium system. Use the Load
package option in the GIM UI to load the packages into the database.
Note: Scratch install refers also to a case where old (pre-GIM) S-TAP is installed on
the database server.
Note: For flexible GIM scheduling, use now + [1-9][0-9]* minute | hour | day
| week | month. Example: now + 1 day, now + 3 minutes
GIM scheduling
All times are relative to Guardium system time. Now means right now as specified
by the Guardium system; now +30 minute is the current Guardium system time
plus 30 minutes. You can see this when looking at the installation status by
clicking the small "i" next to a client, for example in Manage > Module
Installation > Setup by clients. If the time on the database server has passed the
install time specified on the Guardium system, the install begins.
Example one: set up three clients, (a) set for Guardium system time - 1 hour, (b)
set for Guardium system time, and (c) set for Guardium system time + 1 hour.
Client (a), for which the time set for installation has already passed, installs
immediately.
Client (c) takes another hour after (b) to install.
Example two: the same setup as example one, but this time specify "now".
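The flexible scheduling format from the note above can be validated with a regular expression. This is a sketch that takes the documented pattern, now + [1-9][0-9]* minute | hour | day | week | month, literally; the function name is ours, not part of Guardium.

```python
import re

# Pattern from the note: "now", optionally followed by "+ N unit",
# where N has no leading zero and the unit may be plural.
SCHEDULE_RE = re.compile(
    r"^now(\s*\+\s*[1-9][0-9]*\s*(minute|hour|day|week|month)s?)?$"
)

def is_valid_schedule(value):
    """Return True if the value matches the documented GIM schedule format."""
    return SCHEDULE_RE.match(value) is not None

print(is_valid_schedule("now"))              # True
print(is_valid_schedule("now + 1 day"))      # True
print(is_valid_schedule("now + 3 minutes"))  # True
print(is_valid_schedule("tomorrow"))         # False
```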
Uninstalling a module/bundle
grdapi gim_uninstall_module clientIP=192.168.2.210 module=BUNDLE-STAP date=now
You can specify date=now or use the format YYYY-MM-DD HH:mm. The
uninstallation takes place the next time the GIM client checks for updates
(GIM_INTERVAL).
Installation Status
Additional information about the latest status that the client has sent can be
retrieved by running the following command. (The status message appears as an
entry in the GIM_EVENTS table, from which a report can be generated.)
The general status message can be obtained by running the following CLI
command:
grdapi gim_get_client_last_event clientIP="client ip"
grdapi gim_get_client_last_event clientIP=winx64
grdapi gim_get_client_last_event clientIP=9.70.144.73
Output example
ID=0
####### ENTRY 0 #######
MODULE_ID: 11
NAME: INIT
INSTALLED_VERSION 8.0_r3852_1
SCHEDULED_VERSION 8.0_r3852_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 1 #######
MODULE_ID: -1
NAME: COMMON
INSTALLED_VERSION 8.0_r0_1
SCHEDULED_VERSION 8.0_r0_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 2 #######
Enabling K-TAP
If, during the installation process, K-TAP fails to load properly, possibly caused by
hardware or software incompatibility, Tee is installed as the default collection
mechanism. To switch back to K-TAP, after compatibility issues are resolved, follow
these steps.
1. Disable the S-TAP. See Stop UNIX S-TAP for more information.
2. Edit guard_tap.ini and change ktap_installed to 1 and tee_installed to 0
3. Run the guard_ktap_loader install command.
example: /usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader install
4. Run the guard_ktap_loader start command.
example: /usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader start
5. Re-enable S-TAP. See Restart UNIX S-TAP for more information.
The custom K-TAP module is built when you install an S-TAP on a Linux server
for which there is no pre-built K-TAP for the current kernel. The custom K-TAP
module is built only if the kernel-devel package is installed. When you install the
S-TAP bundle, use the GIM UI to set the value of the GIM parameter
STAP_UPLOAD_FEATURE to 1. This tells the GIM client to upload the custom K-TAP
module to the Guardium system after it is built and then automatically create a
custom S-TAP bundle.
Each GIM client sends an "alive" message to its GIM server regularly, to check
whether any updates are ready to be processed. In prior releases, this message was
sent at a fixed interval, regardless of system conditions. Now the polling interval
can be calculated and updated based on conditions at the GIM server.
The calculation begins with the number of GIM clients that are connected to the
GIM server. Two conditions on the GIM server are used to calibrate the polling
interval: the load on the CPU and the number of database connections in use on
the Guardium system. Thresholds are defined for each of these conditions, and the
update interval is adjusted based on those thresholds, and on the number of GIM
clients connected to the GIM server.
These parameters are used in the calculation. The default value for each parameter
is shown in parentheses.
dynamic_alive_enabled (1)
Dynamic alive feature control. 1 - enabled, 0 – disabled.
dynamic_alive_check_interval (5)
The interval, in minutes, at which the polling interval is recalculated
dynamic_alive_default_load_factor (3)
Dynamic alive load factor, calculated each interval
dynamic_alive_cpu_level1_threshold (65)
Dynamic alive CPU usage level 1 threshold (%)
dynamic_alive_cpu_level2_threshold (85)
Dynamic alive CPU usage level 2 threshold (%)
dynamic_alive_db_conn_level1_threshold (75)
Dynamic alive DB connections usage level 1 threshold (%)
dynamic_alive_db_conn_level2_threshold (90)
Dynamic alive DB connections usage level 2 threshold (%)
dynamic_alive_cpu_load_sample_time
Dynamic alive cpu load sample time in seconds
The polling interval is calculated by dividing the number of GIM clients by a load
factor. The load factor defaults to three, so that by default the polling interval in
seconds for each GIM client is the number of GIM clients connected to that GIM
server divided by three. For example, if you have 150 GIM clients attached to a
GIM server, the default polling interval is 50 seconds.
The load factor is adjusted according to whether either the CPU load or the
number of database connections passes its thresholds. If either of these conditions
passes its first threshold, the load factor is adjusted to two. In the example of 150
clients, each client is told to poll the server every 75 seconds.
If either condition passes its second threshold, the load factor is adjusted to one. In
the example of 150 clients, each client is told to poll the server every 150 seconds.
This prevents frequent polling from contributing to a problem with CPU load or
network traffic.
When a condition returns to a value smaller than the current threshold, the next
calculation adjusts the load factor accordingly so that the calculated interval
reflects the conditions in effect at the GIM server.
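The interval calculation described above can be sketched as follows. The function and its structure are illustrative; only the default load factor, the threshold defaults, and the 150-client example come from this section.

```python
def polling_interval(num_clients, cpu_pct, db_conn_pct,
                     cpu_t1=65, cpu_t2=85, db_t1=75, db_t2=90,
                     default_load_factor=3):
    """Interval in seconds = number of GIM clients / load factor.
    The load factor drops to 2 when either condition passes its level-1
    threshold, and to 1 when either passes its level-2 threshold."""
    factor = default_load_factor
    if cpu_pct > cpu_t1 or db_conn_pct > db_t1:
        factor = 2
    if cpu_pct > cpu_t2 or db_conn_pct > db_t2:
        factor = 1
    return num_clients // factor

print(polling_interval(150, 40, 40))  # 50  (no threshold passed)
print(polling_interval(150, 70, 40))  # 75  (CPU past level 1)
print(polling_interval(150, 90, 40))  # 150 (CPU past level 2)
```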
It is best to update all your GIM-installed modules as soon as possible after a
database server OS upgrade, whether manually or automatically. By default, the
option to update these
modules automatically is disabled. If you want to use automatic updating, you
must configure the Guardium system that acts as your GIM server to support this
option, and you must make the required bundles available on this server.
Procedure
1. For each module that you have installed on your database server, locate the
GIM bundle containing the latest version of this module that supports the new
operating-system version. The build number of each bundle must be the same
or greater than the bundle that is currently installed. Load each bundle onto the
GIM server.
2. Use the gim_set_global_param command to set the value of the global
parameter auto_install_on_db_server_os_upgrade to 1. This enables the
automatic update option on the GIM server.
grdapi gim_set_global_param paramName="auto_install_on_db_server_os_upgrade" paramValue="1"
Results
At first boot after OS upgrade, the GIM client recognizes that the operating system
has been upgraded and because the automatic update option is enabled, the client
takes these steps:
1. Changes the configuration files for all GIM-installed modules to support the
new operating system attributes.
2. Re-registers the modules with the GIM server.
When the modules are re-registered, the GIM server looks first for a bundle that
has the same build number as the previously installed bundle, but is compatible
with the upgraded OS. If it does not find such a bundle, it looks for the latest
bundles that support the new OS attributes. If the server cannot find appropriate
bundles, it issues an error message. If the server finds appropriate bundles, it
schedules them for upgrade and runs the upgrade process immediately.
What to do next
Review the messages in the GIM_EVENTS report. If the GIM server reports that
the modules have been upgraded successfully, verify the proper operation of the
modules as you would do after any update.
If error messages have been written to the GIM_EVENTS report, indicating that the
upgrade was not successful, review the error messages for guidance.
After completing your planned OS upgrade, disable the automatic update option
on the GIM server. This prevents a GIM client from erroneously starting an update
process.
grdapi gim_set_global_param paramName="auto_install_on_db_server_os_upgrade" paramValue="0"
You can re-enable the automatic update option when you perform another OS
upgrade.
If you manage all your GIM clients from your Central Manager, you can deploy
bundles to all your GIM clients directly from the Central Manager. If you manage
groups of clients from several managed units, you can distribute GIM bundles
from your central manager to those managed units.
The time required for distribution depends on the size of the bundles and network
conditions. In a network with substantial latency, transfers can take several hours.
Procedure
1. Copy the bundles that you want to distribute into the /var/gim/dist_packages
directory on your Central Manager. All files in this directory will be
distributed; you cannot select which bundles you want to distribute.
2. Choose the managed units to which you want to distribute the bundles.
3. Click Distribute GIM bundles. The bundles are copied to the selected
managed units.
This function enables you to maintain your inventory of GIM bundles and prevent
it from using disk space unnecessarily.
You can use two new Guardium API commands to identify and remove unused
GIM bundles. Perform this procedure on each Guardium system that acts as a GIM
server.
Procedure
1. Run the gim_list_unused_bundles command to identify unused bundles. Use
the includeLatest parameter to indicate whether you want the list that is
returned by the command to include the latest version of each GIM bundle.
You might have some bundles that you have not yet distributed, or you might
want to keep one older version so that you can reinstall it if needed. Set
includeLatest to 0 to exclude the latest unused version of each bundle from
the command results. Set it to 1 to include all unused versions. This parameter
is required and no default value is provided. For example:
grdapi gim_list_unused_bundles includeLatest=0
The command returns a list of GIM bundles that are found on the GIM server
but are not installed on any database server whose GIM client works with this
GIM server.
2. If step 1 identifies some unused bundles, use the gim_remove_bundle command
to remove each unwanted bundle. This command takes a single parameter,
bundlePackageName, which identifies the bundle to be removed. This parameter
is required and no default value is provided. Use names that are returned by
the gim_list_unused_bundles command.
The named bundle is removed only if:
v The name specified in bundlePackageName matches the name of one and only
one specific GIM bundle.
v There is no GIM bundle whose name matches bundlePackageName installed
on any database server whose GIM client works with this GIM server.
For example:
gim_remove_bundle bundlePackageName=name
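Putting steps 1 and 2 together, a cleanup session on the GIM server might look like the following sketch. The bundle name shown is a hypothetical placeholder; substitute a name that is returned by the gim_list_unused_bundles command:

```
gim_list_unused_bundles includeLatest=0
gim_remove_bundle bundlePackageName=BUNDLE-STAP-9.0.0_r12345
```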
Results
GIM bundles that are not needed are removed from your GIM server.
If you experience trouble with a GIM client, your first step should be to verify that
the GIM server has accurate data about that client. Running GIM diagnostics
verifies that the modules listed for that client on the GIM server match the
modules installed on that client, and that the parameters stored on the GIM client
match those stored on the GIM server.
You can run GIM diagnostics either from the Guardium user interface or from the
command line. To run from the command line, use this command:
grdapi gim_run_diagnostics clientIP=xx.xx.xx.xx
The value of clientIP can be either an IP address or a hostname. You must run the
command on the Guardium system that is the GIM server for this client.
Procedure
1. Use the check boxes next to each client to choose the clients for which you
want to run GIM diagnostics.
2. Click Run diagnostics. The next time that each client polls the GIM server for
updates, it will receive the diagnostic command and run it immediately.
Results
Use these steps to turn on GIM debugging on the GIM server (Guardium system).
Procedure
1. Edit the GIM properties file: /usr/local/jakarta-tomcat-4.1.30/webapps-http/
ROOT/WEB-INF/conf/gimserver.log4j.properties.
2. Change the value ERROR to DEBUG.
3. Save the file.
Results
Debugging will be turned on in a few seconds and debug messages will be written
to the daily debug log file in /var/log/guard/debug-logs/.
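If you have shell access to the GIM server's file system, steps 1 through 3 amount to a one-line edit. This is a sketch only; it assumes that ERROR appears in the properties file solely as the logging level:

```
sed -i 's/ERROR/DEBUG/' /usr/local/jakarta-tomcat-4.1.30/webapps-http/ROOT/WEB-INF/conf/gimserver.log4j.properties
```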
To restart the supervisor, complete the following procedure. Only use this
procedure on Solaris servers with SMF support.
Procedure
1. Stop the supervisor by running the command svcadm -v disable guard_gsvr.
2. Run the command svccfg delete -f guard_gsvr.
3. Restart the supervisor with the command svccfg import <gim install
dir>/SUPERVISOR/current/guard_gsvr.xml where <gim install dir> is the file
path to the GIM installation directory.
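On a Solaris server with SMF support, the three steps above reduce to the following sequence, where <gim install dir> is the file path to your GIM installation directory:

```
svcadm -v disable guard_gsvr
svccfg delete -f guard_gsvr
svccfg import <gim install dir>/SUPERVISOR/current/guard_gsvr.xml
```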
Results
Chapter 1. Installing your Guardium system
This document details the steps necessary to install and configure your IBM®
Security Guardium for Applications system. The system is referred to as “your
Guardium system” throughout these instructions.
The requirements listed in this document apply to the installation of both the
physical appliance and the virtual appliance unless specified otherwise.
Operating modes
You can deploy a Guardium system in any of several operating modes.
As you plan your Guardium environment, you might deploy systems in any or all
of these operating modes:
Collector
A collector receives data about database activities or file activities from
agents that are deployed on database servers and file servers. The collector
processes this data and responds according to policies that are installed on
the collector. A collector can export data to an aggregator.
Aggregator
An aggregator collects data from several collectors, to provide an
aggregated view of the data. The aggregator is not connected directly to
database servers and file servers. You can allocate collectors to aggregators
according to location or function. For example, you might want to connect
the collectors that monitor your human resources database servers to a
single aggregator, so that you can view data that is related to all those
servers in one location. If you want, you can implement a second tier of
aggregation by deploying an aggregator that collects data from all your
other aggregators, rather than from collectors.
Central manager
There is only one central manager in a Guardium environment, although
you can designate another Guardium system as a backup central manager.
You can use the central manager to define policies and distribute them to
all collectors, to perform other configuration tasks that affect all your
Guardium systems, and to perform various other administrative tasks from
a single console. Your central manager can also function as an aggregator,
collecting data from collectors or from other aggregators. This model
provides an enterprise-wide view of activities and enables you to view
reports that are based on data that is aggregated from all your Guardium
systems.
Vulnerability assessment
If you are using the Guardium Vulnerability Assessment component, you
must decide where to run assessment tests. Some customers dedicate a
separate Guardium system for this function. You can also run tests from
any Guardium system that is deployed as a collector, an aggregator, or a
central manager.
The number of monitored database servers and file servers that you assign to a
collector depends on the amount of data that flows from the servers to the
collector. For information about how many collectors and aggregators your
environment requires, and how to locate your Guardium systems for best results,
refer to the Deployment Guide for IBM Guardium.
Hardware Requirements
Detailed hardware requirements and sizing recommendations are posted on the
Web.
Physical Appliance
After the appliance has been loaded into the customer's rack, connect the appliance
to the network in the following manner:
1. Find the power connections. Plug the appropriate power cord(s) into these
connections.
2. Connect the network cable to the eth0 network port. Connect any optional
secondary network cables.
3. Connect a Keyboard, Video and Mouse directly or through a KVM connection
(either serial or through the USB port) to the system.
4. Power up the system.
Use this CLI command to locate a physical connector on the back of the appliance.
After using the show network interface inventory command to display all port
names, use this command to blink the light on a physical port 20 times. The port
is specified by n, the digit that follows eth (eth0, eth1, eth2, eth3, and so on).
show network interface port 1
When you receive a physical appliance from IBM, use these passwords for your
initial configuration.
Note: Be sure to change all default passwords when you complete the installation.
Table 1. Default passwords for predefined users
User Default password
accessmgr guard1accessmgr
admin guard1admin
cli guard1cli
Virtual appliance
The IBM Security Guardium Virtual Machine (VM) is a software-only solution
licensed and installed on a guest virtual machine on a host such as VMware ESX Server.
To install the Guardium VM, follow the steps in Creating the Virtual Image. The
steps are:
v Verify system compatibility
v Install VMware ESX Server
v Connect network cables
v Configure the VM Management Portal
v Create a new Virtual Machine
v Install the IBM Security Guardium virtual appliance
After installing the VM, return to Step 4, Initial Setup and Basic Configuration, for
further instructions on how to configure your Guardium system.
Note: Installation can take place from DVD. If needed, get the UEFI/BIOS
password from Technical Support.
2. Load the Guardium image from the installation DVD.
3. The following two options appear:
Standard Installation: this is the default. Use this choice in most cases when
partitioning the disk.
Custom Partition Installation: allows more customization of all partitions
(locally or on a SAN disk). See Custom partitioning for further information on
how to implement this option.
In the following steps, you will supply various network parameters to integrate the
Guardium system into your environment, using CLI commands.
In the CLI syntax, variables are indicated by angled brackets, for example:
<ip_address>
Replace each variable with the appropriate value for your network and installation.
Do not include the brackets.
The default network interface mask is 255.255.255.0. If this value is the correct
mask for your network, you can skip the second command.
To assign a secondary IP address, use the CLI command store network interface
secondary [on <interface> <ip> <mask> <gw> | off], which enables or disables
the secondary interface.
Then restart the network by using the CLI command restart network.
A secondary IP address can be assigned only through the CLI, not through the
GUI.
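For example, to enable a secondary interface and then restart the network, you might enter commands like the following; the interface name and addresses are placeholders for illustration only:

```
store network interface secondary on eth2 192.0.2.10 255.255.255.0 192.0.2.1
restart network
```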
The remaining network interface cards on the appliance can be used to monitor
database traffic, and do not have an assigned IP address.
SMTP Server
An SMTP server is required to send system alerts. Enter the following commands
to set your SMTP server IP address, set a return address for messages, and enable
SMTP alerts on startup.
store alerter smtp relay <smtp_server_ip>
store alerter smtp returnaddr <first.last@company.com>
store alerter state startup on
Note: You can also configure the SMTP server by using the user interface.
Click Setup > Alerter.
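As an illustration, a complete SMTP setup might look like the following; the relay address and return address are placeholders that you replace with your own values:

```
store alerter smtp relay 192.0.2.25
store alerter smtp returnaddr first.last@example.com
store alerter state startup on
```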
Choose the appropriate time zone from the list, and then set it by using the
following command:
store system clock timezone <selected time zone>
Note: When you set a new time zone, internal services restart and data
monitoring is disabled for a few minutes during the restart.
Store the date and time, in the format: YYYY-mm-dd hh:mm:ss
store system clock datetime <date_time>
Note: Do not change the hostname and the time zone in the same CLI
session.
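For example, to set the time zone and then the date and time, you might enter commands like the following; the values shown are illustrative, and the available time zone names come from the list displayed in the previous step:

```
store system clock timezone America/New_York
store system clock datetime 2016-06-15 09:30:00
```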
store unit type standalone - use this command for all appliances.
Unit type standalone and unit type stap are set by default. Unit type manager (if
needed) must be specified.
Note: Unit type settings can be done at a later stage, when the appliance is fully
operational.
You can choose to configure the Squid proxy either as a transparent proxy or as a
manual proxy.
Procedure
1. Connect the eth0 adapter to the external network, and connect the eth1 adapter
to the subnet of the application server.
2. If you configured Squid as a manual proxy and want to configure Squid as a
fully transparent proxy again, complete the following steps:
a. Enter the command store squid proxy default.
b. Restart Squid by entering the command restart squid.
3. Enter the following command, where XX.XX.XX.XX is the IP address to be
assigned to eth1 and MM.MM.MM.MM is the network mask. Set the IP address and
the network mask for eth1 so that eth1 is on the same subnet as the application
server.
store net int appmaskingnic on eth1 XX.XX.XX.XX MM.MM.MM.MM
4. Restart the network by entering the command restart network.
If you plan to use Secure Socket Layer (SSL) connections with Squid, you must
store the certificates and private key.
You can show whether Squid is configured as a fully transparent proxy by entering
the command show squid proxy.
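Taken together, reconfiguring Squid as a fully transparent proxy follows this sketch; the eth1 address and mask are placeholders, chosen so that eth1 is on the application server's subnet:

```
store squid proxy default
restart squid
store net int appmaskingnic on eth1 192.0.2.50 255.255.255.0
restart network
show squid proxy
```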
Procedure
1. Connect the eth0 adapter to the external network, and connect the eth1 adapter
to the subnet of the application server.
2. Enter the command store squid proxy manual.
3. Restart Squid by entering the command restart squid.
What to do next
After you configure Squid as a manual proxy, users must configure the proxy
manually on their browsers to connect to the application server through the
appliance. Users must specify the IP address or the host name and domain of eth0
as the HTTP proxy and 3128 as the port.
If you plan to use Secure Socket Layer (SSL) connections with Squid, you must
store the certificates and private key.
You can show whether Squid is configured as a manual proxy by entering the
command show squid proxy.
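In summary, the manual proxy configuration is this short sequence; after it completes, browsers must use the eth0 address with port 3128 as their HTTP proxy:

```
store squid proxy manual
restart squid
show squid proxy
```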
You can use either a self-signed certificate or a certificate that has been signed by a
trusted certificate authority (CA).
You must have the private key, the certificate, and the CA root certificate if the
certificate was self-signed.
Use this procedure if you already have a signed certificate and a corresponding
private key.
Procedure
To enable SSL, store the private key and associated certificates by using the
appropriate command:
v store certificate squid default console: Use this command to paste PEM
data corresponding to the private key, the certificate, and the CA root certificate
(if applicable).
v store certificate squid default import: Use this command to import the files
corresponding to the private key, certificate, and CA root certificate from a
remote location. You can import the files from secure copy (SCP), file transfer
protocol (FTP), Tivoli Storage Manager (TSM), Centera, or Amazon S3. After you
enter this command, the console prompts you for connection information for the
remote location.
Procedure
1. Run create csr squid to generate a Certificate Signing Request (CSR).
2. To enable SSL, store the associated certificates by using the appropriate
command:
v store certificate squid selfsign console: Use this command to paste
PEM data corresponding to the certificate and CA root certificate.
v store certificate squid selfsign import: Use this command to import
files corresponding to the certificate and CA root certificate from a remote
location. You can import the files from secure copy (SCP), file transfer
protocol (FTP), Tivoli® Storage Manager (TSM), Centera, or Amazon S3. After
you enter this command, the console prompts you for connection
information for the remote location.
What to do next
To display the Squid certificate information, enter the command show certificate
squid.
To delete the Squid certificate, CA root certificate, and private key and turn off
SSL, enter the command delete certificate squid.
To restore the last certificate that was used to configure SSL for the squid proxy,
run the following command: restore certificate squid backup.
To configure Squid to fail open, enter the command store squid bypass on.
What to do next
To configure Squid to fail close again, enter the command store squid bypass off.
To show whether Squid is configured to fail open or fail close, enter the command
show squid bypass.
Save the passkey in your site documentation to allow future Technical Support
root access. To see the current passkey, use the following CLI command:
support show passkey root
Questions: How secure is the Guardium system root password? Who has access
to it?
Guardium appliances are "black box" environments; the end user has access
only to a limited set of operating system accounts, such as:
cli, guardcli1, guardcli2, guardcli3, guardcli4, and guardcli5.
The Graphical User Interface user accounts (for example admin and
accessmgr) are not defined by the Guardium system's operating system,
but are application IDs defined and managed via an application interface
(accessmgr).
Because the appliance is a secured server, root access is not readily available
to anyone. However, Guardium support often requires root access to the
Guardium appliances to troubleshoot and resolve issues. Guardium support does
not use sudo, or any user ID other than root, to gain access to Guardium
appliances.
The root password is secured by using a "joint password" mechanism. The
customer holds the keys to the appliance in the form of an eight-digit
numeric passkey; IBM holds the passkey decoder. Without both the passkey and
the passkey decoder, neither IBM nor the customer can access the appliance
as root.
The passkey is managed by the customer through the CLI. The customer can
change the passkey at any time, without notifying IBM, by using the
following CLI command:
support reset-password root
Anyone with CLI access can retrieve the passkey for root by using the
following CLI command:
support show passkey root
The system shuts down. Move the system to its final location, re-cable the system,
and power the system back on. After the system is powered on, it is accessible
(using the CLI and GUI) through the network, using the provided IP address or
host name.
Log in to the Guardium web-based interface and go to the embedded online help
for more information on any of the following tasks.
Use the store unit type command to set the type of each Guardium system.
Note: In federated environments, license keys are installed only on the central
manager.
Note: The license agreement must be accepted. Do this task from the GUI.
There may not be any maintenance patches included with the installation
materials. If any are included, follow these steps to apply them:
1. Log in to the Guardium console, as the cli user, using the temporary cli
password you defined in the previous installation procedure. You can do this
by using an ssh client.
2. Do one of the following:
v If installing from a network location, enter the following command (selecting
either ftp or scp):
store system patch install [ftp | scp]
And respond to the following prompts (be sure to supply the full path name
to the patch file):
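For example, an SCP-based patch installation might look like the following sketch. The host, user, and patch file name are hypothetical placeholders, and the exact prompt wording can vary by release:

```
store system patch install scp
Host to import patch from: 192.0.2.40
User on 192.0.2.40: patchadmin
Full path to the patch file: /patches/SqlGuard-10.0p100.tgz.enc.sig
```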
Installation of IBM Guardium is always in English. Use the CLI command store
language to change from the baseline English and convert the database to the
preferred language. A Guardium system can be changed only to Japanese or
Chinese (Traditional or Simplified) after an installation. The store language
command is considered a setup of the Guardium system and is intended to be run
during the initial setup of the system. Running this CLI command after
deployment of the appliance in a specific language can change the information
already captured, stored, customized, archived or exported. For example, the psmls
(the panes and portlets you have created) will be deleted, since they need to be
re-created in the new language.
Install S-TAP agents on the database servers and define their inspection engines.
S-TAP is a lightweight software agent installed on the database server, which
monitors local and network database traffic and sends the relevant information to
a Guardium system (the collector) for further analysis, reporting and alerting. To
install an S-TAP, refer to the S-TAP section of this information center. To verify that
the S-TAPs have been installed and are connected to the Guardium system:
1. Log in to the administrator portal.
2. Do one of the following:
Navigate to Manage > System View, and click S-TAP Status Monitor from the
menu. All active S-TAPs display with a green background. A red background
indicates that the S-TAP is not active.
Navigate to Manage > Activity Monitoring > S-TAP Control, and confirm that
there is a green status light for this S-TAP.
The VMware ESX Server on which you can install the Guardium VM is one
component of the VMware infrastructure. Although not all VMware Infrastructure
components are required to support the Guardium VM, you should be familiar
with all components that are in use at your installation.
ESX Server: This component is used to configure and control VMware virtual
machines on a physical host referred to as the ESX Server host. To install a
Guardium VM, you first define a virtual machine on an ESX Server host, and then
install and configure the Guardium VM image on that virtual machine. You can
create multiple Guardium VMs on a single ESX Server.
Web Browser: Use a Web browser to download and use the VI Client software
from an ESX Server host or the VirtualCenter server.
License Server (Optional): Stores and manages the licenses needed to maintain a
VMware Infrastructure.
For more information, go to www.vmware.com and search for “ESX Quick Start”.
VM Installation Overview
To install the IBM Security Guardium VM, follow the steps that are described here.
After you install the VM, return to the earlier Step 3, Install the IBM Security
Guardium image, and Step 4, Initial Setup and Basic Configuration.
Note: The ESX server is only supported on a specific set of hardware devices. For
more information, see the VMware Virtual Infrastructure documentation.
Before you define any virtual switches that will be used for the Guardium VM,
you must connect the appropriate NICs to the network. You cannot assign NICs to
virtual networks or switches until the NICs are physically connected.
The following table describes how the Guardium VM uses network interfaces.
Refer to this table to make the appropriate connections before you configure the
virtual switches for use by the Guardium VM.
Table 2. IBM Security Guardium VM Network Interface Use
Proxy interface (eth0)
This interface is the main gateway to the appliance, and is used for these purposes:
v Graphical web-based User Interface (GUI) to manage, configure, and use the solution
v Command Line Interface (CLI) for initial setup and basic configuration
v Connections with external systems such as backup systems, database servers, and LDAP servers
v Communication with other Guardium components such as other appliances (aggregator,
central manager) and agents that are installed on database or file servers, such as S-TAP or
CAS clients
Application server interface (eth1)
This interface is required if you configure your Guardium system as a transparent proxy. It
connects to the application servers whose content your Guardium system is configured to mask.
The default configuration for a new VMware ESX Server installation creates a
single port group for use by the VMware service console and all virtual machines.
For the Guardium VM, we strongly recommend that you do not share ports with
other virtual machines or the service console.
This opens the Add Network Wizard, which is used for various purposes.
Use the Add Network Wizard to define a new virtual switch for the
Guardium VM network interface. This is the connection over which you will
access the Guardium VM management console, and over which the Guardium
VM will communicate with other Guardium components (S-TAPs, for
example, which are software agents that you will install later on one or more
database servers).
5. In the Connection Types box, click Virtual Machine and click Next.
6. In the Network Access panel, click Create a virtual switch, and mark the
unclaimed network adapter that you will use for the Guardium VM network
interface:
10. In the Summary page, click Finish. The new virtual switch is displayed in the
Configuration tab.
11. Optional. If you have defined a second adapter for failover purposes:
a. Click the Properties link for the virtual switch that you just created to open
the virtual switch Properties panel.
b. Click the Ports tab, select the virtual port group that you just created
(GuardETH0 in the example), and click Edit.
c. In the virtual port group Properties panel, click the NIC Teaming tab, mark
the Override vSwitch Failover box, and then move the second adapter to the
Standby Adapters list.
d. Click OK to close the virtual port group Properties box, and click Close to
close the virtual switch Properties box.
If you have not already done so, create a new virtual machine on which to install a
Guardium VM.
This completes the definition of the new virtual machine. The operating system has
not yet been installed, so if you attempt to start the virtual machine, that activity
will fail.
(Optional) To install multiple Guardium VMs, you can repeat the procedures for
each appliance, or you can minimize your work by cloning the first Guardium VM
that you created, and following these steps:
1. Use the VMware virtual infrastructure server product to clone the first
Guardium VM that you configured to a template.
2. From the template, create a clone for each additional Guardium VM to be
configured.
3. For each clone, log in to the Guardium VM console as the cli user, by using the
temporary cli password, and reset any of the IP configuration parameters that
you set in the previous procedure. Mandatory tasks are: Reset the IP address,
Reset the host name (store system hostname) and Reset the GLOBAL_ID (store
product gid). However, review all of the IP configuration settings entered in the
previous procedure.
store network interface ip <ip_address>
store network interface mask <subnet_mask>
store system hostname <hostname>
When you are done, enter the restart network command.
restart network
Note: The unique ID of the appliance is recalculated every time the hostname
changes, in order to avoid having multiple appliances with the same unique ID.
Note: The boot loader, a special program that loads the operating system into
memory, is part of any custom partitioning installation.
2. Create custom layout. In this case, there are existing partitions on the disk. Do
not delete any partitions. Choose the custom layout selection to add whatever
partitions you want to what is already on the disk. The following table specifies
recommended values for custom layout.
Table 3. Recommended values for custom layout
Partitions Values
/ 10 GB
Swap portion half of RAM size
/boot 5 GB
/var All the rest
All the available drives are also displayed on this screen. Choose the drive for
partitioning and installation.
If values are created that exceed the space available on the disk, an error message
appears.
Click OK to reboot the system and return to the beginning of Custom Partitioning.
For more information on how the RedHat distribution handles partitioning, see
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/
html/Installation_Guide/s1-diskpartsetup-x86.html.
For the encrypted LVM installation, you are asked to enter an encryption key.
Then, on every reboot, you must enter this key to unlock the LVM volume. This
means that you must have console access to the appliance, either physical or
remote.
Note: The boot loader, a special program that loads the operating system into
memory, is part of a custom partitioning installation. An example of the password
entry screen is shown near the end of this topic.
1. Insert the IBM Guardium DVD and boot the machine.
Note: The passphrase must be entered each time that the system is booted.
There is no way to recover a lost LVM passphrase.
The Bootloader configuration dialog is displayed. When a computer with Red
Hat Enterprise Linux is turned on, the operating system is loaded into
memory by a special program that is called a boot loader. A boot loader
usually exists on the system's primary hard disk (or other media device) and
has the sole responsibility of loading the Linux kernel with its required files or
(in some cases) other operating systems into memory.
In most cases, the default options are acceptable, but depending on the
situation, changing the default options might be necessary.
18. At this screen, click Next. This starts the encrypted installation.
During the installation and further re-boots, you are asked to enter the LUKS
(Linux Unified Key Setup) passphrase for the LVM during boot. After you
enter the LUKS passphrase, the system completes the boot process.
First partition space on the SAN storage device, and then install the IBM Security
Guardium OS. Choose one hard disk for this installation.
Note: While the RedHat installation process would allow you to create the
partitions and load the OS, the system does not boot properly after the
installation unless the partitions are pre-created with fdisk.
4. Proceed with the OS installation utilizing the previously defined partitions (use
only the /dev/sda device).
5. Reboot and finish the remaining installation steps (hostname, IP configuration,
and so on).
Note:
In the SAN environment, the single LUN is presented to RedHat 5.8 as multiple
devices because of redundant paths within the network switches on the SAN. (In
this case, the SDD storage presented eight devices.)
This is a function of the SAN storage brand/type and how it is configured at each
site.
It is very important to edit only the existing partitions that the IBM Guardium
installation sees: add the mount point and set the file system (ext3 or swap),
but do not change other settings (such as size). Also, clear the selection of
all devices other than /dev/sda when you select the device on which to load the OS.
Follow these instructions for running fdisk to pre-partition the SAN storage from
RedHat rescue mode:
1. Assuming SAN is the only storage attached to the server, type fdisk
/dev/sda. Type y if a warning appears regarding working on the whole
device.
2. Type n for a new partition.
3. Type p for a primary partition.
4. Type 1 for partition #1.
5. Press Enter to accept the default start location.
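The fdisk dialog in steps 1 through 5 looks roughly like the following transcript; the prompts are abbreviated, and the cylinder range depends on your storage:

```
# fdisk /dev/sda
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-52216, default 1): <Enter>
```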
4. Use your arrow keys to select Host Adapter BIOS and press Enter to toggle to
Enabled.
8. Press Esc until you have backed out to the screen that says Reboot and select it
to reboot the system. You are now ready to proceed with the IBM Security
Guardium installation.
Planning an upgrade
Learn about different upgrade scenarios and identify the correct approach for upgrading your
Guardium systems with minimal downtime.
Identify the correct upgrade scenario
The best approach for upgrading to Guardium depends on multiple factors, including the
Guardium version you are upgrading from, the hardware of your system, and any special
partitioning requirements you may have.
Determine your current Guardium version and patch level by clicking the icon in the main
user interface and selecting About Guardium. Use the following table to identify the best
approach for upgrading your systems to Guardium V10.0.
The standard upgrade patch for V10.0 is greater than 2GB in size while the network
upgrade patch for V10.0 is approximately 50KB in size. For this reason, consider using
the network upgrade patch when upgrading managed units in an environment with a
Central Manager.
Arranging upgrade resources
Identify and understand the scope, timing, and resources required for your upgrade.
Before upgrading any Guardium systems, begin by arranging the required resources. This
includes:
Typically, the upgrade process cannot be completed on all Guardium systems and all S-
TAPs simultaneously. It requires a multi-stage upgrade approach that creates temporary
version mismatches. During this transition period, the Guardium environment operates in
a hybrid mode with reduced functionality: plan your upgrade to minimize the time spent
operating in hybrid mode. For more information, see Version mismatches during
upgrade.
A backup Central Manager can also be used to reduce downtime while upgrading your
Guardium environment. For more information, see Using a backup Central Manager
during upgrade.
Planning your upgrade for off-peak or otherwise quiet periods will minimize the impact of
the upgrade on your other systems and users.
Typical Guardium upgrades may require two or more hours. During this time, your
Guardium systems may not be accessible or performing any data collection activity.
Factors contributing to the duration of the upgrade process include:
Purging unnecessary data from the appliance may significantly decrease the time
required for upgrade. For more information about purging data before an upgrade, see
Performing an upgrade.
Version mismatches during upgrade
The upgrade process cannot be completed on all Guardium systems (Central Managers,
aggregators, and collectors) and all S-TAPs simultaneously. During the upgrade transition, you
will have an environment that includes systems operating different versions of Guardium.
While this hybrid mode is supported by Guardium, many functions are limited until all
components are at the same version level. You should complete the upgrade in a timely manner
and have all components at the same version and patch level.
Data collection, data assessment, and policies (with some restrictions) will continue to work
while in the hybrid mode. Functions with new or enhanced capabilities will not work in a mixed
environment. While in the hybrid mode, it is recommended that you avoid making any
configuration changes.
You cannot install policies from an upgraded Central Manager to a managed unit that is
running an older version. Until the managed unit is upgraded, you can only install
policies locally on the managed units.
Capture/Replay configurations created in a prior release must be re-created in the latest
version. After the upgrade, the Replay user should redo the staged configuration, stage
it again, and then export it again (assuming the data in the GDM tables is still available).
Attention: Before beginning any upgrade procedures, review and assess the following
restrictions that apply when operating a hybrid environment with Guardium V10 and V9
systems:
Guardium V10.0 Central Managers can manage Guardium systems at or above V9 GPU
200 with limited functionality.
Guardium V10.0 backup Central Managers can provide limited services to Guardium V9
Central Managers at or above V9 GPU 300 with patch 337 and all security patches
installed.
Guardium V9 backup Central Managers cannot provide services to Guardium V10.0
Central Managers.
Upgrading environments with
aggregators or Central Managers
Minimize disruptions to your Guardium environment by following a top-down upgrade approach.
This means first upgrading one high-level system and then upgrading the systems or agents
that report to it, then upgrading the next high-level system and the systems or agents that report
to it, and so on. This approach minimizes the impact of operating a hybrid environment with
multiple Guardium versions.
A top-down approach is necessary because an upgraded aggregator can aggregate data from
older releases, but an older aggregator cannot aggregate data from newer releases. Similarly,
an upgraded Central Manager can manage units running older releases, but the managed units
will not enjoy full functionality until they are upgraded to match the Central Manager.
To avoid these issues, upgrade a Central Manager before upgrading any of its managed units. If
you have multiple Central Managers, first upgrade one Central Manager and then upgrade its
managed units before going on to upgrade the next Central Manager and its managed units.
Similarly, upgrade an aggregator before upgrading any units that export data to it. If you have
several aggregators, first upgrade one aggregator and then upgrade the collectors that report to
it before going on to upgrade the next aggregator and its collectors.
Finally, upgrade a collector before upgrading the S-TAPs registered to it. Upgrade one collector
and all the S-TAPs registered to it before going on to upgrade the next collector and its S-TAPs.
For example, consider the following environment with multiple Central Managers. A top-down
upgrade approach moves vertically through this list of systems:
Central Manager
    Aggregator
        Collector
            S-TAP
            S-TAP
        Collector
            S-TAP
            S-TAP
    Aggregator
        Collector
            S-TAP
            S-TAP
        Collector
            S-TAP
            S-TAP
Central Manager
    Aggregator
        Collector
            S-TAP
            S-TAP
        Collector
            S-TAP
            S-TAP
    Aggregator
        Collector
            S-TAP
            S-TAP
        Collector
            S-TAP
            S-TAP
Using a backup Central Manager during
upgrade
The availability of a backup Central Manager allows you to upgrade your Central Manager with
minimal disruption to your Guardium services.
Guardium V10.0 Central Managers can manage Guardium systems at or above V9 GPU
200 with limited functionality.
Guardium V10.0 backup Central Managers can provide limited services to Guardium V9
Central Managers at or above V9 GPU 300 with patch 337 and all security patches
installed.
Guardium V9 backup Central Managers cannot provide services to Guardium V10.0
Central Managers.
Procedure
1. On the backup Central Manager, open Manage > Maintenance > General >
Aggregation/Archive Log and verify that backup Central Manager synchronization files
are being created successfully.
2. Upgrade your backup Central Manager to the latest version of Guardium.
After the upgrade is complete, wait approximately 30 minutes for the backup Central
Manager synchronization files to be created on the upgraded system.
3. Make the upgraded machine your primary Central Manager by navigating to Setup >
Make Primary Central Manager.
Managed units will now be assigned to the new primary Central Manager running the
latest version of Guardium.
When you upgrade the system that had been your primary Central Manager (under the
previous version), you may choose to establish that system as a backup Central
Manager running the latest Guardium version. However, if you want to reestablish this
system as your primary Central Manager after the upgrade, navigate to Setup > Make
Primary Central Manager.
Performing an upgrade
Learn how to upgrade your Guardium system after identifying the appropriate upgrade scenario.
Planning an upgrade, specifically the information in Identify the correct upgrade scenario
System Requirements for Guardium V10.0
This sequence of tasks guides you through the processes of upgrading your Guardium system
using a V10.0 upgrade patch.
Purge system data
Purging unnecessary data from the appliance can significantly decrease the time required to
upgrade.
For best performance and to minimize risks associated with upgrading large amounts of data,
try to achieve less than 20% internal database utilization by purging unnecessary system data.
Procedure
Important: Changes made to the Data Archive purge configuration will also be applied
to the Data Export purge configuration.
3. Define a Purge data older than time period. All data older than the specified period of
days, weeks, or months will be purged from the system.
4. Click the Allow purge without archiving or exporting check box.
5. Click Apply to save the configuration changes.
6. Click Run Once Now to execute the purge operation and purge old system data.
What to do next
Open Manage > Reports > Activity Monitoring > Scheduled Jobs to monitor the status of the
data archive job.
Create a system backup
Performing a full system backup allows you to recover from a failed or interrupted upgrade
attempt, or can be used to upgrade your Guardium system using a backup, rebuild, and restore
procedure.
Before creating a backup, purge all unnecessary data from the system being upgraded. The
less data on the system, the more quickly the backup procedure runs.
The backup process copies data to a file using the following naming convention:
host_name.domain_name-yyyy-mm-dd.sqlguard.bak.
Procedure
1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Enter the following command to back up the Guardium system: backup system.
You will be prompted to enter host, directory, and password information for the system to
which the backup data will be sent. The backup utility will display status information
during the backup process. When the backup process is complete, the following
message will display:
Backup done
Keep the file /xxx/host_name.domain_name-yyyy-mm-dd.sqlguard.bak in a
safe place.
3. Press Enter to complete the backup process. A series of messages will display to
confirm the backup.
What to do next
Log in to the host machine that contains the backup file and verify that the file has been created.
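The documented naming convention can be reproduced to know exactly which file to look for on the receiving host. The sketch below is an illustration only; the host and domain names are placeholder values:

```shell
# Build the backup file name per the documented convention:
# host_name.domain_name-yyyy-mm-dd.sqlguard.bak
# "guard1" and "example.com" are placeholder values.
backup_name() {
  host="$1"
  domain="$2"
  printf '%s.%s-%s.sqlguard.bak\n' "$host" "$domain" "$(date +%Y-%m-%d)"
}

backup_name guard1 example.com   # e.g. guard1.example.com-2016-05-01.sqlguard.bak
```

On the receiving host, list the transfer directory for this file name to confirm that the backup arrived.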
Apply the health check patch
The health check patch performs preliminary tests that help prevent problems during an
upgrade.
Download the latest health check patch for your version of Guardium. For Guardium V10.0,
download the following package from Fix Central: SqlGuard-9.0p9997.tgz.enc.
You must apply the health check patch before upgrading. The patch prevents potential upgrade
issues by verifying the following:
Hardware requirements
System hostname
Additional system configuration and status
For detailed information about the health check tests, review the release notes included with the
latest health check patch.
Apply the health check patch as you would apply any other patch to your Guardium system. It is
also possible to use a Central Manager to push the health check patch to managed units. For
more information, see Central Patch Management.
Procedure
1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Depending on the location of the patch file, perform the following steps:
o If you are installing from optical media, insert the patch media in the IBM®
Security Guardium optical drive, and enter the following command: store system patch
o If installing from the network, enter the following command: store system
patch install [ftp | scp]
You will be prompted to enter the host machine name, the path to the patch file,
and both the user name and password for the host machine.
3. When prompted, enter the number that identifies the patch in the patches directory.
Press Enter to apply the patch.
Results
The health check generates a log file using the following naming convention:
health_check.time_stamp.log.
The log file contains the status of each validation performed by the health check patch:
ERROR
If the patch finds an error, the message will contain an ERROR prefix.
WARNING
If the patch finds an error that may not prevent the upgrade, the message will contain a
WARNING prefix. Review the message details for more information about how to
proceed.
If the patch does not find any errors, the following message appears at the end of the log file:
Appliance is ready for GPU installation/upgrade.
Important:
If the patch status is WARNING and a WARNING message appears in the log, the GPU
installation or upgrade may still be possible as some messages are version-specific.
Review the message details for more information about how to proceed.
If the log file includes an ERROR or WARNING message that you cannot resolve, send the
log file to IBM Software Support to prevent potential issues during the upgrade.
Enable a Central Manager as an upgrade
server
Configure an existing Guardium V10.0 system to distribute packages to managed units being
upgraded using the network patch.
This task is only required if you are upgrading managed units using the network upgrade patch.
Review the topic, Identify the correct upgrade scenario, to determine if the network upgrade
patch can be used for your scenario.
Procedure
1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Enter the following command: upgradeserver on. The Central Manager will now be
available to distribute upgrade files to managed units.
What to do next
Upgrade the managed units in your environment using the network upgrade patch as described
in the topic, Apply the upgrade patch. When you have finished upgrading the managed units in
your environment, disable the upgrade server on your Central Manager by entering the following
command as the CLI user: upgradeserver off.
Apply the upgrade patch
You can download patches in ISO format and create installation media or use SCP/FTP to apply
patches from a remote host on the network.
For more information about which upgrade patch to use, see Identify the correct upgrade
scenario.
Procedure
1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Depending on the location of the patch file, perform the following steps:
o If you are installing from optical media, insert the patch media in the IBM®
Security Guardium optical drive, and enter the following command: store system patch
o If installing from the network, enter the following command: store system
patch install [ftp | scp]
You will be prompted to enter the host machine name, the path to the patch file,
and both the user name and password for the host machine.
3. When prompted, enter the number that identifies the patch in the patches directory.
Press Enter to apply the patch.
Results
The Guardium system may reboot several times after applying an upgrade patch, but the
process does not require any further action.
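In summary, the two documented install paths reduce to one of the following CLI commands; the interactive prompts then collect the host, path, and credentials:

```
store system patch                # install from optical media
store system patch install scp   # install over the network (ftp is also supported)
```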
Verify and cleanup after the upgrade
Verify that the upgrade completed successfully and perform post-upgrade maintenance.
Procedure
1. If you upgraded using an upgrade patch, log in as the CLI user and issue the following
command: show upgrade-status. The command will output detailed status
information from the upgrade process.
2. If you upgraded using a Central Manager to distribute upgrade packages to managed
units, disable the upgrade server on the Central Manager using the following CLI
command: upgradeserver off.
3. You may need to update the Guardium DPS file after upgrade or restore procedures.
Download the latest DPS file, then use the Harden > Vulnerability Assessment >
Customer Uploads tool to upload and import the new DPS file.
Attention: If you use add-on accelerators (for example, SOX or PCI), you may need to
reinstall the accelerator patches after importing a new DPS file.
4. Verify that custom reports created in previous versions of Guardium are available at
Reports > My Custom Reports.
My Custom Reports should contain any new reports that you created as well as any
predefined reports that you modified in a previous version of Guardium.
5. After completing upgrade or restore procedures, you may need to reload the open
source Microsoft SQL Server and Oracle JDBC drivers using the Harden > Vulnerability
Assessment > Customer Uploads tool. You may also need to update and save any
datasources that rely on these drivers.
6. Company logos uploaded before upgrade or restore procedures may need to be
reloaded. To reload a customer logo, follow these steps:
a. Log in as an admin user.
b. Navigate to Setup > Tools and Views > Global Profile.
c. Browse for the company logo file.
d. Upload the logo file.
7. If the upgrade or restore procedures disable the Database Discovery or auto-detect
functionality, you may need to download and install a separate patch to reinstall the
gauto-detect component.
8. Verify the status of the Cross-Site Request Forgery (CSRF) and Cross-Site Scripting
(XSS) services using the CLI commands show gui csrf_status and show gui xss_status.
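The CLI portion of the verification steps above can be run as one short session, logged in as the cli user:

```
show upgrade-status    # step 1: detailed status from the upgrade process
upgradeserver off      # step 2: only if a Central Manager distributed packages
show gui csrf_status   # step 8: verify the CSRF protection service
show gui xss_status    # step 8: verify the XSS protection service
```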
Upgrade using a backup, rebuild, and
restore procedure
Upgrade by restoring a system backup onto a newly rebuilt installation of Guardium V10.0.
Planning an upgrade, specifically the information in Identify the correct upgrade scenario
System Requirements for Guardium V10.0
This sequence of tasks guides you through the processes of upgrading your Guardium system
by restoring a system backup onto a newly rebuilt V10.0 system.
Purge system data
Purging unnecessary data from the appliance can significantly decrease the time required to
upgrade.
For best performance and to minimize risks associated with upgrading large amounts of data,
try to achieve less than 20% internal database utilization by purging unnecessary system data.
Procedure
Important: Changes made to the Data Archive purge configuration will also be applied
to the Data Export purge configuration.
3. Define a Purge data older than time period. All data older than the specified period of
days, weeks, or months will be purged from the system.
4. Click the Allow purge without archiving or exporting check box.
5. Click Apply to save the configuration changes.
6. Click Run Once Now to execute the purge operation and purge old system data.
What to do next
Open Manage > Reports > Activity Monitoring > Scheduled Jobs to monitor the status of the
data archive job.
Create a system backup
Performing a full system backup allows you to recover from a failed or interrupted upgrade
attempt, or can be used to upgrade your Guardium system using a backup, rebuild, and restore
procedure.
Before creating a backup, purge all unnecessary data from the system being upgraded. The
less data on the system, the more quickly the backup procedure runs.
The backup process copies data to a file using the following naming convention:
host_name.domain_name-yyyy-mm-dd.sqlguard.bak.
Procedure
1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Enter the following command to back up the Guardium system: backup system.
You will be prompted to enter host, directory, and password information for the system to
which the backup data will be sent. The backup utility will display status information
during the backup process. When the backup process is complete, the following
message will display:
Backup done
Keep the file /xxx/host_name.domain_name-yyyy-mm-dd.sqlguard.bak in a
safe place.
3. Press Enter to complete the backup process. A series of messages will display to
confirm the backup.
What to do next
Log in to the host machine that contains the backup file and verify that the file has been created.
Rebuild Guardium to the latest version
Rebuild Guardium to the latest version to provide a target system for restoring your backup.
At this stage of the backup, rebuild, and restore upgrade procedure, you must rebuild a new
installation of Guardium V10.0. For more information, see Installing your Guardium system.
Important: You must rebuild the Guardium system to match the system type you will be
restoring in the next step of the backup, rebuild, and restore upgrade procedure. This is
because you can only restore backups from the same system type as the rebuilt system, for
example a backup from a Central Manager must be restored to a system rebuilt as a Central
Manager.
What to do next
In the next step of the backup, rebuild, and restore upgrade procedure, you will restore your
backup data onto the newly rebuilt installation of Guardium V10.0.
Restore the backup
At this stage of the backup, rebuild, and restore upgrade procedure, you must have successfully
rebuilt the system to the latest version of Guardium. You can only restore backups from the
same system type as the rebuilt system, for example a backup from a Central Manager must be
restored to a system rebuilt as a Central Manager.
Procedure
1. Using an SSH client, log in to the Guardium system as the CLI user.
2. If the backup files are on a remote system, import the files by entering the following
command: import file.
You will be prompted to provide information for the system that contains the backup files
and the location of the files.
The import process copies the backup data files to the /var/dump directory.
3. Begin the restore process by entering the following command: restore db-from-prev-
version.
When you receive prompts to Update portal layout (panes and menus
structure), responding y moves all customized reports (including modified predefined
reports) to Reports > My Custom Reports.
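The restore steps above amount to the following CLI session; both commands prompt for the remaining details interactively:

```
import file                    # copies the backup files to /var/dump
restore db-from-prev-version   # begins the restore; answer y to the portal layout
                               # prompt to move customized reports to My Custom Reports
```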
Procedure
1. If you upgraded using an upgrade patch, log in as the CLI user and issue the following
command: show upgrade-status. The command will output detailed status
information from the upgrade process.
2. If you upgraded using a Central Manager to distribute upgrade packages to managed
units, disable the upgrade server on the Central Manager using the following CLI
command: upgradeserver off.
3. You may need to update the Guardium DPS file after upgrade or restore procedures.
Download the latest DPS file, then use the Harden > Vulnerability Assessment >
Customer Uploads tool to upload and import the new DPS file.
Attention: If you use add-on accelerators (for example, SOX or PCI), you may need to
reinstall the accelerator patches after importing a new DPS file.
4. Verify that custom reports created in previous versions of Guardium are available at
Reports > My Custom Reports.
My Custom Reports should contain any new reports that you created as well as any
predefined reports that you modified in a previous version of Guardium.
5. After completing upgrade or restore procedures, you may need to reload the open
source Microsoft SQL Server and Oracle JDBC drivers using the Harden > Vulnerability
Assessment > Customer Uploads tool. You may also need to update and save any
datasources that rely on these drivers.
6. Company logos uploaded before upgrade or restore procedures may need to be
reloaded. To reload a customer logo, follow these steps:
a. Log in as an admin user.
b. Navigate to Setup > Tools and Views > Global Profile.
c. Browse for the company logo file.
d. Upload the logo file.
7. If the upgrade or restore procedures disable the Database Discovery or auto-detect
functionality, you may need to download and install a separate patch to reinstall the
gauto-detect component.
8. Verify the status of the Cross-Site Request Forgery (CSRF) and Cross-Site Scripting
(XSS) services using the CLI commands show gui csrf_status and show gui xss_status.
CLI and API
The Guardium® command line interface (CLI) is an administrative tool that allows
for configuration, troubleshooting, and management of the Guardium system. The
Guardium application programming interface (API) provides access to many
Guardium functions from the command line.
CLI Overview
The Guardium command line interface (CLI) is an administrative tool that allows
for configuration, troubleshooting, and management of the Guardium system.
Documentation Conventions
All CLI command examples are written in courier text (for example, show system
clock).
Interactive access to the Guardium appliance is through the serial port or the
system console.
PC keyboard and monitor – A PC video monitor can be attached to either the front
panel video connector or the video connector on the back of the appliance.
A PC keyboard with a PS/2 style connector can be attached to the PS/2 connector
on the back of the appliance. Alternatively, a USB keyboard can be connected to
the USB connectors located at the front or back of the appliance.
Serial port access – Using a NULL modem cable, connect a terminal or another
computer to the 9-pin serial port at the back of the appliance. The terminal or a
terminal emulator on the attached computer should be set to communicate as
19200-N-1 (19200 baud, no parity, 1 stop bit).
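From a Linux or macOS workstation connected with the NULL modem cable, a terminal emulator such as screen can supply the 19200-N-1 settings. The device path below is an assumption; it varies by serial adapter:

```
screen /dev/ttyS0 19200    # or /dev/ttyUSB0 for a USB serial adapter; 8 data bits,
                           # no parity, 1 stop bit are screen's defaults
```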
A login prompt displays once the terminal is connected to the serial port, or the
keyboard and monitor are connected to the console. Enter cli as the user name, and
continue with CLI Login.
The SSH client may ask you to accept the cryptographic fingerprint of the
Guardium appliance. Accept the fingerprint to proceed to the password prompt.
Note: If, after the first connection, you are asked again for a fingerprint, someone
may be trying to induce you to log into the wrong machine.
CLI Login
Access to the CLI is either through the admin CLI account cli or one of the five
CLI accounts (guardcli1,...,guardcli5). The five CLI accounts (guardcli1,...,guardcli5)
exist to aid in the separation of administrative duties.
Access to the GuardAPI, which is a set of CLI commands that aid in the automation
of repetitive tasks, requires that an access manager create a user (GUI
username/guiuser) and grant that account either the admin or cli role. To use
GuardAPI, log in to the CLI with one of the five CLI accounts
(guardcli1,...,guardcli5) and then log in as the GUI user by issuing the set
guiuser command. See GuardAPI Reference Overview or Set guiuser Authentication
for additional information.
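A GuardAPI session therefore involves two logins. The appliance host name and GUI user name below are placeholders:

```
ssh guardcli1@guard1.example.com   # log in with one of the five CLI accounts
set guiuser jsmith                 # then authenticate as a GUI user that holds
                                   # the admin or cli role
```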
Password Hardening
The welcome message will add further information if the internal database is
down due to maintenance or during an upgrade:
The internal database on the appliance is currently down and CLI will be working
in "recovery mode"; only a limited set of commands will be available.
The CLI commands that are available during recovery mode are as follows:
support reset-password root
restart mysql
restart stopped_services
restart system
restore pre-patch-backup
restore system
Syntax
Parameters
user@host:/path/filename For the file transfer operation, specifies a user, host, and
full path name for the backup keys file. The user you specify must have the
authority to write to the specified directory.
Sets the system shared secret value to null. All files archived or exported from a
unit with a null shared secret can be restored or imported only on systems where
the shared secret is null.
Syntax
Note: For more information about the shared secret use, see System Shared Secret.
aggregator debug
Syntax
Syntax
Syntax
Parameters
Use the all option to move all files from the /var/dump directory ending with the
suffix .decrypt_failed, or use the filename option to identify a single file to be
moved.
Use this command to move and rename failed restore files, prior to re-attempting a
restore operation. Failed restore files are stored in the /var/dump directory, with
the suffix .decrypt_failed. Before re-attempting a restore operation, those files must
be renamed (by removing the .decrypt_failed suffix) and moved to the
/var/importdir directory.
Syntax
Parameters
Use the all option to move all files from the /var/dump directory ending with the
suffix .decrypt_failed, or use the filename option to identify a single file to be
moved.
Note: After moving the failed files, but before a restore or import operation runs,
be sure that the system shared secret matches the shared secret used to encrypt the
exported or archived file.
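As an illustration only (on the appliance, the CLI command itself performs this work), the rename-and-move transformation looks like the following in plain shell. The directory arguments stand in for the documented /var/dump and /var/importdir:

```shell
# Move each *.decrypt_failed file out of the source directory, dropping the
# .decrypt_failed suffix, so a restore operation can be re-attempted.
move_failed_restores() {
  src="$1"   # e.g. /var/dump
  dst="$2"   # e.g. /var/importdir
  for f in "$src"/*.decrypt_failed; do
    [ -e "$f" ] || continue                          # no matching files
    mv "$f" "$dst/$(basename "$f" .decrypt_failed)"  # drop suffix and move
  done
}
```

Remember the note above: the shared secret must match before the restore or import runs, or the moved files will fail to decrypt again.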
Syntax
Parameters
user@host:/path/filename For the file transfer operation, specifies a user, host, and
full path name for the backup keys file.
Note: For more information about the shared secret use, see System Shared Secret.
Syntax
Use this CLI command to enable orphans cleanup on aggregators. The cleanup is
scheduled to run on data older than 3 days and runs at the end of a purge.
Because the process is started by the user with this CLI command, in the case of a
large database the user is aware of how long the process takes.
The cleanup covers all of the data on the aggregator, but runs on a separate
temporary database.
Note: On a collector, orphans cleanup is not changed: it runs with the small
cleanup tactics and is invoked before export/archive.
store aggregator orphan_cleanup_flag <flag>
where flag is one of: OFF, small, large, or analyze.
If set to small, large, or analyze, the orphans cleanup script is invoked after
each run of the merge process.
The orphans cleanup on an aggregator does not remove orphan records from the last
3 days; it removes all orphans older than 3 days.
If small is specified, the process does not interfere with audit processes that can
start after the merge is completed.
If large is specified, the process runs faster when there is a large number of
orphans, but its run might interfere with audit processes: if large is specified,
audit processes will not start until orphans cleanup is complete.
If analyze is specified, the process first evaluates the number of orphans and uses
the large tactics if there are more than 20% orphans; if analyze is specified, audit
processes will not start until orphans cleanup is complete.
Syntax
Default is OFF.
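For example, to let the appliance evaluate the orphan count and choose the tactics on each run, or to return to the default:

```
store aggregator orphan_cleanup_flag analyze   # use large tactics when >20% orphans
store aggregator orphan_cleanup_flag OFF       # default: no post-merge cleanup
```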
Show command
Show command
show archive_static_table
store next_export_static
As stated previously, the data of static tables is not time dependent. The data of
dynamic tables, which is time dependent, is linked to static data. Because static
tables can grow to be very large, the archive process does not archive the full
static data every day: it archives the full static data the first time it runs and
on the first day of each month; on any other day, it archives only the static data
that changed during that day. For this reason, when restoring data of any day, the
first of the month must also be restored; this ensures that the full static data is
present and references are not broken.
Use the CLI command, store next_export_static, to set a flag so that the next export
contains the full static data.
Syntax
Show command
show next_export_static
store last_used
Use this CLI command during purging and aggregation.
Syntax
Show command
All Tables - 1
Only GDM_Object - 2
None - 0 (Default)
Note: Set the CLI command, last_used logging, prior to using this command.
When the LAST_USED column is updated by the Sniffer in Static tables, this
column can be referenced when purging data from these tables or when archiving
and exporting data from these tables.
The value of this column can also be updated when importing data to an
aggregator.
Note: Options 1 and 2 are only enabled when the sniffer is configured to collect
and update this data.
Syntax
Show command
store run_cleanup_orphans_daily
Use this CLI command to clean all the old construct records that are no longer in
use. This CLI command is relevant for aggregators only and by default is enabled.
store run_cleanup_orphans_daily
Show command
show run_cleanup_orphans_daily
The Alerter subsystem transmits messages that have been queued by other
components - correlation alerts that have been queued by the Anomaly Detection
subsystem, or run-time alerts that have been generated by security policies, for
example. The Alerter subsystem can be configured to send messages to both SMTP
and SNMP servers. Alerts can also be sent to syslog or custom alerting classes, but
no special configuration is required for those two options, beyond starting the
Alerter. There are four types of Alerter commands. Use the links in the lists, or
browse the commands, which are listed in alphabetical sequence following the
lists.
restart alerter
Restarts the Alerter. You can perform the same function using the store alerter state
operational command to stop and then start the alerter:
Syntax
restart alerter
stop alerter
You can perform the same function using the store alerter state operational
command:
Syntax
stop alerter
Starts (on) or stops (off) the Alerter. The default state at installation time is off. You
can also use the restart alerter or stop alerter commands to restart or stop the
Alerter subsystem.
Syntax
Show Command
Syntax
Show Command
Enables or disables the automatic start-up of the Alerter on system start-up. The
default state at installation time is off.
Syntax
Show Command
Syntax
Show Command
Enables or disables the Anomaly Detection subsystem, which executes all active
statistical alerts, checks the logs for anomalies, and queues alerts as necessary for
the Alerter subsystem.
Syntax
Show Command
Sets the alerter SMTP authentication password to the specified value. There is no
corresponding show command.
Syntax
auth: Username/password authentication. When used, set the user account and
password using the following commands:
Syntax
Show Command
Sets the alerter SMTP email authentication username to the specified name.
Syntax
Show Command
Sets the port number on which the SMTP server listens, to the value specified by
n. The default is 25 (the standard SMTP port).
Syntax
Show Command
Sets the IP address of the SMTP server to be used by the Guardium appliance.
Syntax
Show Command
Syntax
Show Command
Sets the SNMP trap community used by the Alerter, to the name specified. There is
no corresponding show command.
Syntax
Sets the Alerter SNMP trap server to receive alerts, to the specified IP address or
DNS host name.
Syntax
Show Command
store syslog-trap
Note: Guardium does not provide certificate authority (CA) services and does not
ship systems with different certificates than the one installed by default. A
customer who wants to use their own certificate must contact a third-party CA (such
as VeriSign or Entrust).
Certificate Expiration
Expired certificates will result in a loss of function. Run the show certificate
warn_expire command periodically to check for expired certificates. The command
displays certificates that will expire within six months and certificates that have
already expired. The user interface will also inform you of certificates that will
expire soon.
New Certificates
To obtain a new certificate, generate a certificate signing request (CSR) and contact
a third-party certificate authority (CA) such as VeriSign or Entrust. Guardium does
not provide CA services and will not ship systems with different certificates than
the ones that are installed by default. The certificate format must be in PEM and
include BEGIN and END delimiters. The certificate can either be pasted from the
console or imported through one of the standard import protocols.
Note: Do not perform this action until after the system network configuration
parameters have been set.
create csr
Creates a certificate signing request (CSR) for the Guardium system. Do not
perform this action until after the system network configuration parameters are set.
Within the generated CSR, the common name (CN) is created automatically from
the host and domain names assigned.
create csr gim creates a certificate request for gim (GIM Listener).
create csr squid creates a certificate signing request and associated key, which
must be signed by a certificate authority. A matching certificate must then be
supplied by using the store certificate squid selfsign command.
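The create csr variants described above are issued from the CLI; each produces a request to submit to your third-party CA:

```
create csr          # CSR for the system; the CN is built from the host and domain names
create csr gim      # certificate request for the GIM listener
create csr squid    # CSR and key; then supply the matching certificate with
                    # store certificate squid selfsign
```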
Syntax
Syntax
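Example
A typical signing workflow, sketched here with illustrative prompts (the exact
output varies by release):
CLI> create csr
(The generated CSR is displayed; submit it to a third-party CA for signing.)
CLI> store certificate gui
(Paste the signed certificate in PEM format, including the BEGIN and END lines,
then press CTRL-D.)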
Restores the certificate gim to the last certificate gim on record or the default
certificate gim that was originally provided.
restore certificate gim backup restores the gim certificate to the last saved
sniffer gim certificate.
Syntax
Restores the certificate keystore to the last certificate keystore on record or the
default certificate keystore that was originally provided.
restore certificate keystore backup restores the certificate keystore to the last
saved certificate keystore.
Syntax
restore certificate mysql backup restores the last saved mysql certificate.
Syntax
restore certificate mysql backup client ca restores the last saved client
certificate authority (CA) certificate.
restore certificate mysql backup client cert restores the last saved client
certificate.
Syntax
restore certificate mysql backup server ca restores the last saved server
certificate authority (CA) certificate.
restore certificate mysql backup server cert restores the last saved server
certificate.
Restores the mysql client certificate to the default version that was supplied with
the system.
restore certificate mysql default client cert restores the mysql client
certificate to the default version that was supplied with the system.
Syntax
Restores the mysql server certificate to the default version that was supplied with
the system.
restore certificate mysql default server cert restores the mysql server
certificate to the default version that was supplied with the system.
Syntax
restore certificate sniffer backup restores the sniffer certificate to the last
saved sniffer certificate.
restore certificate sniffer default restores the sniffer certificate to the default
sniffer certificate.
Syntax
restore cert_key mysql backup client restores the last saved mysql client cert
key.
restore cert_key mysql backup server restores the last saved mysql server cert
key.
Syntax
Restores the mysql client or server certificate key to the default version that was
supplied with the system.
restore cert_key mysql default client restores the default mysql client cert key
that was supplied with the system.
restore cert_key mysql default server restores the default mysql server cert key
that was supplied with the system.
Syntax
show certificate
show certificate gim displays all GIM certificate information (GIM Listener).
show certificate keystore displays all certificates in the keystore and an alias list
for you to select which certificate to show.
show certificate mysql displays client and server mysql certificate information.
show certificate stap displays all S-TAP certificate information in the keystore.
Syntax
show certificate <alias | all | gui | keystore | mysql | sniffer | stap | squid |
summary | trusted | warn_expired>
show certificate <alias | all | gim | gui | keystore | mysql | sniffer | stap |
summary | trusted | warn_expired >
show certificate keystore alias displays an alias list for you to select which
certificate to show.
Syntax
Parameters
Syntax
store certificate
Stores a certificate. Paste your certificate in PEM format and include the BEGIN
and END lines.
Parameter
store certificate gim stores a custom gim certificate in the keystore, prompting
for the certificate, key (optional), and CA certificate (GIM Listener).
store certificate gui stores the tomcat certificate in the keystore after a CSR has
been generated.
store certificate keystore asks for a one-word alias to uniquely identify the
trusted certificate and store it in the keystore.
Syntax
Syntax
Syntax
store certificate squid caroot stores a CA root certificate onto the Guardium
system and configures SSL proxy settings.
Syntax
store cert_key
Stores the system certificate key and the certificate key of a mysql client and
server.
store cert_key mysql stores the certificate key of a mysql client and server.
Syntax
store cert_key mysql client stores the certificate key of a mysql client.
store cert_key mysql server stores the certificate key of a mysql server.
Syntax
Stores the system certificate key. This command enables a user to set the system
certificate that is used by the Guardium system (in communication with S-TAP®).
The certificate can either be pasted from the console or imported via one of the
standard import protocols. The certificate format should be PEM and should
include the BEGIN and END delimiters. This certificate needs to be signed by a
CA whose self-signed certificate is available to S-TAP software through the
guardium_ca_path.
store cert_key sniffer import stores the sniffer certificate key by importing the
key file.
Syntax
store sign certificate squid console stores the proxy server certificate and the
self-signed ca root certificate by pasting the data into the console.
store sign certificate squid import stores the proxy server certificate and the
self-signed ca root certificate by importing the associated files.
Syntax
You can choose to restore certificates and certificate keys with the backup or
default parameter. Use the backup parameter to restore a certificate to the last
saved certificate. Use the default parameter to restore a certificate to the original
certificate that Guardium supplied.
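For example, to move back and forth between the two restore points for the
sniffer certificate (using the commands listed earlier):
CLI> restore certificate sniffer backup
(returns to the last saved sniffer certificate)
CLI> restore certificate sniffer default
(returns to the sniffer certificate originally supplied by Guardium)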
? (question mark)
When entering a command, enter a question mark at any point to display the
arguments.
Syntax
<partial_command> ?
Example
ok
CLI>
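For example, entering a question mark after a partial command lists the valid
continuations. The output below is abbreviated and based on the show
certificate syntax listed earlier:
CLI> show certificate ?
USAGE: show certificate <alias | all | gim | gui | keystore | mysql | sniffer |
stap | summary | trusted | warn_expired>
ok
CLI>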
Use this command to clear one or more unit type attributes. Note that not all unit
type attributes can be cleared using this command. See the table, located after the
store unit type command, for more information.
Syntax
commands
Syntax
commands
debug
Syntax
eject
This command dismounts and ejects the CD ROM, which is useful after upgrading
or re-installing the system, or installing patches that were distributed via CD ROM.
Syntax
eject
delete scheduled-patch
To delete a patch install request, use the CLI command delete scheduled-patch.
See the CLI command, store system patch install for further information on
patch installation.
Show Command
show support-email
generate-keys
Use this command to generate PGP keys for cli, tomcat and grdapi. Use the show
command to display the key (which you can then copy and paste, as appropriate
for your needs).
Syntax
generate-keys
Show Command
iptraf
Starts the IPTraf console network monitoring utility. For more information, see
http://iptraf.seul.org/2.7/manual.html
Syntax
iptraf
license check
Indicates whether the installed license is valid. Use this command after installing
a new product key.
Syntax
license check
ping
Sends ICMP ping packets to a remote host. This command is useful for checking
network connectivity. The value of host can be an IP address or host name.
Syntax
ping <host>
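Example (the address shown is illustrative only):
CLI> ping 192.0.2.15
(ICMP echo replies indicate that the host is reachable from the appliance.)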
Syntax
quit
recover failed
Restores failed CSV/CEF/PDF transfer files, placing them back into the export
folder for another export attempt.
Syntax
register management
Registers the Guardium system for management by the specified Central Manager.
The pre-registration configuration of this Guardium system is saved, and that
configuration will be restored later if the unit is unregistered.
Syntax
Parameters
port is the port number used by the Central Manager (usually 8443).
restart gui
Restarts the IBM® Guardium Web interface. To optionally schedule a restart of the
GUI once a day or once a week, use additional parameters. HH is hours 01-24.
MM is minutes 01-60. W is the day of the week, 0-6, Sunday is 0. If HHMM is
listed twice, only the last entry is used. The parameter clear deletes the scheduled
time.
In order to restart the Classifier and Security Assessments processes, run the
restart gui command from the CLI (not from the GUI). Running restart GUI from
the GUI restarts only the web services; running it from the CLI fully restarts all
processes, including the Classifier and Security Assessments processes. Run the
restart GUI command from the CLI on each managed unit to restart the Classifier
listener.
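Based on the parameter descriptions above (HHMM time, W day of week), a
scheduled weekly restart might look like the following; the exact argument form
is release-dependent:
CLI> restart gui 0330 0
(schedules a GUI restart at 03:30 every Sunday)
CLI> restart gui clear
(deletes the scheduled time)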
Syntax
Syntax
restart stopped_services
restart system
Reboots the Guardium system. The system will completely shut down and restart,
which means that the cli session will be terminated.
Syntax
restart system
show buffer
This command displays a report of buffer use for the inspection engine process. If
you are experiencing load problems, IBM Technical Support may ask you to run
this command.
Syntax
Use this CLI command to display the buffer usage of the inspection engine
process.
show build
Displays build information for the installed software (build, release, snif version).
Syntax
show build
show defrag
Identifies fragmented packets and attempts to reconstruct them before they reach
the network sniffing process. Defragmentation is relevant only for network
sniffing through a SPAN port or a TAP device.
Syntax
show defrag
Parameters
Permits the user to have only one IP address per appliance (through eth0) and to
direct traffic through different routers by using static routing tables. Lists the
current static routes, with IDs.
Syntax
Delete command
show password
This CLI command displays password functions. Password disable [0|1] removes
the use of a password when the value 1 is stored. Password expiration [CLI|GUI]
[number of days] displays the number of days between required password
changes; the default is 90 days. Password validation [ON|OFF] determines
whether password strength is enforced.
Syntax
Syntax
Syntax
Syntax
Displays the public key for cli or tomcat. If none exists, this command creates one.
Note: See show system key, store system key in Certificate CLI commands.
Syntax
stop gui
Syntax
stop gui
stop system
Syntax
stop system
store apply_user_hierarchy
If ON, a non-audit group receiver (a receiver, normal or role, other than the audit
group receiver) will only see audit results with a group IP beneath the receiver's
hierarchy, including the receiver.
Syntax
Show command
show apply_user_hierarchy
store allow_simulation
Enables (on) or disables (off) the ability to run the Policy Simulation on the
appliance.
Syntax
Show command
show allow_simulation
store alp_throttle
Use this CLI to regulate the amount of data that will be logged.
Default is 0.
Example
store analyzer
Ignore session: The current request and the remainder of the session will be
ignored. This action does log a policy violation, but it stops the logging of
constructs and will not test for policy violations of any type for the remainder of
the session. This action might be useful if, for example, the database includes a test
region, and there is no need to apply policy rules against that region of the
database.
This command sets the value of the timeout of the ignore session and sets the
duration of the ignore session.
Syntax
Show command
store auto_stop_services_when_full
When ON, stops internal services if the database exceeds the 90% full threshold.
Syntax
Show command
show auto_stop_services_when_full
Use this command to connect and disconnect the Oracle parser from the DB2
parser. The default is OFF (disconnect).
Syntax
Show command
store default_queue_size
Use this CLI command to control the configuration parameter
ADMINCONSOLE_PARAMETER.DEFAULT_QUEUE_SIZE. The default is 25. The
range is 25-300.
Syntax
Show command
show default_queue_size
25
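For example, to raise the queue size within the documented 25-300 range (the
value shown is illustrative):
CLI> store default_queue_size 50
ok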
Syntax
store defrag [default | size <s> interval <i> trigger <t> release <r>]
Show command
show defrag
Parameters
default - Restore the default size.
s - The packet size in bytes, up to a maximum of 2^17 (131072)
i - The time interval
t - The trigger level
r - The release level, specified as a number of seconds, up to a maximum of
2^31 (2147483648).
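Putting these parameters together, an illustrative invocation (all values are
examples only, with size at its 2^17 maximum):
CLI> store defrag size 131072 interval 10 trigger 50 release 5
CLI> store defrag default
(restores the default defragmentation settings)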
store delayed_firewall_correlation
Use this CLI command to hold a user connection until the decryption correlation
has taken place.
Syntax
Show command
show delayed_firewall_correlation
store full-bypass
This command is intended for emergency use only, when traffic is being
unexpectedly blocked by the Guardium system. When on, all network traffic
passes directly through the system, and is not seen by the Guardium system.
When using this command, you will be prompted for the admin user password.
Syntax
store gdm_analyzer_rule
Analyzer rules - Certain rules can be applied at the analyzer level. Examples of
analyzer rules are: user-defined character sets, source program changes, and
firewall watch or firewall unwatch modes. In previous releases, policies and rules
were applied at the end of request processing on the logging state. In some cases,
Note: When applying analyzer rules on source program changes, if the source
program does not match the exact pattern, add a .* at the end of the pattern to
deal with the possibility that the source program has a trailing space (unseen by
the user).
Syntax
Use the CLI command, show gdm_analyzer_rule, to see a list of GDM analyzer
rules.
Show command
show gdm_analyzer_rule
store gdm_http_session_template
Use this CLI command to set the template for the HTTP session.
Usage
store gdm_http_session_template [activate] [add] [deactivate] [remove]
Show command
show gdm_http_session_template
Attempting to retrieve the template information. It may take time. Please wait.
Table 1. store gdm_http_session_template
Columns: ID#, Active, URL Regex, Session Regex, Username Regex, Login_Session
Regex, Comment, Logout_Session_ID, Logout_URL_Regex
ID# 1, Active 1: Session Regex Cookie.*PHPSESSID=([[:a; Username Regex
.*user_name=([[:alnum:]; Login_Session Regex Set-Cookie:.*PHPSESSID=;
Comment: example of HTTP session; deleted
ID# 2, Active 1: Session Regex Cookie.*PSJSESSIONID=([; Username Regex
.*SignOnDefault=([[:aln; Comment: example of HTTP session;
Logout_URL_Regex: cmd=logout
ID# 3, Active 1: Session Regex Cookie.*JSESSIONID=([0-; Username Regex
.*username=([[:alnum:]]; Login_Session Regex Set-Cookie:.*JSESSIONID;
Comment: example of HTTP session; Logout_URL_Regex: Logout.jsp
Usage
store log external [file_size] [flush_period] [gdm_error] [state]
Default is 60 seconds.
Show command
show log external [file_size] [flush_period] [gdm_error] [state]
Use this CLI command to get information about the Unit Utilization. Default is 1
(run the script every hour).
Syntax
CLI> store monitor gdm_statistics
USAGE: store monitor gdm_statistics <hour>, where hour is value from 0 to 24.
Default value is 1, means to run the script every hour.
Value 0, means not to run the script.
Show command
store gui
store gui [port | session_timeout | csrf_status]
Sets the TCP/IP port number on which the IBM Guardium appliance management
interface accepts connections. The default is 8443. n must be a value in the range of
1024 to 65535. Be sure to avoid the use of any port that is required or in use for
another purpose.
Set Cross-Site Request Forgery (CSRF) protection (ON | OFF) - See the section
CSRF and 403 Permission Errors in the Getting Started with GUI help topic. The
default value is enabled on an upgraded system. When enabled, trying to use
certain web browser functions (for example, F5/CTRL-R/Refresh/Reload,
Back/Forward) will result in a 403 Permission Error message.
The new session timeout value will take effect only after the next GUI restart.
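Using the syntax listed above, illustrative invocations (the values are examples
only):
CLI> store gui port 8443
CLI> store gui session_timeout 1800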
Syntax
Show command
Displays the GUI port number, state, session timeout (in seconds) and/or CSRF
status.
Syntax
Use this CLI command to turn web browser caching ON or OFF (Enable or
Disable).
The response is
Restarting gui
Stopping.......
Safekeeping xregs
ok
Changing the cache setting automatically restarts the Guardium web server.
For Firefox, in order for the setting to take effect, the cache on the browser
has to be cleared.
Show command
Sets the length of time (in seconds) with no activity before timeout. After the no
activity timeout has been reached, it is necessary to log on again to IBM
Guardium. The default length is 900 seconds (15 minutes).
Syntax
Show command
Use this CLI command to enable or disable the Cross-site Request Forgery (CSRF)
status.
Syntax
Show command
Use this CLI command to enable or disable the Cross-Site Scripting (XSS) status.
This option is enabled by default on upgraded systems.
Syntax
Show command
Syntax
store keep_psmls
Use this CLI command to retain the current layouts/profiles/portlets created by
the users of the Guardium application. Set this CLI command to ON before an
upgrade, and the psmls from the previous version will be retained.
Syntax
show keep_psmls
store ldap-mapping
Store LDAP mapping parameters - allow a custom mapping for the LDAP server
schema. This command permits customized mapping to the LDAP server schema
for email, firstname and lastname attributes. The paging parameter is used to
facilitate transfer between any LDAP server type (Active Directory, Novell
Directory, Open LDAP, Sun One Directory, Tivoli® Directory). If the paging
parameter is set to on, but paging is not supported by the server, the search is
performed without paging.
Example for paging: if the CLI command ldap-mapping paging is set to ON,
Microsoft Active Directory will download up to the maximum number of users
defined by the limit value on the LDAP Import configuration screen. If
ldap-mapping paging is set to OFF, Active Directory will download at most 1000
users, no matter what the limit value is set to. All other LDAP server
configurations must use the CLI command ldap-mapping paging off in order to
download users up to the set limit value.
Note: Each time you change the CLI ldap-mapping attributes you also need to
select Override Existing Changes on the LDAP Import configuration screen in IBM
Guardium GUI before updating. This action must occur each time you change the
CLI ldap-mapping email, firstname or lastname attributes and import LDAP users.
Show commands
A restart of the GUI is required for new parameters to take effect.
Examples
If the attributes are written as follows, the mapping process will use the first
attribute it finds. If this is not what you want, use one of the examples to map to
specific attributes.
store license
A license key may be one of two kinds: override type or append type. An
override-type license replaces the currently installed license, while an
append-type license is appended to the currently installed license. Append-type
licenses can only add functionality: new functions may be enabled; where
relevant, expiration dates are updated; the remaining number of scans and
datasources is increased; and certain numeric fields in the license, such as the
number of managed units, are replaced.
Syntax
store license
Show Command
show license
Example
When using the store license command, you will be prompted to paste the new
product key:
Paste the string received from IBM Guardium and then press Enter.
Copy and paste the new product key at the cursor location, and then press Enter.
The product key contains no line breaks or white space characters, and it always
ends with (and includes) a trailing equal sign. A series of messages will display,
ending with:
ok
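A compact version of the exchange (the product key text is elided here):
CLI> store license
Paste the string received from IBM Guardium and then press Enter.
...=
ok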
Note:
Syntax
Show command
Syntax
Note: A restart of the inspection engine is required after the store command is
issued to apply change.
Show command
Syntax
Show command
Syntax
When on, logs the entire SQL command when logging exceptions.
Syntax
Show command
Sets the logging granularity to the specified number of minutes. You must use one
of the minute values shown in the syntax. The default is 60.
Syntax
Show command
store max_audit_reporting
Sets the audit report threshold, in days. The default is 32. When defining reports in
Audit Process, the number of days of the report (defined by the FROM-TO fields)
should not exceed a certain threshold (one month by default). See the Workflow
Process, Central Management and Aggregation section of the Compliance
Workflow Automation help topic for further information on using this CLI
command.
Syntax
store max_audit_reporting
Show command
show max_audit_reporting
store max_result_set_size
Stores the max_result_set_size. The default value is 100 (the size can be between
1 and 65535); this setting aids in tuning the inspection engine when observing
returned data. This command sets the limit for total result set size and works for
any type of database. If the value is beyond the defined threshold, the analyzer
will not retrieve data to calculate the records affected value.
Syntax
Show command
show max_result_set_size
store max_result_set_packet_size
Stores the max_result_set_packet_size. The default value is 32 (the size can be
between 1 and 65535); this setting aids in tuning the inspection engine when
observing returned data. This command sets the limit for the packet size in a
response and works for any type of database. If the value is beyond the defined
threshold, the analyzer will not retrieve data to calculate the records affected value.
Syntax
Show command
show max_result_set_packet_size
store max_tds_response_packets
Syntax
Show command
show max_tds_response_packets
Syntax
Show Command
Use the CLI command, store monitor custom_db_usage to set the state to on and
to specify a time to run this job.
Syntax
CLI> store monitor custom_db_usage
USAGE: store monitor custom_db_usage <state> <hour>
where state is on/off.
If state is on, specify the hour to run.
Valid value is number from 0 to 23
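Following the USAGE text above, an illustrative invocation (the hour value is an
example only):
CLI> store monitor custom_db_usage on 2
(turns the job on and runs it at hour 2)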
Use the CLI command, store monitor gdm_statistics to get information about the
Unit Utilization. Default is 1 (run the script every hour).
Syntax
CLI> store monitor gdm_statistics
USAGE: store monitor gdm_statistics <hour>, where hour is value from 0 to 24.
Default value is 1, means to run the script every hour.
Value 0, means not to run the script.
Show Commands
Syntax
Show Command
store pdf-config
Use this command to change the pdf font size and pdf orientation of the PDF
image body content (excluding header/footer).
Syntax
Show Command
There are different static pdf generator config files for English (Used on English
version) and language C/J (Used on Chinese/Japanese). Use this CLI command to
define the fonts in the PDF generator. Default is English. Multi-language is
language C/J.
Syntax
CLI> store pdf-config multilanguage_support
Current setting is Default
1 Default
2 Multi-language
Please select the option (1,2, or q to quit)
Show command
store populate_from_query_maxrecs
Sets the maximum number of records that can be used to populate groups and
aliases from a query.
Use caution when setting a maximum records value via this CLI command. Setting
it too high may result in incomplete populate group from query processes. The
maximum threshold is dynamic and dependent on the system load and memory
utilization. This CLI command is limited to a high value of 200000.
Syntax
Show command
show populate_from_query_maxrecs
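For example, to set the record ceiling well below the 200000 limit noted above
(the value is illustrative):
CLI> store populate_from_query_maxrecs 50000
ok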
Syntax
Show Command
Sets the age (in days) at which non-essential objects will be purged. Use the show
purge objects age command to display a table showing the index, object name,
and age for each object type for which a purge age is maintained. Then use the
appropriate index from that table in the command to set the purge age.
Note: The value of number of days will be set to the default (90 days) when the
unit type changes between managed unit/Manager/standalone unit.
Syntax
Show Command
Example
Assume you want to keep an Event Log for 30 days. First issue the show purge
objects age command to determine the index (do not use the table; your list may
be different). Then enter the store purge object command.
CLI>show purge objects age
Index Name, Age
1. Central Management Persistent Operations, 7
2. S-TAP Event Log, 14
4. Assessment Tests, 7
5. Central Management Temporary Policies, 7
6. S-TAP Change History, 14
7. Kerberos Authentication Information, 1
8. Comment History, 60
9. Comment Local History, 60
10. Call Graph History, 90
11. CAS Host Event History, 7
12. Unused CAS Access Names, 7
13. Unused CAS Access Name Templates, 7
14. Custom Table Operations Log, 7
15. table in custom db without def, 7
16. Custom Table Upload Log, 7
17. Baseline entries referred to user, 30
18. Classification Process Results, 7
19. Sniffer Buffer Usage, 14
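Continuing the example, the S-TAP Event Log appears at index 2 in this
illustrative list, so the age is set with that index and the new number of days;
the exact argument form is release-dependent:
CLI> store purge object 2 30
ok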
store quartz_thread_num
The Java™ Virtual Machine allows the application to run multiple threads; a
thread is a unit of program execution.
Use the store quartz_thread_num CLI command to set the number of threads that
can run at the same time.
Use this command to ease contention when too many threads run at the same
time.
Syntax
Show command
show quartz_thread_num
org.quartz.threadPoll.threadCount= 5
store remotelog
If you enable remote logging, be sure that the receiving host has enabled this
capability (see the note).
Syntax
The standard IBM Guardium severity codes for alerts and violations map as
follows:
INFO / info
LOW / warning
MED / err
HIGH / alert
host - Identifies the host to receive this facility.priority combination.
port - Optional. A port number.
protocol - Mandatory. UDP or TCP.
Note:
v If you want to send encrypted remote log messages to the server, the rsyslog
configuration on the server must accept encrypted messages.
v The encrypted setting on client and server works only in TCP mode.
v When switching from one mode to the other on the same remote server,
modify the configuration file to match the designated mode, and restart the
remote service.
Example
store remotelog add non_encrypted
store remotelog clear
g32.guard.swg.usma.ibm.com> show remotelog
*.* @9.70.148.175:10514
store replay
This feature is used for performance and capacity testing. Use the CLI commands
to set configuration values.
See the Replay Configuration help topic for examples on how to use this feature.
Note: The Replay feature will work only on sniffed data captured with a Log Full
Details policy.
Syntax
Show command
50 (default value)
The command will update the number of minutes for parameter replay keep
active.
Show command
This command will update the number of replay maximum queue size.
store s2c
Sets several configurable parameters for ADMINCONSOLE. These parameters are
used for throttling server-to-client (S2C) traffic.
Note: Use this CLI command only when directed by IBM Guardium Technical
Services.
ANALYZER_S2C_IGNORE = {0,1,2,3}
Syntax
store s2c
The new configuration will be effective once the CLI command, restart
inspection-core, is executed.
Show command
show s2c
Ignore: 0
Max interval: 30
-------------------
Scenario 1
The sniffer starts to receive traffic from S-TAP or network in the middle of large
query. Since all incoming packets are DB server responses, no new session will be
Scenario 2
store sender_encoding
Use this CLI command to encode outgoing messages (email and SNMP traps) in
different encoding schemes; previously, everything was encoded in UTF8.
For example, a Guardium customer wanted to encode all of the outgoing SNMP
messages in SJIS - an alternative Japanese encoding.
Note: If the conversion fails, either because (a) the specified encoding scheme is
invalid, or (b) the characters to be encoded cannot be represented in the requested
encoding scheme, the message will be sent using UTF8, which is the default
encoding scheme.
Syntax
Show command
show sender_encoding
store serial
Enables or disables a console or other terminal connection via the serial port.
Syntax
Note:
The CLI command, store stap approval, does not work within an environment
where there is an IP load balancer.
Within a centrally managed environment, after adding the IPs to approved
S-TAPs, there is a wait time for synchronization that might take up to an hour.
After synchronization is complete, the approved S-TAP status appears green in
the GUI.
Syntax
Show command
GuardAPI command
grdapi store_stap_approval
The new configuration will be effective after running the CLI command, restart
inspection-core.
Stores a certificate from the S-TAP host (usually a database server), on the IBM
Guardium appliance. This command functions exactly like the store certificate
console command, described later.
Syntax
If you have not done so already, copy the server certificate to your clipboard. Paste
the PEM-format certificate to the command line, then press CTRL-D. You will be
informed of the success or failure of the store operation.
Syntax
If the number is higher, the S-TAP verification process becomes slower.
Show command
store storage-system
store storage-system
Syntax
Show Command
show storage-system
Example
Assume you are currently using Centera for system backups, but want to switch to
a TSM system. You must turn off the Centera backup option (unless you want to
leave that as another option), and turn on the TSM backup option. The commands
to do this are highlighted in the example. The show commands are not necessary,
but are for illustration only.
NETWORK :
CENTERA : backing-up
TSM :
ok
ok
ok
NETWORK :
CENTERA :
TSM : backing-up
ok
CLI>
Enables (on) or disables (off) the sending of email alerts to the support email
address, which can be configured using the forward support email command. By
default, the support state is enabled (on), and the default support email address is
support@guardium.com.
Syntax
Show Command
store throttle
This CLI command stores the throttle parameters. After entering this command,
you must issue the CLI command, restart inspection-core for the changes to take
effect.
This command is used to filter out (ignore) large packets. Throttling has two
modes. Per-session thresholds: ignore a session when a long enough burst
(duration configurable) of large packets (size configurable) is identified, and stop
ignoring the session when traffic drops below a certain threshold (also
configurable). Overall: ignore all packets larger than a certain size (configurable)
in all sessions.
Syntax
store throttle [default | size <s> interval <i> trigger <t> release <r>]
Show Command
show throttle
Throttle parameters:
Parameters
default - Enter the keyword default to restore the system defaults (no other
parameters are used). The default throttling parameters are never throttle.
Note: To restore the throttle defaults, use the CLI command, store throttle default.
store timeout
Sets the timeout value of a CLI session and/or fileserver session. The default value
is 600 seconds. A timeout will also close the CLI session.
Syntax
Show command
store transfer-method
Sets the file transfer method used for CSV/CEF export. For export files, use the
CLI command, store transfer-method csv, to set the method of transfer. For
backup/archive, use the CLI command, store transfer-method backup, to set the
method of transfer.
Syntax
Show Command
show transfer-method
Note: Files sent from one IBM Guardium appliance to another (from a collector to
an aggregator, for example) are always sent using SCP.
store uid_chain_polling_interval
Set the interval for UID Chain polling with this CLI command. UID chain is a
mechanism which allows S-TAP (by way of K-Tap) to track the chain of users that
occurred prior to a database connection.
Set the interval to 0 to turn off the UID Chain processing, in order to improve
database performance. If the UID Chain processing is turned off, then calculating
the UID Chain and updating children sessions are skipped.
Note: With any database, the UID chain is not logged for sessions that are very
short.
Syntax
Show command
show uid_chain_polling_interval
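For example, to turn off UID chain processing entirely, as described above:
CLI> store uid_chain_polling_interval 0
(a nonzero value re-enables polling at that interval)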
store upd_session_end
This CLI command adds an option to skip the update for the session_end time.
Syntax
Show command
show upd_session_end
Use this CLI command to set unit type attributes for the Guardium appliance. See
the Unit Type Attributes table for a description of all unit type attributes that can
be displayed by this command.
Syntax
Collected DRDA traffic can be sent to Optim Query Capture Replay with a
microseconds timestamp, since OQCR requires a granularity of 1 microsecond. Use
the CLI command, store unit type sink, to switch from a granularity of 1
millisecond to 1 microsecond.
Show Command
Note: Some attributes listed are set using the store unit type command, and
cleared using the delete unit type command. One attribute (aggregator) is set
only when the IBM Guardium software is installed, and cannot be modified except
by re-installing the IBM Guardium software.
unregister management
The unregister command restores the configuration that was saved when the
appliance was registered for central management. If that happened under a
previous release of the IBM Guardium software, restoring that configuration
without first applying a patch to bring the saved configuration to the current
software release level will disable the appliance, potentially causing the loss of all
data stored there. Accordingly, do not unregister a unit until you have verified that
the pre-registration configuration is at the current software release level. If you are
unsure about how to verify this, contact Technical Support before unregistering the
unit.
Syntax
unregister management
Notes
v This command is intended for emergency use only, when the Central Manager is
not available.
v After unregistering using this command, you should also unregister from the
Central Manager (from the Administration Console), since that is the only way
the count of managed units will be reduced. The count of managed units is
authorized by the product key.
There are no functions that you would perform with this command on a regular
basis. Each main menu entry is described in a separate topic (see Main Menu
Commands).
This output is accessed through the fileserver CLI command. See fileserver for
further information.
We recommend that you “clean up” after each session, so in subsequent sessions
you are not looking at old information. When you pack files to a single
compressed file for exporting (see the following topic), all files in the current
directory are deleted. Alternatively, you can use the Delete recordings command of
the Output Management menu to delete individual files.
The files in the current directory are easy to identify since the names are created
from menu and command names. For example, after you use the File Summary
command from the System Interactive Queries menu, a file named
interactive_filesummary.txt is created in the current directory.
If you look at the current directory while in the process of using a command, you
may see a hidden temporary file with the same name as the one that will contain
the output for that command. The temporary file will be removed when the output
is appended to the command output file.
.../guard/diag/depot Directory
When you pack the diag output files in the current directory to a compressed file
(to send to Guardium Technical Support, for example), it is stored in the depot
directory. The filename is in the format diag_session_<dd_mm_hhmm>.tgz,
where the variable portion of the name indicates when the file was created. For
example, a file created at 12:15 PM on May 20th would be named as follows:
diag_session_20_5_1215.tgz.
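The timestamp fields of such a name can be recovered with plain shell string operations; a minimal sketch using the example name from the text:

```shell
# Parse diag_session_<dd>_<m>_<hhmm>.tgz into its parts
# (name taken from the example above).
name="diag_session_20_5_1215.tgz"
stamp="${name#diag_session_}"   # strip prefix -> 20_5_1215.tgz
stamp="${stamp%.tgz}"           # strip suffix -> 20_5_1215
day="${stamp%%_*}"              # -> 20
rest="${stamp#*_}"              # -> 5_1215
month="${rest%%_*}"             # -> 5
hhmm="${rest#*_}"               # -> 1215
echo "created on day $day of month $month at ${hhmm%??}:${hhmm#??}"
```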
After exporting files (see the Export recorded files topic), you can remove them
from the depot directory using the Delete recordings command of the Output
Management menu.
1 Output Management
The Output Management commands control what is done with the output
produced by the diag command. Each Output Management command is described
separately.
You can navigate the directories using the Up and Down arrow keys and pressing
Enter. For example, selecting ../ and pressing Enter moves the selection up one
level in the directory structure.
You could then select the current directory and press enter, to navigate down to
that folder and delete individual command output files. Note that you can
navigate to other directories, but you cannot delete files except from the current
and depot directories.
When you have selected the file you want to delete, press Enter.
Use this command to send a file from the depot directory to another site. To export
a file:
1. Select Export recorded files from the Output Management menu. The depot
directory displays.
2. Select the file to be sent or use the ../ and ./ entries to navigate up or down
in the directory structure. (However, keep in mind that you can only export
files from the depot directory.)
3. With the file to be transmitted selected, press Enter.
4. You are prompted to select FTP or exit. Select FTP and press Enter.
5. You are prompted to supply a host name. Enter the host name of the receiving
system (or its IP address), and press Enter.
6. You are prompted for a user name. Enter a user account name for the
receiving system, and press Enter.
7. You are prompted for a password. Enter the password for the user on the
receiving system.
8. You are prompted to identify a directory to receive the sent file on the
receiving system. Enter the path relative to the ftp root of the directory to
contain the file on the receiving system and press Enter.
9. You are prompted to confirm the details of the transfer (the file to be sent and
its destination). Press Enter to perform the transfer, or select Cancel and press
Enter to start over.
10. You are informed of the success (or failure) of the operation.
1.5 Exit
Use the Exit command to return to the main menu.
The following subtopics provide an outline of the major components of the System
Static Reports output. The fragments of output shown are intended to illustrate the
type and level of information contained in the report, rather than provide a
detailed description of the actual contents (that is beyond the scope of this
document).
The System Static Reports output describes the build version, the patches applied,
the current system up time, and name server information:
Build version: 34e1eb12eb68ba76cb49028251c9a0d6 /opt/IBM/guardium/etc/cvstag
Patches:
2009/02/22 16:16:50: START Installation of 'Update 5.0'
2009/02/22 16:18:04: Installation Done - Successfully Installed
Current uptime:
09:03:43 up 6 days, 17:34, 1 user, load average: 0.44, 0.50, 0.41
System nameservers:
192.168.3.20
DB nameservers:
192.168.3.20
Gateway: 192.168.3.1 (system) 192.168.3.1 (def)
This is followed by information about the mail and SNMP servers configured:
SMTP server: 192.168.1.7 on port 25 : REACHABLE
SMTP user: undef
SMTP password: undef
The final section of the system configuration section describes the network
configuration for the unit: IP address, host and domain names, etc:
eth0: 192.168.3.101 (system) 192.168.3.101 (def)
hostname: (system) g1 (def)
domain: (system) guardium.com (def)
mac address: 00:04:23:A7:77:F2 (MAC1) 00:04:23:A7:77:F2 (MAC2)
unit type: 548 Standalone STAP
The next major section of the System Static Reports output contains information
about the internal database status and threads (only the first few threads are
shown):
uptime 77097 seconds.
27 threads.
78545028 queries.
+------+------------+-----------+---------+---------+------+-------+
| Id   | User       | Host      | db      | Command | Time | State |
+------+------------+-----------+---------+---------+------+-------+
| 1137 | enchantedg | localhost | TURBINE | Sleep   | 26   |       |
The next several sections of the System Static Reports output contain information
about the Web servlet container environment (Tomcat):
============================================================================
Currently defined Tomcat port is 8443.
The TOMCAT daemon is running and listening on port(s): 8005 8443.
Currently OPEN ports
java run by tomcat on port *:8443
The next major section of the System Static Reports output contains information
about the inspection engine:
============================================================================
This is the SNIF (pid: 13036) command line: 13036 /opt/IBM/guardium/bin/snif.
This is the SNIF status:
Name: snif
State: R (running)
============================================================================
IP Tables Information
S-TAP Information
The next major section contains S-TAP information:
============================================================================
STAP:
----
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:9500
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9500
2696 148K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:16016
2835 175K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:16016
IP Traffic Information
The next major section contains IP traffic information:
IP traffic statistics.
OUTPUT OF ETH0
Fri May 20 11:57:04 2012; ******** Detailed interface statistics started ********
*** Detailed statistics for interface eth0, generated Fri May 20 11:58:04 2009
OUTPUT OF ETH1
Fri May 20 11:57:04 2012; ******** Detailed interface statistics started ********
*** Detailed statistics for interface eth1, generated Fri May 20 11:58:04 2009
The next section contains the last messages output by the sniffer:
Snif STDERR:
Snif STDOUT:
Fri_20-May-2009_04:04:35 : Guardium Engine Monitor starting
Fri_20-May-2009_04:14:37 : Guardium Engine Monitor starting
Fri_20-May-2009_04:24:38 : Guardium Engine Monitor starting
Audit Report
Authentication Report
Login Report
3 Interactive Queries
Select System Interactive Queries from the main menu to open the Interactive
Queries menu. (Use the Down arrow key to scroll past the tenth item to see all
items on this menu.)
Use the Files Changed command to display a list of files changed either before or
after a specified number of days.
1. Select Files Changed from the Interactive Queries menu. You are prompted to
enter a number of days. Type a number and press Enter.
2. You are asked whether you are interested in the files changed before or after
that number of days. Select 1 or 2 and press Enter.
3. The full directory path for each changed file is displayed. If not all of the
data fits in the display area, use the Up and Down arrow keys to scroll through
the list.
Use the Summarize Folder command to display the output of the du (Disk Usage)
command:
1. Select Summarize Folder from the Interactive Queries menu. There are no
prompts. You are presented with a display of disk use for various directories.
2. Use the Up and Down arrow keys to scroll through the directories.
3. Press Enter or click Exit when you are done.
Use this command to send a test SNMP trap to the configured SNMP server.
1. Select Test SNMP from the Interactive Queries menu.
2. You are informed of the activity and the results. Note that on the Alerter
Configuration panel, the Test Connection link in the SNMP pane only tests that
an SNMP port is configured, not that a trap can actually be delivered via that
server. You can use this command to test trap delivery without having to
configure (and trigger) a statistical or real-time alert, or an audit process
notification.
Use this command to display the actual select statement used for a report query.
This might be useful if a user-written report is producing unexpected output.
1. Select Report Query Data from the Interactive Queries menu.
2. You are prompted to make a selection from a list of report titles. Use the Up
and Down arrow keys to select an entry and press the Enter key. Each entry in
this list is a Report entity. All pre-defined reports are listed first. These are
numbered in the range 100-225 (for version 3.6.1 – the numbers will most likely
grow incrementally with each release, as more pre-defined reports are created).
User written reports are listed following the pre-defined reports, beginning
with number 20001 (for version 3.6.1).
The selected report select statement will be displayed.
Use this command to display a count of observed SQL calls during a 100 second
interval.
1. Select GDM Queries from the Interactive Queries menu.
2. A message displays requesting your patience. Select yes to continue. The
CMD_CT column on the display lists the number of observed SQL calls from
the specified clients to the specified servers.
3. Press Enter when you are done viewing the report.
Use this command to run the slon utility, which tracks packets. Typically, you
would only run this command as directed by Technical Support. For this
command, output is not written to the screen. Output is written to one of two
command files in the current directory, for each execution of the command:
apks.txt.<day_dd-mmm-yyyy_hh.mm.ss.ttt> OR requests.txt.<day_dd-mmm-
yyyy_hh.mm.ss.ttt>
The variable portions of the file names are date and time stamps. For example,
apks.txt.Fri_20-May-2011_08.52.00.789.
1. Select Slon Utility from the Interactive Queries menu.
2. Select the action to be performed and click OK. The choices are:
(a) to dump Analyzer rules info
(f) to filter Analyzer packets based on IP and/or mask
(p) to dump packets to apks.txt
(l) to dump logger requests to requests.txt
(m) to dump STAP packets (Select how long to run. Wait for completion and
then check the msg-dump file under /var/log/guard/diag/current/tap/ )
(r) to record IPQ traffic
(s) to dump State machine info
Example
SQLGuard Diagnostics
SYSTEM_NETMASK1: 255.255.255.0
SYSTEM_DOMAIN:
SYSTEM_DEFAULT_ROUTE:
SYSTEM_DNS1:
SYSTEM_DNS2:
SYSTEM_DNS3:
TOMCAT_IP:
MANAGER_IP:
HOST_MAC_ADDRESS:
SECOND_DEVICE:
This selection is different from the other diag selections; see the sections
called Generate TCP dump and Generate TCP dump and slon.
For Generate TCP dump in rotation, enter a filter IP address (enter blank for all
IPs) and a filter port number. For the question How long to run?, if the TCP
dump in rotation is already running, choose the option “Rotation OFF” or
“Rotation ON”. If Rotation is selected, also enter a file size.
Use this command only under the direction of Technical Support. This command
provides access to the Management Menu of the RAID controller utility program,
which can be used to display the status of the RAID drives. If your system does
not have a RAID controller, an error message displays if you select this command.
You must be extremely careful when using the RAID controller utility program,
since several of the functions provided will erase all information on the disk.
Use this command to turn debugging on or off. You are prompted to enable or
disable logging, or to reset the system defaults.
Use this option to change the timeout limit for long queries.
Use this command to restore a backed up version of the internal database. You will
be prompted to confirm the operation.
Choose Classifier to select debug level options: ERROR, WARN, INFO, DEBUG,
ALL.
Choose DLS (data level security), Workflow, or Other (text input) to select debug
level options: ERROR, WARN, INFO, DEBUG, ALL.
If Other is chosen, enter valid components as text input separated by ','
(dls, workflow, audit, customtable, gui, other, job).
Brings all imported tables to the schema of the latest patch level (runs in the
background and may take several hours to complete).
This option should be used only by Technical Support and only in those cases
where static tables grow too large and need to be cleaned. This utility cleans all
the old construct records that don’t have any instances associated with them. A
progress message displays while Clean Static Orphans runs (for use on a collector
or aggregator).
5 Exit to CLI
Select Exit to CLI on the Main Menu. Press Enter to close the diag command and
return to the command line interface.
<daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
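As a sketch, a name of this form could be assembled as follows (all field values below are illustrative, since the exact datestamp formats are not specified here):

```shell
# Assemble <daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
# from illustrative values.
dayseq="731946"
host="g1.guardium.com"
run_stamp="20090522.040017"
data_date="20090521"
name="${dayseq}-${host}-w${run_stamp}-d${data_date}.dbdump.enc"
echo "$name"
```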
Syntax
backup config
restore config
backup system
This topic applies to backup and restore operations for the Guardium internal
database. You can back up or restore either configuration information only, or the
entire system (data plus configuration information, except for the shared secret key
files, which are backed up and restored separately, see the aggregator backup keys
file and aggregator restore keys file commands). These commands stop all
inspection engines and web services and restart them after the operation
completes.
Before restoring a file, be sure that the appliance has the system shared secret of
the system that created that file (otherwise, it will not be able to decrypt the
information). See About the System Shared Secret in the Guardium Administrator
Guide.
Note: System restore must be done to the same patch level of the system backup.
For example, if a customer backed up the appliance when it was on Version 7.0,
Patch 7 and then wishes to restore this backup into a newly-built appliance, then
there is a need to first install Version 7.0, Patches 1 to 7 on the appliance and only
then to restore the file.
For all backup, import and restore commands, you will receive a series of prompts
to supply some combination of the following items, depending on which storage
systems are configured, and the type of restore operation. Respond to each prompt
as appropriate for your operation. The following table describes the information
for which you may be prompted.
Note:
When configuring backups, a port number value of zero ('0') indicates that the
default port for that protocol is being used and does not need to be changed.
Table 4. backup system

SCP, FTP, TSM, Centera, Snapshot
    Select the method to use to transfer the file. TSM and Centera are displayed
    only if those storage methods have been enabled (see the store storage-method
    command).
restore from archive or restore from backup
    Select restore from archive to restore archived data, or select restore from
    backup to restore configuration information.
normal or upgrade
    If restoring from the same software version of Guardium, select normal. If
    restoring configuration information following a software upgrade of the
    Guardium appliance, select upgrade.
remote directory
    The directory for the backup file. For FTP, the directory is relative to the
    FTP root directory for the FTP user account used. For SSH, the directory path
    is a full directory path. For Windows SSH servers, use Unix-style path names
    with forward slashes, rather than Windows-style backslashes.
username
    The user account name to use for the operation (for backup operations, this
    user must have write/execute permission for the specified directory).
file name
    The file name for the archive or backup file. See Archived Data File Names.
Centera server
    Enter the Centera server name. If using PEA files, use the following format:
    <Host name/IP>?<full PEA file name>, for example:
    128.221.200.56?/var/centera/us_profile_rwqe.pea.txt
Centera clipID
    For a Centera restore operation, the Content Address returned from the backup
    operation. For example:
    6M4B15U4JM4LBeDGKCPF9VQO3UA
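The Centera server string format above splits on the required question mark; a minimal shell sketch using the example value from the table:

```shell
# Split <Host name/IP>?<full PEA file name> at the required '?'.
centera="128.221.200.56?/var/centera/us_profile_rwqe.pea.txt"
chost="${centera%%\?*}"   # part before the ? (host/IP)
pea="${centera#*\?}"      # part after the ? (PEA file path)
echo "host=$chost pea=$pea"
```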
After you have supplied all of the information required for the backup or restore
operation, a series of messages will be displayed informing you of the results of
the operation. For example, for a restore system operation the messages should
look something like this (depending on the type of restore and storage method
used):
gpg: Signature made Thu Feb 22 11:38:01 2009 EST using DSA key ID 2348FF9E gpg: Good signature fro
The backup process checks for room in /var before running, and fails if there is
not enough room; it also warns the user when there is insufficient space for the
backup.
The archive process checks the size of the static tables and makes sure there is
room in /var to create the archive.
An error is logged in the logfile and the GUI if the /var backup space usage is
over 50%.
Example:
ERROR: /var backup space is at 60% used. Insufficient disk space for backup. CLI> backup system
backup profile
Use this command to maintain the backup profile data (patch mechanism).
The backup file will be copied to the destination according to the backup profile.
If the parameter indicating whether to keep the backup file is set to “1” AND
there is enough disk space, the backup file is kept on the system; otherwise it
is removed.
All four fields must be filled in: backup destination host, backup destination
directory, backup destination user, and backup destination password.
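The keep-or-remove rule above can be sketched as a small shell function (keep_backup and its arguments are illustrative names, not Guardium internals):

```shell
# Decide whether the backup file is kept, per the rule above:
# keep only if the keep-flag is "1" AND there is enough disk space.
keep_backup() {
  flag="$1"       # the "keep the backup file" parameter
  space_ok="$2"   # 1 if there is enough disk space
  if [ "$flag" = "1" ] && [ "$space_ok" = "1" ]; then
    echo keep
  else
    echo remove
  fi
}
keep_backup 1 1   # -> keep
keep_backup 1 0   # -> remove
```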
Syntax
Example
patch backup flag is 1 patch backup automatic recovery flag is 1 patch backup dest host is
Syntax
Example
Do you want to set up for automatic recovery? (y/n) Enter the patch backup destination host:
Note: Only users with the admin role may run this command.
Syntax
Example
If you enter the audit-data command for the date 2005-09-16, a set of messages similar to the followi
The data from each of the named internal database tables is written to a text file, in
CSV format. The name of the archive file ends with exp.tgz and the remainder of
the name is formed as described in About Archived Data File Names.
You can use the export file command to transfer this file to another system.
delete audit-data
Use this command only under the direction of Guardium Support. This command
is used to remove compressed audit data files. You will be prompted to enter an
index number to identify the file to be removed. See Archived Data File Names, for
information about how archived data file names are formed.
Syntax
delete audit-data
show audit-data
Use this command to display any files that were created by executing the CLI
command, export audit-data. For more information about audit data files, see
export audit-data.
Syntax
export file
This command exports a single file named filename from the /var/dump,
/var/log, or /var/importdir directory. Use this command only under the direction
of Guardium Support. To export Guardium data to an aggregator or to archive
data, use the appropriate menu commands on the Administration Console panel.
Syntax
fileserver
Use this command to start an HTTP-based (not HTTPS) file server
running on the Guardium appliance. This facility is intended to ease the task of
uploading patches to the unit or downloading debugging information from the
unit. Each time this facility starts, it deletes any files in the directory to which it
uploads patches.
Note: Any operation that generates a file that the fileserver will access should
finish before the fileserver is started (so that the file is available for the fileserver).
Syntax
ip address is an optional parameter that allows access to the fileserver from the
indicated IP address. By default (without the parameter), access is restricted to the
IP address of the SSH client that started the fileserver.
duration is an optional parameter that specifies the number of seconds that the
fileserver is active. After the specified number of seconds, the fileserver shuts
down automatically. The duration can be any number of seconds from 60 to 3600.
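Based on the two optional parameters described above, a fileserver session might be started like this (the IP address and the 3600-second duration are illustrative):

```
CLI> fileserver 192.168.3.50 3600
```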
In a security setup where browser sessions are redirected through a proxy
server, the IP address of the fileserver client will not be the same as the SSH
client that started the fileserver. Instead, the fileserver client will have the
IP address of the proxy server, and that address must be passed in the optional
ip address parameter.
To find the proxy IP address, check your browser settings or the client IP addresses
shown in the Logins to Guardium report in the Guardium Monitor interface.
Example
When you are done, return to the CLI session and press Enter to terminate the
session.
import file
See backup config and restore config.
Syntax
import file
Uploads a TSM client configuration file to the Guardium appliance. You must do
this before performing any archiving or backup operations using TSM. You will
always need to upload a dsm.sys file, and if that file includes multiple servername
sections, you will also need to upload a dsm.opt file. For information about how to
create these files, check with your company’s TSM administrator.
You will be prompted for a password for the user account on the specified host.
Syntax
Parameters
After uploading the tsm config file, if tsm config has a passwordaccess generate
prompt, passwordaccess is set to be generated.
Would you like to run a dsmc command now to ensure password is set locally (y/n)? If the answer i
Syntax
When restoring a configuration, you must restore a backup that is of the same
version and patch level as the original appliance where the backup was created.
Syntax
backup config
restore config
restore db-from-prev-version
This command takes a backup from the immediate past system (backup data must
be provided, configuration backup is optional) and performs a restore on a newer
system. It includes upgrading the data, portlets, etc.
Perform a full system backup prior to upgrading your Guardium system. If for
some reason the upgrade fails and leaves the machine in an unusable state,
instead of trying to fix and re-run the upgrade, rebuild the machine as the
latest system, setting up this latest system with only the basic network
information (IP, resolver, route, system hostname and domain).
The result will be the latest system with the data and customization (if
configuration file is provided) from the previous system.
First, try a regular upgrade from the previous system to the latest system. If this is
not successful, then use the backup as an alternative way to upgrade from the
previous system to the latest system.
Note: Older data being restored to an aggregator (not to investigation center), and
outside the merge period, will not be visible until the merge period is changed and
the merge process rerun.
To run this command, back up the current server for both data and configuration.
Once the backup is complete, install the latest release onto the same server. Next,
import both the data and configuration file from CLI via the import file command.
Then after the two backup files are imported, run, again from CLI, the command
restore db-from-prev-version. This restores the backup files (data and
configuration) from the older version to the newly installed server.
Note: If you are using Guardium in a non-English language, the restore CLI
command sets some strings, including report headers, to English. To view these
strings in the non-English language, run the store language CLI command after
you run the restore CLI command.
Syntax
restore db-from-prev-version
This procedure will restore and upgrade a previous backup on a newly-installed latest system. If t
Answering Y (yes) to the following questions during the execution of the CLI
command restore db-from-prev-version results in all non-canned/customized
reports and panes being compressed into one pane with the name v.x.0 Custom
Reports.
Answering N (no) to the same questions results in all panes being restored to
what they were in the previous version.
Update portal layout (panes and menus structure) to the new v8 default (current instances of custom r
restore keystore
Use this command to restore certificates and private keys used by the Web
servlet container environment (Tomcat).
Syntax
restore keystore
restore pre-patch-backup
Use this command to recover the pre-patch-backup when the appliance database is
up or down.
Syntax
restore pre-patchbackup Please enter the information to retrieve the file: Is the file in the local s
restore system
This topic applies to backup and restore operations for the Guardium internal
database. You can back up or restore either configuration information only, or the
entire system (data plus configuration information, except for the shared secret key
files, which are backed up and restored separately, see the aggregator backup keys
file and aggregator restore keys file commands). These commands stop all
inspection engines and web services and restart them after the operation
completes.
Before restoring a file, be sure that the appliance has the system shared secret of
the system that created that file (otherwise, it will not be able to decrypt the
information). See About the System Shared Secret in the Guardium Administrator
Guide.
Note: System restore must be done to the same patch level of the system backup.
Note:
Backup system will copy the current license, metering and number of datasources,
and then backup the data. Restore system will restore the data and then restore the
license, metering and number of datasources. This sequence applies to the regular
restore system. Restore from a previous system will require re-configuring license,
metering and number of datasources.
Table 5. restore system

SCP, FTP, TSM, Centera, Snapshot
    Select the method to use to transfer the file. TSM and Centera are displayed
    only if those storage methods have been enabled (see the store storage-method
    command).
restore from archive or restore from backup
    Select restore from archive to restore archived data, or select restore from
    backup to restore configuration information.
normal or upgrade
    If restoring from the same software version of Guardium, select normal. If
    restoring configuration information following a software upgrade of the
    Guardium appliance, select upgrade.
remote directory
    The directory for the backup file. For FTP, the directory is relative to the
    FTP root directory for the FTP user account used. For SSH, the directory path
    is a full directory path. For Windows SSH servers, use Unix-style path names
    with forward slashes, rather than Windows-style backslashes.
username
    The user account name to use for the operation (for backup operations, this
    user must have write/execute permission for the specified directory).
file name
    The file name for the archive or backup file. See Archived Data File Names.
Centera server
    Enter the Centera server name. If using PEA files, use the following format:
    <Host name/IP>?<full PEA file name>, for example:
    128.221.200.56?/var/centera/us_profile_rwqe.pea.txt
    Note the ? between the server IP and the PEA file name. This IP address and
    the .PEA file come from EMC Centera. The question mark is required when
    configuring the path. The .../var/centera/... path name is important, as the
    backup may fail if the path name is not followed. The .PEA file provides
    permissions, username, and password authentication per Centera backup
    request.
Centera clipID
    For a Centera restore operation, the Content Address returned from the backup
    operation. For example:
    6M4B15U4JM4LBeDGKCPF9VQO3UA
After you have supplied all of the information required for the backup or restore
operation, a series of messages will be displayed informing you of the results of
the operation. For example, for a restore system operation the messages should
look something like this (depending on the type of restore and storage method
used):
gpg: Signature made Thu Feb 22 11:38:01 2009 EST using DSA key ID 2348FF9E gpg: Good signature from "
Install a secondary disk for backup on R610 or R710 appliances. Place it in slot
number 2 and proceed with set up snapshotdisk to configure the partition, format
the drive, and mount it. The two CLI choices are set up help and set up
snapshotdisk.
Syntax
store language
Use this CLI command to change from the baseline English and convert the
database to the desired language. Installation of Guardium is always in English. A
Guardium system can only be changed to Japanese or Chinese (Traditional or
Simplified) after an installation.
For example, the psmls (the panes and portlets you have created) will be deleted,
since they need to be recreated in the new language.
Syntax
Show command
show language
Use this CLI command to install the VMware tools on a Guardium system that runs on the ESX infrastructure.
Syntax
Step 1: Open the VM client/console and select the VM instance that contains the
IBM Guardium appliance. Right-click the instance, select (from the popup menu)
Guest => Install/upgrade VMware tools. This enables the instance to access the
VMware tools via a mount point.
Step 2: Run the CLI command setup vmware_tools install (from within the VM
client/console) to install the VM tools.
An inspection engine monitors the traffic between a set of one or more servers and
a set of one or more clients using a specific database protocol (Oracle or Sybase,
for example). The inspection engine extracts SQL from network packets; compiles
add inspection-engines
Adds an inspection engine configuration to the end of the inspection engine list.
The parameters are described. You can re-order your list of inspection engines after
adding a new one by using the reorder inspection-engines command. Adding an
inspection engine does not start it running; to start it running, use the start
inspection-engines command.
Syntax
Parameters
name - The new inspection engine name; must be unique on the unit.
protocol - The protocol monitored, which must be one of the following: Cassandra,
CouchDB, DB2, DB2 Exit, exclude IE, FTP, GreenPlumDB, Hadoop, HTTP, ISERIES,
Informix, KERBEROS, MongoDB, MS SQL, Mysql, Named Pipes, Netezza, Oracle,
PostgreSQL, SAP Hana, Sybase, Teradata, or Windows File Share.
port - The port or range of ports over which traffic between the specified clients
and database servers will be monitored. To specify a range, separate the two
numbers with a hyphen.
delete inspection-engines
Removes the single inspection engine identified by its name. The name can include
only letters, numbers and blanks. If the inspection engine name contains any
special characters, use the administrator portal GUI to remove it.
reorder inspection-engines
Specifies a new order for the inspection engines, using index values from the list
produced by the list inspection-engines command.
Syntax
Example
If the displayed indices are 1, 2, 3, and 4, the following command will reverse the
order of the engines:
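A sketch of that reversing command, assuming a comma-separated list of index values as the argument:

```text
reorder inspection-engines 4,3,2,1
```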
restart inspection-core
Restarts the inspection-engine core, but not the inspection engines. The collection
of database traffic stops when this command is issued.
Syntax
restart inspection-core
Note: To restart the collection of traffic for one or more specific inspection engines,
follow this command with one or more start inspection engine commands.
Alternatively, to restart the collection of traffic for all inspection engines, use the
restart inspection-engines command.
restart inspection-engines
Restarts the database inspection engine core and all inspection engines. The
collection of database traffic stops temporarily while this occurs and restarts only
when database connections re-initiate.
Syntax
restart inspection-engines
show inspection-engines
Displays inspection engine configuration information, as follows:
Syntax
start inspection-core
Syntax
start inspection-core
start inspection-engines
Starts one or more inspection engines identified using index values from the list
produced by the list inspection-engines command.
Syntax
start inspection-engines id
Usage: start inspection-engines id <n>, where n is a numeric sniffer id.
Syntax
stop inspection-engines id
Usage: stop inspection-engines id <n>, where n is a numeric sniffer id.
stop inspection-core
Syntax
stop inspection-core
stop inspection-engines id
Stops one or more inspection engines identified using index values from the list
produced by the list inspection-engines command.
Syntax
Sets the complete set of port numbers to be ignored by all inspection engines. The
list you specify completely replaces the existing list. Each number is separated
from the next by a comma, and no blanks or other white-space characters are
allowed in the list. Use a hyphen to specify an inclusive range of numbers.
Syntax
Example
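As an illustrative sketch (the port numbers are hypothetical, and the command token store ignored port list is assumed from this section's description), a replacement list combining single ports and an inclusive range might be:

```text
store ignored port list 8004,9090-9099
```

Note that this list completely replaces the previous one, so include every port you still want ignored.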
Show Command
restart network
Restarts just the network configuration. For example, change the IP address, then
run this CLI command.
Syntax
restart network
This command shows settings for the network interface used to connect the
Guardium appliance to the desktop LAN. The IP address, mask, state (enabled or
disabled) and high availability status will be displayed. If IP high-availability is
enabled, the system will display two interfaces (ETH0 and ETH3). Otherwise, only
ETH0 will be displayed.
Syntax
Example
ok
CLI>
Syntax
Show Command
Use this command only when auto-negotiation is not available on the switch to
which the Guardium port is connected. This command configures duplex mode for
the port named ethn. Use the show network interface inventory command to
display all port names.
Show Command
The two ports used (ETH0 and a second interface) must be connected to the same
network. There is a slight delay, caused by the switch re-learning the port
configuration. The default setting is off.
The port used for the primary IP address is always ETH0. When the
high-availability option is enabled, the Guardium system automatically fails over,
as needed, to the specified second interface, in effect transferring the primary IP
address to the second interface.
Note: IP Teaming and Secondary Interface cannot be used at the same time.
Syntax:
store network interface high-availability [on <NIC> | off ]
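Following the syntax above, a sketch of enabling failover to a second interface and turning it back off; the NIC name eth3 is hypothetical and must match an interface shown by show network interface inventory:

```text
store network interface high-availability on eth3
store network interface high-availability off
```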
Resets the network interface MAC addresses stored in the Guardium internal
tables. This command should only be used after replacing or moving a network
card.
Note: The store network interface inventory command will detect on-board NIC
cards within the Guardium appliance and assign these cards as eth0 and eth1. This
command should only be run if specifically instructed to by Guardium Support as
it can rearrange the NIC cards.
Syntax
CLI> store network interface inventory
WARNING: Running this function will reorder your NICS and may make the machine unreachable.
WARNING: It is suggested to run this from the console or equivalent.
Are you SURE you want to continue? (y/n)
Use the show command to display the port names and MAC addresses of all
installed network interfaces.
Syntax
Example
eth0| 00:50:56:3b:c3:73|
eth1| 00:50:56:8a:0d:fa|
eth2| 00:50:56:8a:0d:fb|
eth3| 00:50:56:8a:00:c1|
Note: The “Member of” column shows which NICs are in the bond pair, if a bond
exists.
Syntax
Show Command
Sets the primary IPv6 address for the Guardium appliance. When changing the
network interface IP address, you may also need to change its subnet mask. See
store network interface mask. See store network interface secondary to create and
manage a secondary IP address. Bonding/failover is managed from the CLI
command, store network interface high-availability.
Syntax
Show Command
Syntax
Use this CLI command to set the MTU (Maximum Transfer Unit).
CLI> store network interface mtu
Usage: store network interface mtu <interface> <mtu>
where <interface> is the interface name,
that is one of ( eth0 )
and <mtu> is a number between 1000 and 9000.
Show command
eth0 1500
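Putting the usage together, a sketch of raising the MTU on eth0 and verifying it; the value 9000 is hypothetical, and the show command name is assumed from the pattern of the other network interface commands:

```text
store network interface mtu eth0 9000
show network interface mtu
```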
Use this command to locate a physical connector on the back of the appliance.
After using the show network interface inventory command to display all port names,
use this command to blink the light on the physical port specified by n (the digit
following eth in the command - eth0, eth1, eth2, eth3, etc.), 20 times.
Syntax
Example
Syntax
Note: IP Teaming and Secondary Interface cannot be used at the same time.
Syntax:
store network interface secondary [on <NIC> <ip> <mask> <gateway> | off ]
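Following the syntax above, a sketch of defining a secondary IP address; the NIC name, address, mask, and gateway values here are hypothetical:

```text
store network interface secondary on eth2 192.0.2.15 255.255.255.0 192.0.2.1
store network interface secondary off
```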
Show command
Use this command only when auto-negotiation is not available on the switch to
which the Guardium port is connected. This command configures the speed setting
for the port named ethn. Use the show network interface inventory command to
display all port names.
Syntax
Show Command
Syntax
Example
ok
Displays a list of MAC addresses (like the show network interface inventory
command).
Syntax
Example
eth0| 00:50:56:3b:c3:73|
eth1| 00:50:56:8a:0d:fa|
eth2| 00:50:56:8a:0d:fb|
eth3| 00:50:56:8a:00:c1|
Note: The “Member of” column shows which NICs are in the bond pair, if a bond
exists.
ok
Sets the interface definition for the network interface card that connects to the
server that is to be proxied. Set on when in transparent proxy mode, off when in
manual proxy mode.
Syntax
Show Command
Show Command
Sets the IP address for the default router to the specified value.
Syntax
Show Commands
Permit the user to have only one IP address per appliance (through eth0) and
direct traffic through different routers using static routing tables. Add line to static
routing table.
Syntax
Show Command
List the current static routes, with IDs - Device, Index, Address, Netmask, Gateway
Delete command
Syntax
Show Command
Syntax
Show Command
These commands are to assist Technical Support in analyzing the status of the
machine, troubleshooting common issues, and correcting some common problems.
There are no functions that you would perform with these commands on a regular
basis.
support clean audit_task
A way to manually purge audit results; this command should be used only
when absolutely necessary, to deal with audit tasks that produce a high
number of records and take up too much disk space.
It is strongly advised to consult with Technical Support before running this
command.
A Warning message is presented and a confirmation step is needed when
running this command.
This command will list the audit processes and tasks information.
It will present the rows, ordered from the largest result set to the
smallest. The number of report results shown is greater than or equal to the input
value.
Next, after the report is presented, the user can select a line number to
purge the results of the audit process corresponding to that line number.
Selection of this line number will delete the audit data for the selected
process name.
Syntax
support clean audit_tasks <rows>
Input parameters
rows - an integer, number of rows to show. Default 10.
Note: On a system with a great many audit tasks, the completion of this
command can take some time.
support clean log_files
This CLI command will delete the specified file after user confirms to
delete. If it can not find the file, it will list files larger than 10MB in
/var/log and the user delete a large file from the list. A warning message
is presented and a confirmation step is included.
Syntax
Use this command to configure automatic powering down options when a UPS is
attached. Note that the UPS must be attached to a USB connector (serial
connections for a UPS are not supported).
Sets the minimum charge percent (0-100) before powering down, or the number of
seconds to run on battery power before powering down. The defaults are 25 and
zero, respectively.
There are also commands to start and stop the apc process. The apc process is
disabled by default.
Syntax
Show Command
Syntax
store system banner clear - use this CLI command to remove an existing banner
message.
store system banner message - use this CLI command to create a banner message.
Enter the banner message and then press CTRL-D.
Show command
show system banner - use this CLI command to view an existing banner message.
Sets the system clock's date and time to the specified value, where YYYY is the
year, mm is the month, dd is the day, hh is the hour (in 24-hour format), mm is
the minutes, and ss is the seconds. The seconds portion is required, but will
always be set to 00.
Syntax
Show Command
Example
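For instance, a sketch of setting the clock to a hypothetical date and time; the seconds portion is required but is always stored as 00:

```text
store system clock datetime 2014-06-01 14:30:00
```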
Lists the allowable time zone values (list option), or sets the time zone for this
system to the specified timezone. Use the list option first to display all time zones,
and then enter the appropriate timezone from the list.
IBM Guardium also logs the local timezone in the standard audit trail, to address
cases where data is used in (or aggregated with) data collected in other time
zones.
Note: The timezone setting is not updated automatically when Daylight Saving
Time occurs. To update the machine, the user will need to reset the timezone:
set a new timezone, different from the current one, and then reset it to the correct
timezone. Just resetting the timezone to the same one will not work, and gives the
message, No change for the timezone.
Syntax
Show Command
Example
Use the command first with the list option to display all time zones. Then enter
the command a second time with the appropriate zone.
Timezone: Description:
--------- -----------
Africa/Abidjan:
Africa/Accra:
Africa/Addis_Ababa:
...
...output deleted
...
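After finding the appropriate zone in the list, a sketch of setting it; the zone shown here is only an example:

```text
store system clock timezone America/New_York
```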
Sets the current status of connection tracking subsystem of the Linux kernel. Status
can be ON|OFF.
Syntax
Show command
Use this CLI command to set the appropriate CPU scaling policy for your needs:
- conservative = less power usage, conservative scaling
- balanced = medium power usage, fast scale up
- performance = runs the CPU(s) at maximum clock speed
Show command
Use this CLI command to set the maximum size of the custom database table (in
MB). The default value is 4000 MB.
Syntax
CLI> store system custom_db_max_size
USAGE: store system custom_db_max_size <N>
where N is a number larger than 4000.
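For example, a sketch of raising the limit to a hypothetical 8000 MB:

```text
store system custom_db_max_size 8000
```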
Show command
Syntax
Show Command
Syntax
Show Command
The CLI command, store system issue message, will receive input from the console
until Ctrl-D and write it to /etc/motd after removing from the input any $, \,
\ followed by a single letter, and ` characters. This is a way to enter messages that
make this system compliant with the security policies of customers.
The version comes from /etc/guardium-release. For example, SG70 -> 7.0, SG80 ->
8.0. If SG is not found in /etc/guardium-release, the default version is an
empty string.
Syntax
Show command
Use this CLI command to run ntpq -p and ntptime and send the output directly to
the screen. The Guardium system queries ntpd from localhost via udp.
Syntax
Example
CLI> show system ntp diagnostics
Output from ntpq -p :
localhost.localdomain:
-------------------------------------------------------------------
Output from ntptime :
(Note that if you have just started the ntp server, it may report an 'ERROR' until it has synchronized.)
-------------------------------------------------------------------
ntp_gettime() returns code 5 (ERROR)
time d3443c21.47a46000 Thu, Apr 26 2012 17:26:57.279, (.279852),
maximum error 16384000 us, estimated error 16384000 us
ntp_adjtime() returns code 5 (ERROR)
modes 0x0 (),
offset 0.000 us, frequency 0.000 ppm, interval 1 s,
maximum error 16384000 us, estimated error 16384000 us,
status 0x40 (UNSYNC),
time constant 2, precision 1.000 us, tolerance 512 ppm,
Sets the host name of up to three NTP (Network Time Protocol) servers. Note that
to enable the use of an NTP server, you must use the store system ntp state on
command. To define a single NTP server, enter its host name or IP address. To
define multiple NTP servers, enter the command with no arguments, and you will
be prompted to supply the NTP server host names.
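A sketch of defining a single NTP server and then enabling NTP; the host name is hypothetical, and the exact store command token (shown here following the pattern of the show/delete commands in this section) should be verified against your appliance's CLI help:

```text
store system ntp server ntp1.example.com
store system ntp state on
```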
Show Command
Delete command
delete ntp-server
Syntax
Show Command
Installs a single patch or multiple patches as a background process. The ftp and
scp options copy a compressed patch file from a network location to the IBM
Guardium appliance. Note that a compressed patch file may contain multiple
patches, but only one patch can be installed at a time. To install more than one
patch, choose all the patches that need to be installed, separated by commas.
Internally the CLI will submit requests for each patch on the list (in the order
specified by the user) with the first patch taking the request time provided by the
user and each subsequent patch three minutes after the previous one. In addition,
CLI will check to see if the specified patch(es) are already requested and will not
allow duplicate requests.
The last option (sys) is for use when installing a second or subsequent patch from
a compressed file that has been copied to the IBM Guardium appliance using this
command previously.
To display a complete list of applied patches, see the Installed Patches report on
the IBM Guardium Monitor tab of the administrator portal.
In the store system patch install CLI command, the user can choose multiple
patches from the list.
Syntax
<date> and <time> are the patch installation request time, date is formatted as
YYYY-mm-dd, and time is formatted as hh:mm:ss
If no date and time is entered or if NOW is entered, the installation request time is
NOW.
Parameters
Regardless of the option selected, you will be prompted to select a patch to apply:
cd - To install a patch from a CD, insert the CD into the IBM Guardium CD ROM
drive before executing this command. A list of patches contained on the CD will be
displayed.
User on hostname:
Password:
In the store system patch install scp CLI command, the user can use the wildcard *
for the patch file name.
The compressed patch file will be copied to the IBM Guardium appliance, and a
list of patches contained on file will be displayed.
sys - Use this option to apply a second or subsequent patch from a patch file that
has been copied to the IBM Guardium appliance by a previous store system patch
execution.
The store system patch install command will not delete the patch file from the IBM
Guardium appliance after the install. While there is no real need to remove the
patch file, as the same patches can be reinstalled over existing patches and keeping
patch files around can aid in analyzing various problems, a user may remove patch
files by hand or use the CLI command diag. (Note: the CLI command diag is
restricted to certain users and roles.)
To delete a patch install request, use the CLI command delete scheduled-patch.
Enable/disable SSH (root access). Secure Shell or SSH is a network protocol that
allows data to be exchanged using a secure channel between two networked
devices.
Syntax
Show command
Use store system scheduler restart_interval [5 to 1440 or -1] to restart the timing
function after the specified number of minutes (5 to 1440). The default is -1, which
means the timing restart mechanism is not installed.
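For example, a sketch of having the scheduler timing function restart every 60 minutes, and of disabling the mechanism again:

```text
store system scheduler restart_interval 60
store system scheduler restart_interval -1
```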
Syntax
Show command
Sets the system's shared secret value to the specified value. This key must be the
same for a Central Manager and all of the appliances it will manage; or an
Aggregator, and all of the appliances from which it aggregates data. After an
appliance has registered for management by a Central Manager, the shared secret
on that unit is no longer used. (You cannot unregister a unit from Central
Management by changing this value.)
The aggregator password will be the current password concatenated with the
shared secret, meaning: password=<current password><shared secret>
Users will need to make sure the collectors' shared secret and the aggregator's
shared secret are exactly the same; otherwise the SCP transfer from the collector to
the aggregator will fail. (This is a requirement for managed units and aggregators,
collectors and aggregators, and the export setup screen.) The shared secret can be
set both from the CLI and from the System pane in the Admin Console tab.
Syntax
Use this CLI command only when directed by IBM Guardium Technical Services.
Syntax
Show command
Use this CLI command to specify how many threads are running.
The new configuration will be effective once the CLI command, restart
inspection-core, is executed.
Syntax
Show command
Stores the email address for the snmp contact (syscontact) for the IBM Guardium
appliance. By default it is info@guardium.com.
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
When logging on via CLI with one of the default CLI accounts (guardcli1,
...guardcli5), it is required to run the CLI command, set guiuser, before any
GuardAPI commands will work. This authentication is required to prevent users
with limited roles in the GUI from gaining unauthorized access to GuardAPI
commands.
The use of the guardcli1 ... guardcli5 accounts requires the setting of a local
password. Use the CLI command, set guiuser, to reset the guardcli1 ...
guardcli5 accounts and then add a local password, as shown in the Syntax.
Certain CLI commands are dependent on the role of the guiuser. For example, the
role of the guiuser (marked when creating a new user from accessmgr view) must
be accessmgr in order to access grdapi create_user, grdapi set_user_roles, and
grdapi update_user
Syntax
Example
$ ssh guardcli1@a1.corp.com
guardcli1@a1.corp.com's password:
================================================================
================================================================
ok
a1.corp.com>
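Continuing a session like the one above, the authentication step might look like this sketch; the GUI user name and password are hypothetical, the keyword order should be checked against the Syntax section, and the user must have been given the admin or cli role by the access manager:

```text
a1.corp.com> set guiuser jsmith password GuiP@ss123
ok
```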
create_user
Examples
userName=john disabled=0
ID=20000
>grdapi set_user_roles userName="john" roles="dba,diag,cas,user"
ID=20000
Failed to add role (diag). Diag must have one of these roles: cli or admin.
ID=20000
ID=20000
ID=0
Username: accessmgr
Email:
Disabled: false
Username: admin
Email:
Disabled: false
Username: anon
Email:
Disabled: false
Username: john
Email: john.smith@gmail.com
Disabled: false
Username: bill
Email:
Disabled: true
Each time that you execute set_user_roles, you reset the roles of a user. You don't
append to the roles; you reset them.
When you create a user using GrdAPI, it creates the user with the user role.
When you set the roles, you have to specify all of the user's roles. This is done to
enable deletion of existing roles and addition of new roles.
The GUI works the same way: it displays all roles, you check or uncheck each role,
and when you save, it saves everything that you checked.
What GrdAPI does here is give user kevin only the role inv, but any user must have
one of these roles: user, cli, admin, or accessmgr.
Example
ok
ID=20000
ok
> grdapi set_user_roles userName="kevin" roles="inv"
set_user_roles:
ERR=3700
User must have one of these roles: user, cli, admin, or accessmgr.
ok
> grdapi set_user_roles userName="kevin" roles="user,inv"
ID=20000
Failed to add role (inv). Sorry, before assigning the inv role the user's Last Name
must be set to the name of one of the three investigation databases -
ok
> grdapi set_user_roles userName="kevin" roles="dba,diag,cas,user"
ID=20000
Failed to add role (diag). Diag must have one of these roles: cli or admin.
ok
>
show guiuser
Show command
show guiuser
Use the account lockout commands to disable a Guardium user account after one
or more failed login attempts. Use these commands to:
- Enable or disable the feature. See store account lockout.
- Set the maximum number of login failures allowed an account within a given
time interval. See store account strike count and store account strike interval.
- Set the maximum number of failures allowed an account for the life of the
Guardium appliance. See store account strike max.
- Unlock the admin user account in the event it becomes locked. See the unlock
admin command description.
After a Guardium user account has been disabled, it can be enabled from the
Guardium portal, and only by users with the accessmgr role, or the admin user.
Example
Note:
If the admin user account is locked, use the unlock admin command to unlock it.
If account lockout is enabled, setting the strike count or strike max to zero does
NOT disable that type of check. On the contrary, it means that after just one failure
the user account will be disabled!
Enables (on) or disables (off) the automatic account lockout feature, which disables
a user account after a specified number of login failures.
Syntax
Show Command
Sets the number of failed login attempts (n) in the configured strike interval before
disabling the account.
Syntax
Show Command
Sets the number of seconds (n) during which the configured number of failed login
attempts must occur in order to disable the account.
Syntax
Show Command
Sets the maximum number (n) of failed login attempts to be allowed for an
account over the life of the server, before the account is disabled.
Syntax
Show Command
Syntax
Show Command
Sets the age (in days) for user password expiration. When set to 0 (zero), the
password never expires. For any other value, the account user must reset the
password the first time they log in after the current password has expired. The
default value is 90. You must restart the GUI after changing this setting.
Syntax
Show Command
Turns password validation on or off. The default value is on. You must restart the
GUI after changing this setting.
Show Command
@ Commercial at sign
# Number sign
$ Dollar sign
% Percent sign
& Ampersand
; Semicolon
! Exclamation mark
- Hyphen (minus)
+ Plus sign
= Equals sign
Syntax
You will be prompted to enter the current password, and then the new password
(twice). None of the password values you enter on the keyboard will display on
the screen.
Running this CLI command will also update the change-time record in the
password expiration file.
unlock accessmgr
Use this command to enable the Guardium accessmgr user account after it has
been disabled. This command does not reset the accessmgr user account password.
Note: Only users with admin role are allowed to run this CLI command.
Syntax
unlock accessmgr
restart gui
unlock admin
Use this command to enable the Guardium admin user account after it has been
disabled. This command does not reset the admin user account password.
Note: Only users with admin role are allowed to run this CLI command.
Syntax
unlock admin
restart gui
Authentication commands
store auth
Use this command to reset the type of authentication used for login to the
Guardium appliance, to SQL_GUARD (i.e. Local Guardium authentication, the
default).
Syntax
Show Command
The Guardium portal window contains one or more panes. Each pane defines the
layout of some portion of the window. Each pane may contain one or more other
panes. The default layout contains three different types of panes: tab panes, menu
panes, and portlet panes, each of which is described in the help topic, Portal
Customization.
The Guardium administrator or access manager can generate, via CLI, a default
layout for a role. After that, any new user who is assigned that role will have that
layout after logging in for the first time.
Note: Default .psml structures for user and role can be defined, via the GUI, by
the admin user. See Portlet Editor for further information.
generate-role-layout
Parameters
If either of the following parameters contains spaces (John Doe as a user, or DBA
Managers as a role), replace the space characters with underscore characters.
For example:
user - The name of the user whose layout will be used as a model for the role
layout.
After you install the Guardium system, use the following commands to configure
the proxy server that checks if the ICAP server is available. The port number for
the proxy server is 3128, and the port number for the transparent proxy is 3129.
The port number for ICAP is 1344. You can upload a certificate and key that is
signed by an authorized company such as VeriSign. After the certificate has been
uploaded, a path to the proxy server is provided. The certification for the proxy
server must be signed by an authorized company. If it is not, the certificate will be
denied.
Note: Any configuration will require restarting the proxy server and ICAP.
restart icap
Restarts the icap process that handles HTTPS traffic. This command stops the icap
process with a time stamp and displays the message - stop icap. Another time
stamp appears with the message - start icap. Then, a third time stamp appears
with the message - start icap completed to confirm that the icap has restarted.
Syntax
restart icap
restart squid
Restarts the proxy server service. This command stops the service with a time
stamp and displays the message - stop squid. Another time stamp appears with
the message - start squid. Then, a third time stamp appears with the message -
start squid completed to confirm that the proxy server has restarted.
Syntax
restart squid
show squid
Shows the state of the proxy server bypass, proxy, or SSL (Secure Sockets Layer).
You cannot enable the proxy server bypass when it is already enabled. Also, you
cannot disable the proxy server bypass when it is already disabled.
Syntax
Syntax
Syntax
Shows the state of the proxy server SSL connection. The proxy server SSL
configuration displays: enable when the SSL connection is on and disable when
the SSL connection is off. To change the setting, use the command store squid ssl
<on | off>. A certificate file must exist to enable the proxy server SSL connection.
Syntax
start icap
Starts the icap process that handles Hypertext Transfer Protocol Secure (HTTPS)
traffic. It is a method that secures the transfer of information across a network. A
time stamp shows when the process has started with the following message: -
start icap. After the process is completed, a confirmation message states: - start
icap completed.
Syntax
start icap
start squid
Starts the proxy server service. A time stamp shows when the process is started
with the following message: - start squid. After the process is completed, a
confirmation message states: - start squid completed.
Syntax
start squid
stop icap
Stops the icap process that handles Hypertext Transfer Protocol Secure (HTTPS)
traffic. A time stamp indicates that the process to stop icap has started. It is
followed by the message: - stop icap. The process stops and sends back a time
stamp and the following message after it is completed: - stop icap completed.
Syntax
stop squid
Stops the proxy server service. A time stamp indicates that the process to stop the
proxy server has started. It is followed by the message: - stop squid. The process
stops and sends back a time stamp and the following message after it is
completed: - stop squid completed.
Syntax
stop squid
store squid
Stores the proxy server bypass, proxy, or SSL configuration. The current state is
determined by the argument <state> where on is to enable and off is to disable.
Syntax
Syntax
Stores the proxy configuration in the configuration file. You can set the state of the
proxy server to default or manual. Use the show squid proxy to view the current
status of the proxy server.
If this setting is set to default, the default setting of the proxy is transparent proxy,
and the client does not need to configure the proxy in the web browser. If the
proxy is set to manual, the client must configure the proxy in the browser.
Syntax
Syntax
Use this command to set the connection timeout. If Quick Search for Enterprise
cannot connect to the collector within the specified timeout period, no results from
that collector will be returned.
GuardAPI Reference
GuardAPI provides access to Guardium functionality from the command line.
This allows for the automation of repetitive tasks, which is especially valuable in
larger implementations. Calling these GuardAPI functions enables a user to quickly
perform operations such as creating datasources, maintaining user hierarchies, or
maintaining Guardium features such as S-TAP, to name a few.
Proper login to the CLI for the purpose of using GuardAPI requires the login with
one of the five CLI accounts (guardcli1,...,guardcli5) and an additional login
(issuing the 'set guiuser' command) with a user (GUI username/guiuser) that has
been created by access manager and given either the admin or cli role. See Set
guiuser Authentication for more information.
GuardAPI is a set of CLI commands, all of which begin with the keyword grdapi.
- To list all GuardAPI commands available, enter the grdapi command with no
arguments or use the 'grdapi commands' command with no search argument.
For example:
CLI> grdapi
or
CLI> grdapi commands
- To display the parameters for a particular command, enter the command
followed by '--help=true'. For example:
CLI> grdapi list_entry_location --help=true
ID=0
function parameters :
Case Sensitivity
Both the keyword and value components of parameters are case sensitive.
For example:
grdapi create_datasource type="MS SQL SERVER" ...
Return Codes
To see a complete list of GuardAPI error codes, type grdapi-errors, at the CLI
command prompt.
Table 9. Common Error Codes
Error Description
0 Missing parameters or unknown errors such as unexpected exceptions.
1 An Exception has occurred, please contact Guardium's support
2 Could not retrieve requested function - check function name. To list all
functions, type either the CLI command, grdapi, or grdapi commands, with no
arguments.
To search, by function name, given a search string, use the CLI command,
grdapi commands <search-string>
3 Too many arguments. To get the list of parameters for this function call the
function --help=true
4 Missing required parameter. To get the list of parameters for this function call
the function with --help=true
5 Could not decrypt parameter, check if encrypted with the correct shared secret.
6 Wrong parameter format, specify a function name followed by a list of
parameters using <name=value> format.
7 Wrong parameter value for parameter type.
8 Wrong parameter name, please note, parameters are case sensitive.
9 User has insufficient privileges for the requested API function
10 Parameter Encryption not enabled - shared secret not set.
11 Failed sending API call request to targetHost
12 Error Validating Parameter
13 Target host must be the ip address of the central manager
14 Target host is not managed by this manager
15 Target host is not online
16 Target host cannot be specified on a standalone unit
17 User is not allowed to operate on the specified object
18 Target host cannot be specified
19 Missing end quote
20 User is not allowed to run grdapi commands
21 --username and --source-host are grdapi reserved words and cannot be passed
on the command line.
22 A parameter name cannot be specified more than once, please check the
command line for duplicate parameters.
23 Value not in constant list.
24 Not a valid encrypted value.
25 Not a valid parameter format - parameters should be specified as
<name=value>, spaces are not allowed.
All grdapi activity will be attributed to the cli user. Double-click on the cli row in
that report, and select the Detailed Guardium User Activity drill-down report.
Every command entered will be listed, along with any and all changes made. In
addition, the IP address from which the command was issued is listed.
Encrypted Parameter
Note: Attempting to run an API call with an encrypted parameter on a system
where the shared secret has not been set results in the error message
Parameter Encryption not enabled - shared secret not set
For GuardAPI scripts generated through the GUI, if encryption is required, it is
done using the shared secret of the system where the script generation is performed.
Example
The admin user can see all query attributes in the Query Builder. Non-admin users
can see all query attributes except those that are designated as admin only (IDs,
for example).
Some entities (such as FULL SQL) contain a large number of attributes.
By default, all attributes are shown to all users (admin and non-admin).
Two GuardAPI commands have been added to show or hide certain attributes for
certain users.
The valid values for this parameter are: VSAM, IMS, MapReduce, APEX, Hive, BI
(BigInsights), IMS/VSAM, DB2 i, F5 (Not case sensitive).
Each GuardAPI command enables (or disables) all of the corresponding attributes
for the group. For example, VSAM enables (or disables) the following attributes:
v VSAM records
v VSAM records deleted
Note: The attributes will still be displayed if the user has the admin role; enabling
or disabling these attributes applies ONLY to non-admin users (with no admin
role).
Note: The GUI does not have to be restarted for the change to take effect, with
this exception: if a report with the attributes of group F5 has been created and
added to My New Reports, then even though the attributes have been enabled, the
non-admin user does not have the privilege to view the report. The GUI must be
restarted to see the report fields.
Example
grdapi list_expiration_dates_for_restored_days
get_expiration_date_for_restored_day
Get the expiration date associated with a given restored day.
Table 12. get_expiration_date_for_restored_day
Parameter Description
newExpDate Required. The new expiration date for the day restored.
restoredDay Required. Identifies the restore day for data.
Example:
grdapi get_expiration_date_for_restored_day restoredDay=restoredDay
purge_results_by_id
Example
set_expiration_date_for_restored_day
Example:
grdapi set_expiration_date_for_restored_day newExpDate=newExpDate restoredDay=restoredDay
where newExpDate and restoredDay can be either an absolute date in the format
yyyy-mm-dd hh:mi:ss or a relative date such as NOW -10 day.
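An illustrative invocation using the relative-day form (the values are placeholders, not from the original documentation):
grdapi set_expiration_date_for_restored_day newExpDate="2012-04-01 00:00:00" restoredDay="NOW -10 day"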
Example
grdapi set_import [START]
configure_export
Example
grdapi configure_export [aggHost] [aggSecHost] [exportOlderThan] [exportValues] [ignoreOlderThan]
configure_archive
Configure the archive of Aggregation data.
Table 17. configure_archive
Parameter Description
accessKey String. Shared secret key of Aggregator.
archiveOlderThan Required. Integer. Specifies which data to archive, by age.
archiveValues Required. Integer. 0 or 1
bucketName String
destHost String. Host name of archive destination.
ignoreOlderThan Required. Integer. Specifies which data to ignore, by age.
passwd String. Password.
passwdRetype String. Retype Password
port Integer. Port number
Example
grdapi configure_archive [accessKey] [archiveOlderThan] [archiveValues][bucketName][destHost][ignoreO
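A fuller illustrative invocation, using only parameters documented in Table 17 (the host, password, and age values are placeholders):
grdapi configure_archive destHost=archive.corp.com port=22 passwd=guardium passwdRetype=guardium archiveOlderThan=30 archiveValues=1 ignoreOlderThan=90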
create_assessment
Example
grdapi create_assessment assessmentDescription=Assess1
Example
grdapi add_assessment_datasource assessmentDescription=Assess1 datasourceName=DS1
add_assessment_test
If the exceptions group is not supported for this test (0): if the parameter
is provided, an ERROR is returned (an exception group cannot be provided for
this test); if the parameter is not provided, -1 is used to populate the field.
Otherwise (the exception group is supported for the test): if the parameter is
not provided, -1 is used to populate the field; if the parameter is provided,
the group is validated and the group ID is used.
If there is no such group, an ERROR is returned (the exception group does not
exist). If the group is present and its type is 55, the GROUP_ID is used.
Example
grdapi add_assessment_test assessmentDescription=Assess1 testDescription="The first test"
delete_assessment
Use this GuardAPI command to delete a security assessment.
Table 21. delete_assessment
Parameter Validation
assessmentDescription
Required. Free text, unique. Must match the description of an existing
assessment; if there is none, then ERROR
Additional validation: there must be no results for the assessment to be
deleted.
Action: If the parameter is validated (it identifies the security assessment record,
and there are no results for the assessment), delete the SECURITY_ASSESSMENT
Example
grdapi delete_assessment assessmentDescription=Assess1
delete_assessment_datasource
Example
grdapi delete_assessment_datasource assessmentDescription=Assess1 datasourceName=DS1
delete_assessment_test
Use this GuardAPI command to delete a test from an existing security assessment
Table 23. delete_assessment_test
Parameter Validation
assessmentDescription
Required. Free text, unique. Must match the description of an existing
assessment; if there is none, then ERROR
testDescription Free text. Must match the TEST_DESC of an existing test in
AVAILABLE_TEST; if no such test is present, then ERROR
Example
grdapi delete_assessment_test assessmentDescription=Assess1
Example
grdapi list_assessments
list_assessment_tests
Use this GuardAPI command to show the list of tests for the security assessment.
Example
grdapi list_assessment_tests
update_assessment
Use this GuardAPI command to update the record of the security assessment.
Table 26. update_assessment
Parameter Validation
assessmentDescription
Must match an existing record in SECURITY_ASSESSMENT
Example
grdapi update_assessment assessmentDescription=Assess1 filterClientIP=192.168.1.1
add_autodetect_task
Example
grdapi add_autodetect_task process_name=myProcess hosts_list="192.168.1.1 192.168.1.3" ports_list=
create_autodetect_process
Note: * nmap options are accessible from the API only, not from the GUI. For
details of nmap parameters and their impact on scan performance, see the nmap man page.
Example
grdapi create_autodetect_process process_name=myProcess
modify_autodetect_process
Note: * nmap options are accessible from the API only, not from the GUI. For
details of nmap parameters and their impact on scan performance, see the nmap man page.
Example
grdapi modify_autodetect_process process_name=myProcess
delete_autodetect_scans_for_process
This command removes all tasks for a process, but cannot run if the process is
running, scheduled, or has results.
Table 30. delete_autodetect_scans_for_process
Parameter Description
process_name Required. Name of process
Example
grdapi delete_autodetect_scans_for_process process_name=myProcess
list_autodetect_processes
Example
grdapi list_autodetect_processes
list_autodetect_tasks_for_process
This command lists all tasks of a specified process.
Table 32. list_autodetect_tasks_for_process
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi list_autodetect_tasks_for_process process_name=myProcess
execute_autodetect_process
This command runs the specified process, but it cannot run if no tasks are defined
for the process or if the process is currently running.
Table 33. execute_autodetect_process
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
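Example (illustrative, following the pattern of the other autodetect commands; the process name is a placeholder):
grdapi execute_autodetect_process process_name=myProcess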
show_autodetect_process_status
This command shows process status and progress summary.
Table 34. show_autodetect_process_status
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi show_autodetect_process_status process_name=myProcess
stop_autodetect_process
Example
grdapi stop_autodetect_process process_name=myProcess
execute_replay
Use this GuardAPI command to run the replay, equivalent to clicking Run Once
Now in the GUI.
Table 36. execute_replay
Parameter Description
setupName String - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
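Example (illustrative, using the documented setupName parameter; the value is a placeholder):
grdapi execute_replay setupName=mySetup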
execute_staging_start
Use this GuardAPI command to start the staging process, equivalent to the START
option in the stage drop-down list in the GUI.
Table 38. execute_staging_start
Parameter Description
replayConfigName String - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
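Example (illustrative, using the documented replayConfigName parameter; the value is a placeholder):
grdapi execute_staging_start replayConfigName=myReplayConfig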
modify_staging_data
Example
grdapi modify_staging_data configId=3 fullSQLFilter="select * from dual" statementType=0 sessionId
Note: Use the escape character “\” for sourceProgram values that also contain
the special character “\” in their path.
Example
purge_staging_data
Example
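An illustrative invocation, assuming purge_staging_data accepts the same configId parameter as modify_staging_data (this assumption is not confirmed by the original documentation):
grdapi purge_staging_data configId=3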
queue_purge_agg_replay_match_by_id
Example
queue_purge_replay_match_by_id
Example
queue_purge_replay_match_by_name
Example
queue_purge_replay_to_replay_results_match_by_id
Purge the data generated by the queue_replay_match_by_id API for replay-to-replay
compare.
Table 45. queue_purge_replay_to_replay_results_match_by_id
Parameter Description
rrhid1 Integer - required
rrhid2 Integer - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
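An illustrative invocation using the rrhid1 and rrhid2 parameters from Table 45 (the IDs are placeholders):
grdapi queue_purge_replay_to_replay_results_match_by_id rrhid1=2 rrhid2=3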
queue_replay_agg_match_by_id
Compares two workloads, typically for databases of the same type, and populates
the Workload Aggregate Match report.
Table 46. queue_replay_agg_match_by_id
Parameter Description
configid Integer - required; the capture ID (ID-From in lists), unless
isCompareToCapture is 0, in which case it is a replay ID
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
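An illustrative invocation using the parameters from Table 46, comparing replay 2 to capture 3 (the IDs are placeholders):
grdapi queue_replay_agg_match_by_id rrhid=2 configid=3 isCompareToCapture=1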
queue_replay_agg_match_by_name
Purge the data generated by the queue_replay_agg_match_by_name API.
Table 47. queue_purge_replay_agg_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
Example
queue_replay_match_by_id
Example
grdapi queue_replay_match_by_id rrhid=2 configid=3 isCompareToCapture=1 includeGroup="Replay - Inc
queue_replay_match_by_name
Example
grdapi queue_replay_match_by_name capture_name= replay_header_name= runtime=
Example
grdapi queue_replay_object_agg_match_by_id rrhid=2 configid=3 isCompareToCapture=1
queue_replay_object_agg_match_by_name
Example
grdapi queue_replay_object_agg_match_by_name capture_name= replay_header_name= runtime=
queue_replay_resultsMatch_by_id
Table 52. queue_replay_resultsMatch_by_id
Parameter Description
configid Integer - required; the capture ID (ID-From in lists), unless
isCompareToCapture is 0, in which case it is a replay ID
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)
Example
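An illustrative invocation modeled on the queue_replay_match_by_id example, using the parameters from Table 52 (the IDs and group names are placeholders):
grdapi queue_replay_resultsMatch_by_id rrhid=2 configid=3 isCompareToCapture=1 includeGroup="Replay - Include" excludeGroup="Replay - Exclude"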
queue_replay_results_match_by_name
Example
grdapi queue_replay_results_match_by_name capture_name= replay_header_name= runtime=
queue_replay_to_replay_match_by_id
Table 54. queue_replay_to_replay_match_by_id
Parameter Description
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
rrhid1 Integer - required; is a replay ID
rrhid2 Integer - required; is a replay ID
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
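An illustrative invocation using the parameters from Table 54 (the IDs and group names are placeholders):
grdapi queue_replay_to_replay_match_by_id rrhid1=2 rrhid2=3 includeGroup="Replay - Include" excludeGroup="Replay - Exclude"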
queue_replay_to_replay_match_by_name
Example
grdapi queue_replay_to_replay_match_by_name capture_name= replay_header_name= runtime=
queue_replay_to_replay_results_match_by_id
Table 56. queue_replay_to_replay_results_match_by_id
Parameter Description
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
rrhid1 Integer - required; is a replay ID
rrhid2 Integer - required; is a replay ID
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
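An illustrative invocation using the parameters from Table 56 (the IDs and group names are placeholders):
grdapi queue_replay_to_replay_results_match_by_id rrhid1=2 rrhid2=3 includeGroup="Replay - Include" excludeGroup="Replay - Exclude"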
queue_replay_to_replay_results_match_by_name
Purge the data generated by the queue_replay_to_replay_results_match_by_name
API.
Table 57. queue_replay_to_replay_results_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi queue_replay_to_replay_results_match_by_name capture_name= replay_header_name= runtime=
Not supported
clone_replay
clone_replay_schedule_setup
create_replay_schedule_setup
delete_replay
delete_replay_schedule_setup
list_replay
list_replay_schedule_setup
update_replay
update_replay_schedule_setup
create_entry_location
Adds a new archive entry to the internal catalog location table.
Table 58. create_entry_location
Parameter Description
entryType Required string. Must be one of the following:
v CollectorDataArchive
v AggDataArchive
v AggResultArchive
processDesc String. Used and required only when the entryType is
AggResultArchive.
fileName Required string. Identifies the file.
hostName Required string. Identifies the host.
path Required string. For FTP: specify the directory relative to the FTP
account home directory; for SCP: Specify the directory as an absolute
path.
user Required string. User account to access the host.
password Required string. Password for user.
retention Optional integer. The number of days this entry is to be kept in the
catalog (the default is 365).
storageSystem Required string. Must be one of the following: EMC CENTERA, FTP,
SCP, TSM.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi create_entry_location entryType=CollectorDataArchive fileName=733392-a1.corp.com-w2007122
Example
grdapi list_entry_location path=/mnt/nfs/ogazit/archive_results/ hostName=192.168.1.33
delete_entry_location
Example
grdapi delete_entry_location path=/var/dump/mojgan hostName=192.168.1.18
update_entry_location
Updates one archive location if a fileName is specified, or updates multiple
archive locations when the fileName is omitted.
Example
grdapi update_entry_location fileName=a1.corp.com-1_4_2008-01-10_10:27:24.res.70.tar.gz.enc path=
create_classifier_action
Table 62. create_classifier_action
Parameter Description
actionName Required. String
actualMemberContent
Required. String
For reference, here is the list of action types with the associated required
parameters; the action type the user selects determines which parameters are
required -
add_to_group_objects
add_to_group_object_fields
create_access_rule
create_privacy_set
log_policy_violation
action_send_alert
Examples
grdapi create_classifier_action actionType=add_to_group_objects policyName=-policy1 ruleName=-rule
grdapi create_classifier_action actionType=add_to_group_object_fields policyName=-policy1 ruleName
grdapi create_classifier_action actionType=create_access_rule policyName=-policy1 ruleName=-rule1
grdapi create_classifier_action actionType=create_privacy_set policyName=-policy1 ruleName=-rule1
grdapi create_classifier_action actionType=log_policy_violation policyName=-policy1 ruleName=-rule
grdapi create_classifier_action actionType=send_alert policyName=-policy1 ruleName=-rule1 actionNa
GuardAPI command values
See the table for a list of GuardAPI command values for the command
grdapi create_classifier_action that are used in the GUI. Use these values
when creating groups.
Table 63. GrdAPI create_classifier_action
GUI values GrdAPI values
%/%.Name %/NAME
%/Full %/FULL
Change/%.Name CHANGE/NAME
Change/Full CHANGE/FULL
Fully Qualified Name(Schema.Object) FULLNAME
Like %Full %FULLLIKE
Like %Full% %FULLLIKE%
Example
grdapi create_classifier_action actionName=classgrpobjectseach1 actionType=ADD_TO_GROUP_OBJECTS polic
create_classifier_policy
Table 64. create_classifier_policy
Parameter Description
category Required. String
classification Required. String
description String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
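Example (illustrative, modeled on the create_classifier_policy call that appears later in this section; the values are placeholders):
grdapi create_classifier_policy policyName=-policy1 category=-cat1 classification=-class1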
create_classifier_process
Note: Create a classification policy and datasource before calling this GuardAPI.
Table 65. create_classifier_process
Parameter Description
comprehensive Boolean
datasourceNames Required. String
policyName Required. String
processName Required. String
sampleSize Integer
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi create_classifier_process datasourceNames=sample_cls_0001 policyName=APITEST_Cls_Ply_10001
create_classifier_rule
Table 66. create_classifier_rule
Parameter Description
policyName Required. String
ruleName Required. String
For reference, here is the list of valid rule types with the associated
required parameters; the rule type the user selects determines which
parameters are required
catalog_search_add
search_by_permissions_add
search_for_data_add
search_for_unstructured_data_add
Examples
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_datasource type="Oracle (DataDirect)" user=scott password=tiger host="swan.guard.swg
grdapi create_group appid=Classifier type=OBJECTS desc="AA Classifier ALL Values" owner=admin cate
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=ACCOUNTING
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=ACCOUNTTING
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=AG
grdapi create_classifier_policy policyName="Search ALL DATA SEARCH smoke values" category="ALL" cl
grdapi create_classifier_rule policyName="Search ALL DATA SEARCH smoke values" category="ALL" clas
grdapi create_classifier_process policyName="Search ALL DATA SEARCH smoke values" processName="Sea
delete_classifier_action
Table 67. delete_classifier_action
Parameter Description
actionName Required. String
policyName Required. String
Example
grdapi delete_classifier_action policyName=-policy1 ruleName=-rule1 actionName=-action1
delete_classifier_policy
Table 68. delete_classifier_policy
Parameter Description
policyName Required. String
Example
grdapi delete_classifier_policy policyName=-policy1
delete_classifier_process
Table 69. delete_classifier_process
Parameter Description
processName String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi delete_classifier_process processName=APITEST_Clps_10001_1
delete_classifier_rule
Table 70. delete_classifier_rule
Parameter Description
policyName Required. String
ruleName Required. String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi delete_classifier_rule policyName=-policy1 ruleName=-rule1
execute_cls_process
Execute (submit) a classification process
Example
grdapi execute_cls_process processName="classPolicy1"
Here is a list of the classifier functions and the parameters for each. Where a
parameter has a set list of valid entries, the list is supplied.
list_classifier_policies
Table 72. list_classifier_policies
Parameter Description
policyName Required. String
ruleName Required. String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi list_classifier_policies policyName=-policy1 ruleName=-rule1 actionName=-action1 recursive=1
Note: Executing this function with no arguments will list all policies. Passing an
argument for the policy will list all rules and actions for the policy. Passing a
policy and rule will list all of the actions for the rule.
list_classifier_process
Table 73. list_classifier_process
Parameter Description
processName String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
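An illustrative invocation (the process name is a placeholder):
grdapi list_classifier_process processName=APITEST_Clps_10001_1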
update_classifier_action
Table 74. update_classifier_action
Parameter Description
actionName Required. String
Example
grdapi update_classifier_action actionType=add_to_group_objects policyName=-policy1 ruleName=-rule1 a
grdapi update_classifier_action actionType=add_to_group_object_fields policyName=-policy1 ruleName=-r
grdapi update_classifier_action actionType=update_access_rule policyName=-policy1 ruleName=-rule1 act
grdapi update_classifier_action actionType=update_privacy_set policyName=-policy1 ruleName=-rule1 act
grdapi update_classifier_action actionType=log_policy_violation policyName=-policy1 ruleName=-rule1 a
grdapi update_classifier_action actionType=send_alert policyName=-policy1 ruleName=-rule1 actionName=
update_classifier_policy
Table 75. update_classifier_policy
Parameter Description
policyName Required. String
category Required. String
classification Required. String
description String
Example
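An illustrative invocation using the parameters from Table 75 (the values are placeholders):
grdapi update_classifier_policy policyName=-policy1 category=-cat1 classification=-class1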
update_classifier_process
Table 76. update_classifier_process
Parameter Description
comprehensive Boolean
datasourceNames Required. String
newName String
policyName Required. String
processName Required. String
sampleSize Integer
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example:
grdapi update_classifier_process
datasourceNames=sample_cls_0001,sample_cls_0002
policyName=APITEST_Cls_Ply_10001_1 processName=APITEST_Clps_10001_1
comprehensive=0 sampleSize=3000
update_classifier_rule
Table 77. update_classifier_rule
Parameter Description
policyName Required. String
ruleName Required. String
ruleType Required. String. Valid values: catalog_search,
search_by_permissions, search_for_data, search_for_unstructured_data
category String
classification String
continueOnMatch Boolean
description String
columnNameLike String
fireOnlyWithMarker String
tableNameLike String
tableTypeSynonym Boolean
tableTypeSystemTable Boolean
Examples
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
non_credential_scan
Submits jobs that scan the databases within the serversGroup for enabled
default users in the usersGroup. Submitted jobs run under the Classifier
Listener and can be tracked using the Classifier/Assessment Job Queue report. A
submitted job can be canceled from the Classifier/Assessment Job Queue report
by double-clicking the job and choosing Stop Job.
Note: If a server within the serversGroup cannot be reached, an exception of type
Scheduled Job Exception is added and the server is not scanned.
Example
grdapi non_credential_scan databaseType=ORACLE serversGroup=oracleServers usersGroup="ORACLE Defau
These APIs help maintain the mapping between database users (Invokers of SQL
that caused a violation) and email addresses for real time alerts. See Alerting
Actions for more information on Invokers.
v create_db_user_mapping
v delete_db_user_mapping
v list_db_user_mapping
create_db_user_mapping
Use of wildcards:
v In the 'delete' and the 'list' commands, all 4 parameters accept wildcards ('%')
v 'create' command:
– serverIp - wildcard is valid, '%' can be placed instead of the number in the
ip_address format
– 192.168.2.% - valid
– 192.%.2.% - valid
– 192.% - invalid
v serviceName - wildcards (%) are allowed
v dbUserName - no wildcards; '%' is valid, but is treated as the literal
character '%'
v emailAddress - no wildcards; '%' is valid, but is treated as the literal
character '%'
Table 79. create_db_user_mapping
Parameter Description
serverIp Required (IP Address). Needs to be in the format of an IP address
A.B.C.D
serviceName Required (any string). Identifies the service name.
dbUserName Required (any string). Identifies the database user name.
emailAddress Required (any string and requires an '@' sign). Identifies the email
address.
Example
grdapi create_db_user_mapping serverIp=192.168.1.104 serviceName=ora1 dbUserName=scott emailAddress=s
delete_db_user_mapping
Use of wildcards:
v In the 'delete' and the 'list' commands, all 4 parameters accept wildcards ('%')
v 'create' command:
– serverIp - wildcard is valid, '%' can be placed instead of the number in the
ip_address format
– 192.168.2.% - valid
– 192.%.2.% - valid
– 192.% - invalid
v serviceName - wildcards (%) are allowed
v dbUserName - no wildcards; '%' is valid, but is treated as the literal
character '%'
v emailAddress - no wildcards; '%' is valid, but is treated as the literal
character '%'
Table 80. delete_db_user_mapping
Parameter Description
serverIp Required (IP Address). Needs to be in the format of an IP address
A.B.C.D
serviceName Required (any string). Identifies the service name.
dbUserName Required (any string). Identifies the database user name.
emailAddress Required (any string and requires an '@' sign). Identifies the email
address.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi delete_db_user_mapping serverIp=192.168.1.104 serviceName=ora1 dbUserName=scott emailAddress=s
list_db_user_mapping
Use of wildcards:
v In the 'delete' and the 'list' commands, all 4 parameters accept wildcards ('%')
v 'create' command:
– serverIp - wildcard is valid, '%' can be placed instead of the number in the
ip_address format
Example
grdapi list_db_user_mapping serverIp=192.168.1.104 serviceName=ora1 dbUserName=scott emailAddres
Use this GuardAPI command to view the debug level for IMS™ output.
set_debug_level
Use this GuardAPI command to control IMS output.
create_datasource
ChangeAuditSystem
Access_policy
MonitorValues
DatabaseAnalyzer
AuditDatabase
CustomDomain
Classifier
AuditTask
SecurityAssessment
Replay
Stap_Verification
compatibilityMode Compatibility Mode: Choices are Default or MSSQL 2000. The
processor is told what compatibility mode to use when monitoring a
table.
conProperty Optional. Use only if additional connection properties must be
included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.
DB2
DB2 for i
Informix
MS SQL Server
MySQL
NA
Netezza
Oracle (DataDirect)
Oracle (SID)
PostgreSQL
Sybase
Sybase IQ
Teradata
TEXT
TEXT:FTP
TEXT:HTTP
TEXT:HTTPS
TEXT:SAMBA
user Optional. User for the datasource. If used, password must also be
used.
Example
grdapi create_datasource type=DB2 name=chickenDB2 password=guardium user=db2inst1 dbName=dn0chick
Note: The API only adds records; to remove an exception, create a new record
with new dates according to your needs.
Table 83. create_test_exception
Parameter Description
datasourceName Required. Valid name of a defined datasource.
testDescription Required. A valid test name within Security Assessments.
fromDate Required. Beginning date for when the exception is valid.
toDate Required. Ending date for when the exception is valid.
explanation Required. A recommendation as to why the test will pass.
Example
grdapi create_test_exception datasourceName=ORAPROD5 testDescription="CVE-2009-0997" fromDate="2012-0
list_datasource_by_name
Example
CLI> grdapi list_datasource_by_name name=chickenDB2
ID=20000
Datasource DatasourceId=20000
Datasource DatasourceTypeId=2
Datasource Name=chickenDB2
Datasource Description=null
Datasource Host=chicken.corp.com
Datasource Port=50000
Datasource ServiceName=
Datasource UserName=db2inst1
Datasource Password=[B@1415de6
Datasource PasswordStored=true
Datasource DbName=dn0chick
Datasource LastConnect=null
Datasource Timestamp=2008-04-18 15:40:58.0
Datasource ApplicationId=2
Datasource Shared=true
list_datasource_by_id
Example
grdapi list_datasource_by_id id=2
delete_datasource_by_name
Deletes the specified datasource definition, unless that datasource is being used by
an application. This function removes the datasource, regardless of who created it.
Table 86. delete_datasource_by_name
Parameter Description
name Required. The datasource name.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi delete_datasource_by_name name=swanSybase
delete_datasource_by_id
Deletes the specified datasource definition, unless that datasource is being used by
an application. This function removes the datasource, regardless of who created it.
Table 87. delete_datasource_by_id
Parameter Description
id Required (integer). Enter the ID number of the datasource to be
listed.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi delete_datasource_by_id id=2
Example
grdapi update_datasource_by_name name=chickenDB2 newName="chicken DB2" user=" " password=" "
update_datasource_by_id
Updates a datasource definition.
Table 89. update_datasource_by_id
Parameter Description
id Required (integer). Identifies the datasource.
Example
grdapi update_datasource_by_id id=20000 user=" " password=" " newName="chickenDB2hooo"
list_db_drivers
Lists only the names of the database drivers. Oracle (DataDirect) and MS SQL
Server (DataDirect) are now supported as datasource types.
list_db_drivers_by_details
Lists each database driver in more detail (name, class, driver class, URL, and
datasource type ID).
Example
grdapi create_datasourceRef_by_id appId=51 datasourceId=20000 objId=2
create_datasourceRef_by_name
SecurityAssessment
CustomTables
Classifier
datasourceName Required. Identifies the datasource (from the datasource definition).
objName Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the name of a
specific classification process.
Example
grdapi create_datasourceRef_by_name application=Classifier datasourceName=swanSybase objName="c
list_datasourceRef_by_id
For a specific object of a specific application type (for example, a specific
Classification process), lists all datasources referenced.
8 = SecurityAssessment
47 = CustomTables
51 = Classifier
objID Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the ID of a
specific classification process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi list_datasourceRef_by_id appId=13 objId=1
list_datasourceRef_by_name
SecurityAssessment
CustomTables
Classifier
objName Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the name of a
specific classification process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi list_datasourceRef_by_name application=Classifier objName="class process1"
delete_datasourceRef_by_id
8 = SecurityAssessment
47 = CustomTables
51 = Classifier
datasourceId Required (integer). Identifies the datasource (from the datasource
definition).
objId Required (integer). Identifies an instance of the appId type specified.
For example, if appId=51, this would be the ID of a classification
process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi delete_datasourceRef_by_id appId=51 datasourceId=2 objId=1
delete_datasourceRef_by_name
SecurityAssessment
CustomTables
Classifier
datasourceName Required. Identifies the datasource (from the datasource definition).
objName Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the name of a
specific classification process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi delete_datasourceRef_by_name application=Classifier datasourceName=swanSybase objName="class p
Example
Note: An error will occur if the insert is cyclic (a parent reports to a child)
list_user_hierarchy_by_parent_user
Example
delete_user_hierarchy_by_entry_id
Example
delete_user_hierarchy_by_user
Example
Note:
create_allowed_db
Create a User-DB association
Table 100. create_allowed_db
Parameter Description
userName Required. The name of the user
serverIp Required. The server IP
instanceName Required. The instance name
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
list_allowed_db_by_user
List User-DB associations by user
Table 101. list_allowed_db_by_user
Parameter Description
userName Required. The name of the user
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
delete_allowed_db_by_entry_id
Example
delete_allowed_db_by_user
Example
update_user_db
Fully apply all recent changes to the active User-DB association map
Table 104. update_user_db
Parameter Description
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.
Example
grdapi update_user_db
get_load_balancer_load_map
grdapi get_load_balancer_load_map
get_load_balancer_params
grdapi get_load_balancer_params
set_load_balancer_param
Multiple parameter and value pairs can be specified on a single command line.
For example, grdapi set_load_balancer_params LOAD_BALANCER_ENABLED=1
STATIC_LOAD_COLLECTION_INTERVAL=360.
assign_load_balancer_groups
unassign_load_balancer_groups
Unassign a managed unit group from an application or S-TAP group.
create_ef_mapping
This function creates a mapping and populates tables based on the name of the
report specified by the reportName parameter. Each mapping has a name stored in
EF_MAP_TYPE_HDR.EF_TYPE_DESC, and that name will be identical to the value
of reportName. The target table name will also be based on the reportName
parameter, with underscores added between the words. For example, "My Report"
becomes MY_REPORT.
Table 105.
Parameter Description
reportName Name of the report to use for external feed
mapping. This parameter also determines
the name of the mapping and the target
table name.
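The name derivation described above ("My Report" becomes MY_REPORT) can be sketched with standard text tools. This is only an illustration of the documented rule, not product code.

```shell
# Sketch of the documented derivation: the target table name is the
# report name upper-cased, with underscores between the words.
reportName="My Report"
tableName=$(printf '%s' "$reportName" | tr 'a-z' 'A-Z' | tr ' ' '_')
echo "$tableName"   # MY_REPORT
```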
modify_ef_mapping
delete_ef_mapping
This function allows you to delete existing mappings. Only mappings with ID >=
20000 may be deleted in order to protect predefined Guardium mappings.
Table 107.
Parameter Description
reportName Name of the mapping to delete.
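The ID threshold that protects predefined mappings can be expressed as a simple check. This is an illustrative sketch of the documented rule; the real enforcement is done by the API itself.

```shell
# Illustrative check of the documented rule: only mappings with
# ID >= 20000 (customer-created) may be deleted; lower IDs are
# predefined Guardium mappings and are protected.
is_deletable_mapping() {
  [ "$1" -ge 20000 ]
}
is_deletable_mapping 20003 && echo "deletable"
is_deletable_mapping 150   || echo "protected (predefined)"
```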
list_ef_mapping
If run without any parameters, this function returns a list of all customer-created
mappings. If run with the reportName parameter, this function returns details of the
specified mapping (such as the table and column names used by the external feed).
Table 108.
Parameter Description
reportName Optional. Name of the mapping for which to
return details.
Use the GuardAPI command, grdapi create_policy, to create a FAM policy. After
the policy is created, use FAM-specific GuardAPI commands.
For example:
Note: Quick Search must also be enabled with the command grdapi
enable_quick_search schedule_interval=1.
Table 109. enable_fam_crawler
Parameter Description
extraction_start Initial date/time from which data is extracted to file quick search. It is
limited to 2 days in the past. The default is current time. If the unit is
set to HOUR, then it is rounded to an hour. If it is set to DAY, then it is
rounded to a day.
schedule_start The default is current time.
activity_schedule_interval
Required. This parameter sets activity schedule interval. The
recommended interval is 2 with the unit set to MINUTE.
activity_schedule_units
Required. This parameter sets the unit of the activity schedule interval. The
values are either MINUTE or HOUR. The recommended unit is MINUTE.
entitlement_schedule_interval
Required. This parameter sets the entitlement schedule interval. The
recommended interval is 1 with the unit set to DAY.
entitlement_schedule_units
Required. This parameter sets the unit of the entitlement schedule. The
possible values are MINUTE, HOUR, and DAY. The recommended unit
is DAY.
Example
grdapi enable_fam_crawler extraction_start=< > schedule_start=< >
activity_schedule_interval=2 activity_schedule_units=MINUTE
entitlement_schedule_interval=10 entitlement_schedule_units=MINUTE
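The rounding behavior of extraction_start described in the table can be illustrated with plain string truncation. This is a sketch of the documented rule, not product code; the timestamp is an arbitrary example.

```shell
# Sketch of the documented rounding of extraction_start: with HOUR
# units the timestamp is rounded down to the hour; with DAY units,
# down to the day.
ts="2023-06-15 14:37:22"
rounded_hour="${ts%%:*}:00:00"    # drop minutes and seconds
rounded_day="${ts%% *} 00:00:00"  # drop the time of day
echo "$rounded_hour"   # 2023-06-15 14:00:00
echo "$rounded_day"    # 2023-06-15 00:00:00
```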
disable_fam_crawler
Disables the file activity monitor. The file quick search activity and entitlement
extractions scheduler are removed. This function also disables remote group
population.
Example
grdapi disable_fam_crawler
get_fam_crawler_info
Shows the status of the file activity monitor. If it is enabled, the command shows
the settings for the entitlement extraction and file quick search activity schedule.
Sample output:
FAM Crawler (server side) is disabled.
FAM Crawler (server side) is enabled. Entitlement(1 DAY) Activity(2 MINUTE)
Example
grdapi get_fam_crawler_info
Parameter Description
policyName Required. String. Policy name
ruleName Optional. String. If no ruleName is provided,
all policy rules with details will be shown. If
a ruleName is provided, details will be
listed for that rule.
create_fam_rule
Parameter Description
policyName Required. String. Policy name.
ruleName Required. String. Rule name.
filePath String. File path to be monitored. Either
filePath or filePathGroup must be specified.
notfilePath String. Must be yes or no. Yes means apply
this rule to all files except those in the
specified path.
filePathGroup String. Group of file paths. Either filePath or
filePathGroup must be specified.
includeSubDirectory String. Must be yes or no. Yes means include
files in all subdirectories.
removableMedia String. Must be yes or no.
osUser String. OS user name.
osUserGroup String. Group of OS users.
notOSUser String. Must be yes or no. Yes means use all
users except the specified osUser.
serverHost String. Host name.
serverHostGroup String. Group of hostnames.
command String. The command name to be included
in the rule.
commandGroup String. Group of commands.
notCommand String. Must be yes or no. Yes means use all
commands except the specified command.
actionName String. Required. The name of the FAM
action.
messageTemplate String. Message template name.
notificationType String. Notification type.
userLoginName String. User login name.
classDestination String. Name of custom class to be invoked.
Parameter Description
policyName Required. String. Policy name
ruleName Required. String. Name of the rule to be
deleted.
gim_list_registered_clients
Example
grdapi gim_list_registered_clients
gim_list_client_params
Example
grdapi gim_list_client_params clientIP=192.168.12.210
gim_update_client_params
Example
grdapi gim_update_client_params clientIP=192.168.1.100 paramName=STAP_TAP_IP paramValue=192.168.1.100
gim_list_client_modules
Lists all the modules assigned to a specific client and their state
Table 113. gim_list_client_modules
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_list_client_modules clientIP=192.168.2.210
gim_load_package
Note: This command loads a file that resides on the local file system; therefore,
the procedure (cmd='fileserver') of loading the file to the CM/Guardium appliance
must precede this command.
Table 114. gim_load_package
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_load_package filename=*.gim
Example
grdapi gim_assign_bundle_or_module_to_client_by_version clientIP=192.168.1.100 module=BUNDLE-STAP
gim_schedule_install
Schedules for installation all the modules/bundles that were assigned to a client
and have not been installed yet (for example, PENDING). If the module parameter
is specified, only the requested module will be scheduled.
Table 116. gim_schedule_install
Parameter Description
clientIP Required - Client IP Address
module Optional - Module. If module is not specified in the command, all the
modules for the specified clientIP will be scheduled for install.
date Required - Date; Format: 'now' or 'yyyy-MM-dd HH:mm'
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_schedule_install clientIP=192.168.1.100 module=BUNDLE-STAP date="2008-07-02 14:50"
grdapi gim_schedule_install clientIP=192.168.1.100 date="2008-07-02 14:50"
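The accepted date formats for the date parameter ('now' or 'yyyy-MM-dd HH:mm') can be checked locally before invoking the API. This validator is an illustrative sketch, not part of the product.

```shell
# Illustrative pre-check of the documented date formats for the
# date parameter: the literal 'now' or 'yyyy-MM-dd HH:mm'.
valid_gim_date() {
  [ "$1" = "now" ] && return 0
  printf '%s' "$1" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$'
}
valid_gim_date "2008-07-02 14:50" && echo "accepted"
valid_gim_date "07/02/2008"       || echo "rejected"
```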
gim_list_client_status
Displays the status of the latest operation executed for a specific client.
Table 117. gim_list_client_status
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
gim_uninstall_module
Uninstalls a module/bundle on a specific client.
Table 118. gim_uninstall_module
Parameter Description
clientIP Required - Client IP Address
module Required - Module.
date Required - Date; Format: 'now' or 'yyyy-MM-dd HH:mm'
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_uninstall_module clientIP=192.168.1.100 module=BUNDLE-STAP
gim_cancel_install
Example
grdapi gim_cancel_install clientIP=192.168.1.100 module=BUNDLE-STAP
gim_list_bundles
Lists all the available bundles. A bundle is a group of modules that can be
installed on a client.
Table 120. gim_list_bundles
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
gim_list_mandatory_params
Lists the mandatory parameters for a single module.
Table 121. gim_list_mandatory_params
Parameter Description
module The name of the GIM module for which to display the mandatory
parameters
version The version of the GIM module for which to display the mandatory
parameters
Example
grdapi gim_list_mandatory_params module=name version=number
gim_assign_latest_bundle_or_module_to_client
Assigns the latest (that is, the highest version) available bundle or module for a
specific client.
Table 122. gim_assign_latest_bundle_or_module_to_client
Parameter Description
clientIP Required - Client IP Address
module Required- Module.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_assign_latest_bundle_or_module_to_client clientIP=192.168.1.100 module=BUNDLE-STAP
gim_schedule_uninstall
Example
gim_cancel_uninstall
Cancels uninstallation of a bundle/module on a specific client. Canceling
uninstallation is possible only if a module/bundle is not already in the process of
being installed by a client (STATE=IP or IP-PR)
Table 124. gim_cancel_uninstall
Parameter Description
clientIP Required - Client IP Address
module Required- Module.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_cancel_uninstall clientIP=192.168.1.100 module=BUNDLE-STAP
gim_remove_bundle
The command deletes bundlePackageName from the database as well as from
the file system (from /var/log/guard/gim_packages, and also
from /var/gim_dist_packages if the Guardium system is a central manager).
parameters (required):
bundlePackageName
The parameter value takes the bundle package name as specified in the output of
gim_list_unused_bundles. The command will be successful only if:
2.4 There is one and only one bundle that refers to the value of
bundlePackageName
ALL the conditions (2.1 to 2.4) must be true in order to delete a bundle from the
database/file system. Otherwise an error will be generated.
Example
grdapi gim_remove_bundle bundlePackageName= bundlePackageName
gim_unassign_client_module
Unassigns a module from a client. Unlike 'gim_remove_module', this command
unties the connection between a module and a specific client on the
CM/Guardium appliance. This command will NOT uninstall or remove the
module on the actual DB-server machine. It is to be used only in cases on
Example
grdapi gim_unassign_client_module clientIP=192.168.1.100 module=STAP
gim_get_purge_list
List old software packages (GIM files) that have previously been uploaded to the
Guardium appliance or CM.
Table 126. gim_get_purge_list
Parameter Description
olderThan Required - Number of days. Files older than the number of days
specified will be purged. Valid value is any number greater or equal to
0.
excludeLatest Optional - true or false (default value is true).
Example
grdapi gim_get_purge_list olderThan=30 excludeLatest=true
gim_purge
Remove old software packages (GIM files) that have previously been uploaded to
the Guardium appliance or CM.
Table 127. gim_purge
Parameter Description
olderThan Required - Number of days. Files older than the number of days
specified will be purged. Valid value is any number greater or equal to
0.
Example
grdapi gim_purge olderThan=30
Note:
GIM purge will not purge files that are currently scheduled for installation.
GIM purge will not allow the removal of any file (for example, via the filename
parameter) that includes the '/' character.
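The '/' restriction in the note above amounts to a simple filename check. This is an illustrative sketch only; the real validation is performed by the appliance.

```shell
# Illustrative check matching the note above: a filename that contains
# the '/' character is never accepted for purge.
purge_filename_ok() {
  case "$1" in
    */*) return 1 ;;   # contains '/': reject
    *)   return 0 ;;   # plain filename: allow
  esac
}
purge_filename_ok "guard_stap.gim"        && echo "ok to purge"
purge_filename_ok "../etc/guard_stap.gim" || echo "rejected"
```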
gim_get_available_modules
Example
grdapi gim_get_available_modules clientIP=192.168.1.100
gim_get_client_last_event
List the latest operation executed for a specific client.
Table 129. gim_get_client_last_event
Parameter Description
clientIP Required - Client IP Address
Example
gim_get_modules_running_status
Example
grdapi gim_get_modules_running_status clientIP=192.168.1.100 process= status=
gim_list_unused_bundles
The command returns a list of unused (not installed on any database server)
bundles.
parameters (required):
includeLatest
If set to 1, the returned list of unused bundles will include the latest unused
bundle.
Example
grdapi gim_list_unused_bundles includeLatest=1
gim_reset_client
Disassociate modules from selected client.
Table 131. gim_reset_client
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi gim_reset_client clientIP=192.168.1.100
Example
grdapi gim_set diagnostics clientIP=192.168.1.100
gim_set_global_param
Example
grdapi gim_set_global_param clientIP=192.168.1.100 paramName=gim_listener_default_port paramValue=844
gim_remote_activation
Connects the collector's IP address to a server mode GIM agent or group of GIM
agents.
Example
grdapi gim_remote_activation targetGroup=<someGroup> sharedSecret=<password> targetPort=8445
Group Functions
create_group
list_group_by_id
list_group_by_desc
delete_group_by_id
delete_group_by_desc
update_group_by_id
update_group_by_desc
flatten_hierarchical_groups
Member Functions
create_member_to_group_by_id
create_member_to_group_by_desc
list_group_members_by_id
delete_member_from_group_by_id
delete_member_from_group_by_desc
create_group
create_group
Application Module
Application System ID
APPLICATION USER
Client Hostname
Client IP
Client OS
COMMANDS
Database Name
DB Error Codes
DB PROTOCOL
DB PROTOCOL VERSION
DB Role
DB User/Object/Privilege
DB Ver./Patches
EXCEPTION TYPE
FIELDS
Files Permissions
Global ID
Guardium Role
Guardium Users
Login Succeeded Code
NET PROTOCOL
Table 135. create_group (continued)
Parameter Description
appID Required. Identifies the application for the group. It must be one of the
following values:
Public
Baseline Builder
Classifier
DB2_zOS groups
Express Security
Policy Builder
subtype Optional. A sub type is used to collect multiple groups of the same
group type, where the membership of each group is exclusive. For
example, assume that you have database servers located in three
datacenters, and that you want to group the servers by location. You
would define a separate group of database servers for each location, and
define all three groups with the same sub type (datacenter, for example).
category Optional. A category is an optional label that is used to group policy
violations and groups for reporting.
classification Optional. A classification is another optional label that is used to group
policy violations and groups for reporting.
owner Required. The owner
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
list_group_by_id
Display the properties of a specific group.
Table 136. list_group_by_id
Parameter Description
id Required (integer). Identifies the group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
list_group_by_desc
Display the properties of a specific group.
Table 137. list_group_by_desc
Parameter Description
desc Required. The name of the group to be displayed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi list_group_by_desc desc=agroup
delete_group_by_id
Table 138. delete_group_by_id
Parameter Description
id Required (integer). Identifies the group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi delete_group_by_id id=100005
delete_group_by_desc
Table 139. delete_group_by_desc
Parameter Description
desc Required. The name of the group to be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi delete_group_by_desc desc=agroup
update_group_by_id
Example
grdapi update_group_by_id id=100002 newDesc=beegroup subtype=bee category=be classification=bea
update_group_by_desc
Example
grdapi update_group_by_desc desc=beegroup newDesc=beegroupee category=bebebe classification=bebebebe
Example
grdapi flatten_hierarchical_groups
create_member_to_group_by_id
Example
grdapi create_member_to_group_by_id id=100005 member=turkey
create_member_to_group_by_desc
Add a member to the named group.
Table 144. create_member_to_group_by_desc
Parameter Description
desc Required. The name of the group to which the member is to be added.
member Required. The new member name, which must be unique within the
group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi create_member_to_group_by_desc desc=bgroup member=turkey
list_group_members_by_id
List the members of the specified group.
Table 145. list_group_members_by_id
Parameter Description
id Required (integer). Identifies the group whose members are to be listed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi list_group_members_by_id id=100001
list_group_members_by_desc
Example
grdapi list_group_members_by_desc desc=bgroup
delete_member_from_group_by_id
Remove a member from a specified group.
Table 147. delete_member_from_group_by_id
Parameter Description
id Required (integer). Identifies the group from which the member is to be
removed.
member Required. The name of the member to be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
delete_member_from_group_by_desc
Remove a member from a specified group.
Table 148. delete_member_from_group_by_desc
Parameter Description
desc Required. The name of the group from which the member is to be
removed.
member Required. The name of the member to be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi delete_member_from_group_by_desc desc=bgroup member=boston
Guard API calls can be generated from reports in one of two ways: from a single
row within a report, or from multiple rows based on the whole report (what is
seen on the screen). See the how-to topic, Generate API Call
From Reports, for an example.
Note: Empty parameters might remain in the script, as the API call
ignores them.
Example Modified Script
# A template script for invoking the Sqlguard API function
# delete_datasource_by_name seven times:
# Usage: ssh cli@a1.corp.com<delete_datasource_by_name_api_call.txt
# replace any < > with the required value
#
set guiuser <username> password <password>
grdapi delete_datasource_by_name name=egret-oracle3
3) Execute the CLI function call
Example Call
$ ssh
cli@a1.corp.com<c:/download/delete_datasource_by_name_api_call.txt
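A command file like the one above can be generated mechanically before feeding it to the CLI over ssh. This is a hedged sketch: the appliance host and credential placeholders come from the document's example, and the datasource names are illustrative only.

```shell
# Sketch: generate a command file with one delete_datasource_by_name
# call per datasource, then feed it to the CLI over ssh.
# Datasource names and the appliance host are illustrative.
outfile=delete_datasource_by_name_api_call.txt
{
  echo 'set guiuser <username> password <password>'
  for ds in egret-oracle3 swanSybase; do
    echo "grdapi delete_datasource_by_name name=${ds}"
  done
} > "$outfile"
# To run it (requires a live appliance, so commented out here):
# ssh cli@a1.corp.com < "$outfile"
```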
Note: If the Guardium report, with a constant added, is exported, the constant
will not be exported.
Note: When GuardAPI parameters are mapped to report attributes, if a report has
more than one attribute that is mapped to the same GuardAPI parameter, the
value picked for the API call is the first of these attributes according to the order of
display in the report.
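The first-attribute-wins rule in the note above can be sketched as picking the first element in display order. The attribute names here are hypothetical, purely to illustrate the rule.

```shell
# Sketch of the documented rule: when several report attributes map to
# the same GuardAPI parameter, the one used for the call is the first
# in the report's display order. Attribute names are hypothetical.
set -- Datasource_Name Alias_Name   # display order of mapped attributes
echo "$1"   # Datasource_Name
```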
Existing Attributes
1. Go to the Query Entities & Attributes report to add the API parameter
mappings. (Guardium Monitor -> Query Entities & Attributes)
2. The Query Entities & Attributes report is long because it lists all the
Guardium attributes. Narrow down the records you are interested in by
using the Customize button.
3. To create the mapping, double-click the attribute row you would like to
assign to a parameter name
4. Click the Invoke... option
5. Select the create_api_parameter_mapping API function
6. Fill in the functionName and parameterName in the API Call Form
7. Click the Invoke now button to create the API to Report Parameter
Mapping
See how-to topic, Using API Calls From Custom Reports, for a full scenario
that maps GuardAPI parameters through the GUI.
Note: When using API mapping, table columns in a report appear in the
report field as long as the table column is an attribute of an entity. Some of
the columns, such as the count column, will not be displayed in the report field
because they cannot be mapped.
This means a user that has the appropriate roles for Policy Builder is able to
execute the GuardAPI command, delete_rule, on any policy, regardless of the roles
of this specific policy.
Role validation exists for the following Policy rules GuardAPI commands:
change_rule_order; copy_rule; copy_rules, delete_rule; update_rule.
Role validation exists for the following Group Description GuardAPI commands:
create_member_to_group_by_desc; create_member_to_group_by_id;
delete_group_by_desc; delete_group_by_id; delete_member_from_group_by_desc;
delete_member_from_group_by_id; update_group_by_id; update_group_by_desc.
Role validation exists for the following Audit Process GuardAPI commands:
stop_audit_process.
A GuardAPI command can be invoked automatically from any report portlet. When
the GuardAPI command is invoked, it creates a new audit process report.
If such process for the user exists, then the parameters are updated and the same
process is used.
1 - If new process, it creates one receiver per email in the list (if any) with a
content type as indicated in the emailContentType parameter. It will also create a
user receiver for the user that is logged in (invoking the API) if the
includeUserReceiver parameter is true.
2 - If existing process, all email receivers are removed and replaced with the
emails from the new list (if any), with the content type as defined in the
emailContentType parameter. If the list is empty, it removes all email address
receivers. If there is already a receiver for the user, it will NOT be removed even
if includeUserReceiver is false; however, if the parameter is true and there is no
such receiver, then it is added.
create_ad_hoc_audit_and_run_once
Parameters:
1 - reportId - The ID of the report to be used for the only task in the audit process
4 - taskParameter - All task parameters and their values, each pair concatenated
with the characters ^^, in the form PAR1=Val1^^PAR2=Val2^^ and so on. It is valid
to leave a parameter empty; for example, if PAR2 should remain empty, the string
looks like: PAR1=VAL1^^PAR2=^^PAR3=VAL3^^...
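A small sketch of assembling the ^^-delimited taskParameter string from an ordered mapping; the helper name build_task_parameter is illustrative, not part of the GuardAPI. Empty values are kept as "PAR=^^" so the parameter is still present in the string.

```python
# Build the taskParameter value for create_ad_hoc_audit_and_run_once from an
# ordered dict of parameter names to values (empty string = leave empty).

def build_task_parameter(params):
    return "".join(f"{name}={value}^^" for name, value in params.items())
```

For example, `build_task_parameter({"PAR1": "VAL1", "PAR2": "", "PAR3": "VAL3"})` yields the `PAR1=VAL1^^PAR2=^^PAR3=VAL3^^` form shown above.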
A GuardAPI command can be invoked automatically from any report portlet. When the
GuardAPI command is invoked, it creates a new audit process report.
Schedule APIs
modify_schedule parameters: jobName, jobGroup, cronString, startTime (optional)
list_schedule
Note: Some job types for the grdapi schedule_job function do not require an object
name. No validation is performed on the object name parameter, and users see the
standard 'OK' prompt when the function is run with anything entered as the
objectName parameter for the following job types: csvExportJob, systemBackupJob,
dataArchiveJob, dataExportJob, dataImportJob, resultsArchiveJob,
AppUserTranslation, IpHostToAlias
grdapi set_purge_batch_size
Sets the batch size that is used during purge. This aids purge performance and has
a default setting of 200,000. Note the trade-off between performance and disk
space usage: a larger batch size increases the speed of the purge but consumes
more disk space, while a smaller batch size decreases the speed of the purge but
does not consume as much disk space.
grdapi get_purge_batch_size
grdapi patch_install
create_computed_attribute
Use in Reports.
Table 149. create_computed_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
expression Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
delete_computed_attribute
Use in Reports.
Table 150. delete_computed_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
expression Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
update_computed_attribute
Use in Reports.
Table 151. update_computed_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
expression Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
delete_constant_attribute
Use in Reports.
Table 153. delete_constant_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
constant Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
update_constant_attribute
Use in Reports.
Table 154. update_constant_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
constant Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
create_ad_hoc_audit_and_run_once
Use in Reports.
Table 155. create_ad_hoc_audit_and_run_once
Parameter Description
changeParIfExist Boolean. Required.
REST API
JSON (JavaScript Object Notation) output option supports GuardAPI functions.
This is part of REST APIs. REST stands for Representational State Transfer. It relies
on a stateless, client/server, cacheable communications protocol, and in virtually
all cases, the HTTP protocol is used. REST is an architecture style for designing
networked applications. The idea is that, rather than using complex mechanisms
such as CORBA, RPC, or SOAP to connect between machines, simple HTTP is
used to make calls between machines. RESTful applications use HTTP requests to
post data (create and/or update), read data (for example, make queries), and
delete data. Thus, REST uses HTTP for all four Create/Read/Update/Delete
operations. REST is a lightweight alternative to mechanisms like RPC (Remote
Procedure Calls) and Web Services (SOAP, WSDL).
Guardium’s Implementation of REST
1. Register Application (only once) and get Client Secret.
2. Store Client Secret in secure place.
3. Request Access Token for authorization.
4. Store Access Token so grdAPI command is authenticated properly.
5. Use Access Tokens to submit GuardAPI commands.
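Steps 3 through 5 follow a standard OAuth2 password grant. The sketch below composes the token request and an authenticated call; the endpoint paths (`/oauth/token`), port, and field names are assumptions based on generic OAuth2 conventions and the datasource examples later in this section, not verified Guardium paths.

```python
# Hedged sketch of the REST authorization flow: build the password-grant token
# request (steps 3-4) and attach the access token to a later call (step 5).
# No network I/O is performed here; the dicts describe the requests to send.

def token_request(host, client_id, client_secret, user, password):
    return {
        "url": f"https://{host}:8443/oauth/token",  # assumed endpoint
        "method": "POST",
        "data": {
            "client_id": client_id,
            "client_secret": client_secret,  # stored securely (step 2)
            "grant_type": "password",        # the only supported grant type
            "username": user,
            "password": password,
        },
    }

def api_request(host, access_token, resource):
    # Step 5: the stored access token authenticates the GuardAPI REST call.
    return {
        "url": f"https://{host}:8443/restAPI/{resource}",
        "headers": {"Authorization": f"Bearer {access_token}"},
    }
```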
Example use cases
v I want the ability to dynamically get a small amount of audit data for a
certain IP address without having to login to the Guardium GUI.
v I want to populate an existing group, so I can update my policy to
prevent unauthorized access to sensitive information.
v I want to get a list of all users within a certain authorized access group.
v I want my application development team to help identify what sensitive
tables to monitor.
v I want to script access to grdAPI’s without using “expect” scripting
language, which requires me to code response text from the target
system.
HTTP has a vocabulary of operations (request methods)
v GET (pass parameters in the URL)
v POST (pass parameters in JSON object)
create_datasource
-X POST https://10.10.9.239:8443/restAPI/datasource
update_datasource_by_name - JSON Object '{password:guardium}'
-X PUT -d '{password:guardium, name:"MSSQL_1"}'
delete_datasource_by_id - JSON Object ’{"id":20020}'
-X DELETE -d ’{"id":20020}'
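The datasource examples above follow the usual REST mapping of Create/Read/Update/Delete onto HTTP methods. As a sketch, the equivalent curl invocations can be composed like this; the helper names and the `-k` flag choice are illustrative, and host and payloads are taken from the examples above.

```python
# Map CRUD operations to HTTP methods and compose a curl argument list for a
# GuardAPI REST resource. This is an illustration, not a Guardium utility.

CRUD_METHODS = {"create": "POST", "read": "GET", "update": "PUT", "delete": "DELETE"}

def curl_command(operation, host, resource, json_body=None):
    method = CRUD_METHODS[operation]
    cmd = ["curl", "-k", "-X", method, f"https://{host}:8443/restAPI/{resource}"]
    if json_body is not None:
        cmd += ["-d", json_body]  # JSON object passed in the request body
    return cmd
```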
register_oauth_client
Use this GuardAPI command to wrap supported GuardAPI functions in a RESTful
API that uses JSON (JavaScript Object Notation) for input and output.
Use the GrdAPI command, grdapi register_oauth_client, to register the client and
obtain the necessary access token to call the REST services.
RESTful applications use HTTP requests to post data (create and/or update), read
data (for example, make queries), and delete data. Thus, REST uses HTTP for all
four Create/Read/Update/Delete operations. REST is a lightweight alternative to
mechanisms like RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL).
function parameters:
grant_types - String - required. The only grant type that is supported is password.
sortColumn - optional - If specified must be the column title of one of the report
fields.
Syntax
getOAuthTokenExpirationTime
Use this GuardAPI command to get the expiration time of the REST API token
function parameters:
api_target_host - String
setOAuthTokenExpirationTime
Use this GuardAPI command to set the expiration time of the REST API token.
function parameters:
api_target_host - String
Syntax
Example
grdapi execute_cls_process processName="classPolicy1"
Runs the specified assessment. It is the equivalent of executing Run Once Now from
the Security Assessment Finder. It submits the job, placing the process on the
Guardium Job Queue, from which the appliance runs a single job at a time.
Administrators can view the job status by selecting Guardium Monitor >
Guardium Job Queue.
Example
grdapi execute_assessment assessmentDesc="assessment1"
Example
grdapi execute_auditProcess auditProcess="Appliance Monitoring"
The stop_audit_process API cannot be used through the GuardAPI command line.
This function can be used only as an invocation through a drill-down. See the
sub-topic, Stop an audit process, in the Compliance Workload Automation help topic.
Table 159. Stop an audit process
Parameter Description
process Name of the audit process
run The RunID of the audit process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
stop_audit_process
Note: This grdapi can be used only for groups that have already been configured
in the Populate Group From Query Set Up screen (a query must have been chosen and
the parameters set).
Table 160. Execute a populate group from query
Parameter Description
groupDesc Group name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi execute_populateGroupFromQuery groupDesc="A test"
Note: To run this grdapi, you must define at least one Application User Detection
in the Application User Translation Configuration screen. Otherwise, an error
message is displayed.
Table 161. Execute an application user translation
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi execute_appUserTranslation
Note: This grdapi can be executed only if Flat Log Process is configured as Process
in the Flat Log Process screen. If not, an error message is displayed.
Table 162. Execute a flat log process
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi execute_flatLogProcess
Executes the query that is defined for the selected incident generation process
against the policy violations log, and generates incidents based on that query. It
is the equivalent of executing Run Once Now from the Edit Incident Generation
Process screen.
Example
grdapi execute_incidentGenProcess processId=20003
Table 164. execute_incidentGenProcess_byDetails
Parameter Description
queryName Query name
categoryName Category Name
user User
threshold Threshold
severity Severity level
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi execute_incidentGenProcess_byDetails queryName="Policy Violation Count" user=admin severi
Example
grdapi upload_custom_data tableName="TEST_TABLE"
Note: LDAP must be configured. Otherwise, the system displays an error
message.
Table 166. Import LDAP users
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi execute_ldap_user_import
Install policy
Install a policy or multiple policies. To install multiple policies, delimit them
with a pipe character '|', listed in the order in which you want them installed.
This is required even if only one policy has changed: in the UI as well, installing
a policy after another installed policy reinstalls all of them, which is the same
behavior as the grdapi policy_install command.
Table 167. Install policy
Parameter Description
policy Policy name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Examples
grdapi policy_install policy="Policy 1|Policy 2"
grdapi policy_install policy="policy 20|policy 30|policy 40"
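Because the policy parameter is a single pipe-delimited string in installation order, scripts that drive policy_install typically assemble it from a list. A minimal sketch (the helper name is illustrative, not part of the GuardAPI):

```python
# Assemble the full grdapi policy_install command from an ordered list of
# policy names; the order of the list is the installation order.

def policy_install_command(policies):
    return 'grdapi policy_install policy="' + "|".join(policies) + '"'
```

For example, `policy_install_command(["Policy 1", "Policy 2"])` reproduces the first example above.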
Delete policy
Example
grdapi delete_policy policyDesc="Hadoop Policy"
List policy
Examples
Note: Copies a rule from <fromPolicy> to the end of the <toPolicy> rule list. Both
<fromPolicy> and <toPolicy> must exist before you run this grdapi.
Table 170. Copy policy rule
Parameter Description
ruleDesc Rule Description
fromPolicy Policy name
toPolicy Policy name
Example
grdapi copy_rule ruleDesc="Rule Description" fromPolicy="policy1" toPolicy=" policy2 "
Clone policy
Example
grdapi clone_policy policyDesc="Hadoop Policy" clonedPolicyDesc="Hadoop Policy cloned1"
See Policies for additional information on the following policy rule parameters that
can be altered with the update_rule API call.
Table 172. Update policy rule
Parameter Description
ruleDesc Rule Description
fromPolicy Policy name
newDesc New Rule Description
clientIP Client IP
clientNetMask Client Net Mask
serverIP Server IP
serverNetMask Server Net Mask
objectName Object Name
sourceProgram Source Program
dbName Database Name
dbUser Database User
command Command
appUserName Application User Name
Example
grdapi update_rule ruleDesc="Rule Description" fromPolicy="policy1" serviceName="ANY"
Example
grdapi change_rule_order ruleDesc="Copy of policy1 exception1" fromPolicy="policy1" order=10
Example
grdapi list_policy_rules policy="policy1"
Example
grdapi delete_rule ruleDesc="Copy (3) of policy1 exception1" fromPolicy="policy1"
Examples
Examples
Example
grdapi delete_Audit_process_result ExecutionDateFrom=, ExecutionDateTo=, ProcessName=abab
Map API parameters to Domain entities and attributes so the parameters can be
populated by report values on API call generation or API automation.
Example
grdapi create_api_parameter_mapping functionName="create_group" parameterName="desc" domain="Group
Example
grdapi delete_api_parameter_mapping functionName="create_group" parameterName="desc" domain="Group Tr
Closes all the events defined on a specific process/task/execution for tasks of
type report. This is especially needed when, for example, a task with a default
event returns a large number of records; such a task cannot be signed unless all
the events are closed.
Table 182. Close all the events defined on a specific process/task/execution
Parameter Description
eventStatus Required. Event status. Must be a valid status for the default event
defined for the audit task and must be a final status.
execDate Required. Execution Date and Time
processDesc Required. Audit process description.
taskDesc Required. Audit task description.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
grdapi close_default_events eventStatus=Done execDate="2010-03-01 08:00:00" processDesc="Audit Proces
create_quarantine_until
Use in Policies.
Table 184. create_quarantine_until
Parameter Description
quarantineUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
delete_quarantine_until
Use in Policies.
Table 185. delete_quarantine_until
Parameter Description
quarantineUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
restart_job_queue_listener
Use the restart_job_queue_listener command to restart the job queue listener if the
job queue fails to start, does not run waiting jobs, or if a job appears stuck in a
running or stopping status for a prolonged period. Issuing this command
immediately restarts the job queue; any currently executing jobs are halted
and restarted.
Example:
grdapi restart_job_queue_listener
update_quarantine_allowed_until
Use in Policies.
Table 187. update_quarantine_allowed_until
Parameter Description
allowedUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
disable_quick_search
grdapi disable_quick_search
For example, the following command enables Quick Search for Enterprise with a
2-minute data extraction interval: grdapi enable_quick_search
schedule_interval=2 schedule_units=MINUTE.
set_enterprise_search_options
For example, the following command configures Quick Search for Enterprise in
all_machines mode to allow searching of data across the entire Guardium
environment from any Guardium machine in that environment: grdapi
set_enterprise_search_options distributed_search=all_machines.
Note: If you create query rewrite definitions by using APIs, you can still use the
UI to retrieve those definitions for testing with the Query Rewrite Builder.
assign_qr_condition_to_action
create_qr_action
create_qr_add_where
create_qr_add_where_by_id
create_qr_condition
create_qr_replace_element
create_qr_replace_element_byId
list_qr_action
list_qr_add_where
list_qr_add_where_by_id
list_qr_condition
list_qr_condition_to_action
list_qr_definitions
list_qr_replace_element
list_qr_replace_element_byId
remove_all_qr_replace_elements
remove_all_qr_replace_elements_byId
remove_qr_action
remove_qr_add_where_by_id
remove_qr_condition
remove_qr_definition
remove_qr_replace_element_byId
update_qr_action
update_qr_add_where_by_id
update_qr_condition
update_qr_definition
update_qr_replace_element_byId
assign_qr_condition_to_action
Parameter Description
actionName Required. The name of the query rewrite action.
conditionName Required. The name of the query rewrite condition to be associated with
the specified action.
Example:
grdapi assign_qr_condition_to_action definitionName="case 15" actionName="qr action15_2" conditionNam
create_qr_action
Parameter Description
actionName Required. The unique name of the query rewrite action.
definitionName Required. The query rewrite definition that is associated with this
action.
description An optional description.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
grdapi create_qr_action definitionName="case 15" actionName="qr action15_3"
create_qr_add_where
Parameter Description
actionName Required. The unique name of the query rewrite action.
definitionName Required. The query rewrite definition that is associated with this
action.
whereText Text to add to a WHERE clause.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
Parameter Description
qrActionId Required (integer). The unique ID of query rewrite action.
whereText Text to add to a WHERE clause.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
create_qr_condition
Parameter Description
conditionName Required. The unique name of this query rewrite condition.
definitionName Required. The query rewrite definition that is associated with this
condition.
depth Integer that specifies the depth of the parsed SQL that this condition
applies to (1 and higher). The default -1 means that the query rewrite
condition applies to any matching SQL at any depth.
isForAllRuleObjects True or false. Use this parameter to associate this condition
with objects in a policy access rule. True indicates that the specified condition
applies to all objects in the access rule's Object field or Object group for a
fired rule. The default is false, which means the query condition is specified
using the objects that are defined in this condition. Neither option impacts any
rule triggering behavior.
isForAllRuleVerbs True or false. Use this parameter to associate this condition
with verbs in a policy access rule. True indicates that the specified condition
applies to all verbs in the access rule's Verb field or Verb group for a
fired rule. The default is false, which means the query condition is
specified using the verbs that are defined in this condition. Neither
option impacts any rule triggering behavior.
isObjectRegex True or false. Indicates that the specified object is specified by using a
regular expression. Default is false.
isVerbRegex True or false. Indicates that the specified verb is specified by using a
regular expression. Default is false.
object An object (table, view). The default “*” means all objects. This can also
be specified as a regular expression, in which case set isObjectRegex
to true.
order Used to specify the order in which to assemble multiple related query
rewrite conditions for complex SQL. Default is 1.
verb A verb (select, insert, update, delete). The default “*” means all verbs.
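The matching semantics implied by the table above (a literal compare by default, a regular-expression match when the corresponding is*Regex flag is true, and "*" meaning all) can be illustrated as follows. This is an illustration only, not Guardium's internal matcher.

```python
import re

# Illustrative matcher for a condition's object or verb field: "*" matches
# everything, is_regex switches between literal and full regex matching.

def matches(pattern, value, is_regex):
    if pattern == "*":
        return True  # default: applies to all objects/verbs
    if is_regex:
        return re.fullmatch(pattern, value) is not None
    return pattern == value  # literal comparison
```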
Example:
create_qr_definition
Create a query rewrite definition.
Parameter Description
dataBaseType Required. The type of database this query rewrite definition is
associated with. Acceptable values are: ORACLE or DB2.
definitionName Required. A unique name for this query rewrite definition condition.
description An optional description.
isNegateQrCond Indicates whether there is a NOT flag on the set of query rewrite
conditions that are associated with this definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
create_qr_replace_element
Parameter Description
actionName Required. The unique name of the query rewrite action this rewrite
function is associated with.
definitionName Required. A unique name for this query rewrite definition condition.
isFromAllRuleElements True or false. Indicates that this action applies to all
FROM elements. Default is false.
isFromRegex True or false. Indicates that the ‘from’ element is specified by using a
regular expression. Default is false.
isReplaceToFunction True or false. Indicates that the "replace to" is the name of
a function, such as a user-defined function.
replaceFrom The incoming string for a matching rule that is to be replaced. Use
replaceType to indicate specifically which element of the incoming
query to examine.
replaceTo The replacement string for the matching element.
Example:
create_qr_replace_element_byId
Create a replacement specification for a specified query rewrite action.
Parameter Description
isFromAllRuleElements True or false. Indicates that this action applies to all
FROM elements. Default is false.
isFromRegex True or false. Indicates that the “from” element is specified by using a
regular expression. Default is false.
isReplaceToFunction True or false. Indicates that the “replace to” is the name of
a function, such as a user-defined function.
qrActionId Required (integer). The unique ID of query rewrite action.
replaceFrom The incoming string for a matching rule that is to be replaced. Use
replaceType to indicate specifically which element of the incoming
query to examine.
replaceTo The replacement string for the matching element.
replaceType Required. Indicates what is to be replaced.
Example:
list_qr_action
Lists query actions for a specified query definition.
Parameter Description
actionName The name of the query rewrite action.
definitionName Required. The query rewrite definition name.
detail True or false. The default is true, which lists all the associated attributes
of the actions. Only the name is returned for false.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_action definitionName="case 2"
#######################################################################
#######################################################################
qr action ID: 1
qr action name: qr action2
qr action description: add where by id
ok
Example:
grdapi list_qr_action definitionName="case 2" detail=false
Output:
list_qr_add_where
Lists “add where” functions for a specified query action and query definition pair.
Parameter Description
actionName The name of the query rewrite action.
definitionName Required. The query rewrite definition name.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
list_qr_add_where_by_id
Lists “add where” functions for a specified query action.
Parameter Description
qrActionId Required (integer). The unique identifier for the query rewrite action.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
list_qr_condition
Lists the query rewrite conditions that are associated with a particular query
rewrite definition.
Parameter Description
conditionName The name of a query rewrite condition.
definitionName Required. A query rewrite definition.
detail True or false. The default is true, which lists all the associated attributes
of the conditions. Only the name is returned for false.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_condition definitionName="case 2" conditionName="qr c
#######################################################################
QR Condtions of Definition ’case 2’ - (id = 1 )
#######################################################################
qr condition id: 1
qr condition name: qr cond2
qr definition ID: 1
qr condition verb: *
qr condition object: *
qr condition dept: -1
is verb regex: false
list_qr_condition_to_action
Lists the associations between a query rewrite condition and a query rewrite action
for a particular query definition.
Parameter Description
actionName Required. The name of the query rewrite action.
definitionName Required. A query rewrite definition.
Detail True or false. The default is true, which lists all the associated attributes
of the conditions for the specified action and definition. Only the name
is returned for false.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_condition_to_action actionName="qr action2" definitionNa
#######################################################################
#######################################################################
qr condition id: 1
qr condition name: qr cond2
qr definition ID: 1
qr condition verb: *
qr condition object: *
qr condition dept: -1
is verb regex: false
is object regex: false
is action for all rule verbs: false
is action for all rule objects: false
qr condition order: 1
list_qr_definitions
Parameter Description
definitionName Required. A query rewrite definition.
Detail True or false. The default is true, which lists all the associated attributes
of the conditions for the specified action and definition. Only the name
is returned for false.
Example:
grdapi list_qr_definitions
Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_definitions
#######################################################################
QR Definitions
#######################################################################
qr definition ID: 1
qr definition name: case 2
qr definition description:
is negation set on qr conditions: false
list_qr_replace_element
Lists replacements for a specified query rewrite action and query rewrite definition
pair.
Parameter Description
actionName Required. A query rewrite action.
definitionName Required. A query rewrite definition.
Detail True or false. The default is true, which lists all the associated attributes
of the replacement elements for the specified action and definition. Only
the names are returned for false.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_replace_element actionName="qr action2" definitionNam
QR replace elements for action ’qr action2’ - (qrActionId = 1 )
#######################################################################
***********************************************************************
qr replace element ID: 2
qr replace type: selectList
qr replace from: Whole select list
qr replace to: EMPNO,SAL
qr is from regex: false
qr is from all rule elements: false
list_qr_replace_element_byId
Parameter Description
detail True or false. The default is true, which lists all the associated attributes
of the replacement elements for the specified action and definition. Only
the names are returned for false.
qrActionId Required (integer). The unique identifier for the query rewrite action.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
remove_all_qr_replace_elements
Deletes query replacement specifications from the system.
Parameter Description
actionName Required. A query rewrite action.
definitionName Required. A query rewrite definition.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
Example:
remove_all_qr_replace_elements_byId
Deletes query replacement specifications from the system.
Parameter Description
qrActionId Required (integer). A query rewrite action identifier.
definitionName Required. A query rewrite definition.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
Example:
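A representative invocation, using the parameters above (values are illustrative):
grdapi remove_all_qr_replace_elements_byId qrActionId=1 definitionName="case 15"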
remove_qr_action
Parameter Description
actionName Required. A query rewrite action.
definitionName Required. A query rewrite definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
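A representative invocation, using the parameters above (values are illustrative):
grdapi remove_qr_action actionName="qr action2" definitionName="case 15"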
remove_qr_add_where_by_id
Parameter Description
qrAddWhereId Required (integer). An “add where” function.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
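A representative invocation, using the parameter above (the ID is illustrative):
grdapi remove_qr_add_where_by_id qrAddWhereId=1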
remove_qr_condition
Parameter Description
conditionName Required. A query rewrite condition.
definitionName Required. A query rewrite definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
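A representative invocation, using the parameters above (values are illustrative):
grdapi remove_qr_condition conditionName="condition 1" definitionName="case 15"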
remove_qr_definition
Parameter Description
definitionName Required. A query rewrite definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
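A representative invocation, using the parameter above (the name is illustrative):
grdapi remove_qr_definition definitionName="case 15"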
Parameter Description
qrReplaceElementId Required (integer). A replacement definition ID.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
grdapi qrReplaceElementId=33333
update_qr_action
Updates an existing query rewrite action with a new name and optional
description.
Parameter Description
actionName Required. The unique name of the query rewrite action.
definitionName Required. The query rewrite definition that is associated with this
action.
description An optional description.
newName The new name for the query rewrite action.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
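A representative invocation, using the parameters above (values are illustrative):
grdapi update_qr_action actionName="qr action2" definitionName="case 15" newName="qr action3"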
update_qr_add_where_by_id
Allows update of an existing “add where” function with new replacement text.
Parameter Description
qrAddWhereId Required (integer). The unique identifier for the query rewrite “add
where” function.
whereText The replacement text for the identified where clause.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
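A representative invocation, using the parameters above (the ID and where clause are illustrative):
grdapi update_qr_add_where_by_id qrAddWhereId=1 whereText="SAL < 50000"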
Parameter Description
conditionName Required. The unique name of this query rewrite condition.
definitionName Required. The query rewrite definition that is associated with this
condition.
depth Integer that specifies the depth of the parsed SQL that this condition
applies to (1 and higher). The default -1 means that the query rewrite
condition applies to any matching SQL at any depth.
isForAllRuleObjects True or false. Indicates that the specified condition applies to all objects
for the fired rule. Default is false.
isForAllRuleVerbs True or false. Indicates that the specified condition applies to all verbs
for the fired rule. Default is false.
isObjectRegex True or false. Indicates that the specified object is specified by using a
regular expression. Default is false.
isVerbRegex True or false. Indicates that the specified verb is specified by using a
regular expression. Default is false.
newName The new name for the query rewrite condition.
Object An object (table or view). The default "*" means all objects. This can
also be specified as a regular expression, in which case set
isObjectRegex to true.
Order Used to specify the order in which to assemble multiple related query
rewrite conditions for complex SQL. Default is 1.
Verb A verb (select, insert, update, delete). The default “*” means all verbs.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
Example:
update_qr_definition
Update an existing query rewrite definition.
Parameter Description
dataBaseType Required. The type of database this query rewrite definition is
associated with. Must be either ORACLE or DB2.
definitionName Required. A unique name for this query rewrite definition condition.
description An optional description.
isNegateQrCond Indicates whether there is a NOT flag on the set of
query rewrite conditions that are associated with this definition.
newName Optional. Specify a new unique name.
sampleSql Optional. Specify a sample SQL statement. In most cases, you do not
use this unless you want to reuse the entered sample SQL later in the UI.
Example:
grdapi update_qr_definition dataBaseType="DB2" definitionName="case 15" sampleSql="select EMPNO fr
update_qr_replace_element_byId
Parameter Description
isFromAllRuleElements True or false. Indicates that the replacement applies to all elements of
the fired rule. Default is false.
isFromRegex True or false. Indicates that the "from" element is specified by using a
regular expression. Default is false.
isReplaceToFunction True or false. Indicates that the "replace to" value is the name of a
function, such as a user-defined function.
qrReplaceElementId Required (integer). The unique ID of the query rewrite replacement
element.
replaceFrom The incoming string for a matching rule that is to be replaced. Use
replaceType to indicate specifically which element of the incoming
query to examine.
replaceTo The replacement string for the matching element.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example:
grdapi update_qr_replace_element_byId qrReplaceElementId=1 isFromAllRuleElements=false isFromRegex
Note: In a Central Management environment, the object to which you want to add
a role may reside on the Central Manager or on a managed unit. See the Overview
of the Aggregation & Central Management help book, for more information.
grant_role_to_object_by_id
1=Query
2=Report
3=Alert
4=Baseline
5=Policy
6=SecurityAssessment
7=PrivacySet
8=AuditProcess
12=CustomTable
13=Datasource
14=CustomDomain
15=ClassifierPolicy
16=ClassificationProcess
objectId Required (integer). Identifies the object to which the role will be
assigned.
roleId Required (integer). Identifies the role to assign. This can be any existing
role ID, or the special value -1, which allows access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
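A representative invocation, using the parameters above (the object ID is illustrative; roleId=-1 allows access by all roles):
grdapi grant_role_to_object_by_id objectId=25 roleId=-1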
grant_role_to_object_by_Name
Add a role to the specified object - a Classification process, for example.
Dependencies are checked before adding the role. For example, before adding a
role to a Classification process, that role must be assigned to all components
contained by that Classification process (the classification policy and any
datasources referenced).
Parameters
Query
Report
Alert
Baseline
Policy
SecurityAssessment
PrivacySet
AuditProcess
CustomTable
Datasource
CustomDomain
ClassifierPolicy
ClassificationProcess
objectName Required. The name of the object (the query or report, for example) to
which the role will be assigned.
role Required. The name of the role to assign. This can be any existing role,
or all_roles to allow access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
list_roles_granted_to_object_by_id
Displays the roles assigned to the specified object - a Classification process, for
example.
1=Query
2=Report
3=Alert
4=Baseline
5=Policy
6=SecurityAssessment
7=PrivacySet
8=AuditProcess
12=CustomTable
13=Datasource
14=CustomDomain
15=ClassifierPolicy
16=ClassificationProcess
objectId Required (integer). Identifies the object whose assigned roles are
displayed.
roleId Required (integer). Identifies the role to assign. This can be any existing
role ID, or the special value -1, which allows access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
list_roles_granted_to_object_by_Name
Displays the roles assigned to the specified object - a Classification process, for
example.
Query
Report
Alert
Baseline
Policy
SecurityAssessment
PrivacySet
AuditProcess
CustomTable
Datasource
CustomDomain
ClassifierPolicy
ClassificationProcess
objectName Required. The name of the object (the query or report, for example)
whose assigned roles are displayed.
role Required. The name of the role to assign. This can be any existing role,
or all_roles to allow access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
revoke_role_from_object_by_id
Removes a role from the specified object - a Classification process, for example.
Dependencies are handled automatically. For example, if the role foo is removed
from a specific query, the role foo will also be removed from any report based on
that query.
1=Query
2=Report
3=Alert
4=Baseline
5=Policy
6=SecurityAssessment
7=PrivacySet
8=AuditProcess
12=CustomTable
13=Datasource
14=CustomDomain
15=ClassifierPolicy
16=ClassificationProcess
objectId Required (integer). Identifies the object from which the role will be
removed.
roleId Required (integer). Identifies the role to remove. This can be any existing
role ID, or the special value -1, which indicates all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
revoke_role_from_object_by_Name
Removes a role from the specified object - a Classification process, for example.
Dependencies are handled automatically. For example, if the role foo is removed
from a specific query, the role foo will also be removed from any report that uses
that query.
Query
Report
Alert
Baseline
Policy
SecurityAssessment
PrivacySet
AuditProcess
CustomTable
Datasource
CustomDomain
ClassifierPolicy
ClassificationProcess
objectName Required. The name of the object (the query or report, for example)
from which the role will be removed.
role Required. The name of the role to remove. This can be any existing role,
or all_roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
create_stap_inspection_engine
Add an inspection engine to the specified S-TAP. S-TAP configurations can be
modified only from the active Guardium host for that S-TAP, and only when the
S-TAP is online.
Table 195. create_stap_inspection_engine
Parameter Description
stapHost Required. The host name or IP address of the database server on which
the S-TAP is installed.
DB2
FTP
Informix
Kerberos
Mysql
Netezza®
Oracle
PostgreSQL
Sybase
Teradata
exclude IE
MSSQL
named pipes
portMin Required (integer). Starting port number of the range of listening ports
that are configured for the database. (Do not use large inclusive ranges,
as this degrades the performance of the S-TAP.)
portMax Required (integer). Ending port number of the range of listening ports
for the database.
teeListenPort Optional (integer). Not used for Windows. Under UNIX, replaced by the
KTAP DB Real Port when the K-TAP monitoring mechanism is used.
teeRealPort Required when the TEE monitoring mechanism is used. The Listen Port
is the port on which the S-TAP listens for and accepts local database
traffic. The Real Port is the port onto which S-TAP forwards traffic.
connectToIp Optional (integer). The IP address for the S-TAP to use to connect to the
database. Some databases accept local connection only on the “real” IP
address of the machine, and not on the default (127.0.0.1).
client Required. A list of Client IP addresses and corresponding masks to
specify which clients to monitor. If the IP address is the same as the IP
address for the database server, and a mask of 255.255.255.255 is used,
only local traffic is monitored. A client address/mask value of
1.1.1.1/0.0.0.0 monitors all clients. (See the example.)
encryption Optional. Activate ASO encrypted traffic where encryption=0 (no) or
encryption=1 (yes).
excludeClient Optional. A list of Client IP addresses and corresponding masks to
specify which clients to exclude. This option enables you to configure
the S-TAP to monitor all clients, except for a certain client or subnet (or
a collection of these options).
/home/oracle10/prod/10.2.0/db_1/bin/oracle
db2SharedMemAdjustment
db2SharedMemClientPosition
db2SharedMemSize
These three parameters are used for a DB2 inspection engine, only
under the following conditions:
v The DB2 server is running under Linux.
v The K-TAP monitoring mechanism is installed.
v Clients connect to DB2 using shared memory.
When these parameters are used, grdapi verifies only that the protocol
is db2; it does not verify that the conditions have been met.
See the DB2 Linux S-TAP Configuration Parameters topic for a detailed
explanation of how to use these parameters.
instanceName Optional (string). Used only for MSSQL or Oracle encrypted traffic.
Either the MSSQL or ORACLE encryption flag must be turned on before
this parameter can be used.
informixVersion Informix Version.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API runs. On a Central Manager (CM), the value
is the host name or IP of any managed units. On a managed unit, it is
the host name or IP address of the CM.
Example
grdapi create_stap_inspection_engine stapHost=192.168.2.118 protocol=Oracle portMin=1521 portMax=1
Note:
Client IP/mask is required for UNIX S-TAP, optional for Windows S-TAP.
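The client address/mask semantics described above (exact match with mask 255.255.255.255, all clients with 1.1.1.1/0.0.0.0) can be sketched as follows. This is an illustrative model of how an address/mask pair selects clients, not Guardium's implementation; the function names are hypothetical.

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc * 256) + Number(octet), 0);
}

// A client matches an address/mask entry when both sides agree on the
// masked bits: (client & mask) === (address & mask).
function matches(clientIp, address, mask) {
  const m = ipToInt(mask);
  return (ipToInt(clientIp) & m) === (ipToInt(address) & m);
}

// Mask 255.255.255.255 matches only the exact address (local traffic only).
console.log(matches('192.168.2.118', '192.168.2.118', '255.255.255.255')); // true
console.log(matches('192.168.2.119', '192.168.2.118', '255.255.255.255')); // false
// Mask 0.0.0.0 makes every client match, regardless of the address.
console.log(matches('10.0.0.7', '1.1.1.1', '0.0.0.0')); // true
```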
list_inspection_engines
Display the properties of all S-TAPs on the specified host, optionally for a specific
database type only.
db2
informix
mssql
mssql-np
oracle
sybase
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
ID=20162
name =ORACLE2
type =ORACLE
connect to IP=127.0.0.1
encrypted = no
client = 127.0.0.1/255.255.255.255
client = 192.168.0.0/255.255.0.0
type =ORACLE
connect to IP=127.0.0.1
encrypted = no
ok
list_staps
Display the database servers from which S-TAPs report to this Guardium system,
optionally listing only the servers that have S-TAPs for which this Guardium
system is the active host (that is, the one to which the S-TAP is sending data and
the one from which the S-TAP configuration can be modified).
Table 197. list_staps
Parameter Description
onlyActive Optional (Boolean). Enter true, or omit this parameter, to list only those
hosts having S-TAPs for which this Guardium system is the active host.
Enter false to list all hosts on which S-TAPs have been configured to use
this Guardium system as either a primary or secondary host.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
Example
ID=0
staps:
ok
Example
grdapi delete_stap_inspection_engine stapHost=192.168.2.118 type=Oracle sequence=1
restart_stap
Example
grdapi restart_stap stapHost=192.168.2.118
function parameters :
stapDebugInterval - required
stapDebugLevel - required
stapDebugOn - required
stapHost - required
api_target_host
store_stap_approval
Use this function to block unauthorized S-TAPs from connecting to the Guardium
system.
If ON, then S-TAPs cannot connect until they are specifically approved.
Note:
Function: store_stap_approval
function parameters :
api_target_host - String
Syntax
CLI command
add_approved_stap_client
Use of this GuardAPI command does not restart the sniffer and does not affect
already connected S-TAPs. This command affects only new S-TAP connections.
Function: add_approved_stap_client
function parameters :
api_target_host - String
Syntax
list_approved_stap_client
Function: list_approved_stap_client
function parameters :
api_target_host - String
Syntax
grdapi list_approved_stap_client
list_stap_verification_results
function parameters:
stapHost - String. The host name or IP address of the database server on which the
S-TAP is installed.
Syntax
delete_approved_stap_client
Use this GuardAPI command to remove an approved S-TAP client.
Use of this GuardAPI command does not restart the sniffer and does not affect
other already connected S-TAPs. This command affects only the specified S-TAP
connections.
function parameters :
api_target_host - String
Syntax
set_ktap_debug
function parameters :
ktapDebugInterval - required
ktapFunctionNames
stapHost - required
api_target_host
display_stap_config
Display all the properties of all S-TAPs on the specified host.
Table 200. display_stap_config
Parameter Description
stapHost Required. The host name or IP address of a database server on which
S-TAPs are installed and configured to report to this Guardium system,
or a comma-separated list of host names or IP addresses. You can also
use these values:
all_active
All S-TAPs that are configured to report to this Guardium
system
all_windows_active
All S-TAPs that are configured to report to this Guardium
system and are running on Windows machines
all_unix_active
All S-TAPs that are configured to report to this Guardium
system and are running on UNIX machines
Examples:
grdapi display_stap_config stapHost=myhost1,myhost2
grdapi display_stap_config stapHost=all_active
Examples:
grdapi update_stap_config stapHost=all_windows_active updateValue=TAP.XXXX
verify_stap_inspection_engine_with_sequence
Use this command to verify the S-TAP inspection engine.
function parameters:
addToSchedule - String - Constant values list; valid values are Yes and No.
stapHost - String - required - The host name or IP address of the database server
on which the S-TAP is installed.
protocol - Required. The database protocol, which must be one of these values:
DB2, DB2 Exit (DB2 version 10), FTP, Informix, Kerberos, Mysql, Netezza, Oracle,
Example:
grdapi verify_stap_inspection_engine_with_sequence stapHost=9.70.144.212
sequence=3
revoke_ignore_stap
Example
grdapi revoke_ignore_stap stapHost=myhost1
set_ztap_logging_config
function parameters :
api_target_host - String
Syntax
grdapi get_ztap_logging_config
When you use the selection tool to define masking actions, it creates scripts that
are run when rule conditions are met. These scripts modify the HTTP messages
that occur with the use of the application. If this process does not give you the
results that you require, you can create your own scripts to manipulate the
contents and properties of the HTTP messages. Designing these scripts requires
that you understand the messages that are exchanged when users interact with the
applications that you want to mask.
To use your custom scripts, identify the conditions for running the scripts, then
create a mask in context action, and add one or more action items that invoke your
custom scripts. In these scripts, you can use the objects and classes that are
described here.
In addition to the objects and classes, the API provides a function that can be used
for debug purposes:
dbgm(...); //prints the supplied arguments to stdout.
For example,
dbgm('this ' + 'is' + ' a debug output'); //prints "this is a debug output"
You can insert values from the current class or object into the output string. For an
example, see the json global object.
The Guardium for Applications JavaScript API defines objects and classes.
html
Note: The only way to get to a specific node in an HTML document is to use an
XPath expression.
Example:
var ns = html.xpath('some xpath expression returning text nodes');
// "ns" is an object of JS class XmlNodeSet (see classes sections for more details)
// providing the node set is not empty we can now mask text node contents according to the
// information stored in the current action
// the following lines mask contents of the first node in the set
if (ns.size > 0)
html.mask(ns[0]);
// the following code masks the ’a1’ attribute of the second node in the set
if (ns.size > 1)
html.mask(ns[1], ’a1’);
xml
Note: The only way to get to a specific node in an XML document is to use an
XPath expression.
json
Example:
json.data = {"p1": "v1", "p2": "v2"}; // this would entirely replace JSON in the message
json.data.p1 = {};
json.data.p2 = null;
json.data.a1 = [1, 2, "aasdf"];
json.data.a1[0] = false; // 1 -> false
json.mask(json.data.a1, 2); // "aasdf" will be masked with "*****" if the parent action
// defines "replace" masking method
dbgm(JSON.stringify(json.data)); // should print:
// {"p1": {}, "p2": null, "a1": [false, 2, "*****"]}
form
Example:
// set value in form field "p1"
form.data["p1"] = "v1";
// mask form field "p2"
form.mask("p2");
// mask all fields in the form
for (var f in form.data)
form.mask(f);
query
A global object representing a parsed URL query part, as it appears in the browser.
Properties
data: QueryData - provides access to the actual URL query data (parsed
name/value list)
Methods
mask(n: String) - mask query value with name "n".
Example:
// set value in query field "p1"
query.data["p1"] = "v1";
// mask query field "p2"
query.mask("p2");
// mask all fields in the query
for (var f in query.data)
query.mask(f);
text
A property of the global object of type String. Assignments to this property directly
modify message body. However, if during the message processing both HTML tree
structure and plain message text are modified, only modifications that are applied
to the HTML tree hold, as the modified tree is serialized back to the message
buffer and replaces its content.
Example:
text = 'this string will replace content in the message buffer';
XmlNodeSet
Instances of this class are created by xpath() methods of html and xml global
objects. These are actually the standard JavaScript Array objects containing
XmlNode objects as their elements. Access to these elements is provided through
the [] operator as it would normally be for JS arrays.
Properties
none
Methods
none
Example:
var ns = html.xpath('some xpath expression'); // ns: Array of XmlNode objects
dbgm('number of nodes in set: ' + ns.length); // print number of nodes in the set "ns"
var node = ns[0]; // node: XmlNode
XmlNode
Instances of this class are also created by xpath() methods of html and xml global
objects.
Properties
v name: String [r] - get node name
v text: String [rw] - get/set inner text for text nodes only
v attributes: XmlAttributeSet [r] - access node attributes
Methods
none
Example:
node.attributes['a1'] = 'attribute one'; // node is of type XmlNode; setting 'a1' attribute value
var a2 = node.attributes['a2']; // getting attribute 'a2' value (of type string)
XmlAttributeSet
Instances of this class are used to access the XmlNode attributes through the
attributes property of XmlNode objects. The class behaves as a regular JS Array. All
the array elements are of type String.
Properties
any property [rw] - get/set the respective attribute value for given
XmlNode object
JsonNode
FormData
QueryData
Provides read/write access to the parsed URL query data represented as a
name/value list.
Properties
any property [rw] - get/set property value, which directly affects
associated native NameValueList object
Methods
none
XmlNodeSet
Instances of this class are created by xpath() methods of html and xml global
objects. These are actually the standard JavaScript Array objects containing
XmlNode objects as their elements. Access to these elements is provided through
the [] operator as it would normally be for JS arrays.
Example:
var ns = html.xpath('some xpath expression'); // ns: Array of XmlNode objects
dbgm('number of nodes in set: ' + ns.length); // print number of nodes in the set "ns"
var node = ns[0]; // node: XmlNode
XmlNode
Instances of this class are also created by xpath() methods of html and xml global
objects.
Properties
v name: String [r] - get node name
v text: String [rw] - get/set inner text for text nodes only
v attributes: XmlAttributeSet [r] - access node attributes
Methods
none
Example:
node.attributes['a1'] = 'attribute one'; // node is of type XmlNode; setting 'a1' attribute value
var a2 = node.attributes['a2']; // getting attribute 'a2' value (of type string)
XmlAttributeSet
Instances of this class are used to access the XmlNode attributes through the
attributes property of XmlNode objects. The class behaves as a regular JS Array. All
the array elements are of type String.
Properties
any property [rw] - get/set the respective attribute value for given
XmlNode object
Methods
none
JsonNode
This is a dummy object, completely transparent to the calling script. It serves as a
bridge between the JS JSON interface and the native JSON parser and allows
manipulation of native JSON objects from within scripts as if they were normal JS
objects.
Properties
any property [rw] - get/set property value of the underlying JS object
Methods
none
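The bridge behavior that JsonNode describes, forwarding plain property reads and writes to an underlying native structure, can be sketched in standalone JavaScript with a Proxy. This is an illustrative sketch of the concept, not Guardium's implementation; makeJsonNode is a hypothetical name.

```javascript
// Wrap a native JSON structure so it can be manipulated like a normal JS object,
// with every read and write forwarded to the underlying data.
function makeJsonNode(native) {
  return new Proxy(native, {
    get(target, prop) {
      const value = target[prop];
      // Wrap nested objects/arrays so deep access also goes through the bridge.
      return (value !== null && typeof value === 'object') ? makeJsonNode(value) : value;
    },
    set(target, prop, value) {
      target[prop] = value; // write through to the underlying structure
      return true;
    }
  });
}

const native = { p1: 'v1', a1: [1, 2, 'aasdf'] };
const node = makeJsonNode(native);
node.p1 = 'masked';   // write-through: changes the native object
node.a1[2] = '*****'; // deep write also reaches the native array
console.log(JSON.stringify(native)); // {"p1":"masked","a1":[1,2,"*****"]}
```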
QueryData
html
There is only one html object as a property of a global JS object.
Properties
none
Methods
v xpath(expression: String) : XmlNodeSet - run XPath query on HTML text
v mask(n: XmlNode[, attribute: String]) - mask the node or its specified
attribute according to the method stored in the current action
Note: The only way to get to a specific node in an HTML document is to use an
XPath expression.
Example:
var ns = html.xpath('some xpath expression returning text nodes');
// "ns" is an object of JS class XmlNodeSet (see classes sections for more details)
// providing the node set is not empty we can now mask text node contents according to the
// information stored in the current action
// the following lines mask contents of the first node in the set
if (ns.size > 0)
html.mask(ns[0]);
// the following code masks the ’a1’ attribute of the second node in the set
if (ns.size > 1)
html.mask(ns[1], ’a1’);
Note: The only way to get to a specific node in an XML document is to use an
XPath expression.
json
Example:
json.data = {"p1": "v1", "p2": "v2"}; // this would entirely replace JSON in the message
json.data.p1 = {};
json.data.p2 = null;
json.data.a1 = [1, 2, "aasdf"];
json.data.a1[0] = false; // 1 -> false
json.mask(json.data.a1, 2); // "aasdf" will be masked with "*****" if the parent action
// defines "replace" masking method
dbgm(JSON.stringify(json.data)); // should print:
// {"p1": {}, "p2": null, "a1": [false, 2, "*****"]}
form
A global object representing parsed form data, typically in POST requests.
Properties
data: FormData - provides access to the actual form data (parsed
name/value list)
Methods
mask(n: String) - mask form value with name "n" according to a method
stored in the current action.
Example:
// set value in form field "p1"
form.data["p1"] = "v1";
// mask form field "p2"
form.mask("p2");
query
A global object representing a parsed URL query part, as it appears in the browser.
Properties
data: QueryData - provides access to the actual URL query data (parsed
name/value list)
Methods
mask(n: String) - mask query value with name "n".
Example:
// set value in query field "p1"
query.data["p1"] = "v1";
// mask query field "p2"
query.mask("p2");
// mask all fields in the query
for (var f in query.data)
query.mask(f);
text
A property of the global object of type String. Assignments to this property directly
modify message body. However, if during the message processing both HTML tree
structure and plain message text are modified, only modifications that are applied
to the HTML tree hold, as the modified tree is serialized back to the message
buffer and replaces its content.
Properties
none
Methods
none
Example:
text = 'this string will replace content in the message buffer';