IBM Guardium 10.0

Contents

Chapter 1. Product overview
  IBM Guardium
  What's new in this release
  Release Notes

Chapter 2. Getting Started
  Components
  Getting Started with the Guardium User Interface
  System View
  Data Activity Monitoring
  Policies and Rules
  Workflows
  Auditing
  Classification
  Key Concepts and Tools
  Queries and Reports
  Access Control
  User Roles
  Groups
  Data Archive and Purge
  Guardium Installation Manager

Chapter 3. Discover
  Datasources
  Creating a datasource definition
  Working with existing datasources
  Reporting on datasources
  Defining a datasource using a service name
  Database Auto-discovery
  Classification
  Classification Process Performance
  Classification Rule Handling
  Working with Classification Processes
  Working with Classification Policies
  Working with Classification Rules
  Working with Classification Rule Actions
  Discover Sensitive Data
  Name and description
  What to discover
  Where to search
  Run discovery and review report
  Audit
  Scheduling
  Regular Expressions

Chapter 4. Protect
  Baselines
  Policies
  Special pattern tests
  Rule actions
  Creating policies
  Installing Policies
  How to install a policy and detail group members
  Rule definition fields
  How to integrate custom rules with Guardium policy
  How to use the appropriate Ignore Action
  Character sets
  Correlation Alerts
  How to signify events through Correlation Alerts
  How to terminate connections via threshold alerts
  File Activity Monitoring
  Installing FAM components
  Viewing file data
  Creating a FAM policy rule
  Creating a decision plan
  FAM configuration with GIM Parameters
  Concepts for Guardium for Applications
  Configure data masking policy
  Guardium for Applications Masking Script JavaScript API
  Incident Management
  How to manage the review of multiple database security incidents
  Query rewrite
  How query rewrite works
  Using query rewrite

Chapter 5. Monitor and Audit
  Building audit processes
  How to create an Audit Workflow
  Open Workflow Process Results
  How to distribute workflow through Guardium groups
  Audit Process To-Do List
  Audit and Report
  External Data Correlation
  Privacy Sets
  Custom Alerting
  Flat Log Process
  Build Expression on Query condition
  Database Entitlement Reports
  How to use Access Maps to show paths between clients and servers
  User Identification
  Identify Users via Application User Translation
  Identify Users via API
  Identify Users via Stored Procedures
  Value Change Auditing
  Create an Audit Database
  Monitored Table Access
  How to use PCI/DSS Accelerator to implement PCI compliance
  Workflow Builder
  How to create Customized Workflows
  How to use Customized Workflows
  Quick Search for Enterprise
  Enabling and disabling Quick Search for Enterprise
  Using Quick Search for Enterprise
  Using the Investigation Dashboard
  Outliers Detection
  Enabling and disabling outliers detection
  Interpreting outliers
  Grouping users and objects for outlier detection
  Excluding events from outlier detection

Chapter 6. Reports
  Report parameters
  Creating dashboards
  Configuring your dashboard
  Using your dashboard
  Viewing a report
  Refreshing reports
  Exporting a report
  Viewing Drill-Down Reports
  Creating a report
  Data Mart
  Audit and Report
  Queries
  Using the Query Builder
  Query Conditions
  Domains, Entities, and Attributes
  Domains
  Custom Domains
  Entities and Attributes
  Database Entitlement Reports
  How to take advantage of over 600 predefined reports
  Predefined Reports
  Predefined admin Reports
  Predefined user Reports
  Predefined Reports Common
  How to build a report and customize parameters
  How to ask questions of the data
  How to create custom reports from stored data
  How to report on dormant tables and columns
  How to Generate API Call from Reports
  How to use Constants within API Calls
  How to use API Calls from Custom Reports
  Optional External Feed
  Mapping an External Feed
  Distributed Report Builder
  How to create a Distributed Report

Chapter 7. Assess and harden
  Introducing Guardium Vulnerability Assessment
  Deploying VA for DB2 for i
  Vulnerability Assessment tests
  Defining a query-based test
  Defining a CAS-based test
  Assessments
  Creating an assessment
  Creating a VA Test Exception
  How to create a security assessment
  Running an assessment
  Viewing assessment results
  VA summary
  Required schema change
  Assessing RACF vulnerabilities
  Configuration Auditing System
  CAS Start-up and Failover
  CAS Templates
  Working with CAS Templates
  CAS Hosts
  CAS Reporting
  CAS Status
  Amazon RDS Discovery

Index


Chapter 1. Product overview
Product and release information for Guardium® Solutions.

IBM Guardium
August 18, 2015

IBM Guardium prevents leaks from databases, data warehouses, and big data environments such as Hadoop; ensures the integrity of information; and automates compliance controls across heterogeneous environments.

It protects structured and unstructured data in databases, big data environments, and file systems against threats and ensures compliance.

It provides a scalable platform that enables continuous monitoring of structured and unstructured data traffic, as well as enforcement of policies for sensitive data access enterprise-wide.

A secure, centralized audit repository combined with an integrated workflow automation platform streamlines compliance validation activities across a wide variety of mandates.

It leverages integration with IT management and other security management solutions to provide comprehensive data protection across the enterprise.

The Guardium products are intended to enable continuous monitoring of heterogeneous database and document-sharing infrastructures, as well as enforcement of your policies for sensitive data access across the enterprise, on a scalable platform. A centralized audit repository designed to maximize security, combined with an integrated compliance workflow automation application, enables the products to streamline compliance validation activities across a wide variety of mandates.

The IBM Security Guardium solution is offered in three versions:

v IBM Security Guardium Database Activity Monitoring (DAM) - monitors and protects data in databases, data warehouses, and big data environments.
v IBM Security Guardium for Applications (GFA) - masks sensitive data in web applications dynamically, without changing the web applications themselves.
v IBM Security Guardium File Activity Monitoring (FAM) - extends monitoring capabilities to file servers.

The IBM Guardium products provide a simple, robust solution for preventing data
leaks from databases and files, helping to ensure the integrity of information in the
data center and automating compliance controls.

Guardium products can help you:


v Automatically locate databases and discover and classify sensitive information
within them;
v Automatically assess database vulnerabilities and configuration flaws;
v Ensure that configurations are locked down after recommended changes are
implemented;
v Enable high visibility at a granular level into database transactions that involve
sensitive data;
v Track activities of end users who access data indirectly through enterprise
applications;
v Monitor and enforce a wide range of policies, including sensitive data access,
database change control, and privileged user actions;
v Create a single, secure centralized audit repository for large numbers of
heterogeneous systems and databases; and
v Automate the entire compliance auditing process, including creating and
distributing reports as well as capturing comments and signatures.

The Guardium solution is designed for ease of use and scalability. It can be
configured for a single database or thousands of heterogeneous databases located
across the enterprise.

This solution is available as preconfigured appliances shipped by IBM® or as software appliances installed on your platform. Optional features can easily be added to your system after installation.

For more information on the Guardium family of products, visit http://www.ibm.com/software/data/guardium/.

What's new in this release


New features, functions, and enhancements in version 10.0:
v Use the product more easily through an intuitive user interface: a new GUI, a new UI framework for reporting and charting, and a new online help system based on IBM Eclipse. In the new GUI, use the banner search, toggled to User Interface, Data, or Files (Quick Search), to quickly navigate the GUI or find data and files.
v Create dashboards to easily organize and review your most commonly used
reports. A dashboard is a user-personalized space in which you can drop reports
and organize reports for easy access.
v Scenario-based task flows to discover and protect sensitive data.
v Easy UI customization - The user interface contains a myriad of items and features, many of which are not necessary for most users. Learn how to customize a user layout based upon role to give a user only the tools they need.
v Quick Search for Enterprise with topology view and Investigation dashboard - Quick Search for Enterprise provides immediate access to your data without requiring detailed knowledge of Guardium topology, aggregation, or load-balancing schemes.
v Service status dashboard and consolidated support and maintenance view - The
Services Status panel is a centralized place to check status of services such as
CAS or alerter, and if necessary, investigate each service further.
v Extend monitoring of sensitive data to include data stored in files as well as in
databases - File Activity Monitoring. You can use Guardium file activity
monitoring to extend monitoring capabilities to file servers. File activity
monitoring is similar to database activity monitoring in many respects. In both
cases, you discover the sensitive data on your servers and configure policies to
create rules about data access and actions to be taken when rules are met.
v Add format-preserving encryption and tokenization to Guardium for
Applications data masking. To mask data, you must create and configure
Guardium data masking policies that specify the data that is to be masked. The
procedure that you use to create data masking policies is similar to the
procedure that you use to create other types of Guardium policies.

v Help you to identify the best collector to manage new S-TAPs - S-TAP Load
Balancer. Load balancing automatically allocates managed units to S-TAP agents
when new S-TAPs are installed and during fail-over when a managed unit is
unavailable. The load balancing application also dynamically re-balances loaded
or busy managed units by relocating S-TAP agents to less-loaded managed units.
v Enhanced instance discovery - no dependence on Java or external libraries
v Optimize the scheduling of jobs by creating dependencies - The Guardium
collector has many tasks such as Policy Installation, Audit Processes, Group
updates, etc. that are scheduled to run periodically. The Job dependencies feature
finds all jobs that have a direct relationship and impact on the success of the
execution of the task you are trying to schedule.
v Protect data by rewriting queries. Query rewrite functionality provides
fine-grained access control for databases by intercepting database queries and
rewriting them based on criteria defined in security policies.
v Assess the vulnerability of new databases (MongoDB, AsterDB, SAP HANA).
v GIM enhancements: automated kernel upload via GIM, automated BUNDLE creation, and an auto-discovery report for GIM listeners.
v Classifier - discovery and classification for unstructured data.

Release Notes
IBM Guardium offers a comprehensive database protection solution for reducing risk, simplifying compliance, and lowering audit costs.

Description
Guardium version 10.0 contains many new and enhanced features touching every
aspect of functionality of the IBM Guardium application.

For new and enhanced features, see http://www-01.ibm.com/support/docview.wss?uid=swg27046252

Announcement

See the IBM Guardium version 10.0 announcement for the following information:
v Detailed product description, including a description of new functionality
v Product-positioning statement
v Packaging and ordering information
v International compatibility information

Compatibility with earlier versions


For information on upgrade options, go to the Identify the correct upgrade
scenario help topic in the Upgrading section of this information center.

System Requirements

For information about hardware and software compatibility, see the version 10.0 System Requirements document at http://www-01.ibm.com/support/docview.wss?uid=swg27045976

Installing Guardium version 10.0
To install Guardium version 10.0, follow the instructions in the Installing section of
this information center.

Known Issues

Known issues are documented in the form of individual Technotes in the Support knowledge base at the Guardium Support portal, http://www.ibm.com/support/entry/portal/Overview/Software/Information_Management/InfoSphere_Guardium.

As problems are discovered and resolved, the Support knowledge base is updated.
By searching the knowledge base, you can quickly find workarounds or solutions
to problems as well as other documents such as downloads and detailed system
requirements.

Support lifecycle

If you are using an older version of Guardium software, plan ahead to give yourself enough time to upgrade. You can find information about end-of-support dates for IBM products at http://www.ibm.com/software/support/lifecycle/.

Chapter 2. Getting Started
Components
Information about the different components found in a Guardium environment.

Datasources
A Guardium datasource identifies a specific database instance. Access to
datasources may be restricted based on the roles assigned to the datasource and to
the applications that use it. For example, the Value Change Auditing application
requires a high level of administrative access that would not be appropriate for
other less privileged applications.

S-TAP

The Guardium S-TAP is a lightweight software agent installed on a database server. The S-TAP monitors traffic to and from datasources and forwards information about that traffic to a Guardium system. S-TAP configuration allows control over which traffic is forwarded to a Guardium system and which is ignored. The information collected by an S-TAP represents the basis of all Guardium reports, alerts, visualizations, etc.

Collection and Aggregation

Guardium collectors gather database activity, analyze it in real time, and log it for further analysis and use in alerting. Guardium aggregators collect and merge information from multiple Guardium collectors, as well as from other aggregators, and produce holistic views of an entire environment. Collection and aggregation processes allow Guardium to easily generate enterprise-level reports.

In a large enterprise environment, for example, several Guardium systems may be used for monitoring different geographic locations or business units. In this scenario, it may be useful to collect data from all Guardium systems into a single location in order to view database usage across all geographies or business units. This can be accomplished by exporting data from multiple collectors to a single aggregator. Reports, assessments, and audit processes run from this aggregator would then reflect data collected from across the environment.

Central Management
A central management system controls and monitors an entire Guardium environment, including all collectors and aggregators, from a single console. In this configuration, one Guardium system is designated as a central manager that monitors and controls other Guardium units referred to as managed units. While some applications (Audit Processes, Queries, Portlets, etc.) can be run from either a managed unit or from the central manager, application definitions are stored on the central manager while data is provided by the local machine.

Central management allows Guardium to support hierarchical aggregation where multiple aggregators merge their data repositories to a central manager. This is useful for multi-level views. For example, with different Guardium aggregators assigned to different geographic locations, a central management unit can merge the contents of all aggregators into a single global view spanning all geographies.

Getting Started with the Guardium User Interface


Learn the basics of the Guardium user interface, including logging in for the first
time, banner and navigation menus, and the user interface and data search.

Navigation

When you first log in to the Guardium user interface, there are two main menus -
the banner and the navigation menu.

You can expand and collapse the navigation menu by clicking the chevron icon, or you can hide the navigation menu completely by clicking the show/hide icon.

The initial layout of your screen is determined by the license applied, the access
allowed based on roles, the machine type and a visibility factor. Examples of roles
are user, admin, access manager, and CLI. Roles are assigned to users and
applications to grant users specific access privileges.

Supported web browsers

Internet Explorer 9 (IE9) and later on Windows 7. Make sure that your company website is not listed in the Compatibility View settings of Internet Explorer.

Firefox ESR 24 and later

Chrome 28 and later

Minimum screen resolution: 1366 x 768

Banner Menu

The banner contains the following items:

System time clock
    The universal time on your Guardium system.
To-Do List
    Contains the Audit Process To-Do List, which can be filtered by user, and the Processes With No Pending Results.
Help
    Open the product help by clicking Help > Guardium Help. Get information about your Guardium system, such as the version number, by clicking Help > About Guardium. For help content specific to a screen or feature that you're working with, click the small help icon that is embedded in the screen's pane.
    Note: Both help icons take you to the same Information Center, where you can search and access all help content.
User interface / data / file search
    Search for a part of the user interface, a piece of data, or a file. For example, if you want to find the Policy Builder, toggle the search to User Interface and start typing policy builder. Click any of the results to go to that part of the user interface.
Account type
    Indicates what type of account you have. Edit your account details, such as your password or name, customize the UI layout, and sign out of Guardium securely.
Machine type
    Indicates what type of machine you are on, such as stand-alone, managed unit, central manager, or aggregator.

The banner menu also contains important startup messages such as Low RAM
memory, Quick Search memory and CPU 4-cores minimum requirement, Certificate
expiration, Central Management failure, SSLv3 enabled or disabled, and No
License.

Note: Guardium recommends that SSLv3 be disabled. However, when dealing with older Guardium versions that do not have the latest release installed, if SSLv3 is disabled, Central Management functionality is impaired between the Central Manager and the managed units.

Navigation Menu

Each icon in the navigation menu represents one phase of the Guardium security lifecycle. Click any icon to expand it and see the components within that phase. The lifecycle-centric navigation menu is one way to navigate the user interface and is consistent across roles. Menu items may be customized and may or may not appear based on your role.

Setup
    Configure your network settings, check the status of your services, and set up datasource definitions, groups, aliases, and alerts.
Manage
    Manage your environment's overall health, S-TAPs, data, modules, maintenance, and reports.
Discover
    Automatically discover new databases that are introduced to your environment, and find and classify sensitive data.
Harden
    Assess your environment's current weaknesses with Vulnerability Assessment and monitor changes made to your environment with the Configuration Auditing System (CAS).
Investigate
    Monitor database activities and investigate suspicious activity in any part of your environment.
Protect
    Protect your environment with data security policies that block suspicious activity and prevent unauthorized access to data. For more information about policies, see Policies.
Comply
    Reach compliance initiatives with audit processes and granular reporting.
Reports
    Create your own report or use one of many predefined reports to report on any part of your environment. For more information about reports, see Chapter 6, "Reports."
My Dashboards
    Create your own dashboards to easily review reports that are of primary interest to you. For more information about dashboards, see "Creating dashboards."

Commonly Found Icons

Many of the finder and builder applications in Guardium share this set of icons:

New
    Create a new item, such as a group or datasource definition.
Modify
    Modify an item.
    Note: When modifying items, the best practice is to clone the item, and then modify the clone.
Clone
    Clone an item to create a copy of the item.
Delete
    Delete an item.

System View
The System View is the default initial view for many users. It enables you to see
key elements of system status.

Three tabs under the System View display different types of status information:
v The S-TAP Status Monitor displays summary data about S-TAPs that are
deployed in your environment. Icons represent the high-level status, and you
can drill down to view information about inspection engines.
v The Unit Utilization tab displays information about the usage of each Guardium
system.
v The System Monitor tab displays up-to-date details about incoming data, CPU
usage, and other information.

Data Activity Monitoring
Information about key security concepts used in Guardium.

Policies and Rules


A security policy contains an ordered set of rules to be applied to the observed
traffic between database clients and servers. Each rule can apply to a request from
a client, or to a response from a server. Multiple policies can be defined and
multiple policies can be installed on a Guardium system at the same time.

Each rule in a policy defines a conditional action. The condition can be a simple
test, for example a check for any access from a client IP address not found in an
Authorized Client IPs group, or the condition can be a complex test that evaluates
multiple message and session attributes such as database user, source program,
command type, time of day, etc. Rules can also be sensitive to the number of times
a condition is met within a specified timeframe.

The action triggered by the rule can be a notification action (e-mail to one or more
recipients, for example), a blocking action (the client session might be
disconnected), or the event might simply be logged as a policy violation. Custom
actions can be developed to perform any tasks necessary for conditions that may
be unique to a given environment or application.
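
To make the rule model concrete, the following minimal Python sketch is an illustration only; it is not Guardium code or policy syntax, and the attribute names (client_ip, db_user, command) and IP values are assumptions invented for the example. It shows a rule as a condition, evaluated against session and message attributes, paired with an action that runs when the condition is met, with rules evaluated in order.

# Conceptual sketch only; not Guardium's implementation or policy syntax.
AUTHORIZED_CLIENT_IPS = {"10.10.1.15", "10.10.1.16"}   # stands in for an Authorized Client IPs group

def unauthorized_client(request):
    # Condition: access from a client IP that is not in the authorized group
    return request["client_ip"] not in AUTHORIZED_CLIENT_IPS

def log_violation(request):
    # Action: here just a log line; in Guardium this could be a notification, a blocking action, or logging a violation
    print("policy violation:", request)

rules = [(unauthorized_client, log_violation)]   # an ordered set of (condition, action) rules

def apply_policy(request):
    for condition, action in rules:
        if condition(request):
            action(request)

apply_policy({"client_ip": "192.0.2.7", "db_user": "joe", "command": "SELECT"})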

Workflows
Workflows consolidate several database activity monitoring tasks, including asset
discovery, vulnerability assessment and hardening, database activity monitoring
and audit reporting, report distribution, sign-off by key stakeholders, and
escalations.

Workflows are intended to transform database security management from a time-consuming manual activity performed periodically to a continuously automated process that supports company privacy and governance requirements, such as PCI-DSS, SOX, Data Privacy, and HIPAA. In addition, workflows support the exporting of audit results to external repositories for additional forensic analysis via Syslog, CSV/CEF files, and external feeds.

For example, a compliance workflow automation process might address the following questions: What type of report, assessment, audit trail, or classification is needed? Who should receive this information, and how are sign-offs handled? What is the schedule for delivery?

Auditing
Guardium provides value change auditing features for tracking changes to values
in database tables.

For each table in which changes are to be tracked, you can select which SQL value-change commands to monitor (insert, update, delete). Before and after values are captured each time a value-change command is executed against a monitored table. This change activity is uploaded to Guardium on a scheduled basis, after which all of Guardium's reporting and alerting functions can be used.

You can view value-change data from the default Values Changed report, or you
can create custom reports using the Value Change Tracking domain.

Classification
Guardium supports the discovery and classification of sensitive data to allow the
creation and enforcement of effective access policies.

A classification policy is a set of rules designed to discover and tag sensitive data
elements. Actions can be defined for each rule in a classification policy, for
example to generate an email alert or to add a member to a Guardium group, and
classification policies can be scheduled to run against specified datasources or as
tasks in a workflow.

Discovery and classification routines become important as the size of an organization grows and sensitive information like credit card numbers or personal financial data becomes present in multiple locations, often without the knowledge of the current administrators responsible for that data. This frequently happens in the context of mergers and acquisitions, or when legacy systems have outlasted their original owners. Guardium classification discovers and tags this sensitive data so appropriate access policies can be applied.

Key Concepts and Tools


Information about key concepts pertaining to Guardium administration.

Queries and Reports


Guardium queries describe a set of information to be obtained from the collected
data. Reports define how the data identified by a Guardium query is presented.

Guardium queries describe a set of information obtained from the collected data. Queries are composed of three elements: entities, fields, and conditions. Entities define the scope of a query, fields list the columns of data to be returned by the query, and conditions define tests to match against the data (greater than, less than, contains, etc.).

A report defines how the data collected by a query is presented. The default report
is a tabular report that reflects the structure of the query, with each attribute
displayed in a separate column. All runtime parameters and presentation
components of a tabular report can be customized.
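
As a rough illustration only (Guardium queries are defined in the Query Builder, not written as code, and the entity and field names here are assumptions for the example), the three query elements map naturally onto the parts of a SELECT statement:

# Conceptual sketch only; not the Guardium query format.
query = {
    "entity": "Client/Server",                           # defines the scope of the query
    "fields": ["Client IP", "DB User Name"],              # columns of data returned by the query
    "conditions": [("DB User Name", "LIKE", "APP%")],     # tests matched against the data
}

def describe(q):
    # Render the query as a SELECT-like description of the data it identifies
    where = " AND ".join("{} {} '{}'".format(a, op, v) for a, op, v in q["conditions"])
    return "SELECT {} FROM {} WHERE {}".format(", ".join(q["fields"]), q["entity"], where)

print(describe(query))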

Access Control
Guardium provides access maps as a way to conveniently show data access
between database clients and database servers.

Data access by applications and tools can be categorized according to many dimensions, including what data is being accessed, how it is being accessed, or how many SQL calls are being made. In an enterprise environment, it is very important to get a good handle on database access. This requirement can stem from the need to understand and secure access to the database because of compliance initiatives, and even from the need to tune and optimize your database environment. Because there can be many databases and a very large number of database clients in enterprise environments, getting a handle on the data access paths can be difficult.

Access maps provide a convenient way to create a mapping of data access, revealing access paths between database clients and database servers. This view is displayed as a visual map that shows all access paths derived from a set of criteria that you define. Criteria can be set based on any combination of attributes, including server type or location on the network (IPs and subnets). In addition, you can group access patterns together, since one of the main problems in reviewing access data is its detailed granularity. By grouping similar access paths, you get a visual map that can be meaningful in understanding your access environment.

User Roles
A role defines a group of Guardium users who share the same access privileges.

When a role is assigned to an application or to the definition of an item (a specific query, for example), only those Guardium users who are also assigned that role can access that component. If no security roles are assigned to a component (a report, for example), only the user who defined that component and the admin user can access it.

At installation time, Guardium is configured with a default set of roles and a default set of user accounts. The Guardium access manager can create new roles and modify existing roles as needed.

Groups
Guardium supports the grouping of elements to simplify creating and managing
policies and to clarify the presentation of reports.

Grouping can simplify the process of creating policy and query definitions. It is
often useful to group elements of the same type, and grouping can make the
presentation of information on reports more straightforward. Groups are used by
all subsystems, and all users share a single set of groups.

For an example of grouping, assume that your company has 25 separate data
objects containing sensitive employee information, and you need to report on all
access to these items. You could formulate a very long query testing for each of the
25 items. Alternatively, you could define a single group called sensitive employee
info containing those 25 objects. That way, in queries or policy rule definitions,
you only need to test if an object is a member of that group.

An additional benefit of groups is that they can ease maintenance requirements when the group's composition changes. To continue the example, if your company decides that two more objects need to be added to the sensitive employee info group, you only need to update the group definition, and not all of the queries, reports, and policies that reference the group.
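
A minimal sketch of the idea, in Python (illustrative only; the object names are invented for the example, and in Guardium the membership test is a group condition in a query or policy rule, not code):

# Conceptual sketch only: one group-membership test replaces 25 individual object tests,
# and adding objects to the group requires no change to the test itself.
SENSITIVE_EMPLOYEE_INFO = {"EMP_SALARY", "EMP_SSN", "EMP_BANK_ACCT"}  # ...and so on, up to 25 objects

def is_sensitive(object_name):
    return object_name in SENSITIVE_EMPLOYEE_INFO

print(is_sensitive("EMP_SSN"))      # True
print(is_sensitive("DEPT_LOOKUP"))  # False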

Data Archive and Purge


Data Archive backs up data that has been captured by a Guardium system. When
configuring Data Archive, data purge criteria may also be specified.

There are two archive operations available from the Guardium Administration
Console: Data Archive and Results Archive. The path to these archive operations is
Setup > Tools and Views > Data Management.
Data Archive
With Data Archive, data is typically archived at the end of the day on which it is captured, which ensures that in the event of a catastrophe, only that day's data is lost. The purging of data depends on the application and on business and auditing requirements, but in most cases data can be kept on the machines for more than six months.
Results Archive
Results Archive backs up audit task results (e.g. reports, assessment tests,
entity audit trail, privacy sets, and classification processes) as well as the
view and sign-off trails and the accommodated comments from workflow
processes. Results sets are purged from the system according to the
workflow process definition.

In an aggregation environment, data can be archived from the collector, from the
aggregator, or from both locations. Most commonly, the data is archived only once,
and the location from where it is archived varies depending on the customer's
requirements.

Guardium Installation Manager


The Guardium Installation Manager (GIM) is used to install and maintain
Guardium components on managed systems.

The GIM component includes a GIM server, which is installed as part of the Guardium system, and a GIM client, which must be installed on servers that host databases you want to monitor. After the GIM client is installed, it works with the GIM server to perform the following tasks:
v Check for updates to installed software
v Transfer and install new software
v Uninstall software
v Update software parameters

If your environment includes a Guardium system configured as a Central Manager, you must decide which Guardium systems you want to use as GIM servers. You can either manage all of your GIM clients from a single Guardium system, such as the central manager, or you can manage them in groups from different Guardium systems. If you manage all of your GIM clients from a single Guardium system, then you can view the status of all the GIM clients and perform related tasks from a single interface. If you choose to manage your GIM clients in groups from separate Guardium systems, then you can use each system to work with the GIM clients that it manages, but no overall or environment-wide view is available.

Chapter 3. Discover
Discovery refers to processes of locating and identifying objects in your
environment that must be tracked for security and compliance purposes.

Discovery is the process of finding important objects such as privileged users, sensitive data, and datasources. Classification is the process of appropriately identifying what is discovered for security and compliance purposes. These processes of discovery and classification are important in large organizations where mergers, acquisitions, and legacy systems introduce new objects to your environment in unstructured or unpredictable ways. Guardium helps you incorporate these objects into your environment so you can enforce effective security policies and ensure compliance.

A common scenario involves the discovery of sensitive data. Sensitive data refers
to regulated information like credit card numbers, personal financial data, social
security numbers, and other information that requires special handling. Guardium
supports two different approaches for discovering sensitive data: by using the
Discover Sensitive Data workflow builder, or by using the Policy Builder with
other Guardium tools. The Discover Sensitive Data workflow builder is intended
as an all-inclusive tool for establishing discovery and classification processes for
sensitive data. Use it to specify rules for discovery, define actions to take on
discovered data, specify which data sources to scan, distribute reports, and run the
workflow on an automated schedule. For more advanced users, the Policy Builder
supports more granular discovery and classification rules that can be easily
incorporated into existing processes and Guardium applications.

Datasources
Datasources store information about your database or repository such as the type
of database, the location of the repository, or credentials that might be associated
with it. You must define a datasource in order to use it with Guardium
applications.

Creating a datasource definition


Use the Datasource Builder to create datasource definitions for use with Guardium
applications.

About this task


You can create a datasource definition through two general processes. First, you
can add a datasource definition from the Datasource Builder and then specify the
application for which you want to use the datasource. Second, you can go into the
application you want to use, and create a datasource within the application. The
navigation for adding a datasource definition within a specific application varies
depending on the application you choose or the type of database selected. For
example, if you want to create an audit database, navigate to Harden >
Configuration Change Control (CAS Application) > Value Change Audit
Database Creation and click Add Datasource.

Procedure
v Open the Datasource Builder by navigating to Setup > Datasource Definitions.

v The first screen in the Datasource Builder is the Application Selection menu,
which lists all applications with which you can use the datasource definition.
Choose an application, and click Next.
v The Datasource Finder shows existing datasource definitions created for the
application you selected. Click New to add a datasource definition for the
selected application.
v Use the Datasource Definition dialog to provide information about the
datasource to be stored for future use. Depending on the application that you
select, and the type of datasource you use, the dialog varies slightly.
1. Enter a unique name for the datasource. Include both the database type and
server name in the datasource name to prevent future confusion between
datasources.
2. From the Database Type menu, select the database or type of file. For some
applications, the datasource must be a database, and cannot be a text file.
Depending on the type of database you select, some fields on the panel are
disabled, or the labels change.
3. Select a Severity Classification (or impact level) for the datasource. Severity
classification can be used to sort, filter, or focus datasources while you are
viewing reports and results.
4. Select Share Datasource to share the datasource definition across all
applications. If you do not share the datasource, the definition you create
can be used only with the application you chose.
5. Select Save Password to save and encrypt your authentication credentials
on the Guardium appliance. Save password is required if you are defining a
datasource with an application that runs as a scheduled task (as opposed to
on demand). When save password is selected, login name and password are
required.
6. Enter your credentials for Login Name and Password.
7. For the Host Name/IP field, enter the host name or IP address for the
datasource.
8. Use the table to complete Port based on your datasource type.
Datasource type and port number table

Aster Data - 2046
DB2 - 50000
DB2 for i - 446
DB2 for z/OS - 446
Hadoop - 21000-21050
Informix - 1526
MS SQL Server (Dynamic ports) and MS SQL Server (DataDirect - Dynamic ports) - The port number is grayed out. Use of this datasource allows a client without a defined port value, or where the dynamic function is enabled on the MS SQL Server database server, to connect dynamically to an MS SQL Server database. To define a dynamic port, go to the database server for MS SQL Server, define 0 for the Dynamic port type, and remove TCP/IP, which by default is port 1433. Setting the Dynamic port value to 0 and restarting the services sets a dynamic port.
MS SQL Server (DataDirect) - 1433
MongoDB - 27017
MySQL - 3306
Netezza - 5480
Oracle (DataDirect) - 1521
PostgreSQL - 5432
SAP Hana - 30015-30017
Sybase - 4100
Sybase IQ - 2638
Teradata - 1025
Text - 0
Text:HTTP - 8000
Text:FTP - 21
Text:SAMBA - 445
Text:HTTPS - 8443
N_A - 0
MS SQL Server (open source) - 1433 (use Harden > Vulnerability Assessment > Customer Uploads to upload these JDBC drivers; see Subscribed Groups Upload)
Oracle (open source) - 1521 (use Harden > Vulnerability Assessment > Customer Uploads to upload these JDBC drivers; see Subscribed Groups Upload)
HIVE, HiveServer2 - 10000
HADOOP, Hive CLI (deprecated) - 9083
HIVE, for Impala from Hue - 21050
HADOOP, Impala shell - 21000
HUE, Oracle Hue backend - 1521
HUE, MySQL Hue backend - 3306
HUE, PostgreSQL Hue backend - 5432
WEBHDFS - 50070

9. Depending on the datasource type, the dialog varies slightly for the fields
after port.
– If DB2, enter the database name.

– If DB2 iSeries or Oracle, enter the service name.
– If Informix, enter the Informix server name.
– For a non-text Database Type, in the Database box, enter the database
name (Informix, Sybase, MS SQL Server, PostgreSQL, or Teradata only).
If it is left blank for Sybase or MS SQL Server, the database defaults to master. (This works for Entitlement Reports and Classifier; for VA, use the database instance name.)
– For DB2, DB2 iSeries, or Oracle enter a valid schema name in the Schema
box to use.
– For a text file Database Type, in the File Name box, enter the file name.
10. Use the Connection Property box only if additional connection properties
must be included on the JDBC URL to establish a JDBC connection with
this datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.
– For a Sybase database with a default character set of Roman8, enter the
following property: charSet=utf8.
– For an Oracle Encrypted Connection you need to define a Connection
Property as:
oracle.net.encryption_client=REQUIRED;oracle.net.encryption_types_client=RC4_40
(Replace the encryption type with whichever encryption algorithm the monitored instance requires.)
– NOTE that 3DES168 encryption is problematic. A datasource defined to
use 3DES168 encryption will incorrectly throw an ORA-17401 protocol
error or ORA-17002 checksum error when it encounters any SQL error.
Thereafter, the connection simply won't work until it is closed and
reopened.
– For a DB2 Encrypted Connection you need to define a Connection
Property as: securityMechanism=13
– For a DB2 iSeries Connection, define a Connection Property as:
property1=com.ibm.as400.access.AS400JDBCDriver;translate binary=true
– In Oracle, sys is an Oracle default user, is owner of the database instance,
and has super user privileges, much like root on Unix. SYSDBA is a role
and has administrative privileges that are required to perform many
high-level administrative operations such as starting and stopping the
database as well as performing such operations as backup and recovery.
This role (SYSDBA) can also be granted to other users. The phrase sys
as SYSDBA refers to the connection method required to connect as the sys
user.
– For monitor values for Oracle 10 (sys as SYSDBA) (this is for the Oracle
open source driver), enter the following: internal_logon=sysdba
– For DataDirect (Oracle driver), enter the following: SysLoginRole=sysdba
– In addition, if using CRYPTO_CHECKSUM_TYPES in your sqlnet.ora,
use the following examples:

oracle.net.encryption_client=aes256;oracle.net.crypto_checksum_types_client=SHA1

oracle.net.encryption_client=rc4_256;oracle.net.crypto_checksum_types_client=MD5

oracle.net.encryption_client=aes256;oracle.net.crypto_checksum_types_client=MD5


oracle.net.encryption_client=rc4_256;oracle.net.crypto_checksum_types_client=SHA1
– Example: Use authentication to Oracle LDAP, which is known as OID. The values needed are the LDAP server host or IP, the LDAP server port, the Oracle instance name, and the realm. The custom URL must be entered correctly, for example: jdbc:guardium:oracle:@ldap://wi3ku2x32t4:389/on0maver,cn=OracleContext,dc=vguardium,dc=com
11. Enter a Custom URL (optional) connection string to the datasource; otherwise, the connection is made by using the host, port, instance, properties, and other values from the previously entered fields. When filling in the Custom URL field with the Oracle open-source driver format, use: jdbc:guardium:oracle://;SID=<SID>
12. Enter CAS information
a. Because vendors offer flexibility during installation, ask the database owners or administrators to help determine the two fields that are required on the datasource definition.
CAS needs two pieces of information: a database instance account to run
some of the database tools on Unix, and the name of the database
instance directory in order to find the files it is to monitor. Generally, if
the Database Instance Account and Directory are not correctly entered in
the Datasource Definition, you will see No CAS data available messages
for tests where CAS could not find data.
Enter a Database Instance Account (software owner) and a Database
Instance Directory (directory where database software was installed)
that will be used by CAS.
The following are suggestions for how to find the information that is needed to fill in the CAS fields for datasources; the details can vary from one installation to another. One approach on Unix is to list the /etc/passwd file and look for entries specific to the database installation, which can help identify the database instance account and instance directory. Sometimes during installation, an environment variable that identifies the instance directory, such as ORACLE_HOME, is defined in the database instance account. In this case, enter $ORACLE_HOME in the Database Instance Directory field of the datasource definition form, and the variable is expanded to find the correct directory name on the database server.

Note: To search multiple directories, you can define multiple file paths
for Database Instance Directory. Refer to the MongoDB row for an
example.
Table 1. Database Instances

Db2
    Database Instance Account: often db2inst1.
    Database Instance Directory / Additional Hints: the home directory of db2inst1, or C:\Program Files\IBM\SQLLIB on Windows. The program db2cmd.exe must be on the system path, or in the bin subdirectory of the Database Instance Directory.
Informix
    Database Instance Account: often informix.
    Database Instance Directory / Additional Hints: something like /opt/IBM/informix on Unix, or C:\Program Files\IBM\Informix on Windows. An environment variable INFORMIXDIR may be defined. The program <servicename>.cmd must be on the system path, where <servicename> is the value entered in the Informix Server field of the Datasource Definition.
MongoDB
    Database Instance Account: often mongodb or mongos.
    Database Instance Directory / Additional Hints: with MongoDB, you must specify multiple paths for the database instance directory. Indicate a separate path by using a pipe "|" with spaces. For example: /var/lib/mongo | MongoBinary=/usr/bin | dbpath=/var/lib/mongo | logpath=/var/log/mongodb | keytab=/home/keytab | dbdumppath=/opt/backup | sslpath=/etc/ssl | keyfile=/home/mongod/mongo_server.keyfile.
    The /var/lib/mongo path is required, as it is the home path for the mongo user. MongoBinary=/usr/bin is the path to the mongo binary; you must specify the variable (which is case sensitive), followed by an equal sign and the path. dbpath=/var/lib/mongo is the path to the data files (in this case, the same as the MongoDB home directory). logpath=/var/log/mongodb is the path to the MongoDB log. keytab=/home/keytab is the directory of the MongoDB keytab file. dbdumppath=/opt/backup is the directory of the MongoDB backup dump. sslpath=/etc/ssl is the path to the MongoDB SSL files. keyfile=/home/mongod/mongo_server.keyfile points to the MongoDB keyfile. You do not need to define all the listed paths; paths that are not defined are not analyzed.
Oracle
    Database Instance Account: often oracle, or version specific such as oracle9 or oracle10.
    Database Instance Directory / Additional Hints: for example, /home/oracle9 on Unix, or C:\oracle\product\10.2.0\db_1 on Windows. An environment variable ORACLE_HOME may be defined. On Windows, the environment variables PERL5LIB and ORACLE_HOME must be defined, and the program opatch.bat must be on the system path.
SQL Server
    Database Instance Account: not needed unless Windows Authentication is being used. In that case, it must be in the form acceptable to Windows Authentication, DOMAIN/Username.
    Database Instance Directory / Additional Hints: there are two scenarios when populating the Database Instance Directory for CAS usage with SQL Server.
    If the datasource is being used for Vulnerability Assessment tests, then this field must be populated with the database instance home directory. Examples:
        MSSQL2000, named instance on a 64-bit server: C:\Program Files (x86)\Microsoft SQL Server\MSSQL$MSSQL2000
        MSSQL2000, default instance on a 32-bit server: C:\Program Files\Microsoft SQL Server\MSSQL
        MSSQL2005: C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL
        MSSQL2008: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL
    If the datasource is being used not for Vulnerability Assessment tests but for CAS monitoring of files or the registry, then this field is the Microsoft SQL Server directory under Program Files. Examples: C:\Program Files (x86)\Microsoft SQL Server or C:\Program Files\Microsoft SQL Server.
    Note: You must have two datasources if you want to do both Vulnerability Assessment tests and CAS file monitoring.
Sybase
    Database Instance Account: often sybase.
    Database Instance Directory / Additional Hints: /home/sybase on Unix, or C:\sybase on Windows. An environment variable SYBASE may be defined.
MySQL
    Database Instance Directory / Additional Hints: an environment variable MYSQL_HOME may be defined. Note: A MySQL datasource with a Unicode database name is not supported. The datasource name in MySQL must be ASCII.
Teradata
    Database Instance Account: not needed.
    Database Instance Directory / Additional Hints: the installations all look the same.
Netezza
    Database Instance Account: not needed.
    Database Instance Directory / Additional Hints: the installation is in the same location on all machines.
PostgreSQL
    Database Instance Directory / Additional Hints: this is the most flexible of the installations. You must define two environment variables on the PostgreSQL database server: PostgreSQL_BIN, the location of the binaries for the installation, and PostgreSQL_DATA, the location of the data.

Note: If an environment variable is to be used within the Database Instance Directory field, that environment variable must be defined on the database server.
b. Click Apply to save the datasource definition (you cannot add roles or
comments until the definition has been saved).

c. Optionally click Roles to assign roles for the datasource.
d. Optionally click Add Comments to add comments to the definition.
e. Optionally click Test Connection to test connectivity of the defined
datasource.
f. Click Done when you are finished with the definition.
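
Datasource definitions can also be created from the command line through GuardAPI (see the GuardAPI datasource functions reference). The following single-line example is only a sketch: the host, credentials, and parameter names shown are assumptions and should be verified against the GuardAPI reference for your release before use.

grdapi create_datasource type=ORACLE name="oracle_on_dbsrv01" host=dbsrv01.example.com port=1521 user=guard_user password=<password> application=Classifier shared=true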

Working with existing datasources


After you create a datasource definition, you can clone the datasource, modify the
datasource, or delete the datasource.

Procedure
v Open the Datasource Builder by navigating to Setup > Datasource Definitions.
v The Application Selection menu lists all applications with which you can use a
datasource definition. Choose the application for which the datasource you want
to modify was created, and click Next, bringing you to the Datasource Finder.

Cloning a datasource
Procedure
v Select the datasource that you want to clone from the Datasource Finder, and
click Clone.
v The information that you entered when the datasource definition was created
appears in the Datasource Definition dialog, with "copy Of" appearing before the
original name of the datasource. Change whatever fields you like.
v Click Apply to save the cloned datasource.

Modifying a datasource
Procedure
v Select the datasource that you want to modify from the Datasource Finder, and
click Modify.
v The information that you entered when the datasource definition was created
appears in the Datasource Definition dialog. Change whatever fields you like.
v Click Apply to save the changes that you made to the datasource.

Removing a datasource
Procedure

Select the datasource that you want to remove from the Datasource Finder, and click Delete.

Reporting on datasources
Guardium provides reports on the datasources that are in your environment and
any changes made to them.

Procedure
v Open the Datasources report by navigating to Reports > Report Configuration
Tools > Datasources. The table that appears lists all datasources, and the
information that is stored in each datasource definition.
v Right-click any cell in the table and you are given two options: Datasource
Version History, and Invoke.

– Click Datasource Version History to view changes made to the datasource
definition.
– Click Invoke to select and run one of the available APIs for the datasource.

Note: You can customize the run time and presentation parameters of the
Datasources report by clicking the pencil icon.
Related concepts:

GuardAPI datasource functions (in the CLI and API reference)

Defining a datasource using a service name


You can define a datasource that enables your users to connect to an Oracle
database using the service name by using a custom URL.

About this task

You must enter the hostname, port, and service name as well as the custom URL.

Procedure
1. Determine the oracle service name. You can use commands like these:
SQL> set linesize 5000;
SQL> select host_name, instance_name from v$instance;
SQL> select name from v$database;
SQL> show parameter service

Use the name that appears in the VALUE column.


2. Load the appropriate Oracle JDBC thin driver to the Guardium system.
a. Find and download the driver for your Oracle database here:
http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-112010-090769.html
b. Open the Customer Uploads window by navigating to Harden >
Vulnerability Assessment > Customer Uploads.
c. Locate the section titled Upload Oracle JDBC driver. Click Browse and
browse to the location to which you downloaded the file. Click Use
open-source driver for all.
d. Restart the Guardium user interface after the upload is complete.
3. Define the datasource for this database.
a. Open the Datasource Builder by navigating to Setup > Datasource
Definitions.
b. The Application Selection menu lists all applications with which you can use a datasource definition. Choose the application for which you want to define the datasource, and click Next to open the Datasource Finder.
c. Enter the service name in the Service Name field. In the Custom URL field, enter jdbc:oracle:thin:@//hostname:port/svcname, where hostname and port are the standard values for the database and svcname is the service name, the same value that you entered into the Service Name field.
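For example, with a hypothetical database server dbsrv01.example.com listening on port 1521 and a service name of orclsvc (placeholder values, not defaults supplied by Guardium), the custom URL would be: jdbc:oracle:thin:@//dbsrv01.example.com:1521/orclsvc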

Database Auto-discovery
The Auto-discovery application scans your servers for open ports and probes
them for database services, so that unknown or unmonitored databases do not go
undetected on your network. You can run auto-discovery processes on demand, or
schedule them to run on a periodic basis.

Database Auto-discovery Overview


There are many scenarios where databases can exist undetected on your network
and expose your network to potential risk. Old databases might be forgotten and
unmonitored, or a new database might be added as part of an application package.
A rogue DBA might also create a new instance of a database to conduct malicious
activity outside of the monitored databases.

Auto-discovery uses scan and probe jobs to ensure that no database goes
undetected in your environment.
v A scan job scans each specified host (or hosts in a specified subnet), and
compiles a list of the open ports among those specified for that host.
v A probe job uses the results of the scan to determine whether there are database
services that are running on the open ports. A probe job cannot be completed
without first running a scan. View the results of this job in the Databases
Discovered predefined report.

Before you begin, you must download and install the patch for the Auto-discovery
application. The patch is available at IBM Fix Central.

Follow these steps to use the Auto-discovery application:


1. Create an Auto-discovery process to search specific IP addresses or subnets for
open ports.
2. Run the Auto-discovery process on demand or on a scheduled basis.
3. View the results of the process with Auto-discovery reports, or create custom
reports.

Auto discovery has its own processes that are independent of audit processes, but
they work exactly the same way as audit processes.

You can enter only IP addresses for a scan, not host names, but Guardium does
detect host names and includes them in the report. Guardium does not truncate
host names; however, it may be necessary to configure the report with wider
columns to display them fully.

Guardium auto-discovery does not guess which database type it finds during a
probe. If Guardium auto-discovery says that it has found a database, then it is
100% certain what the database is.

Note: Discovery only finds running databases. Databases will need to be started if
discovery is to be used during the installation. Due to how the AIX KTAP
interception works, the databases need to be restarted after the first time S-TAP
runs. If the databases are not restarted, some interception will not work.

Create an Auto-discovery Process


Specify which hosts and ports the Auto-discovery process scans.



1. Configure Auto-discovery by clicking Discover > Database Discovery >
Auto-discovery Configuration.
2. Click New to create a new process and open the Auto-discovery Process
Builder.
3. Enter a Process name that is unique on your Guardium system.
4. To run a probe job immediately after the scan job completes, check the Run
probe after scan check box.
5. For each host or subnet to be scanned, enter the host and port, and click Add
scan. Each time that you add a scan, it is added to the task list.

Note:
v Wildcard characters are enabled. For example: to select all addresses
beginning with 192.168.2, use 192.168.2.*.
v Specify a range of ports by putting a dash between the first and last port
numbers in the range. For example: 4100-4102.
v After you add a scan, modify the host or port by typing over it. Click Apply
to save the modification.
v If you have a dual-stack configuration, you must set up scans for both the
IPv4 and IPv6 addresses.
v To remove a scan, click the Delete this task icon for the scan. If a task has
scan results dependent upon it, the scan cannot be deleted.
6. When finished adding scans, click Apply, and run the job or schedule the job
in the future.

Run or Schedule an Auto-discovery Process

Run or schedule scan and probe jobs as part of the Auto-discovery process.
1. Click Discover > Database Discovery > Auto-discovery Configuration.
2. Select the process to be run from the Auto-discover Process Selector list.
3. Do one of the following:
v To run a job immediately, click Run Once Now.
v To schedule a job to run in the future, click Modify Schedule.

Note: A probe job cannot run without the results of the scan job. You can
schedule the two jobs to run individually, or you can configure the probe job
to run after the scan job by modifying the process and checking the Run probe
after scan check box.
4. After you start or schedule a job, you can click Progress Summary to display
the status of this process.

Auto-discovery Reports

Open the Auto-discovery reports by clicking Discover > Reports and selecting
from the available reports.

You can create custom reports with the Auto-discovery Query Builder. Open the
Auto-discovery Query Builder by clicking Discover > Database Discovery >
Auto-discovery Query Builder.

Databases Discovered Report
Open the Databases Discovered report by clicking Discover > Reports > Databases
Discovered.

The main entity for this report is the Discovered Port. Each individual port that is
discovered has its own row in the report. The columns that are listed are: Time
Probed, Server IP address, Server Host Name, DB Type, Port, Port Type (usually
TCP), and a count of occurrences.

There are no special runtime parameters for this report, but it excludes any
discovered ports with a database type of Unknown.

When an auto-discovery process definition changes, the statistics for that process
are reset.

Auto-discovery Tracking Domain

The Auto-discovery Tracking domain contains all of the data reported by
Auto-discovery processes. Click any entity name to display its attributes.

Auto-discovery Tracking Domain Entities


v Auto-discovery Scan provides a time stamp for each scan operation.
v Discovered Host provides the IP address and host name for each discovered
host.
v Discovered Port provides a time stamp, identifies the port, and provides the
database type for each port discovered open.

Classification
Classification policies and processes define how Guardium discovers and treats
sensitive data such as credit card numbers, social security numbers, and personal
financial data.

Discovery and classification processes become important as the size of an
organization grows and sensitive information like credit card numbers and
personal financial data become present in multiple locations, often without the
knowledge of the current administrators responsible for that data. This frequently
happens in the context of mergers and acquisitions, or when legacy systems have
outlasted their original owners. Creating workflows for discovering sensitive data
allows you to identify sensitive data in your environment and take appropriate
actions, such as applying access policies.

Classification processes consist of classification policies that have been
associated with one or more datasources. Classification processes can be
submitted to be run
once or, if login credentials have been stored for all the datasources used in the
process, scheduled to run on a periodic basis in a compliance workflow
automation process.

Classification policies consist of classification rules and classification rule
actions designed to find and tag sensitive data in specified datasources.

Classification rules use regular expressions, Luhn algorithms, and other criteria to
define rules for matching content when applying a classification policy.



Classification rule actions specify a set of actions to be taken for each rule in a
classification policy. For example, an action might generate an email alert or add
an object to a Guardium group. Each time a rule is satisfied, that event is logged,
and thus can be reported upon (unless ignore is specified as the action to be taken,
in which case there is no logging for that rule).

Classification Process Performance


Classification processes are handled with sampling routines and timeout
parameters that ensure minimal performance impact on database servers.

When the classifier runs, you have the option of specifying how it samples records.
The default behavior takes a random sampling of rows using an appropriate
statement for the database platform in question. For example, the classifier samples
using a rand() statement for SQL databases. The alternative behavior is sequential
sampling, which reads rows, in order, up to the specified sample size. Random
sampling is the default behavior and is generally recommended because it
provides more representative results. However, random sampling may incur a
slight performance penalty when compared to sequential sampling. For both
random and sequential sampling, the default sample size is 2000 rows or the total
number of available rows, whichever is fewer. Larger or smaller sample sizes may
be specified.
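
As a rough illustration of the difference between the two sampling modes, the
sketch below shows the kind of query each one corresponds to. This is not the
classifier's actual internal SQL; it assumes MySQL-style syntax (to match the
rand() example above) and a hypothetical table name, and the exact statement
varies by database platform.

public class SamplingSketch {
    public static void main(String[] args) {
        String table = "APP.CUSTOMER"; // hypothetical table being classified
        int sampleSize = 2000;         // default sample size described above

        // Random sampling (default): rows chosen at random across the table,
        // more representative but slightly more expensive
        String randomSample =
                "SELECT * FROM " + table + " ORDER BY RAND() LIMIT " + sampleSize;

        // Sequential sampling: the first rows in stored order, faster but
        // potentially less representative of the whole table
        String sequentialSample =
                "SELECT * FROM " + table + " LIMIT " + sampleSize;

        System.out.println(randomSample);
        System.out.println(sequentialSample);
    }
}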

To further minimize the impact of classification processes on the database
server, long running queries will be cancelled, logged, and the remainder of the table
skipped. Any rows acquired up to that point will be used while evaluating rules
for the table. Similarly, if a classification process runs for an extensive period of
time without completing, the entire process is halted, logged with the process
statistics, and the next classification process is started. This is an uncommon
occurrence and usually only happens on servers that are already experiencing
performance problems.

The classifier periodically throttles itself to idle so it does not overwhelm the
database server with requests. If many classification rules are sampling data, the
load on the database server should remain constant but the process may take
additional time to run.

The classifier handles false positives by using excluded groups for schema, table
and table columns. Previously, it could be a complex process to set up Guardium
to ignore false positive results for future classification scans. Now, when you
review classifier results, you can easily add false positive results to an exclusion
group, and add that group to the classification policy to ensure those results are
ignored in future scans.

Classification Rule Handling


Classification rules are handled according to flexible matching and grouping
criteria.

Fire only with Marker


The Fire only with Marker option allows classifier rules that share the exact
same marker name to be grouped. Additionally, all returned rules using a marker
must return data based on the same table name. If two or more rules are defined
with the same marker, those rules fire together: if both rules fire on the same
table, then they are both logged and their actions invoked. If, on the other
hand, only one of them fires on a table, then neither of the rules is
logged or has its actions invoked. Being able to have multiple rules fire
together becomes important when you care about sensitive data appearing together
within the same table. For example, you may want to know when a table has both
a social security number and a Massachusetts drivers license.

The Fire only with Marker is a constant value, can be named any value, and must
have the exact same value across rules you want to group. This means that if one
rule has a marker of ABC then the other rule that you want to group it with must
also have a marker named ABC. Any other marker value and the rules are no
longer grouped.

To group rules with a marker, you must define at least two rules, and those
rules must look for data within the same table name.

Continue on Match

The Fire only with Marker behavior also depends on Continue on Match. As an
example, if the following rules were defined and Rule 3 does not match, then no
results will be returned regardless of whether all three marker rules were
positive. This is because Rule 4 never gets to run, and the grouping will not
fire because all Fire only with Marker rules must execute with positive results.

Rule 1. Firemarker rule ABC (continue on match)

Rule 2. Firemarker rule ABC (continue on match)

Rule 3. non-firemarker rule type (continue on match)

Rule 4. Firemarker rule ABC (continue on match)

Unmatched Columns Only

Use this option to reduce the granularity of data results. Some organizations
may want to do a survey of their data to discover which tables and columns have
sensitive data without necessarily needing to find every type of sensitive data in
that column. A new option for Continue on match, With Unmatched Columns
only, means that as soon as the classifier finds a match for that column, it will
ignore that column as it continues its processing.
Table 2. Summary of available classifier processing options

Continue on match: No. With Unmatched Columns only: N/A.
Granularity of result: Table. Classifier will stop processing rules after the
first hit in the table.

Continue on match: Yes. With Unmatched Columns only: Yes.
Granularity of result: Table and column. Classifier will record the first hit
for any given column and ignore it thereafter for subsequent rules.

Continue on match: Yes. With Unmatched Columns only: No.
Granularity of result: Detailed. Classifier will record hits for all columns
for all rules.

Classification with Luhn algorithm


When a rule name begins with guardium://CREDIT_CARD, and there is a valid credit
card number pattern in the Search Expression box, the classification policy will use
the Luhn algorithm (a widely-used algorithm for validating identification numbers
such as credit card numbers), in addition to standard pattern matching. The Luhn
algorithm is an additional check and does not replace the pattern check. A valid
credit card number is a string of 16 digits or four sets of four digits, with each set
separated by a blank. Both the guardium://CREDIT_CARD rule name and a valid
pattern such as [0-9]{16} in the Search Expression box are required for the
Luhn algorithm to be applied to this pattern matching.
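
For reference, the Luhn check itself is a simple checksum. The sketch below is
a generic implementation of the standard algorithm, shown only to illustrate
what the additional validation does; it is not Guardium's internal code, and
the sample numbers are arbitrary test values.

public class LuhnCheck {
    // Standard Luhn checksum: double every second digit from the right,
    // subtract 9 when the doubled value exceeds 9, and require the total
    // to be divisible by 10.
    public static boolean isValid(String number) {
        String digits = number.replace(" ", ""); // allow "four sets of four" form
        if (!digits.matches("[0-9]+")) {
            return false;
        }
        int sum = 0;
        boolean doubleIt = false;
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) {
                    d -= 9;
                }
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValid("4539 1488 0343 6467")); // true: passes Luhn
        System.out.println(isValid("4539148803436468"));    // false: fails Luhn
    }
}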

Working with Classification Processes


Create, run, and view classification processes using the Classification Process
Builder.

Procedure

Open the Classification Process Builder by navigating to Discover >
Classifications > Classification Process Builder.

Create a Classification Process


Procedure
1. From the Classification Process Builder, click New to open the Define
Classification Process panel.
2. Enter a name for the process in the Process Description box.
3. Select a Classification Policy from the list. You can click Modify to view and
edit the policy if needed.
4. Optionally select the Comprehensive search check box. This setting is only
relevant when the number of records in a table exceeds the Sample size. When
Comprehensive search is selected, a random sample of "Sample size" records in
the table is searched for a match. This is a higher quality search because the
results are more likely to be representative of the data. When Comprehensive
search is cleared, the first "Sample size" records are searched for a match.
This type of search can be much faster than a comprehensive search, but it
may sacrifice the quality of the results.
5. Enter a Sample size for searching data (see Define Classification Policy
Rules / Define a Search for Data Rule). If the number of records in a table is
less than or equal to the Sample size, all of those records are searched for a
match. When the number of records in a table exceeds the Sample size,
Comprehensive search may be used.
6. Click the Add Datasource button to add one or more datasources.
7. Click Save. This completes the definition of the classification process.
8. Optionally add comments to the definition. See Comments in the Common
Tools help book.
9. Optionally add security roles. See Security Roles in the Access Management
help book.
10. Optionally submit the classification process for execution. See Run a
Classification Process.
11. Click Done when you are finished.

Run a Classification Process


About this task

There are three ways to run classification processes:


v On demand from the Classification Process Builder, which is described in this
task.

v As a task within a Compliance Workflow Automation Process, described elsewhere.
v As part of a Discover Sensitive Data Workflow, described elsewhere.

Procedure
1. From the Classification Process Builder, select the process to run, and click
Modify to open the Classification Process Builder.
2. Click the Run Once Now button to submit the job. This places the process on
the Guardium Job Queue, from which the Guardium system runs a single job
at a time. You can view the job status using the Guardium Job Queue.
3. Click the Done button when you are finished.

View Classification Results


Procedure
1. From the Classification Process Builder, click the View Results button. The
results will open in a separate window.
2. On any row of the Process Run Log, click (details) to display more information.
3. Optionally, if Data User Security is enabled through the Global Profile,
check boxes are displayed that allow users to toggle rows in the result set in
accordance with the filtering defined.
4. Click Close this window when you are done viewing the results. In addition,
there is a Classifier Process Log report to display the status of the classification
process.

View the Job Queue


Before you begin

The Guardium Job Queue is available from the administrator portal only.

Procedure

To view the report, open the Guardium Job Queue by navigating to Discover >
Classifications > Guardium Job Queue.

Working with Classification Policies


Procedure
Open the Classification Policy Builder by navigating to Discover > Classifications
> Classification Policy Builder.

Create a Classification Policy


Procedure
1. Click New to open the Classification Policy Definition panel.
2. Enter a unique name in the Name field.
3. Enter a category in the Category field, and a classification in the Classification
field. Both are required. Both are used to group and organize data on reports.
4. Optionally enter a Description.
5. Optionally enter comments. These can be entered at any time after the policy
has been saved.
6. Click Edit Rules to define rules and their associated actions. See Define
Classification Policy Rules for detailed instructions. It is recommended to use
the Discover Sensitive Data scenario (Discover > End-to-End Scenario >
Discover Sensitive Data) for modifying existing classification policies. Use the
same Discover Sensitive Data scenario to create classification policy groups.
Also, if groups have been created, they have to be explicitly selected.

Modify a Classification Policy:


Procedure
1. Select the classification policy to be modified, and do one of the following:
v To modify policy rules, click Edit Rules and see Define Classification Policy
Rules.
v To modify any other element of the definition, click the Modify button.
2. Type over any of the items as appropriate.
3. Click Save to save any changes, and click Done when you are finished.

Clone a Classification Policy:


Procedure
1. Select the classification policy to be cloned, and click the Clone button.
2. Type over any of the items as appropriate for the cloned policy. We recommend
that you replace the default name for the clone, which is the name of the
selected policy prefixed with Copy of.
3. Click the Save Clone button to save the new classification policy. The policy
will be re-displayed in the Classification Policy Definition panel.
4. See Modify a Classification Policy for instructions on how to change
components of the new classification policy definition.

Working with Classification Rules


Procedure
1. Open the Classification Policy Rules panel from the Classification Policy Finder
by navigating to Discover > Classifications > Classification Policy Builder.
2. It is recommended to use the Discover Sensitive Data scenario (Discover >
End-to-End Scenario > Discover Sensitive Data) for modifying existing
classification policies. Use the same Discover Sensitive Data scenario to create
classification policy groups. Also, if groups have been created, they have to be
explicitly selected.

Add a New Classification Policy Rule


Procedure
1. Click the Add Rule button to open the Classification Rule definition panel.
2. Enter a Rule Name.
3. Optionally enter a new Category and/or Classification for the rule. The
defaults are taken from the Classification Policy Definition for the policy.
4. If the next rule in the classification policy should be evaluated after this rule is
matched, mark the Continue on Match checkbox. The default is to stop
evaluating rules when a rule is matched.
5. Select a Rule Type. For a new rule, no Rule Type is selected. Once a Rule Type
is selected, the panel expands to include the fields needed to define that type of
rule. For the specifics of how to define each type of rule, see one of the
following sections:
v Define a Catalog Search Rule - Search the database catalog for table or
column name
v Define a Search for Data Rule - Match specific values or patterns in the data

v Define a Search for Unstructured Data Rule - Match specific values or
patterns in an unstructured data file (CSV, Text, HTTP, HTTPS, Samba)
6. Click the New Action button to add an action to be taken when this rule is
matched. See Add a Classification Rule Action.
7. Click Accept to add the rule to the policy.

Define a Catalog Search Rule


About this task

A catalog search rule searches the database catalog for table and/or column names
matching specified patterns. Wildcards are allowed: % for zero to any number of
characters, or _ (underscore) for a single character.

Procedure
1. In the Table Type row, mark at least one type of table to be searched: Synonym,
Table, or View. (Table is selected by default.)
2. Optionally enter a specific name or a wildcard based pattern in the Table Name
Like box. If omitted, all table names will be selected.
3. Optionally enter a specific name or a wildcard based pattern in the Column
Name Like box. If omitted, all column names will be selected.
4. Click the Accept button when you are done.
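
To get a feel for what a pattern such as CREDIT% will match before running a
scan, you can translate the LIKE-style wildcards described above into a regular
expression and test it against candidate names. The sketch below is
illustrative only; the table names are hypothetical and the conversion is not
part of Guardium.

import java.util.regex.Pattern;

public class CatalogWildcardSketch {
    // Convert a LIKE-style pattern (% = any run of characters, _ = one
    // character) into an anchored regular expression.
    static Pattern likeToRegex(String likePattern) {
        StringBuilder sb = new StringBuilder();
        for (char c : likePattern.toCharArray()) {
            if (c == '%') {
                sb.append(".*");
            } else if (c == '_') {
                sb.append('.');
            } else {
                sb.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return Pattern.compile("^" + sb + "$", Pattern.CASE_INSENSITIVE);
    }

    public static void main(String[] args) {
        Pattern p = likeToRegex("CREDIT%");
        // Hypothetical table names
        System.out.println(p.matcher("CREDIT_CARD").matches());  // true
        System.out.println(p.matcher("CREDITS_2015").matches()); // true
        System.out.println(p.matcher("OLD_CREDIT").matches());   // false
    }
}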

Define a Search for Data Rule


About this task

A search for data rule searches one or more columns for specific data values.
Wildcards are allowed: % for zero to any number of characters, or _ (underscore)
for a single character. For example, a rule with Rule Type Search for Data,
Table Type Table, and Table Name Like CREDIT% searches all tables whose names
begin with CREDIT.

Procedure
1. In the Table Type row, mark at least one type of table to be searched:
Synonym, Table, or View. (Table is selected by default.)
2. In the Table Name Like row, optionally enter a specific name or a wildcard
based pattern. If omitted, all table names will be selected.
3. In the Data Type row, select one or more data types to search.
4. In the Column Name Like row, optionally enter a specific name or wildcard
pattern. If omitted, all column names will be selected.
5. Optionally enter a Minimum Length. If omitted, no limit.
6. Optionally enter a Maximum Length. If omitted, no limit.
7. In the Search Like field, optionally enter a specific value or a wildcard based
pattern. If omitted, all values will be selected.
8. In the Search Expression field, optionally enter a regular expression to define
a pattern to be matched. To test a regular expression, click the (Regex) button
to open the Build Regular Expression panel in a separate window.
9. In the Evaluation Name, optionally enter a fully qualified Java™ class name
that has been created and uploaded. The Java class will then be used to fire
and evaluate the string. There is no validation that the class name entered was
loaded and conforms to the interface. See Custom Evaluation and Manage
Custom Classes for more information on creation and uploading of Java class
files.
10. Optionally enter a Fire only with Marker name. See Fire only with Marker.

11. In the Hit Percentage field, optionally enter the percentage of matching
data that must be achieved for this rule to fire. Data is returned if the
percentage of matching data examined is greater than or equal to (>=) the
percentage value entered. An empty entry means the percentage is not a
condition and does not affect whether the rule fires or returns data to the
view screen. A 0 percentage causes the rule to fire for this condition and
return data to the view screen, and a percentage of 100 requires that all
examined data must match. For example, with a sample of 2000 rows and a hit
percentage of 80, the rule fires only if at least 1600 of the sampled values
match.
12. In the Compare to Values in SQL field, optionally enter a SQL statement. The
SQL entered, which must be based on returning information from one and
only one column, will then be used as a group of values to search against the
tables and/or columns selected. If used, the Compare to Values in SQL should
follow the following rules:
v The SQL statement MUST begin with SELECT
v The SQL statement SHOULD NOT utilize the ';' semi-colon
v The SQL entered MUST specify a schema value name in order to be
accurate in returning results.
v Good examples include:
SELECT ename FROM scott.emp
select EMPNUMBER from SYSTEM.EMP where EMPNUMBER in(5555,4444)
select DNAME from SCOTT.DEPT where DNAME like 'A%G'
SELECT ZIP from SCOTT.FOO where ZIP in (SELECT ZIP FROM SCOTT.FOO)
13. In the Compare to Values in Group field, optionally select a group. The group
selected will then be used as a group of values to search against the tables
and/or columns selected. As long as one of the values within a group, that is
either a public or a classifier group, matches, then the value rule will return
data.
14. Mark the Show Unique Values checkbox to add details about which values
matched the classification policy rules to the Comments. Use a regular
expression in the Unique Values Mask field to redact the unique values. For
example, mark the Show Unique Values checkbox and use
([0-9]{2}-[0-9]{3})-[0-9]{4} in the Unique Values Mask field to log the last
four digits and redact the prefix digits.
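
To preview what a mask such as the one in the example does, you can try it
against a sample value in any regular-expression engine. The sketch below is
illustrative only: the sample value is hypothetical, it uses Java's
java.util.regex rather than the Guardium engine, and it simply shows that the
capturing group isolates the prefix digits that are redacted while the last
four digits are kept.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UniqueValuesMaskSketch {
    public static void main(String[] args) {
        // Mask from the example above: the group captures the prefix digits
        Pattern mask = Pattern.compile("([0-9]{2}-[0-9]{3})-[0-9]{4}");

        String sample = "12-345-6789"; // hypothetical matched value
        Matcher m = mask.matcher(sample);
        if (m.matches()) {
            // Replace the captured prefix to approximate the redacted form
            String redacted = sample.replace(m.group(1), "XX-XXX");
            System.out.println("Captured prefix: " + m.group(1)); // 12-345
            System.out.println("Redacted value:  " + redacted);   // XX-XXX-6789
        }
    }
}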

Define a Search for Unstructured Data Rule


About this task

A Search for Unstructured Data rule examines a non-database file.

Procedure
1. In the Search Like box, optionally enter a specific value or a wildcard based
pattern. If omitted, all values will be selected.
2. In the Search Expression box, optionally enter a regular expression to define a
pattern to be matched. To test a regular expression, click the icon to open the
Build Regular Expression panel in a separate window.
3. Optionally enter a marker name.

Working with Classification Rule Actions


Procedure
1. After a rule has been saved, click the (Customize) button for that rule to return
to the rule definition panel, from which you can add one or more rule actions.
2. Click the New Action button to open the Action panel.
3. Enter an Action Name.

4. Optionally enter a Description.
5. Select an Action Type from the list. Depending on the action selected, a
different set of fields will appear on the panel.
v For the Ignore and Log Result actions, no additional information is needed.
– Ignore - Do not log the match, and take no additional actions.
– Log Result - Log the match, and take no additional actions.
v For all other actions, additional fields will appear on the panel, and you will
have to enter additional information.
– Add To Group Of Object-Fields Action
– Add To Group Of Objects Action
– Create Access Rule Action
– Create Privacy Set Action
– Log Policy Violation Action
– Send Alert Action
6. After actions have been added to the Classification Rule panel, the controls in
the table can be used to modify the actions defined.
7. Click Accept when you are done working with the rule definition.

Add to Group of Object Fields Action


About this task

Each time the classification rule is matched, a member will be added to the
selected Object-Field group on the Guardium system. You have the option of
replacing all members, or adding new members.

For a database file, the object component of the member will be the database table
name, and the field component will be the column name.

For an unstructured data file, the object component of the member will be the file
name (in quotes), and the field component will be the column name, but if column
names cannot be determined, the columns will be named column1, column2, etc.

Procedure
1. Do one of the following:
v Select an Object-Field Group from the list, or
v Click the Groups button, define a new group using the Group Builder, and
then select that group from the list.
2. Optionally mark the Replace Group Content box to completely replace the
membership of the selected group with members returned by this rule. By
default, this box is not marked, which means that new members will be added
to the group, but no members will be deleted. For a job that is run on demand,
this box is ignored, and you are given the opportunity to add or replace
members on the view results panel.
3. Click the Save button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.

Add to Group of Objects Action


About this task

Each time the classification rule is matched, a member will be added to the
selected Object group on the Guardium system.



For a database file type, the member will be the database table name. For an
unstructured file type, the member name will be the file name.

You have the option of replacing all entries, or only adding new entries.

Procedure
1. Do one of the following:
v Select an Object Group from the list, or
v Click the Groups button, define a new group using the Group Builder, and
then select that group from the list.

Note: To use aliases with groups generated by Classifier, open the Group
Builder, select the Object group generated by Classifier, and then click
Modify. Click the Aliases button to change the name of the Object Group.
2. Optionally mark the Replace Group Content box to completely replace the
membership of the selected group with members returned by this rule. By
default, this box is not marked, which means that new members will be added
to the group, but no members will be deleted. For a job that is run on demand,
this box is ignored, and you are given the opportunity to add or replace
members on the view results panel.
3. From the Actual Member Content, select the naming convention that will be
used when adding the member to the group where 'Full' is the
schema.tablename and 'Name' is the tablename.
4. Click Save to add the action to the rule definition, close the Action panel, and
return to the rule definition panel.

Create Access Rule Action


About this task

Each time the classification rule is matched, an access rule will be inserted into an
existing security policy definition. The updated security policy will not be installed
(that task is performed separately, usually by a Guardium administrator).

Procedure
1. Select an Access Policy from the list. You must be authorized to access that
policy.
2. Enter a rule name in the Rule Description box.
3. Select an action from the Access Rule Action list.
4. Optionally select a Commands Group, or click the Groups button, define a new
Commands group using the Group Builder, and then select that Commands
group from the list.
5. To log field values separately, mark the Include Field checkbox. Otherwise, only
the table will be recorded (the default).
6. To include the server IP address, check the Include Server IP checkbox.
7. If you have selected an alerting action, a Receiver row appears on the panel,
and you must add at least one receiver for the alert. Click Modify Receivers to
add one or more receivers.
8. Click Accept to add the action to the rule definition, close the Action panel,
and return to the rule definition panel.

Create Privacy Set Action
About this task

Each time the classification rule is matched, the selected privacy set's object-field
list will be replaced.

For a database file, the object component of the privacy set will be the database
table name, and the field component will be the column name.

For an unstructured data file, the object component of the privacy set will be the
file name (in quotes), and the field component will be the column name, but if
column names cannot be determined, the columns will be named column1,
column2, etc.

Procedure
1. Select the previously defined Privacy Set whose contents you want to replace.
2. Click the Accept button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.

Log Policy Violation Action


About this task

Each time the classification rule is matched, a policy violation will be logged. This
means that classification policy violations will be logged (and can be reported)
together with access policy violations (and optionally correlation alerts) that may
have been produced.

Procedure
1. Select a Severity code from the list.
2. Click the Accept button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.

Send Alert Action


About this task

Each time the classification rule is matched, an alert notification is sent to
the specified receivers.

Procedure
1. Select a Notification Type code from the list.
2. Click the Modify Receivers button to add one or more receivers. The
specified receiver will get one mail per datasource per rule per action. So, if
a datasource has three rules and each rule has two actions (that have at least
one match), then the user will get 2 * 3 = 6 mails.
3. Click the Accept button to add the action to the rule definition, close the Action
panel, and return to the rule definition panel.

Discover Sensitive Data


Create an end-to-end scenario for discovering and classifying sensitive data.



About this task
Discovery and classification processes become important as the size of an
organization grows and sensitive information like credit card numbers and
personal financial data propagate to multiple locations. This often happens in the
context of mergers and acquisitions or when legacy systems have outlasted their
original owners. As a result, sensitive data may exist beyond the knowledge of the
person who currently owns that data. This is a common yet extremely vulnerable
scenario, since you cannot protect sensitive data unless you know it exists.

Sensitive data discovery scenarios span three critical aspects of enterprise security:
v Discovery: locating the sensitive data that exists anywhere in your environment
v Protection: monitoring and alerting when sensitive data is accessed
v Compliance: creating audit trails for reviewing the results of sensitive data
discovery processes

The Discover Sensitive Data end-to-end scenario builder streamlines the processes
of discovery, protection, and compliance by integrating several Guardium tools
into a single user-friendly interface.
Table 3. Discover sensitive data tools map

Discover (creates a classification process and classification policy, and
optionally creates new datasource definitions):
v Name and Description - Provide a name and description for the scenario and
its related processes and policies.
v What to discover - Create rules and rule actions for discovering and
classifying data.
v Where to search - Identify datasources to scan.
v Run discovery - Run the scenario, review the results, and define ad hoc
grouping and alerting actions.

Protect (creates an access policy):
v Review report - Review the results and define ad hoc grouping and alerting
actions.

Comply (creates an audit process):
v Audit - Define recipients, a distribution sequence, and review options.
v Schedule - Create a schedule to run at defined intervals.

This sequence of tasks guides you through the processes of creating a new
discovery scenario. This includes creating classification policies consisting of rules
and rule actions for discovering sensitive data, creating classification processes by
identifying datasources to scan for sensitive data, defining ad hoc policies (for
grouping and alerting, for example), and creating audit processes that distribute
results to different stakeholders at scheduled intervals.

While a discover sensitive data scenario creates underlying policies and processes
that can be accessed using other Guardium tools (for example the Classification
Policy Builder or through GuardAPI commands), there are no GuardAPI
commands for creating or modifying a discovery scenario.

What to do next
Continue to the next section and provide a Name and description for your
discovery and classification scenario.

Name and description


Provide a name and description for your discovery scenario.

About this task

The name provided for the discovery scenario will also be used to name
underlying policies and processes.

During this step, you may also specify security roles that can access the discovery
scenario.

Procedure

Begin by creating a new discovery scenario or selecting an existing discovery
scenario to edit.
1. Click Discover > End-to-End Scenarios > Discover Sensitive Data.
2. Click the icon to create a new scenario or click an existing scenario name to
begin editing that scenario.
3. Open the Name and Description section and provide or edit the name and
optional description of the scenario. The name you provide here will also be
used to name underlying classification processes and policies created by the
discovery scenario.

Example: A discovery scenario named "Find PCI" will create a classification
process named "Find PCI" and a classification policy named "Find PCI
Classification Policy" (followed by a date and time stamp).
4. Provide the category and classification labels for tagging violations.
"Sensitive" is the default value for both the category and classification
labels.
5. Optionally, click the Roles button to specify security roles that can access the
discovery scenario.

What to do next

Continue to the next section of the discovery scenario, What to discover.

What to discover
Create policies consisting of rules and rule actions for discovering and classifying
sensitive data.

About this task


Classification policies contain ordered sets of rules and rule actions that identify and
take actions on sensitive data. Each rule in a policy defines a conditional action
that is taken when the rule matches. The conditional test can be simple, for
example a wildcard string found anywhere in a table, or a complex test that
considers multiple conditions. For discover sensitive data scenarios, the action
triggered by a rule can be a grouping action that adds the object to a specified
group or an alerting action that triggers a notification when rules are matched.
Multiple grouping and alerting actions can be combined and ordered to create
sophisticated responses to matched rules.

This task guides you through the processes of creating and editing classification
rules and rule actions for use in your discovery scenario.

Procedure
1. Open the What to discover section to define rules for discovering data.
2. Add rules to your discovery scenario by doing one of the following:
v Click the icon to create a new rule.
v Select rules from the Classification Rule Templates table and click the icon
to add predefined rules.
3. Define a new rule, or edit a rule template by selecting the template and clicking
the icon.
a. Provide a name and description while optionally specifying a special pattern
test at the beginning of the Name field. The rule name will also be used to
name the rule associated with the classification policy in the Classification
Policy Builder. If you require a special pattern test, it is recommended that
you work with its corresponding template (for example, use Bank Card -
Credit Card Number for credit card numbers).
b. Open the Rule Criteria section to define a regular expression and other
search criteria for the rule. If you are working with a rule template, an
appropriate regular expression is provided by default.
Attention: For rules created in the discover sensitive data scenario, the
default Data type includes both Number and Text.
c. Open the Actions section and define any rule actions that should be taken
when rule criteria match.
d. When defining multiple rule actions, you can optionally use the reorder
icons to change the order in which the actions are executed.
e. Click Save when you are finished adding or editing rule definitions to
return to the What to discover section of the discovery scenario.
4. Optionally use the reorder icons to change the order in which rules are
applied. Rule order is important because the default behavior stops rule
execution after the first match unless Continue on match is selected under
Rule criteria.
5. When you are finished working with rules, click Next to begin working on the
next section of the discovery scenario.

What to do next
Continue to the next section of the discovery scenario, Where to search.
Related concepts:
“Regular Expressions” on page 44
Regular expressions can be used to search traffic for complex patterns in the data.
Related reference:
“Actual Member Content” on page 40
Use the Actual Member Content field to define how objects are labeled by the
Add to Group of Objects rule action.
“Rule Criteria” on page 38

“Special pattern tests” on page 61
You can use these special pattern tests to identify sensitive data that is contained in
the traffic that flows between the database server and the client.

Rule Criteria
Table 4. Rule criteria

Table type: Select one or more table types to search: Synonym, Table, or View.
Table is selected by default.

Data type: Select one or more data types to search: Number, Text, or Date.
Number and Text are selected by default.

Search expression: Optionally enter a regular expression to define a search
pattern to match. To test a regular expression, click the RE button to open the
regular expression editor.

Table name like: Optionally enter a specific name or wildcard pattern. If
omitted, all table names are selected.

Column name like: Optionally enter a specific name or wildcard pattern. If
omitted, all column names are selected.

Continue on match: If the next rule in the classification policy should be
evaluated after this rule is matched, mark the Continue on match checkbox. The
default is to stop evaluating rules once a rule is matched.

Search wildcard: Optionally enter a specific value or a wildcard pattern. If
omitted, all values are selected.

Minimum length: Optionally enter a minimum length. If omitted, there is no
limit.

Maximum length: Optionally enter a maximum length. If omitted, there is no
limit.

Evaluation name: Optionally enter a fully qualified Java class name that has
been created and uploaded. The Java class will then be used to fire and
evaluate the string.
Note: There is no validation that the class name entered was loaded and
conforms to the interface.

Fire only with marker: The Fire only with marker allows for the grouping of
classifier rules: rules with the same marker fire at the same time.
Additionally, all returned rules using a marker must return data based on the
same table name. If two or more rules are defined with the same marker, those
rules will fire together such that if both rules fire on the same table they
will both be logged and their actions invoked. On the other hand, if only one
rule fires on a table then neither of the rules will be logged or have their
actions invoked. Being able to have multiple rules fire together becomes
important when you care about sensitive data appearing together within the same
table. For example, you may want to know when a table has both a social
security number and a Massachusetts drivers license.

The Fire only with marker is a constant value, can be named to any value, and
must have the exact same value across the rules you want grouped. This means
that if one rule has a marker of ABC then the other rule that you want to group
it with must also have a marker named ABC.

The Fire only with marker also interacts with the Continue on match flag. For
example, if the following rules were defined such that Rule 3 does not match,
then no results will be returned regardless of whether all three marker rules
were positive. This is because Rule 4 never gets to run, and the grouping will
not fire because all Fire only with marker rules must execute with positive
results.

Rule 1. Firemarker rule “ABC” (continue on match)

Rule 2. Firemarker rule “ABC” (continue on match)

Rule 3. non-firemarker rule type (continue on match)

Rule 4. Firemarker rule “ABC” (continue on match)

Hit percentage: Optionally enter the percentage of matching data that must be
achieved for this rule to fire. Data is returned if the percentage of matching
data examined is greater than or equal to (>=) the percentage value entered. An
empty entry means the percentage is not a condition and does not affect whether
the rule fires or returns data to the view screen. A 0 percentage causes the
rule to fire for this condition and return data to the view screen, and a
percentage of 100 requires that all examined data must match.

Compare to values in SQL: Optionally enter a SQL statement. The SQL entered,
which must be based on returning information from one and only one column, will
then be used as a group of values to search against the tables and columns
selected.
Note: If used, the Compare to values in SQL should observe the following rules:
v The SQL statement MUST begin with SELECT.
v The SQL statement SHOULD NOT utilize the ; (semi-colon).
v The SQL entered MUST specify a schema value name in order to be accurate in
returning results.
v Good examples:
SELECT ename FROM scott.emp
select EMPNUMBER from SYSTEM.EMP where EMPNUMBER in(5555,4444)
select DNAME from SCOTT.DEPT where DNAME like 'A%G'
SELECT ZIP from SCOTT.FOO where ZIP in (SELECT ZIP FROM SCOTT.FOO)

Compare to values in group: Optionally select a group. The group selected will
then be used as a group of values to search against the tables and columns
selected. As long as one of the values within the group, which can be either a
public or a classifier group, matches, the value rule will return data.

Show unique values: Mark the Show Unique Values checkbox to add details on
what values matched the classification policy rules to the comments field of
the resulting report.

Unique values mask: Use a regular expression in the Unique values mask field to
redact the unique values. For example, mark the Show unique values checkbox and
use ([0-9]{2}-[0-9]{3})-[0-9]{4} in the Unique values mask field to log the
last four digits and redact the prefix digits.

Actual Member Content


Use the Actual Member Content field to define how objects are labeled by the
Add to Group of Objects rule action.
Table 5.
Actual Member Content Selection Value in Group
Object Name Only tableName
Like Name% tableName%
Like %Name %tableName
Like %Name% %tableName%
%/%.Name %%.tableName
Fully Qualified Name schemaName.tableName
Like Full% schemaName.tableName%
Like %Full %schemaName.tableName
Like %Full% %schemaName.tableName%
%/Full %%.schemaName.tableName
Read/%.Name Read/%.tableName
Change/%.Name Change/%.tableName
Read/Full Read/schemaName.tableName
Change/Full Change/schemaName.tableName

If your rules return the table name JJ_CREDIT_CARD from the schema DB2INST1, and
you have specified an Add to Group of Objects action, the Actual Member
Content selections behave as follows:
v Selecting Fully Qualified Name adds DB2INST1.JJ_CREDIT_CARD to the selected
group.
v Selecting Object Name Only adds JJ_CREDIT_CARD to the selected group.
v Selecting Change/Full adds Change/DB2INST1.JJ_CREDIT_CARD to the selected
group.

Where to search
Identify datasources to scan for sensitive data.



About this task
Datasources store information about your database or repository such as the type
of database, the location of the repository, or authentication credentials that may be
associated with it. Adding datasources to a discovery scenario creates a
classification process where classification policies are applied to the selected
datasources.

In this task, identify the datasources you would like to search for sensitive data.

Procedure
1. Open the Where to search section to identify the datasources you would like to
search for sensitive data.
2. Add datasources to your discovery scenario by doing one of the following:
v Click the icon to open the Create Datasource dialog and add a new
datasource definition.
v Select datasources from the Available Datasources table and click the icon
to add existing datasources.
3. Define a new datasource, or edit an existing datasource by selecting the
datasource and clicking the icon. New datasources defined through the
discovery scenario can also be viewed or edited through the Datasource
Definitions tool.
a. Provide or edit the name of the datasource.
b. Select the appropriate database type from the Database type menu and
provide the requested information to complete the datasource definition.
The available fields differ depending on the selected database type.
c. When you are finished editing the datasource definition, click Save to save
your work and optionally click Test Connection to verify the datasource
connection.
d. When you are finished working with the datasource definition, click Close
to close the dialog.
4. When you are finished adding datasources, click Next to begin working on the
next section of the discovery workflow.

Results
A classification process is created after adding datasources to your discovery scenario
and saving the scenario. To view or edit this process directly, use the Classification
Process Builder.

What to do next

Continue to the next section of the discovery workflow, Run discovery.


Related concepts:
“Datasources” on page 13
Datasources store information about your database or repository such as the type
of database, the location of the repository, or credentials that might be associated
with it. You must define a datasource in order to use it with Guardium
applications.
Related tasks:
“Creating a datasource definition” on page 13
Use the Datasource Builder to create datasource definitions for use with
Guardium applications.

Run discovery and review report


Optionally run your discovery scenario and review the results.

About this task

After defining policies for discovering sensitive data and identifying datasources to
search, you can run the classification process and review the results. Running the
process and reviewing the results allows you to refine your policies, for example
specifying additional search criteria if you find the results too broad. It may be
necessary to go through several iterations of refining policies, running the process,
and assessing the results before achieving the desired results.

Procedure
1. Open the Run discovery section to test your discovery scenario.
2. Click Run Now to begin.
Attention:
v Depending on the policies you have specified and the number of datasources
you have selected to search, it may take several minutes or more to complete
the process of identifying sensitive data. The process status is indicated next
to the Run Now button, or you can monitor the process using the Guardium
Job Queue.
v You can also run the classification process by visiting the Classification
Process Builder, selecting your classification process, and clicking Run Once
Now.
3. When the discovery scenario has finished running, open the Review report
section to see the results.
4. While reviewing the results, you can define additional rules and actions based
on the results. Use the Filter to refine results (filtering is not supported with
more than 10,000 results).
a. Select the row(s) containing data you want to define actions against.
b. Click Add to Group to define a grouping action, or click Advanced Actions
to define an alerting action.
c. After completing the dialog to define a grouping or alerting action, click OK
to return to the results report.
Attention:
v Grouping and alerting actions added from the results table are considered
ad hoc actions that run only as invoked from the results table. These
actions will not appear in the What to discover > Edit rule > Actions
section of your discovery scenario, and they will not run automatically as
part of the discovery scenario or related classification processes.
v Use the Policy Builder to review, edit, and install alerting actions.
v Use the Group Builder to review and edit grouping actions.
5. When you are finished reviewing the results report, click Next to begin
working on the next section of the discovery scenario.

Results

After running the search for sensitive data, monitor its status next to the Run Now
button or using the Guardium Job Queue. You can use the Group Builder to
review any grouping actions or the Policy Builder to review and install any
alerting actions that were added from the results table.

What to do next

Optionally, continue to the next section of the discovery scenario, Audit.

Audit
Optionally create an audit process by defining receivers, a distribution sequence,
and review options for the discovery and classification report.

About this task

You can define any number of receivers for the results of a discovery workflow,
and you can control the order in which they receive results. In addition, you can
specify process control options, such as whether a receiver needs to sign off on the
results before they are sent to the next receiver.

The audit process created by adding receivers to a discovery scenario inherits the
name of the scenario. For example, adding receivers to a discovery scenario named
"Find PCI" creates an audit process named "Find PCI Audit process" followed by a
date and time stamp.

Procedure
1. Open the Audit section to define receivers for discovery reports.
2. Add receivers to your discovery scenario by clicking the icon and defining
options for how the reports are delivered.
v If sending the report to Guardium users, roles, or groups, you will need to
define process control options.
v If sending the report to email recipients, provide their email address and
filter the report by a Guardium username that is appropriate for the email
recipient.
3. Click OK to add the receiver to the discovery workflow. Continue adding
additional receivers to the scenario if needed.
4. Optionally use the reorder icons to change the order in which reports are
distributed to recipients. This is important when using sequential
distribution, as it determines which receivers must review or sign the report
before it is sent to subsequent receivers.
5. When you are finished adding, editing, and ordering receivers, click Next to
begin working on the next section of the discovery workflow.

Results
An audit process is created after defining receivers and saving the discovery
scenario. To view, edit, or run this process directly, use the Audit Process Builder.

The audit process remains inactive until it is scheduled using the Schedule section
of the discovery scenario or using the Audit Process Builder. You can also run the
audit process by visiting the Audit Process Builder, selecting the audit process, and
clicking Run Once Now.

What to do next
Optionally, continue to the next section of the discovery workflow, Schedule.
Related concepts:
“Building audit processes” on page 195
Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and, escalations.

Scheduling
Optionally activate the audit process by scheduling it to run at defined intervals.

About this task

A schedule becomes part of an audit process along with any receivers specified in
the Audit section of the discovery scenario. Defining a schedule runs the audit
process at specified intervals and ensures that results from the associated
classification process are regularly distributed and reviewed.

Procedure
1. Open the Schedule section to define a schedule for discovering data.
2. Use the Schedule by menu to set daily or monthly intervals for the audit
process.
3. Use the Start schedule every and Repeat every check boxes to define how
many times per day and how many times within each hour to run the audit
process.
4. Use the Start date and time controls to define an explicit date and time for the
schedule to begin.
5. Clear the Activate schedule check box to deactivate the audit process while
retaining scheduling information for later use. The Activate schedule box is
checked by default, meaning that the audit process becomes active after saving
the schedule.
6. When you have defined a schedule, click Save to finish editing and close the
workflow editor.

Results

An audit process is created after defining a schedule and saving the discovery
scenario. To view or edit this audit process directly, use the Audit Process Builder.
Review the Scheduled Jobs report to see the status, start time, and next fire time
for scheduled audit tasks.
Related concepts:
“Building audit processes” on page 195
Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and, escalations.

Regular Expressions
Regular expressions can be used to search traffic for complex patterns in the data.

The IBM Guardium implementation of regular expressions conforms with POSIX
1003.2. For more detailed information, see the Open Group web site:
www.opengroup.org. Regular expressions can be used to search traffic for complex
patterns in the data. See Policies for examples.

This help topic provides instructions for using the Build Regular Expression Tool,
and several tables of commonly used special characters and constructs. It does not
provide a comprehensive description of how regular expressions are constructed or
used. See the Open Group web site for more detailed information.

The important point to keep in mind about pattern matching or XML matching
using regular expressions is that the search for a match starts at the beginning of a
string and stops when the first sequence matching the expression is found. The
same or different regular expressions can be used for pattern matching and XML
matching at the same time.

Note: IBM Guardium does not support regular expressions for non-English
languages.

Using the Build Regular Expression Tool

When an input field requires a regular expression, you can use the Build Regular
Expression tool to code and test a regular expression.

To open the Build Regular Expression tool, click the Build Regular Expression
button next to the field that will contain the regular expression. If you have
already entered anything in the field, it will be copied to the Regular Expression
box in the Build Regular Expression panel.
1. Select a category of regular expressions from the drop-down list.
2. Select a pattern from the drop-down list.
3. Enter or modify the expression in the Regular Expression box.
4. To test the expression, enter text in the Text To Match Against box, and then
click the Test button:
v If the expression contains an error (a missing closing brace, for example), you
will be informed with a Syntax Error message.
v The Match Found message indicates that your regular expression has found
a match in the text that you have entered.
v If no match is found, the No Match Found message is displayed.
5. We suggest that you repeat this step a number of times, with a variety of values,
to verify that your regular expression both matches and does not match as
expected for your purpose.
6. To enter a special character at the end of your expression, you can select it from
the Select element list. To enter a special character anywhere else, you must
type it or copy it there.
7. When you are done making changes and testing, click Accept to close the Build
Regular Expression panel and copy the regular expression to the definition
panel.

Special Characters and Constructs


The following table provides a summary of the more commonly used special
characters and constructs.

Table 6. Special Characters and Constructs
literal
   Match an exact sequence of characters (case sensitive), except for the special
   characters described below.
   Example: can   Matches: can   No match: Can, cab, caN
. (dot)
   Match any character, including carriage return or newline (\n) characters.
   Example: ca.   Matches: can, cab   No match: c, cb
*
   Match zero or more instances of the preceding character(s).
   Example: Ca*n   Matches: Cn, Can, Caan   No match: Cb, Cabn
^
   Match a string beginning with the following character(s).
   Example: ^C.   Matches: Ca   No match: ca, a
$
   Match a string ending with the preceding character(s).
   Example: C.n$   Matches: Can   No match: Cn, Cab
+
   Match one or more instances of the preceding character(s).
   Example: ^Ca+n   Matches: Can, Caan   No match: Cn
?
   Match either zero or one instance of the preceding character(s).
   Example: Ca?n   Matches: Cn, Can   No match: Caan
|
   Match either the preceding or the following pattern.
   Example: Can|cab   Matches: Can, cab   No match: Cab
(x ...)
   Match the sequence enclosed in parentheses.
   Example: (Ca)*n   Matches: Can, XaCan, Cn   No match: CCnn
{n}
   Match exactly n instances of the preceding character(s).
   Example: Ca{3}n   Matches: Caaan   No match: Caan, Caaaan
{n,}
   Match n or more instances of the preceding character(s).
   Example: Ca{2,}n   Matches: Caan, Caaaan   No match: Can, Cn
{n,m}
   Match from n to m instances of the preceding character(s).
   Example: Ca{2,3}n   Matches: Caan, Caaan   No match: Can, Caaaan
[a-ce]
   Match a single character in the set, where the dash indicates a contiguous
   sequence; for example, [0-9] matches any digit.
   Example: [C-FL]an   Matches: Can, Dan, Lan   No match: Ban
[^a-ce]
   Match any character that is NOT in the specified set.
   Example: [^C-FL]an   Matches: aan, Ban   No match: Can, Dan
[[.char.]]
   Match the enclosed character or the named character from the Named Characters
   Table.
   Example: [[.~.]]an or [[.tilde.]]an   Matches: ~an   No match: @an
[[:class:]]
   Match any character in the specified character class, from the Character Classes
   Table.
   Example: [[:alpha:]]+   Matches: abc   No match: ab3
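
If you want to experiment with these constructs outside of the Build Regular
Expression tool, the short sketch below uses Python's re module purely as a test
harness; its syntax overlaps with POSIX 1003.2 for the quantifiers, anchors, and
bracket ranges shown above, although the POSIX-only [[.char.]] and [[:class:]]
forms are not available in Python. The patterns and sample strings are taken from
the table.

import re

tests = [
    (r"Ca*n", "Cn"),         # * : zero or more 'a' -> match
    (r"Ca*n", "Cabn"),       # 'b' breaks the run of 'a' -> no match
    (r"^Ca+n", "Caan"),      # + : one or more 'a', anchored at the start -> match
    (r"Ca{2,3}n", "Caaaan"), # {2,3}: too many 'a' characters -> no match
    (r"[C-FL]an", "Dan"),    # character set with a range -> match
]

for pattern, text in tests:
    result = "match" if re.search(pattern, text) else "no match"
    print(pattern, "vs", text, "->", result)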

Named Characters Table (English)


The following table describes the standard character names that can be used within
regular expression bracket pairs ([[.char.]]). Character names are locale-specific, so
non-English versions of Guardium may use a different set of character names.
v NUL \0
v SOH \001
v STX \002
v ETX \003

v EOT \004
v ENQ \005
v ACK \006
v BEL \007
v alert \007
v BS \010
v backspace \b
v HT \011
v tab \t
v LF \012
v newline \n
v VT \013
v vertical-tab \v
v FF \014
v form-feed \f
v CR \015
v carriage-return \r
v SO \016
v SI \017
v DLE \020
v DC1 \021
v DC2 \022
v DC3 \023
v DC4 \024
v NAK \025
v SYN \026
v ETB \027
v CAN \030
v EM \031
v SUB \032
v ESC \033
v IS4 \034
v FS \034
v IS3 \035
v GS \035
v IS2 \036
v RS \036
v IS1 \037
v US \037
v space ' '
v exclamation-mark !
v quotation-mark "
v number-sign #
v dollar-sign $
v percent-sign %

v ampersand &
v apostrophe \'
v left-parenthesis (
v right-parenthesis )
v asterisk *
v plus-sign +
v comma ,
v hyphen -
v period .
v full-stop .
v slash /
v solidus /
v zero 0
v one 1
v two 2
v three 3
v four 4
v five 5
v six 6
v seven 7
v eight 8
v nine 9
v colon :
v semicolon ;
v less-than-sign <
v equals-sign =
v greater-than-sign >
v question-mark ?
v commercial-at @
v left-square-bracket [
v right-square-bracket ]
v backslash \
v reverse-solidus \\
v circumflex ^
v circumflex-accent ^
v underscore _
v low-line _
v grave-accent `
v left-brace {
v left-curly-bracket {
v right-brace }
v right-curly-bracket }
v vertical-line |
v tilde ~
v DEL \177

v NULL 0

Named Character Class Table (English)

The following table describes the standard character classes that you can reference
within regular expression bracket pairs ([[:class:]]). Note that character classes are
locale-specific, so non-English versions of Guardium may use a different set of
character names.
v alnum - Alphanumeric (a-z, A-Z, 0-9)
v alpha - Alphabetic (a-z, A-Z)
v blank - Blank characters (space and tab)
v cntrl - Control
v digit - 0-9
v graph - Graphics
v lower - Lowercase alphabetic (a-z)
v print - Printable characters
v punct - Punctuation characters
v space - Space, tab, newline, and carriage return
v upper - Uppercase alphabetic (A-Z)
v xdigit - Hexadecimal digit (0-9, a-f, A-F)

Regular Expression Examples

You can copy and paste any of the expressions into a field requiring a regular
expression. When using any of these examples, we strongly suggest that you
experiment by using it in the Build Regular Expression tool, entering a variety of
matching and non-matching values, so that you understand exactly what is being
matched by the expression.

Social Security Number (must have hyphens): [0-9]{3}-[0-9]{2}-[0-9]{4}

Phone Number (North America; matches 3334445555, 333.444.5555, 333-444-5555,
333 444 5555, (333) 444 5555, and all combinations thereof):
\(?[0-9]{3}\)?[-. ]?[0-9]{3}[-. ]?[0-9]{4}

Postal Code (Canada): [ABCEGHJKLMNPRSTVXY][0-9][A-Z] [0-9][A-Z][0-9]

Postal Code (UK): [A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}

Zip Code (US; 5 digits required, hyphen followed by four digits optional):
[0-9]{5}(?:-[0-9]{4})?

Credit Card Numbers: [0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4}
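
The following sketch is one way to try these example patterns against sample
values before pasting them into a rule. It uses Python's re module only as an
offline test harness (Guardium itself evaluates patterns with its POSIX engine),
and the sample values are illustrative.

import re

patterns = {
    "Social Security Number": r"[0-9]{3}-[0-9]{2}-[0-9]{4}",
    "Zip Code (US)": r"[0-9]{5}(?:-[0-9]{4})?",
    "Credit Card Number": r"[0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4}",
}

samples = ["123-45-6789", "12345-6789", "1111 2222 3333 4444", "no numbers here"]

for name, pattern in patterns.items():
    for sample in samples:
        if re.search(pattern, sample):
            print(name, "pattern matched:", sample)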

Chapter 3. Discover 49
50 IBM Guardium 10.0
Chapter 4. Protect
After you identify databases and file systems that contain sensitive data, you can
take several steps to protect that data. Protection options include masking data,
alerting personnel based on data access, and establishing policies that enforce
access restrictions.

Baselines
A baseline is a profile of access commands executed in the past, helping to identify
normal activity and anomalous behavior (inconsistent with or deviating from
behavior that is usual, normal, or expected).

The Baseline Builder generates a baseline by examining activity previously logged
and currently available on the Guardium appliance.

When included in a security policy, the baseline becomes a baseline rule, which
allows all database access that has been included in the baseline.

A baseline rule in a policy has the following characteristics:


v There can be only one baseline rule.
v The baseline rule action is always Allow, which means accept the command and
do not continue to the next rule in the policy.
v When the baseline rule is added to the policy, it is positioned first in the list of
rules. It can be moved anywhere in the set of rules (which are evaluated in
sequence), as appropriate for the policy.
v Once a baseline rule has been included in a policy, it cannot be removed.

The Policy Builder can generate suggested policy rules from the baseline. The
suggested rules can be edited and included in the policy ahead of the baseline rule,
so that alternative actions (alerts, for example) can be taken for some commands
that were seen in the baseline period. In addition, an examination of the suggested
rules provides valuable insight into the actual traffic patterns observed (types of
commands and frequency).

The Baseline Builder provides the ability to control what gets included in the
baseline, in several ways:
v By specifying a threshold to control how many occurrences of a command must
be seen before the command will be included in the rule. A threshold of one
includes every command observed, while a threshold of 1,000 includes only
those commands occurring 1,000 times or more.
v By controlling sensitivity to one or more attributes. For example, if the baseline
is sensitive to the database user, it will include commands for specific users only.
Users who did not execute the command during the baseline period would not
be allowed by the baseline rule.
v By limiting the connections included to subsets of server and client IP addresses.
The baseline always specifies a single client network mask and a single server
network mask. Each mask can be as inclusive or as exclusive as required.
v By merging data from different time periods. There may be traffic that occurs
during non-contiguous time periods that should be included in the baseline. You
can merge the data from any number of time periods into a single baseline. In
addition, the data can be filtered for specific client and server addresses.

About Baseline Sensitivity


Baseline sensitivity can be based on any combination of the following (each will be
described in more detail, later):
v Database User
v Database Protocol
v Database Protocol Version
v Time Period
v Source Program
v Sequence

Baseline sensitivity depends on a specified threshold, which defines the minimum
number of times a command must be observed during the baseline period in order
to include that command in the baseline.

With no sensitivity selected, each command that exceeds the threshold will be
included in the baseline.

If a single type of sensitivity is selected, a separate count of each command will be
maintained for each value of the sensitivity type (database user, for example).

If multiple types of sensitivity are selected, separate counts of each command are
maintained for each combination of values for each selected type (for each
combination of database user and source program, for example). Thus for each
type of sensitivity included, the number of combinations can increase dramatically.
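
As a minimal illustration of combination counting (the data and names are made
up; this is not Guardium code), the sketch below counts each (database user,
source program, command) combination separately and includes in the baseline
only those combinations that reach the threshold.

from collections import Counter

# Illustrative traffic records: (db_user, source_program, command)
traffic = [
    ("joe", "sqlplus", "SELECT abc FROM xyz"),
    ("joe", "sqlplus", "SELECT abc FROM xyz"),
    ("ann", "app1",    "SELECT abc FROM xyz"),
    ("joe", "app1",    "UPDATE xyz SET a = 1"),
]

threshold = 2  # minimum occurrences required for inclusion

# Sensitivity to DB user and source program: each combination has its own counter.
counts = Counter(traffic)

baseline = [combo for combo, n in counts.items() if n >= threshold]
print(baseline)  # only ('joe', 'sqlplus', 'SELECT abc FROM xyz') qualifies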

About Sequence Sensitivity

If the baseline is sensitive to command sequence, then when included in a policy
the baseline rule will allow only the sequences of commands observed during the
baseline period. To illustrate with a very simple example: if the only two sequences
of commands observed in the baseline period are A-B and B-C, the following table
illustrates which sequences of commands would be allowed by that baseline rule.
Table 7. About Sequence Sensitivity
Command Sequence        Allowed
A-B                     Y
A - everything else     N
B-C                     Y
B - anything else       N
Anything but A          N
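
A hypothetical sketch of this check (not Guardium internals): only the command
pairs observed during the baseline period, here A-B and B-C, are treated as
allowed.

# Command pairs observed during the baseline period (illustrative).
allowed_pairs = {("A", "B"), ("B", "C")}

def sequence_allowed(previous_cmd, current_cmd):
    """Return True only if this command pair was seen in the baseline."""
    return (previous_cmd, current_cmd) in allowed_pairs

print(sequence_allowed("A", "B"))  # True
print(sequence_allowed("A", "C"))  # False - A followed by anything but B
print(sequence_allowed("X", "A"))  # False - anything but A or B at the start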

About Time Period Sensitivity

When the baseline is sensitive to the time period, separate counts are maintained
for each time period defined. If overlapping time periods are defined (which is a
normal situation), a command will be counted only once, in the most restrictive

time period during which it occurs. If the time-period is non-contiguous – for
example, from 00:00 to 08:00 each day of the week – only one contiguous segment
of the time period is considered (eight hours in the example).

To illustrate how the Baseline Builder assigns requests to time periods, assume that
Saturday is included in three time periods:
v 24x7 (24 hours, 7 days a week)
v Saturday (24 hours only)
v Week End (48 hours - Saturday + Sunday)

Since the time period named Saturday is the most restrictive (24 hours only), all
requests time-stamped on Saturday will be counted in that time period, and not in
the more inclusive Week End or 24x7 time periods.
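
The assignment rule can be sketched as follows (the time periods, their durations,
and the day-level simplification are assumptions for brevity, not Guardium code):
a request is counted in the most restrictive, that is the shortest, time period that
covers its timestamp.

# Illustrative time periods and the number of hours each one covers per week.
periods = {"24x7": 168, "Week End": 48, "Saturday": 24}

def covering_periods(day):
    """Periods that include the given day (simplified to day names)."""
    covers = {
        "24x7": True,
        "Week End": day in ("Saturday", "Sunday"),
        "Saturday": day == "Saturday",
    }
    return [name for name, included in covers.items() if included]

def assign_period(day):
    # The most restrictive period is the one covering the fewest hours.
    return min(covering_periods(day), key=periods.get)

print(assign_period("Saturday"))  # "Saturday", not "Week End" or "24x7"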

About Baselines in Aggregation and Central Manager Environments

If there are multiple Guardium appliances in an Aggregation and/or Central
Manager environment, there is a single important point to keep in mind when
generating and using baselines:

Baselines are generated using only the data currently available on the appliance
that is generating the baseline.

This means that:


v A baseline generated on a collector will be built using the traffic available on
that unit only.
v A baseline built on an aggregator will be built from the data currently available
on the aggregator, which typically will have been sent from multiple collectors
over a period of time.
v A baseline generated on a Central Manager that is not also an aggregator will be
empty, since a Central Manager does not collect data (unless it is also an
aggregator).
v In a Central Management environment, a baseline generated on a managed unit
will be built using data from that unit only, but the baseline will be stored on
the Central Manager, and it will be available for use on any other unit.
v In a Central Management environment, to generate a single baseline from
multiple managed units, the baseline can be built with data from the first
managed appliance, and then merged using data from the other appliances, one
at a time.

About Suggested Rules


When a baseline is included in a policy, the Policy Builder can generate suggested
rules from the baseline. It will generate the minimum number of rules necessary to
represent everything that is included in the baseline. You can then accept any or all
of the suggested rules, and modify the accepted ones as necessary. In addition to
being a convenient way to generate an explicit policy (rather than an implicit
policy based on a baseline only), this is an important step in validating that a
baseline does not include malicious or erroneous activity that may have occurred
during the baseline period.

You may want to modify the suggested rules if you discover an activity that
occurred during the baseline period that you would like to monitor or alert upon
in the future. You simply tailor the appropriate rule suggested from the baseline,
and assign the desired action. By default, the suggested rules will be positioned
before the baseline rule, so that the action specified will be taken before the
baseline rule executes to allow that command with no further testing of rules.

Note: The Policy Builder can also generate rules from the database ACL. See
“Policies” on page 57 for more information.

About Suggested Object Groups

When generating suggested rules from either the baseline or the database ACL
(access control), the Policy Builder minimizes the number of suggested rules by
creating suggested object groups. For example, assume the baseline includes a
particular command that references only three objects: AAA, BBB, and CCC, and
that there is not already an object group defined consisting of only those three
objects. The Policy Builder will create a suggested object group for those objects,
and will generate a single rule for the command, which references the suggested
object group.

You can display the membership of a suggested object group, and you have the
option of accepting or rejecting each group. In the example just given, if you reject
the suggested object group, the single rule that references it will be replaced by
three suggested rules (one each for AAA, BBB, and CCC).

Creating a Baseline
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. Click New to open the Baseline Builder.
3. Enter a unique baseline name in the Baseline Description box. Do not include
apostrophe characters in the baseline description.
4. In the Baseline Sensitivity pane, mark each element to which the baseline will
be sensitive. The more sensitive the baseline, the more complex the testing
that will be done both when creating the baseline and more importantly, when
inspecting traffic. See the Overview, for more information about baseline
sensitivity.
5. In the Baseline Threshold pane, enter the minimum number of occurrences for
a command during the baseline period for that command to be included in
the baseline. If one or more sensitivity boxes have been marked, this count
applies to the combination of sensitive values.
If the approach you are taking in building your security policy is to always
allow the most commonly issued commands from the past, then set this
number upwards to the appropriate level. If, on the other hand, you want to
ensure that the baseline is comprehensive, then leave this value set to 1. In
either case, you can have the Policy Builder suggest rules from the baseline.
The suggested rules are sorted in descending order by frequency in the
baseline period, so you can decide at that time whether to include or modify
rules for each unique command issued.
6. Use the Baseline Network Information pane to identify the servers and clients
to be included in the baseline. The method used to select which IP addresses
to use to construct the baseline is the same for servers and clients.
For each address encountered in the baseline data, membership in an optional
tagged group is considered first. A tagged group is a specific list of IP
addresses for which baseline constructs will be generated. If a tagged group is
selected, and if an IP address encountered in the baseline data is included in
the corresponding tagged group, that element will be included in the baseline
for that specific IP address. For example, assume that the Tagged Client IP
Group named ZoneAGroup has been selected, and that group includes a
client address of 192.162.14.33. If the baseline generator encounters the
command SELECT abc FROM xyz from that IP address, that command will be
counted for that specific address.
In contrast, if no tagged group is selected, or if an IP address is encountered
in the baseline data that is not a member of the selected tagged group, that
command may be counted with identical commands from other IP addresses
as directed by the corresponding network mask.
The network mask is required to group both client and server IP addresses.
Choices include all the different variations of subnet masks between
255.255.255.255 (all four octets must match) and 0.0.0.0 (all octets can be
anything).
You must always:
v Enter a subnet mask in the Server Network Mask box.
v Enter a subnet mask in the Client Network Mask box.
To illustrate how the baseline builder uses network masks to group addresses,
assume that:
v The Client Network Mask is 255.255.0.0, meaning that the first two octets
must match, but the second two octets can be anything.
v In the baseline data, a request with the client IP address 192.168.3.211 is
encountered.
v That client IP address is not in the selected Tagged Client IP Group (or
there is no Tagged Client IP Group selected).
v The command is SELECT abc FROM xyz.
When generating the baseline, this command will be included in the count of
all SELECT abc FROM xyz commands for all client IP addresses from the
192.168.0.0 subnet (a short sketch of this grouping follows the procedure).
7. Click Save to validity-check and save the baseline definition. If you have
omitted required fields or entered invalid values, the definition will not be
saved and you must resolve any problems before attempting to save again.
8. Optionally click Roles to assign roles for the policy.
9. Optionally click Comments to add comments to the definition.
10. After a baseline has been saved successfully, the Baseline Generation and
Baseline Log panes appear on the panel.
11. Click anywhere on the Baseline Generation pane title to expand the pane.
12. Supply both From and To dates to define the time period from which the
baseline is to be generated. Regardless of how you enter dates, any minutes or
seconds specified will be ignored.
13. Click the Generate button to generate the baseline. If you have modified the
baseline definition, you will be prompted to save the definition before
generating the baseline.

Note: After you successfully generate the baseline for the first time, additional
fields are displayed in the Baseline Generation panel. These fields allow you to
merge data from additional time periods into the baseline, and to restrict the client
and server IP addresses used during each additional time period.
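
The subnet grouping described in step 6 can be sketched as follows (an illustration
only, not Guardium code): applying the 255.255.0.0 client network mask to
192.168.3.211 groups that client with every other client in the 192.168.0.0 subnet,
while a 255.255.255.255 mask keeps each address separate.

import ipaddress

def group_key(ip_str, mask_str):
    """Return the masked network address used to group client or server IPs."""
    ip = int(ipaddress.IPv4Address(ip_str))
    mask = int(ipaddress.IPv4Address(mask_str))
    return str(ipaddress.IPv4Address(ip & mask))

print(group_key("192.168.3.211", "255.255.0.0"))      # 192.168.0.0
print(group_key("192.168.77.5", "255.255.0.0"))       # 192.168.0.0 (same group)
print(group_key("192.168.3.211", "255.255.255.255"))  # 192.168.3.211 (exact match)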

Merge Baseline Information
To merge baseline information (to include information from additional time
periods and/or from different groups of clients and servers, for example):
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline into which additional
baseline information is to be merged.
3. Click Modify to open the Edit Baseline panel.
4. Do not modify the Baseline Sensitivity selections. If you modify the baseline
sensitivity, you are prompted to generate a completely new baseline to replace
the existing one.
5. Optional. Set the Minimum number of occurrences for addition to Baseline
value in the Baseline Threshold pane. The value entered here has no impact
on information previously included in the baseline. Once something is added
to the baseline, it is not removed during a merge operation.
6. Optional. Enter alternative network information in the Baseline Network
Information pane. The displayed values are from the last generate or merge
operation. If the merged information comes from the same set of servers
and/or clients, leave these fields unchanged. Otherwise, make the appropriate
changes in this pane to select the traffic to be included in the baseline.
7. Click anywhere on the Baseline Generation pane title to expand the pane.
8. Supply both From and To dates to define the time period from which the
baseline is to be generated. Regardless of how you enter dates, any minutes or
seconds specified will be ignored.
9. Select the Merge radio button.
10. Optional. In the Filter Selection pane, limit the baseline generation to specific
client and/or server IP addresses by entering an IP address followed by a
network mask. For example, to select all client IP addresses from the
192.168.9.x subnet, enter 192.168.9.1 in the first Client IP box, and 255.255.255.0
in the second box. To include additional addresses, click the Add button, then
enter the additional address information
11. Click Generate to generate the baseline. If you have modified the baseline
definition, you will be prompted to save the definition before generating the
baseline.

Modify a Baseline
Caution: Before modifying a baseline definition, be sure that you understand the
implications of modifying it, particularly if the baseline whose definition you want
to modify and re-generate is used in an installed policy. If you modify and
re-generate a baseline contained in an installed policy, when you re-install that
policy it will use the new baseline. To provide a fall-back option for baselines used
by installed policies, consider instead cloning these baselines and policies, and
modifying and generating the cloned definitions. See Clone a Baseline for more
information.
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline to be modified. Click the
Modify button to open the Edit Baseline panel. Apart from the panel title, this
panel is identical to the Add Baseline panel. See Create a Baseline for
instructions on using this panel.

Clone a Baseline
There are a number of situations where you may want to define a new baseline
based on an existing one, without modifying the original definition. See the
caution.
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline to be cloned.
3. Click Clone to open the Clone Baseline panel.
4. Enter a unique name for the new baseline in the New Baseline Description box.
Do not include apostrophe characters in the new baseline description.
5. To clone the baseline constructs (the commands, basically) that have been
generated for the baseline being cloned, mark the Clone Constructs checkbox.
6. Click Accept to save the new baseline. You can then open and edit the new
baseline by using the Baseline Finder.

Remove a Baseline
1. Click Protect > Security Policies > Baseline Builder to open the Baseline
Finder.
2. From the Baseline Definition list, select the baseline to be removed.
3. Click Delete. You are prompted to confirm the action.

Policies
A security policy contains an ordered set of rules to be applied to the observed
traffic between database clients and servers. Each rule can apply to a request from
a client, or to a response from a server. Multiple policies can be defined and
multiple policies can be installed on a Guardium appliance at the same time.

Each rule in a policy defines a conditional action. The condition tested can be a
simple test - for example it might check for any access from a client IP address that
does not belong to an Authorized Client IPs group. Or the condition tested can be
a complex test that considers multiple message and session attributes (database
user, source program, command type, time of day, etc.), and it can be sensitive to
the number of times the condition is met within a specified timeframe.

The action triggered by the rule can be a notification action (e-mail to one or more
recipients, for example), a blocking action (the client session might be
disconnected), or the event might simply be logged as a policy violation. Custom
actions can be developed to perform any tasks necessary for conditions that may
be unique to a given environment or application. For a complete list of actions, see
Rule Actions Overview.

A policy violation is logged each time that a rule is triggered (except when the rule
explicitly requests no logging). Optionally, the SQL that triggered the rule
(including data values) can be recorded with the policy violation. Policy violations
can be assigned to incidents, either automatically by a process, or manually by
authorized users (see the Incident Management tab in the Guardium GUI). For
further information, see “Incident Management” on page 179.

Note: Correlation alerts can also be written to the policy violations domain (see
“Correlation Alerts” on page 131).

In addition to logging violations, policy rules can affect the logging of client traffic,
which is logged as constructs and construct instances.
v Constructs are basically prototypes of requests that Guardium detects in the
traffic. The combinations of commands, objects and fields included in a construct
can be very complex, but each construct basically represents a very specific type
of access request. The detection and logging of new constructs begins when the
inspection engine starts, and by default continues (except as described)
regardless of any security policy rules.
v Each instance of a construct detected in the traffic is also logged, and each
instance is related to a specific client-server session. No SQL is stored for a
construct instance, except when a policy rule requests the logging of SQL for
that instance, or for a particular client/server session of instances (with or
without values).

In addition to controlling the inclusion of SQL in client construct instances, a
security policy rule can disable the logging of constructs and instances for the
remainder of a session.

In heavy volume situations, the parsing and aggregating of information into
constructs and instances can be deferred by using the Log Flat (Flat Log) option.
When used, the production of alerts and reports will be delayed until the logged
information has been aggregated. See Log Flat discussed later in this topic.

To completely control the client traffic that is logged, a policy can be defined as a
selective audit trail policy. In that type of policy, audit-only rules and an optional
pattern identify all of the client traffic to be logged. See Use Selective Audit Trail
discussed later in this topic.

In addition to installing new policies from the Policy Installer screen of
Administration Console/Policy Installation:
v A new policy can be installed from Policy Finder screen.
v From the Policy Definition screen, an installed policy can be reinstalled, without
reinstalling other installed policies.
v From Policy Rules screen, an installed policy rule can be reinstalled, without
reinstalling the entire policy.

On a new installation only (not on upgrades), a default policy exists. It has no
rules, but Selective Audit is checked (this means that the Guardium system will
not collect any traffic per the default policy). The default policy on 64-bit
Guardium (new installation) is Default - Ignore Data Activity for Unknown
Connections.

For information on Guardium for Applications (which also uses the Policy
Builder), see “Configure data masking policy” on page 166

Policy Rule Basics


Within a policy, rules are evaluated in the order in which they appear, as each
element of traffic is analyzed.

There are three types of rules:


v An access rule applies to client requests - for example, it might test for UPDATE
commands issued from a specific group of IP addresses.
v An exception rule evaluates exceptions returned by the server (responses) - for
example, it might test for five login failures within one minute.
v An extrusion rule evaluates data returned by the server (in response to requests)
- for example, it might test the returned data for numeric patterns that could be
social security or credit card numbers.

Category, Classification, and Severity

For each rule, an optional Category and/or Classification can be assigned. These
are used to group policy violations for both reporting and incident management.

Minimum Counts and Reset Intervals

Some activities are normal and acceptable when they occur less than a certain rate.
But those same activities may require attention when the rate exceeds a tolerable
threshold. For example, if interactive database access is allowed, a consistent but
relatively low rate of login failures might be expected, whereas a sharply higher
rate might indicate an attack is in progress.

To deal with thresholds, a minimum count and a reset interval can be specified for
each policy rule. This can be used, for example, to trigger the rule action after the
count of login failures exceeds 100 (the minimum count) within one minute (the
reset interval). If omitted, the default is to execute the rule action each time the
rule is satisfied.
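
The behavior can be sketched as follows (a simplified illustration; the class and
field names are not Guardium internals): the rule action fires only when the
number of matches within the reset interval reaches the minimum count, after
which counting starts over.

import time

class ThresholdRule:
    """Fire an action only after min_count matches within reset_interval seconds."""

    def __init__(self, min_count, reset_interval):
        self.min_count = min_count
        self.reset_interval = reset_interval
        self.count = 0
        self.window_start = None

    def on_match(self, now=None):
        now = time.time() if now is None else now
        if self.window_start is None or now - self.window_start > self.reset_interval:
            # The interval has expired: start a new counting window.
            self.window_start = now
            self.count = 0
        self.count += 1
        if self.count >= self.min_count:
            self.count = 0
            self.window_start = now
            return True  # trigger the rule action (alert, block, and so on)
        return False

# Example: act only after 100 login failures within one minute.
rule = ThresholdRule(min_count=100, reset_interval=60)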

Continue to Next Rule

By default, the evaluation of access and exception rules for a unit of traffic ends
when a rule is triggered, provided that the rule does not specify multiple actions. In
cases where it is necessary to take multiple actions for the same or similar
conditions, mark the Continue to Next Rule box for that rule.

Note: Continue to Next Rule applies to access rules following access rules and to
exception rules following exception rules, but not to an exception rule following an
access rule or an access rule following an exception rule.

Extrusion rules are processed even if a preceding access or exception rule has
ended rule evaluation. See extrusion rules revoke in the Rule Definitions Reference
table at the end of this topic for information on excluding from logging a response
that has already been selected for logging by a previous rule in the policy.

Because baselines are relevant only to access rules, using a baseline with exception
or extrusion rules cannot limit or stop the continuation to the next rule.

Record Values with Policy Violation

When marked, the actual construct causing the rule to be satisfied will be logged
in the SQL String attribute and is available in reports. If not marked, no SQL
statement will be logged. To include the full values in the policy violation, mark
the Rec. Vals box for that rule.

Note: The full SQL with values will be available only in the policy violation
record, within the policy violations reporting domain. It will not be available in the
client traffic log, or on reports from the data access domain. To include full SQL
(with or without data values) in the client traffic log, use the Log Full SQL rule
actions.

For more information about working with rules, see the following topics:
v View the Policy Rules for the Installed Policy
v Specify Values and/or Groups of Values in Rules
v Filter Rules to Display only a Subset
v Copy Rules
v Using Rules Suggested from the Baseline
v Using Rules Suggested from the Database ACL.
v Add or Edit Rules
v Using the Policy Simulator

Specify Values and/or Groups of Values in Rules

For many rule attributes, you can specify a single value and/or a group value,
using controls like those illustrated for the App User.

When a group is selected, be aware that a group member may contain wildcard
(%) characters, so each member of a group may match multiple actual values.
v Negative Rule: Mark the Not box to create a negative rule; for example, not the
specified App User, or not any member of the selected group, or neither the
specified App User nor any member of the selected group.
v Empty Value: Enter the special value guardium://empty to test for an empty
value in the traffic. This is allowed only in the following fields: DB Name, DB
User, App User, OS User, Src App, Event Type, Event User Name, and App
Event Text.
v To define a new group to be tested: Click the Groups button to define a new
group, and then select that group from the Group list.
v To match any value: Leave the value box blank, and select nothing from the
Group list (be sure that the line of dashes is selected, as in the example).
v To match a specific value only: Enter that value in the value box, and select
nothing from the Group list.
v To match any member of a group: Leave the value box blank, and select the
group from the list. If the minimum count is greater than 1, there will be a
single counter, and it will be incremented each time any member of the group is
matched.
v To match an individual value or any member of a group: Enter a specific value
in the value box, and select a group from the list. If the minimum count is
greater than 1, there will be a single counter, and it will be incremented each
time the individual value or any member of the group is matched.
v If the minimum count is greater than 1, count each individual value separately:
Enter a dot (.) in the value box, and select nothing from the group list. Note that
the dot option cannot be used for the Service Name or Net Protocol boxes.
v If the minimum count is greater than 1, count each member of a group separately:
Enter a dot (.) in the value box, and select a group from the list. Note that the
dot option cannot be used for the Service Name or Net Protocol boxes.

Pattern matching using Regular Expressions


In addition to special pattern tests, regular expressions can be used to search traffic
for complex patterns in the data. The Guardium implementation of regular
expressions conforms with POSIX 1003.2, which differs from the UNIX
implementation of regular expressions. Regular expressions are allowed in any
field that is followed by the Build Regular Expression button.

Note: You can also use regular expressions in the following fields (DB user, App
User, SRC App, Field name, Object, App Event Values Text) by typing the special
value guardium://regexp/(regular expression) in the text box that corresponds to
the field.

Note: IBM Security Guardium does not support regular expressions for
non-English languages.

Special pattern tests


You can use these special pattern tests to identify sensitive data that is contained in
the traffic that flows between the database server and the client.

Each policy rule can include a single special pattern test. To use one of these tests,
begin the rule name with one of the special pattern test names, followed by a
space and one or more additional characters to make the rule name unique. For
example, if you are searching for Social Security numbers of your employees, you
could name the rule guardium://SSEC_NUMBER employee. You can still specify all
other components of the rule, such as specific client and server IP addresses.

These tests match a character pattern, and that match does not guarantee that the
suspected item, such as a Social Security number, has been encountered. There can
be false positives under a variety of circumstances, especially if longer sequences
of numeric values are concatenated in the data.
guardium://CREDIT_CARD
Detects credit card number patterns. It tests for a string of 16 digits or for
four sets of four digits, with each set separated by a blank. This special
pattern test also works with American Express 15-digit credit card number
patterns (first digit 3 and second digit either 4 or 7). For example:
1111222233334444 or 1111 2222 3333 4444
When a rule name begins with "guardium://CREDIT_CARD", and there is
a valid credit card number pattern in the Data pattern field, the policy uses
the Luhn algorithm, a widely-used algorithm for validating identification
numbers such as credit card numbers, in addition to standard pattern
matching. The Luhn algorithm is an additional check and does not replace
the pattern check. A valid credit card number is a string of 16 digits or
four sets of four digits, with each set separated by a blank. Both the
guardium://CREDIT_CARD rule name and a valid [0-9]{16} pattern in the Search
Expression box are required for the Luhn algorithm to be applied during this
pattern matching (a short sketch of the Luhn check follows this list).
guardium://PCI_TRACK_DATA
Detects two patterns of magnetic stripe data. The first pattern consists of a
semi-colon (;), 16 digits, an equal sign (=), 20 digits, and a question mark
(?), such as:
;1111222233334444=11112222333344445555?
The second pattern consists of a percent sign (%), the character B, 16 digits,
a carat (^), a variable-length character string terminated by a forward slash
(/), a second variable-length character string terminated by a carat (^), 31
digits, and a question mark (?), such as:
%B1111222233334444^xxx/xxxx x^1111222233334444555566667777888?

guardium://SSEC_NUMBER
Detects numbers in Social Security number format: three digits, dash (-),
two digits, dash (-), four digits, such as 123-45-6789. The dashes are
required.
guardium://CPF
The Cadastro de Pessoas Físicas (CPF), a Brazilian personal identifier. It
contains 11 digits of the format nnn.nnn.nnn-nn, where the last two digits
are check digits. Check digits are computed from the original nine digits to
provide verification that the number is valid. The formatting characters
within the expression are optional. If there is a match on the expression,
the check digits are validated.
guardium://CNPJ
Cadastro Nacional de Pessoas Jurídicas (CNPJ), an identification number
used for Brazilian companies. It contains 14 digits of the format
00.000.000/0001-00 where:
v The first eight numbers show the registration.
v The next four numbers identify the entity branch. 0001 is the default
value for head quarters.
v The last 2 numbers are the check digits.
The formatting characters within the expression are optional. If there is a
match on the expression, the check digits are validated.
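
For reference, the Luhn check mentioned under guardium://CREDIT_CARD can be
sketched as follows. This is the standard, publicly documented algorithm, shown
here only as an illustration; it is not Guardium's internal implementation.

def luhn_valid(number: str) -> bool:
    """Double every second digit from the right, subtract 9 from any result
    over 9, and require the total to be divisible by 10."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True - a well-known test number
print(luhn_valid("4111111111111112"))  # False - matches the pattern but fails Luhn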

Rule actions
There are a number of factors to consider when selecting the action to be taken
when a rule is satisfied.

Blocking Actions (S-TAP/S-GATE)

This section describes S-TAP® Terminate and S-GATE actions.

S-TAP Terminate Action

The S-TAP TERMINATE action will terminate a database connection (a session)
and prevent additional requests on that session. This action is available in S-TAP,
regardless of whether S-GATE is used or not.

Note: With S-TAP TERMINATE, the triggering request usually will not be blocked,
but additional requests from that session will be blocked (on high rate, sometimes
more than one request may go through before the session is terminated).

S-GATE Actions

S-GATE provides database protection via S-TAP for both network and local
connections.

When S-GATE is available, all database connections (sessions) are evaluated and
tagged to be monitored in one of the following S-GATE modes:
v Attached (S-GATE is "on") – S-TAP is in firewalling mode for that session, it
holds the database requests and waits for a verdict on each request before
releasing its responses. In this mode, latency is expected. However, it assures
that rogue requests will be blocked.
v Detached (S-GATE is "off") - S-TAP is in normal monitoring mode for that
session, it passes requests to the database server without any delay. In this mode
latency is not expected.

S-GATE configuration in "guard_tap.ini" defines the default S-GATE mode
("attached" or "detached") for all sessions, as well as other defaults related to
S-GATE verdicts when the collector is not responding. Other than the default
S-GATE configuration, S-GATE is controlled through the real-time policy
mechanism using the following S-GATE Policy Rule Actions:
v S-GATE ATTACH: sets S-GATE mode to "Attached" for a specific session.
Intended for use when a certain criteria is met that raises the need to closely
watch (and if needed block) the traffic on that session.
v S-GATE DETACH: sets S-GATE mode to "Detached" for a specific session.
Intended for use on sessions that are considered as "safe" or sessions that cannot
tolerate any latency.
v S-GATE TERMINATE: Has effect only when the session is attached. It drops the
reply of the firewalled request, which will terminate the session on some
databases. The S-GATE TERMINATE policy rule will cause a previously watched
session to terminate.

Note:
v S-GATE/ S-TAP termination does not work on a client IP group whose members
have wild-card characters. S-GATE/S-TAP termination only works with a single
IP address.
v For version 8.0 and higher, S-GATE actions do not support Oracle ASO
encrypted traffic, or shared memory sessions for DB2® or Informix®, under
Linux.
v For MySQL databases, note that MySQL's default command line connection is
'mysql -u<user> -p<pass> <dbname>'.
In this mode, MySQL first maps all the objects and fields in the database to
support auto-completion (with TAB). If a terminate rule matches any object or
field that is involved in this mapping, it immediately disables the connection
session. To avoid this, connect to MySQL with the "-A" flag, which disables the
auto-complete feature and therefore does not trigger the terminate rule. Another
option is to fine-tune the rule so that it does not terminate on ANY access to these
objects/fields, and instead uses a narrower criterion that will not trigger the rule
on the login sequence.

Alerting Actions

Alert actions send notifications to one or more recipients.

For each alert action, multiple notifications can be sent, and the notifications can be
a combination of one or more of the following notification types:
v Email messages, which must be addressed to Guardium users, and will be sent
via the SMTP server configured for Guardium. Additional receivers for real-time
email notification are Invoker (the user that initiated the actual SQL command
that caused the trigger of the policy) and Owner (the owner/s of the database).
The Invoker and Owner are identified by retrieving user IDs (IP-based)
configured via Guardium APIs. The choice Data Security User - Database
Associations (available from accessmgr) displays the mapping (this is similar to
what is displayed if running the Guardium API command
"list_db_user_mapping").

v SNMP traps, which will be sent to the trap community configured for the
Guardium appliance.
v Syslog messages, which will be written to syslog.
v Custom notifications, which are user-written notification handlers, implemented
as Java classes.

Note: Alerts definition and notification are not subject to Data Level Security.
Reasons for this include alerts are not evaluated in the context of user, the alert
may be related to databases associated to multiple users and to avoid situations
where no one gets the alert notification.

Message templates are used to generate alerts. Multiple Named Message Templates
are created and modified from Global Profile. There are several types of alert
actions, each of which may be appropriate for a different type of situation.
v Alert Daily sends notifications only the first time the rule is matched each day.
v Alert Once Per Session sends notifications only once for each session in which
the rule is matched. This action might be appropriate in situations where you
want to know that a certain event has occurred, but not for every instance of
that event during a single session. For example, you may want a notification
sent when a certain sensitive object is updated, but if a program updates
thousands of instances of that object in a single session, you almost certainly
would not want thousands of notifications sent to the receivers of the alert.
v Alert Only - action that will write to message and message_text tables. This
action permits all policy violation notifications to be sent to a remote destination.
Designed to improve Guardium integration with other database security
solutions. This alerting action is similar to Alert per match.
v Alert Per Match sends notifications each time the rule is satisfied. This would be
appropriate for a condition requiring attention each and every time it occurs.
v Alert Per Time Granularity sends notifications once per logging granularity
period. For example, if the logging granularity is set to one hour, notifications
will be sent for only the first match for the rule during each hour. (The
Guardium administrator sets the logging granularity on the Inspection Engine
Configuration panel.)

Log or Ignore Actions


These actions control the level of logging, based on the observed traffic.

The Log and Ignore commands are generally always available, but the Audit Only
action is only available for a Selective Audit Trail policy. Access rules, exception
rules and extrusion rules differ in what actions are permitted. Click on the Add
Action button for offerings.
v Audit Only: Available for a Selective Audit Trail policy only. Log the construct
that triggered the rule. For a Selective Audit Trail policy, no constructs are
logged by default, so use this selection to indicate what does get logged. When
using the Application Events API, you must use this action to force the logging
of database user names, if you want that information available for reporting
(otherwise, in this case, the user name will be blank).
v Allow: When matched, do not log a policy violation. If "Allow" action is
selected, no other actions can be added to the rule. Constructs are logged.
v FAM Alert and Audit - two rule actions - Alert, on a matching event, trigger an
alert (using receiver and template) and Audit, log the construct that triggered
the rule.
v FAM Audit only - log the construct that triggered the rule.
v FAM Ignore - Do not log this event.
v FAM Log Only Access Violations - log FAM access violations.
v Log only: Log the policy violation only. We refer to the fact that the rule was
triggered as a policy violation. Except for the Allow action, a policy violation is
logged each time a rule is triggered (unless that action suppresses logging).
v Log masked details: Log the full SQL for this request, replacing values with
question marks (???). This action is available for access rules and extrusion rules.
v Log full details: Log the full SQL string and exact timestamp for this request. See
notes in Further Discussion and Examples.
v Log full details with values: Like Log full details, but in addition, each value is
stored as a separate element (parse and log the values into a separate table in
the database). This log action uses more system resources as it logs the specific
values of the relevant commands. Use this log action only when you need to
generate reports with specific conditions on these values. Activation of this log
action choice is not available without consulting Technical Services (admin
user/Tools/Support Maintenance).
v Log full details per session: Log the full SQL string and exact timestamp for this
request and for the remainder of the session.
v Log full details with values per session: See the descriptions of Log full details
with values and Log full details per session. Activation of this log action choice
is not available without consulting Technical Services (admin
user/Tools/Support Maintenance).
v Skip Logging: When matched, do not log a policy violation, and stop logging
constructs. This is similar to the Allow action, but it additionally stops the
logging of constructs. This action is used to eliminate the logging of constructs
for requests that are known to be of no interest. GDM_CONSTRUCT will be
logged in some cases, because parse/log of construct occurs before the rule is
applied. However, the construct will not be included in the session. This feature
also applies for exception rules concerning database error code only, allowing
users to not log errors when an application generates large amounts of errors
and there is nothing that the user can do to stop the application errors.
v Ignore responses per session: Responses for the remainder of the session will be
ignored. This action does not log a policy violation, but it stops analyzing
responses for the remainder of the session. This action is useful in cases where
you know the database response will be of no interest. This action works when
sniffing data from an S-TAP. This action does not work when sniffing data from
a SPAN port.

Note: For ignore responses per session, because the sniffer either does not receive
a response for the query or ignores it, the values for COUNT_FAILED and
SUCCESS remain the table defaults, in this case COUNT_FAILED=0 and
SUCCESS=1.
v Ignore session: The current request and the remainder of the session will be
ignored. This action does not log a policy violation, but it stops the logging of
constructs and will not test for policy violations of any type for the remainder of
the session. This action might be useful if, for example, the database includes a
test region, and there is no need to apply policy rules against that region of the
database. Ignore Session rules provide the most effective method of filtering
traffic. An ignore session rule will cause activity from individual sessions to be
dropped by the S-TAP or completely ignored by the sniffer. Note: connection
(login/logout) information is always logged, even if the session is ignored.
v Ignore S-TAP session: The current request and the remainder of the S-TAP
session will be ignored. This action is done in combination with specifying in
the policy builder menu screen of certain systems, users or applications that are
producing a high volume of network traffic. This action is useful in cases where
you know the database response from the S-TAP session will be of no interest.
Two options for Ignore S-TAP session: IGNORE_ENTIRE_STAP_SESSION - This
is a "hard" ignore and can not be revoked, and IGNORE_STAP_SESSION
(REVOCABLE) - This is a "soft" ignore, and this rule action can enable the
session traffic to be sent again without requiring a new connection to the
database. REVOKE Ignore - Sessions that were ignored by the action "IGNORE
S-TAP SESSION (REVOCABLE)" will be resumed, meaning the traffic will be
sent to Guardium system after "REVOKE Ignore" command received by the
S-TAP. (This command can be sent from S-TAP control-->send command).
v Ignore SQL per session: No SQL will be logged for the remainder of the session.
Exceptions will continue to be logged, but the system may not capture the SQL
strings that correspond to the exceptions.
v Log Extrusion Counter: Available only for extrusion rules, this action updates
the counter, but does not log any of the returned data. This action saves disk
space when the counter value is most important and returned values are the
least important.
v Log Masked Extrusion Counter: Available only for extrusion rules, this action
updates the counter; logs the SQL request, replacing values with question marks;
does not log the returned data (response).
v Quarantine: Available for access, exception and extrusion rules, the purpose of
this action is to prevent the same user from logging into the same server for a
certain period of time. There is one validation item: you cannot have a rule with
a QUARANTINE action without having filled in a value for amount of time that
the user is quarantined. See Quarantine for (minutes) to set this quarantine time.
If the session is watched (the S-GATE scenario), a drop verdict is sent; if the
session is not watched (the S-TAP TERMINATE scenario), the S-TAP stops the
session. The current time plus the number of minutes from the reset interval field
gives a new timestamp; an entry is kept in a list sorted by that timestamp, and
each entry records, in addition to the timestamp, the server IP, server type, DB
user name, service name, and a flag indicating whether the session was watched.
v No Parse - Do not parse the SQL statement.
v Quick Parse No Fields - Do not parse fields in the SQL statement.
v Quick Parse Native - This is used only for Guardium S-TAP for DB2 on z/OS.
Use this rule action in an environment where heavy traffic is overloading the
Sniffer. Use of this rule action should improve performance in the S-TAP for DB2
on z/OS.
v Quick Parse: For access rules only, for the remainder of the session, do not parse
the SQL statement. This reduces parsing time. In this mode all objects accessed
can be determined (since objects appear before the WHERE clause), but the exact
object instances affected will be unknown, since that is determined by the
WHERE clause.
v Redact: For extrusion rules only, this feature allows a customer to mask portions
of database query output (for example, credit card numbers) in reports for
certain users. The selection Replacement Character in the Data Pattern/SQL
Pattern section of the extrusion rule menu choices defines the masking character.
Should the output produced by the extrusion rule match the regular expression
of the Data Pattern, the portions that match sub-expressions between parentheses
"(" and ")" will be replaced by the masking character. Predefined regular
expressions (fast regexp) can also be used. See Data Pattern in the Rule Definition
Reference table at the end of this topic. (A short masking sketch follows this list.)
v Record Values Separately/ Do Not Record Values Separately: This action is a
session-based access rule. Used in Replay function to distinguish between
transactions.
v Mark as Auto-Commit ON/ Mark as Auto-Commit OFF: This action is a
session-based access rule. Used in Replay function due to various auto-commit
models for different databases.
v z/OS Audit: Used specifically for z/OS Collection Profile policy rules (IMS, Data
Sets, and DB2), which are used to specify which traffic to collect on the z/OS
server. This action specifies that traffic that meets the filtering criteria is sent to
the collector, and it is the only action that can be specified on a Collection Profile
rule.
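
As a hedged illustration of the Redact action (the pattern shown is illustrative only, derived from the credit card example in the Rule Definition Reference table, and is not a prescribed configuration), an extrusion rule could capture the first three groups of a credit card number in a parenthesized sub-expression and leave the final group unmasked:

Data Pattern:          ([0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4})[-, ]?[0-9]{4}
Replacement Character: *

With this rule installed, returned values that match the pattern have the portion inside the parentheses (the first twelve digits) replaced by the masking character, while the last four digits remain visible in reports.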

Note:

Redaction (Scrub) on Linux is supported as of version 9.1. For all UNIX platforms,
Scrub is supported only with ANSI character sets.

Redaction (Scrub) rules should be set on the session level (that is, trigger rules
on session attributes such as IPs and Users), not on SQL-level attributes (such as
OBJECT_NAME or VERB). If you set the scrub rule on the SQL that needs to be
scrubbed, it will probably take a few milliseconds for the scrub instructions to
reach the S-TAP, during which some results may go through unmasked.

To guarantee that all SQL is scrubbed, set the S-TAP (S-GATE) default mode to
"attach" for all sessions (in guard_tap.ini). In attach mode, no command goes
through without being inspected by the rules engine: each request is held until
the policy's verdict on the request is returned. This deployment introduces some
latency, but it is the way to ensure 100% scrubbed data.
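
As a minimal sketch only, and assuming an S-TAP whose S-GATE settings use the parameter names shown here (verify the exact names and values for your S-TAP version in the S-TAP documentation before editing guard_tap.ini), the attach-by-default behavior is typically configured with lines such as:

firewall_installed=1
firewall_default_state=1

In this assumed configuration, firewall_installed enables S-GATE and a firewall_default_state of 1 starts every session in the attached (held and inspected) state, while 0 would start sessions detached (watch only).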

Note:

For HTTP support, there are Policy action limitations. The following policy actions
are not supported for HTTP: S-TAP terminate and Skip logging.

For other actions, the following are not supported by HTTP:


v Ignore Responses Per Session: because HTTP does not support exception and
extrusion rules.
v Ignore SQL Per Session: because HTTP traffic does not contain SQL.
v Quarantine: This action is used to quarantine a user, but HTTP does not support
DBUser and OSUser.
v Quick Parse: This action applies to SQL logging.
v SGate Terminate: This action is not supported for Hadoop; none of the terminate
actions work for HTTP.

For policy conditions - these conditions are not supported for HTTP:

Client MAC; DB Name; DB User; App User; OS User; Src App; Masking Pattern;
Replacement Character; Quarantine for minutes; Records Affected Threshold; XML
Pattern; Event Type; Event User Name; App Event Values Text; App Event Values
Text Group; App Event Values Text and Group; Numeric; Date.

Further discussion and examples
Log Full Details
By default the Guardium collector masks all values when logging an SQL
string. For example
insert into tableA (name,ssn,ccn) values ('Bob Jones', '429-29-2921','29249449494949494')
is logged as insert into tableA (name,ssn,ccn) values (?, ?, ?). This is
the default behavior for two reasons:
1. Values should not be logged by default because they may contain
sensitive information.
2. Logging without values can provide for increased system performance
and longer data retention within the appliance. Very often, database
traffic consists of many SQL requests, identical in everything except for
their values, repeated hundreds, thousands, or even millions of times
per hour. By masking the values, Guardium is able to aggregate these
repeated SQL requests into a single request, called a "construct". When
constructs are logged, instead of each individual SQL request/construct
being logged separately, it is only logged once per hour (per session)
with a counter of how many times the construct was executed. This can
save a tremendous amount of disk space because, instead of creating
hundreds (or millions) of lines in the database, only one new line is
added.
With Log Full Details, Guardium logs the data with the values unmasked
and logs each request separately. Log Full Details also provides the exact
timestamp, whereas logging without details provides the most recent
timestamp of a construct within the logging granularity time period
(usually 1 hour).
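
As an illustrative sketch (the second statement and its values are invented for this example), consider two requests that differ only in their values:

insert into tableA (name,ssn,ccn) values ('Bob Jones', '429-29-2921', '29249449494949494')
insert into tableA (name,ssn,ccn) values ('Ann Smith', '123-45-6789', '41111111111111111')

Without Log Full Details, both are recorded as the single construct insert into tableA (name,ssn,ccn) values (?, ?, ?) with an execution count of 2 for that session and hour. With Log Full Details, each statement is logged separately, with its actual values and its exact timestamp.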
Ignore S-TAP Session - Ignore S-TAP Session causes the collector to send a
signal to the S-TAP instructing it to stop sending all traffic, except for the
logout notification, for specific sessions. For example, if you have a rule
that says where DBUserName=scott, Ignore S-TAP Session:
v When Scott logs into the database server, S-TAP sends the connection
information to the collector.
v The collector logs the connection. Session information (log in/log outs)
are always logged.
v The collector sends a signal to S-TAP to stop sending any more traffic
from this specific session. This means that any commands run by Scott
against the database server and any responses (result sets, SQL errors,
etc.) sent by the Database server to Scott will be discarded by S-TAP and
will never reach the collector.
v When Scott logs out of the database server, S-TAP will send this
information to the collector (log in/log out information is always tracked
even if the session is ignored).
v When Scott logs in again, these steps are repeated. The logic on which
sessions should be ignored is maintained by the collector, not the S-TAP.
Note that Ignore Session rules are still important to include in the
policy even when using a Selective Audit Trail. Ignore Session
rules decrease the load on a collector considerably because by filtering the
information at the S-TAP level, the collector never receives it and does not
have to consume resources analyzing traffic that will not ultimately be
logged. A Selective Audit Trail policy with no Ignore Session rules would
mean that all traffic would be sent from the database server to the
collector, causing the collector to analyze every command and result set
generated by the database server.
Using MS-SQL or Sybase batch statements in Guardium application
Limitation
The success or failure of SQL commands in MS-SQL or Sybase batch
statements may not show correctly.
MS-SQL or Sybase SQL batch statements are primarily used when creating
complex procedures.
When executing SQL statements separately, the status of each statement is
tracked separately and will have the correct success or failure value.
When a batch of SQL statements (used in MS-SQL or Sybase) are executed
together, the status returned is the single status of the last transaction in
the batch.
Guardium example
[Start of SQL batch]
SQL 1 statement - failed
SQL 2 statement - failed
SQL 3 statement - success
[End of SQL batch]
In the Guardium application, only the success or failure of the last SQL
statement is reported in a MS-SQL or Sybase batch statement. In this case,
success is reported for the MS-SQL or Sybase batch statement, even though
SQL 1 and SQL 2 failed.

Set character set

You can use an action under a policy extrusion rule in order to attach alternative
character sets to the session.

Special Pattern Rules with character sets


Example of extrusion rule (with hint):

Character set EUC-JP (code 274).

Extrusion rule pattern: guardium://char_set?hint=274

As a result an extrusion rule is attached to the session and Analyzer will use
EUC-JP in the session, if there is no other character set.

Example of extrusion rule (with force) :

Character set EUC-JP (code 274).

Extrusion rule pattern: guardium://char_set?force=274

As a result an extrusion rule is attached to the session and the Analyzer will use
the EUC-JP character set in the session in any case. Any character set used before
will be substituted by EUC-JP.

Keep in mind that extrusion rules usually attach to the session with some delay.
Therefore short sessions, or the beginning of a session, are not immediately
changed by a character set change. The schema works for: Oracle, Sybase,
MySQL, and MS SQL.

Analyzer rules

Certain rules can be applied at the analyzer level. Examples of analyzer rules are:
user-defined character sets, source program changes, and issuing watch verdicts for
firewall mode. In previous releases, policies and rules were applied at the end of
request processing, at the logging stage. In some cases, this meant a delay in
decisions based on these rules. Applying rules at the analyzer level means
decisions can be made at an earlier stage.

Log Flat

The Log Flat option listed in Policy Definition of Policy Builder allows the
Guardium appliance to log information without immediately parsing it.

This saves processing resources, so that a heavier traffic volume can be handled.
The parsing and merging of that data to Guardium's internal database can be done
later, either on a collector or an aggregator unit.

When Log Flat (Flat Log) is checked:


v Data will not be parsed in real-time
v The flat logs can be seen on a designated Flat Log List report
v The offline process to parse the data and merge to the standard access domains
can be configured through the Manage > Activity monitoring > Flat Log process.

Rules on Flat
This section describes the differences on uses of Rules on Flat.

When Rules on flat is checked:


v Session-Level rules will be examined in real-time.
v No rules will be evaluated when the offline processing takes place.

When Rules on flat is NOT checked:


v Policy rules will fire at processing time using the current installed policy.

Note: Rules on flat does not work with policy rules involving a field, an object,
SQL verb (command), Object/Command Group, and Object/Field Group. In the
Flat Log process, "flat" means that a syntax tree is not built. If there is no syntax
tree, then the fields, objects and SQL verbs cannot be determined.

The following actions do not work with rules on flat policies:


LOG_FULL_DETAILS; LOG_FULL_DETAILS_PER_SESSION;
LOG_FULL_DETAILS_VALUES; LOG_FULL_DETAILS_VALUES_PER_SESSION;
LOG_MASKED_DETAILS.

Using Selective Audit Trail
Use the Selective Audit Trail option, in the Policy Definition section of Policy
Builder, to limit the amount of logging on the Guardium appliance.

This is appropriate when the traffic of interest is a relatively small percentage of
the traffic being accepted by the inspection engines, or when all of the traffic you
might ever want to report upon can be completely identified.

Without a selective audit trail policy, the Guardium appliance logs all traffic that is
accepted by the inspection engines. Each inspection engine on the appliance or on
an S-TAP is configured to monitor a specific database protocol (Oracle, for
example) on one or more ports. In addition, the inspection engine can be
configured to accept traffic from subsets of client/server connections. This tends to
capture more information than a selective audit trail policy, but it may cause the
Guardium appliance to process and store much more information than is needed
to satisfy your security and regulatory requirements.

When a selective audit trail policy is installed, only the traffic requested by the
policy will be logged, and there are two ways to identify that traffic:
v By specifying a string that can be used to identify the traffic of interest, in the
Audit Pattern box of the Policy Definition panel. This might identify a database
or a group of database tables, for example (see the example after this list). Note
that an audit pattern is a pattern that is applied (via regular expression
matching) to EACH SQL statement that the logger processes, to see if it matches.
This pattern match is strictly a string match; it does NOT match against the
session variables (DB name, etc.) the way the policy rules do.
v Or by specifying Audit Only or any of the Log actions (Log Only, Log Full
Details, etc.) for one or more policy rules in a Rule Definition panel. With policy
rules you can be extremely precise, specifying exact values, groups or patterns to
match for every conceivable type of attribute (DB Type, DB Name, User Name,
etc.).
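
As a hedged illustration of the Audit Pattern approach (the table names are invented for this example), a pattern that names the tables of interest causes only SQL whose text contains one of those strings to be logged:

Audit Pattern: CREDIT_CARDS|PATIENT_RECORDS

With this pattern installed, a statement such as select * from CREDIT_CARDS is logged, while statements containing neither string are not, regardless of which database or user issued them, because the match is a string match against the SQL text rather than against session attributes.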

If the Guardium security policy has Selective Audit Trail enabled, and a rule has
been created on a group of objects, the string on each element in the group is
checked. If there is a match, a decision is made to log the information and
continue. If the Guardium security policy has Selective Audit Trail enabled, and a
rule has been created on a group of objects using a NOT designation on the object
group, there is still a need to check the string on each element in the group, and
decide to log and continue only if none of the elements match. NOT designated
rules behave the same as normal rules when used with Selective Audit Trail.

This includes:
v OR situations such as rules based on multiple objects or commands;
v Situations with two NOT conditions (for example, NOT part of a group of
objects and NOT part of a group of commands); and,
v Situations with one NOT condition and one YES condition (for example, a NOT
part of a group of objects and a YES part of a group of commands).

Selective Audit Trail and Application Events API

When a selective audit trail policy is used, and application users or events are
being set via the Application Events API, the policy must include an Audit Only
rule that fires whenever a set/clear application event, or set/clear application user
command is encountered.

Selective Audit Trail and Application User Translation
When a selective audit trail policy is used and Application User Translation is also
being used:
v The policy will ignore all of the traffic that does not fit the application user
translation rule (for example, not from the application server).
v Only the SQL that matches the pattern for that policy will be available for the
special application user translation reports.

Selective Audit Trail and specifying an empty group

You might use a selective audit policy and specify an empty group, with the idea
that anything that does not match one of the group members in the specified
group is filtered out. However, this will result in an attempt to match ANY rather
than NONE. Therefore, since there are no group members, nothing gets filtered
out and everything is logged.

Creating policies
In addition to creating policies, you can modify, clone, or remove a policy.

Create a policy

Use this section to create a policy. The steps follow the menu fields on the Policy
Builder screen.

Follow these steps:


1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. A series of predefined policies (available for policy cloning), with predefined
access, exception and extrusion values, have been created for database events
that demonstrate attempts to defeat the protection mechanisms. Such events
that generate log actions or alerts are: failed logins and SQL errors from
certain groups or servers; access of certain database objects by certain users or
groups; attempts to change SQL GRANT commands; and more. These
predefined policies facilitate quicker creation of policies for compliance to
meet demands from Basel II; Capture & Replay, DB2-to-DB2; Capture &
Replay, heterogeneous; Data Privacy - PII (personally identifiable information);
Default Sharepoint Auditing; Hadoop Policy; HIPAA; PCI; PCI Oracle EBS;
PCI SAP; SOX Oracle EBS; Vulnerability and Threats Management; Privileged
Users Monitoring.
3. Clone a predefined policy or click New to open the Policy Definition panel.
4. Enter a unique name for the policy in the Policy Description box. Do not
include apostrophe characters in the description.
5. Optional. Enter a category in the Category box. A category is an arbitrary
label that can be used to group policy violations for reporting purposes. The
category specified here will be used as the default category for each rule (and
it can be overridden in the rule definition).
6. Optional. Select a baseline to use from the Policy Baseline list. Be sure that the
baseline selected has been generated. If it has not been generated, the Policy
Builder will not be able to suggest rules from that baseline.

Note: If the baseline you want to use does not display in the list, your
Guardium user ID has not been assigned a security role authorized to use that
baseline. Contact your Guardium Administrator for further information.
If the policy includes a baseline, the policy definition will initially contain
only the baseline, and the action for a baseline is always allow without
continuing to the next rule.
When adding a baseline to an existing policy, it will be added as the first rule.
You can move the baseline rule to any location in the policy. (Be aware that if the
baseline is moved to the last position, it will have no effect.)
7. Optionally mark Log Flat to indicate that Guardium is to log data, but not
analyze and aggregate the data to the internal database.
8. If Log Flat is selected, optionally mark Rules on Flat to apply the policy rules
to the flat log data (as opposed to the aggregated data).
9. Optionally mark Selective Audit Trail to restrict what will be logged when this
policy is installed:
v When marked, only traffic requested by this policy will be logged. This is
appropriate when the traffic of interest is a relatively small percentage of
the traffic being seen by the inspection engines. When marked, there are
two ways to signal what traffic to log: by specifying a string that can be
used to identify the traffic of interest, in the Audit Pattern box; or by
specifying Audit Only or any of the Log actions for one or more policy
rules (rule actions are described later).
v When not marked (the default situation), the Guardium appliance logs all
traffic that is seen by the inspection engines. This provides comprehensive
audit trail capabilities, but may result in capturing and analyzing much
more information than is needed.
v For more information, see Using Selective Audit Trail.

Note: Selective Audit Trail does not work with Exception rules.
10. Click Save to save the policy definition.
11. Optionally click Roles to assign roles for the policy.
12. Optionally click Comments to add comments to the definition.

Where to go from here


After creating a new policy definition, use the Policy Finder panel to access that
definition. Complete the policy definition by performing one or more of the
following tasks:
v Create policy rules manually. See Add or Edit Rules.
v If the policy includes a baseline, have the Policy Builder suggest rules from the
baseline. You can optionally accept or tailor the generated rules as necessary. See
Using Rules Suggested from the Baseline.
v Have the Policy Builder suggest rules from the database access control (ACL)
defined for that database. You can reject, or accept and optionally tailor each
rule as necessary. See Using Rules Suggested from the Database ACL.

Modify/Clone/Remove a Policy

Use this section for the steps on how to modify, clone or remove a policy.

Modify a policy
Use caution before modifying a policy definition: be sure that you understand the
implications of modifying a policy that is in use. If the existing policy has to be
re-installed before all revisions have been completed, the policy may not install, or
it may not produce the desired results when installed. For this reason, it is
preferable to clone the policy, so that the original is always available to reinstall.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be modified.
3. Do one of the following:
v To edit overall policy settings (Category, Log Flat option, etc.) click Modify.
To change any of these settings, see Create a Policy.
v To edit the rules only, click Edit Rules. To modify any components of the
rule definitions, see Add or Edit Rules.

Clone a policy

There are a number of situations where you may want to define a new policy
based on an existing one, without modifying the original definition.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be cloned.
3. Click Clone to open the Clone Policy panel.
4. Enter a unique name for the new policy in the New Name box. Do not include
apostrophe characters in the name.
5. To clone the baseline constructs (the commands, basically) that have been
generated for the baseline being cloned, mark the Clone Constructs checkbox.
6. Click Save to save the new policy. You can then open and edit the new policy
via the Policy Finder. See Modify a Policy.

Remove a policy
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be removed.
3. Click the Delete button. You will be prompted to confirm the action.

Add or Edit Rules


Use this section to add or edit rules within a policy.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be edited.
3. Click the Edit Rules button to open the Policy Rules panel.
4. Do one of the following:
v To edit a rule, click the Edit this rule individually button.
v To add a new rule, click one of the following buttons:
Add Access Rule
Add Exception Rule

Add Extrusion Rule (only available if the administrator user has set the
Inspection Engine configuration (in Admin Console) to Inspect Returned
Data)
Extrusion matches allow the user to define how many matched records will
be grouped together when logged and reported on by Guardium. Extrusion
rules must have an action of LOG FULL DETAILS and a rule name that
includes guardium://(some text)?split=(number), where (some text) is any
text or one of the predefined words such as CREDIT CARD, and (number)
is the number of returned data records per Guardium log record (see the
example after these steps).
5. The attributes that can be tested for in each type of rule vary, but regardless of
the rule type, each rule definition begins with the following four items:
v Rule Description - Enter a short, descriptive name for the rule. To use a
special pattern test, enter the special pattern test name followed by a space
and one or more additional characters to make the rule name unique, for
example: guardium://SSEC_NUMBER employee.
v Category - The category will be logged with violations, and is used for
grouping and reporting purposes. If nothing is entered, the default for the
policy is used.
v Classification - Optionally enter a classification in the Classification box.
Like the category, these are logged with exceptions and can be used for
grouping and reporting purposes.
v Severity - Select a severity code: Info, Low, Med, or High (the default is
Info).
6. Use the remaining fields of the Rule Definition panel to specify how to match
the rule. Many of the same fields are available for Access, Exception, and
Extrusion Rules; and some fields are available only after selecting various
other options. For an alphabetical reference of all fields available in the rules
definition panels, see Rule Definition Reference. Also, for instructions on how
to use combinations of groups and individual values, see Specify Values
and/or Groups of Values in Rules.
7. For each type of rule, you can enter one or more regular expressions in a
Pattern box, to match against strings in the traffic. Enter the expression
manually, or click the icon to open the Build Regular Expression tool, which
allows you to enter and test regular expressions.
8. For exception rules only, select a single exception type to which the rule will
be sensitive, from the Exception Type box. The rule count is incremented only
when the selected exception type is encountered.
9. When a rule action is selected, the following two fields are enabled:
v Min. Ct. - Enter the minimum number of times the rule must be matched
before the rule action is triggered. The count of times the rule has been met
will be reset each time the action is triggered or when the reset interval
expires. The default of zero is identical to 1, meaning that every time the
rule is matched the action will be triggered.
v Reset Interval (minutes) - Used only when the minimum count is greater
than zero, and required in that case. Enter the number of minutes after
which the rule counter will be reset to zero. The counter is also reset to zero
each time that the rule action is triggered.
10. Check the Continue to Next Rule box to indicate that when this rule is
satisfied and its action is triggered, testing of the same request, exception, or
results should continue with the next rule. This means that multiple rules may
be satisfied and multiple actions taken based on a single request or exception.
If not marked (the default), no additional rules will be tested when this rule is
satisfied.

11. When the Rec. Vals box is marked, the actual construct causing the rule to be
satisfied will be logged in the SQL String attribute and is available in reports.
If not marked, no SQL statement will be logged.
12. Message templates are used to generate alerts. Multiple Named Message
Templates are created and modified from Global Profile.
13. Select the action to take when the rule is satisfied.
14. If an alert action is specified, the Notification pane opens, and at least one
notification type must be defined.
15. Click Save to save the rule. This closes the Rule Definition panel and returns
to the Policy Rules panel.
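
As an illustrative instance of the extrusion rule naming convention described in step 4 (the rule name and split value are examples only, not required values), a rule that groups five returned data records into each Guardium log record could be defined as:

Rule name:   guardium://CREDIT CARD?split=5
Rule action: LOG FULL DETAILS

With this name and action, every five returned data records that match the rule are reported as a single Guardium log record.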

Filter Rules to Display Only a Subset

When a policy contains many rules, it can be useful to view a subset of the rules
having common attributes.

The Filter box in the Rules Definition panel can be used for this purpose. The
process of defining a filter is similar to the process of defining a rule.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be viewed or modified.
3. Click Edit Rules.
4. In the Filter box, do one of the following:
v Select a filter from the Filter list.
v Click Edit to modify a filter definition.
v Click New to define a new filter.
Once the filtered set of rules is displayed, you can perform any of the actions
described in this section on the displayed rules.

Copy Rules

Use this procedure to copy selected rules from one policy to another, or to a
different location in the same policy.

All of the rules copied will be copied to a single location - after rule 3, for
example. To copy rules to different locations in the receiving policy, either perform
multiple copy operations, or copy all of the rules in one operation, and then edit
the receiving policy to move the rules as necessary.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy from which you want to copy
one or more rules.
3. Click Edit Rules.
4. Mark the checkbox for each rule to be copied.
5. Click Copy Rules.
6. From the Copy selected rules to policy list, select the policy to receive the
copied rules.
7. From the Insert after rule list, select the rule after which the copied rules
should be inserted, or select Top to insert the copied rules at the beginning of
the list.
8. Click Copy. You will be informed of the success of the operation.

9. You should now edit the policy to which you copied the rules, to verify that
you have copied the correct rules to the correct location.

Using Rules Suggested from the Baseline


Use Policy Builder to suggest rules from the baseline included in the policy.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to work with. (It must
include a baseline.)
3. Click the Edit Rules button.
4. Set the Rule minimum count value. This is the minimum number of like
commands that the system should find in order to suggest a rule. The default
is zero. The smaller the number entered, the more suggested rules the system
will generate. (Be aware that the Count that displays in the suggested rules
panel does not reflect this value.)
5. Set the Object Group minimum count value, to determine how many instances
of an object group the system should find to generate a suggested object
group. The default is one. The smaller the number entered here, the greater
the number of suggested object groups.
6. Click the Suggest Rules button. The suggested rules display in a separate
window, in the Suggested Rules panel.
7. The suggested rules are sorted in descending order by the count of
occurrences in the baseline period, which is listed for each suggested rule. If
you select one or more of the suggested rules and click Save, they are inserted
in the same order, just before the BASELINE rule in the Policy Rules panel.
You can then change the order of the suggested rules or edit them as
necessary, from the Policy Rules panel.
8. Expand the rules and check the membership of the suggested object groups.
In the Object column of the Suggested Rules panel, if any suggested object
groups have been created, these begin with the name Suggested Object Group
and are displayed as hypertext links. For information about how to view,
accept, or reject suggested object groups, see Using Suggested Object Groups.
9. Mark the Select box for each suggested rule to include in the policy.
10. Click Save to accept the selected rules.
11. You can now edit or modify the suggested rules as you would any rules that
you added manually.

Using Suggested Object Groups

The Policy Builder can suggest rules from both the baseline included in the policy
and the database security policy (internal to the DBMS) defined for a server.

In either case, it attempts to generate the minimal set of rules by grouping
database objects (tables, procedures, or views) into suggested object groups. You
can accept or reject the suggested object groups.

Before accepting a suggested object group, you can edit the generated Group
Description field (a generated name beginning with Suggested Object Group, for
example) to provide a more meaningful name. After accepting a suggested object
group, you can view its membership. You can reject the use of that group within
any suggested rule, but you cannot edit the membership of that group.

If you reject a suggested object group, the suggested rule for that group is replaced
with a separate suggested rule for each member of the rejected group. You can
accept or reject each of those suggested rules separately. After accepting a
suggested rule, you can edit that rule.
Viewing Suggested Object Groups
Suggested object groups display in the Object column of the Suggested
Rules panel as hypertext links beginning with the words Suggested Object
Group.
To view a suggested object group's membership, click the hypertext link
for that group. If the group has not yet been accepted, the group
membership displays in the Edit Group panel. If the group has already
been accepted, it displays in the View Group panel.
Accepting Suggested Object Groups
To accept a suggested object group:
1. Enter a meaningful name in the Group Description field in the Edit
Group panel. (Not required, but strongly recommended). Do not
include apostrophe characters in the name. This is the only opportunity
you have to name this group. Otherwise, the group gets a name
beginning with Suggested Object Group and followed by a number, as
described previously.
2. Click Save to accept the edited group for the suggested rule, or click
Save for All to accept the edited group for all suggested rules in which
it appears. The new object name will replace the old one in the rule.
Rejecting Suggested Object Groups
When you reject a suggested object group, the use of that group is replaced
by one or more suggested rules. To reject a suggested object group, do one
of the following:
v To reject the group for this suggested rule only: Click the Reject button.
v To reject the group for all suggested rules: Click the Reject for All
button.

Note: If you accept a suggested object group in one rule, open that same
suggested object group again from another rule, and then click the Reject for All
button, that group will be retained in any rule where it was explicitly accepted, but
rejected in the remaining rules in which it was used.

Using Rules Suggested from the Database ACL


For a specified database server, the Policy Builder can suggest access rules using
the security policy defined internally by the DBMS.

The Policy Builder does this by examining the permissions granted to user groups
and database objects (tables, procedures, and views) within the DBMS, then
grouping the database objects into suggested object groups so that the total
number of suggested rules can be minimized. You can accept or reject any
suggested object group (see Using Suggested Object Groups). You can also accept
or reject any suggested rule.

To have the Policy Builder suggest rules from the database ACL:

Note: When suggesting rules from the database ACL, the system does not use the
Rule minimum count or the Object Group minimum count fields. Those fields are
used only when suggesting rules from the baseline.
1. Click Suggest from DB to open the Database Definition panel in a separate
browser window.
2. Click Add Datasource to select the database from which you want to access the
DB ACL.

Note: If adding an Oracle, DB2 or DB2 for z/OS® datasource to access the DB
ACL, the Query Parameters section, in the Database Definition pop-up window,
will be disabled.
3. Click Suggest Rules to generate the rules. The Suggested Rules panel opens in
a separate window (as described previously, for the Rules Suggested from
Baseline). If you select one or more of the suggested rules and click Save, they
will be inserted in the same order into the list of rules in the Policy Rules
panel, just before the BASELINE rule. If there is no BASELINE rule, they will
be inserted at the beginning of the list. Once the suggested rules have been
inserted into the Policy Rules panel, you can change the order of the rules or
edit them, as necessary.
4. Check the membership of the suggested object groups. In the Object column,
any suggested object groups that have been created begin with the name
Suggested Object Group and display as hypertext links (in blue and
underlined). For information about how to view, edit, accept, or reject
suggested object groups, see Using Suggested Object Groups).
5. Mark the Select box for each suggested rule you want included in the policy.
Click Save to accept the selected rules.

Using the Policy Simulator

Use the Policy Simulator to test access rules without installing the policy.

It does not test exception rules or extrusion rules. The simulator replays logged
network traffic and applies all access rules in the policy. It produces a special
report in a separate window, listing the SQL that triggered alert or log only
actions. The report includes the following columns: Timestamp, Category Name,
Access Rule Description, Client IP, Server IP, DB User Name, Full SQL String,
Severity Description, and Count of Policy Rule Violations. Use the CLI command,
store allow_simulation, to make the Policy Simulation button active in the GUI.

The Policy Simulator can be used to test only the following types of access rule
actions:
v Log Only
v Any Alert action: Alert Daily, Alert Once Per Session, Alert Per Match, Alert Per
Time Granularity

The Policy Simulator will not produce any results if the policy includes logging
actions other than Log Only. To use the simulator for such a policy, temporarily
change all logging actions to Log Only.

To use the Policy Simulator:


1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to work with.

3. Click Edit Rules.
4. Click the Policy Simulator button to open the Policy Simulator panel.
5. Supply both From and To dates to define the time period to use for the
simulation.

Note: Historical data can be archived and purged from your Guardium
appliance on a schedule defined by your Guardium administrator. Be sure that
data from the time period you specify is available (and has not been purged).
6. Click Test. When the test starts and while it is running, a message indicating
that the policy simulation is running is displayed in the Policy Simulator
panel. When the test completes, a
special report opens in a separate window listing all rule matches that were
logged. If no alert or log only rules were triggered, you will receive a No Drill
Down Report Available message. In the latter case, you may not have included
enough data in the test period.

Installing Policies
Use this topic to install the policy and modify schedule.

Multi-policy support
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Installation to open the Policy Installer.
2. Select the policy to be installed from the Policy Description box.
3. Do one of the following:
v Click Install to install the policy immediately.
v Click Modify Schedule to open the general-purpose scheduling utility, to
schedule the policy installation.



More than one installed policy is permitted at the same time. All installed policies
are available for action. There are two limitations: policies defined as selective
audit policies cannot be mixed with policies not defined as selective audit policies,
and policies defined as flat log cannot be mixed with policies not defined as flat
log. If you try to mix such policies, an error message will result when installing
them.

Note: Policies defined as baseline policies can be mixed with policies not defined
as baseline policies.

The order of appearance can be controlled during the policy installation, such as
first, last or somewhere in between. But the order of appearance cannot be edited
at a later date.

There is also an Uninstall policy button to remove a policy previously installed.

The first installed policy has a special meaning, as it sets the value of the global
policy parameters. These parameters are: Global pattern; Is it a selective audit;
Client and Server net mask; Tagged Client and Server group ID.

This multi-policy support is available through the GUI (Setup > Tools and Views >
Policy Installation) and through GuardAPI.

View Policy rules for the Installed Policy
From the Currently-Installed Policy panel, any user can view the rules of the
installed policy, and in addition, authorized users can open the policy for editing.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect >
Security Policies > Policy Installation to open the Policy Installer.
2. Click the Installed Policy link to display the policy rules. Authorized users will
have an additional button enabled: To open the policy for editing in the Policy
Builder, click the Edit installed policy button.

Job Dependency Scheduler

The Guardium collector has many tasks such as Policy Installation, Audit
Processes, Group updates, etc. that are scheduled to run periodically. The Job
dependencies feature finds all jobs that have a direct relationship and impact on
the success of the execution of the task you are trying to schedule. Unless you find
the jobs that are defined as prerequisites for the job you are trying to schedule,
there is a chance the task will rely on inaccurate data, which might lead to false
or inaccurate results.
Feature Highlights
v User marks a scheduled job to find and run dependencies at run time.
v When the scheduler runs the job, it automatically finds all the
subordinate jobs and runs them in order.
v There is a retry sequence in case of a failure.
Find dependencies
v Identify scenarios that require dependencies.
v Identify Runnable vs. Non-Runnable jobs.
v Calculate pre-defined job dependencies.

Job: Policy Installation
Suggested Prerequisite Job: Groups that are defined in any of the (to be installed)
policies and are either scheduled or not scheduled to be populated by the Populate
From Query mechanism.
Reason: Policy rules that use groups must have up-to-date group data before being
installed.

Job: Policy Installation
Suggested Prerequisite Job: Audit processes that include a Classification audit task,
where the classification task has an action of Add To Group of Object, Add To
Group of Object/Field, or Add To Access Rule.
Reason: Policy rules that use groups must have up-to-date group data before they
are installed.

Job: Audit Process
Suggested Prerequisite Job: Custom table upload jobs where the custom table name
is referred to (in the "from" clause) by an audit task of type Report.
Reason: Custom tables data that are referred to by an audit task of type Report
must be populated with up-to-date data before an audit process is scheduled to
run.

Job: Audit Process
Suggested Prerequisite Job: Groups that are defined in a condition of an audit task
of type Report are either scheduled or not scheduled to be populated by the
Populate From Query mechanism.
Reason: Groups that are referred to by a query condition must be populated with
up-to-date data before an audit task of type Report is run.

Job: Populate From Query
Suggested Prerequisite Job: Custom upload tables that contain any of the entities of
the query that is used to populate a group.

Job: Audit Process
Suggested Prerequisite Job: Import
Reason: Relevant for an aggregator only. This prerequisite guarantees that
information is imported from all aggregated units before any audit process can
run.

Scheduler enhancements
v Find job dependencies when a scheduled job is run.
v Run job dependencies in order.
Runnable jobs can be scheduled; Non-Runnable jobs cannot.
A Group is a Non-Runnable job.
Populate From Query on a Group is Runnable.
Direct dependencies are objects that are tied together by definition, for
example, Policy depends on Rule and Rule depends on Groups.
Indirect dependencies are objects that are logically tied, for example, run
Audit processes before installing policies.
GUI support
1. Select the Auto run dependent jobs check box option after selecting Create
Schedule from Policy Installation.
2. Click Save to schedule the process. This notifies the user of the dependencies
status.
GuardAPI support
GuardAPI job dependency commands:
CLI> grdapi add_job_dependency
function parameters :
dependOnJobExecutedWithin - String
dependOnTrigger - String - required
intervalBetweenRetries - Integer
jobRetries - Integer
jobTrigger - String - required
runIfDependOnJobReturns - String
api_target_host - String
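
As a hedged usage sketch (the trigger names are placeholders; run grdapi list_scheduler_jobs to obtain the actual trigger names on your system), a dependency that makes a scheduled policy installation wait for a group-population job, retrying up to 3 times at 3-minute intervals, might be registered as:

> grdapi add_job_dependency jobTrigger=<policy installation trigger> dependOnTrigger=<populate-from-query trigger> jobRetries=3 intervalBetweenRetries=3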

Use this GuardAPI command for auto-execution of dependencies:
> grdapi auto_execute_suggested_dependencies jobTrigger=<trigger name of
the scheduled job>
CLI> grdapi auto_execute_suggested_dependencies
function parameters :
jobTrigger - String - required
api_target_host - String
CLI> grdapi delete_job_dependencies
function parameters :
dependOnTrigger - String
jobTrigger - String - required
api_target_host - String
CLI> grdapi disable_auto_execute_suggested_dependencies
function parameters :
jobTrigger - String - required
api_target_host - String
CLI> grdapi list_job_dependencies_tree
function parameters :
jobTrigger - String - required
api_target_host - String
To obtain a list of all the scheduled jobs/triggers, run the GuardAPI
command:
> grdapi list_scheduler_jobs
CLI> grdapi list_suggested_job_dependencies
function parameters :
jobTrigger - String - required
api_target_host - String
CLI> grdapi list_existing_job_dependencies
function parameters :
jobTrigger - String - required
api_target_host - String
CLI> grdapi modify_job_dependency
function parameters :
dependOnJobExecutedWithin - String
dependOnTrigger - String - required
intervalBetweenRetries - Integer
jobRetries - Integer
jobTrigger - String - required

runIfDependOnJobReturns - String
api_target_host - String
CLI> grdapi show_job_dependency_execution_profile
function parameters :
dependOnTrigger - String - required
jobTrigger - String - required
api_target_host - String
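
Taken together, and as an illustrative sequence only (replace <trigger name> with a value returned by the first command), the commands in this section can be used to review and then automatically execute a job's suggested dependencies:

> grdapi list_scheduler_jobs
> grdapi list_suggested_job_dependencies jobTrigger=<trigger name>
> grdapi auto_execute_suggested_dependencies jobTrigger=<trigger name>
> grdapi list_job_dependencies_tree jobTrigger=<trigger name>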
Run Scheduler
Scheduler will check for job dependencies when it is time to run a job.
Dependencies are executed in reverse order.
Example: Given a dependency tree:

Policy Install (Runnable)
depends on an Audit Process (Runnable; an indirect dependency),
whose Audit Task is a Classification Process with a Classification Policy and
Classification Policy Action,
which in turn depend on a Group (Runnable; a direct dependency, populated by
Populate from Query).

Execution order will be: Populate from Query Group → Audit Process → Policy
Install.
Scheduler will run each one of the dependencies and wait for it to finish.
Running a full dependency tree might take a long time to complete, but it
is guaranteed all dependencies are executed in the correct order.
Handle errors
If any of the dependencies fails to execute, the job currently executed by
the scheduler is not going to run.
If a failure occurs, an error message is written to the Scheduled Jobs
Exception report.

The number of retries for a job that depends on previous jobs can be set.
The default is 3. A valid value is ≥ 0. The interval, in minutes, between
retries can be set. The default is 3. A valid value is ≥ 0.
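
For instance (an assumed illustration; the trigger names are placeholders and the values are arbitrary), the retry behavior of an existing dependency can be adjusted with the modify_job_dependency command shown in this section:

> grdapi modify_job_dependency jobTrigger=<policy installation trigger> dependOnTrigger=<populate-from-query trigger> jobRetries=5 intervalBetweenRetries=10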

How to install a policy and detail group members


Demonstrate to an auditor what policies have been installed and what group
members the policies rely on.

Value-added: Meet auditor requirements on what is installed every day.

Follow these steps:


1. Access the predefined “View Details” report.
2. Use the policy drilldown to display the group members.
3. Use an audit process to save the report daily.

There is a predefined report that shows what policies are installed. For the admin
user, select the Administration Console tab and then choose Policy Installation.

Click the "View Details Report" button to bring up a default report that shows all
the rules in every field.

This report can be added to any portal page, if you have the privileges to the
report.

There are two things you may still be lacking when an auditor asks to show which
policy you have installed:
1. What policy was installed at a certain point in time?
2. Group members - if you look at the screen capture, you can see that some of
the rules refer to a group called PHI Objects. Unless the group members are
included, then there is some confusion on what the policy refers to.

In the policy editor you can always drill down to the group members, but you
may want this in the report as well.
Policy editor drill down

The way most users deal with (1) when the policy was installed, and (2)
listing the members of the associated group, is to use a naming convention
and then use an audit process with predefined reports.

As an example, if a policy is created called PRODPCI, then any group
used in the policy will use a prefix with PRODPCI (for example,
PRODPCI_SERVERIPS, PRODPCI_SAPOBJS).
Then, use two predefined reports - one on the policy and one using group
members reports (appears for the admin user, Guardium Monitor tab >
Guardium Group Details or add it to any portal):

Auditing Process Task 1


Now, if the requirement is to save this report every day (or at whatever
frequency) automatically for audit purposes, then create the following
audit process:

Auditing Process Task 2
The filter is important here.
This is an example of the PDF that will be produced (only one rule is
used, but both the policy and members can be seen).

Run this audit process daily (or at whatever frequency) and produce a PDF
report to show the auditor what was installed every day.

Rule definition fields
You can use these fields when you define policy rules.
Table 8. Reference Table of Rule Definition Fields
Field Description
Action Indicates the action to be taken when the rule is true. For a comprehensive
description of all rule actions, see Rule Actions Overview.
App Event Exists Match for an application event only. See the App Event Note.
App Event Values Match the specified application event Text, Numeric, or Date values. Also allow a
Group to be chosen for the event string as an option. See the App Event Note.
(App) Event Type Match the specified application event. See the App Event Note.
(App) Event User Name Match the specified application event user name only. See the App Event Note.
App Event Note The App Event fields cannot be used when the Flat Log box is marked.
App. User Application User. See Specify Values and/or Groups of Values in Rules.
Category An arbitrary label that can be used to group policy violations for reporting
purposes. A default category can be specified in the policy definition, but the
default can be overridden for each rule.
Classification An arbitrary label that can be used to group policy violations for reporting
purposes. A default classification can be specified in the policy definition, but the
default can be overridden for each rule.
Client Info
DB2 client info: For access rules only. For z/OS only, a CLIENT INFO field (and
CLIENT_INFO_GROUP_ID) will be visible if DB_TYPE is either DB2, DB2
COLLECTION Profile or VSAM COLLECTION Profile.

The type of information that can be placed in this field is USER=x; WKSTN=y;
APPL=z.

Client IP Clear the Not box to include, or mark the Not box to exclude:
v Any client: Leave all client fields blank. The count will be incremented every
time any client satisfies the rule. (You cannot leave all fields blank if the Not
box is marked.)
v All clients selected by an IP address and mask: Enter a client IP address in the
first box and network mask in the second box. The count will be incremented
each time that any of the specified clients satisfies the rule. For example, to
select all clients in subnet 192.168.9.x, enter 192.168.9.1 in the first box and
255.255.255.0 in the second box. For more information selecting IP addresses,
see Selecting IP Addresses Using a Mask.
v A group of clients: Select a group of client IP addresses from the Group
drop-down list, or click the Groups button to define a new group and then
select that group. The count will be incremented each time that any member of
the selected group satisfies the rule.
v All clients selected by an IP address and mask AND a group of clients: Use
both the Client IP and Group fields. The count will be incremented each time
that any client specified using either method satisfies the rule.

Allow wildcard in IP address: wildcard % is permitted in a policy for a Client IP
group.

Client IP/Source Program/DB User/Server IP/Service Name
A 5-tuple group type available for access, exception and extrusion rules.

A tuple allows multiple attributes to be combined together to form a single group
member.

Tuple supports the use of one slash and a wildcard character (%). It does not
support the use of a double slash.

Wildcard % is permitted in a policy for a Client IP/Source Program/DB User/
Server IP/Service Name group.
Client MAC
To make the rule sensitive to a single client MAC address, enter the address in
nn:nn:nn:nn:nn:nn format, where each n is a hexadecimal digit (0-F); or enter a
dot (.) in the Client MAC box to indicate that a separate count should be
maintained for each client MAC address; or leave the Client MAC box empty to
ignore client MAC addresses.
Command
The command. See Specify Values and/or Groups of Values in Rules. If a
commands group cannot be edited, the and/or Group label changes to Collect
Only, indicating that commands from only the selected group are to be selected.

Click on the Every box to select all the commands shown in Groups.
Continue to Next Rule If marked, rule testing will continue with the next rule, regardless of whether or
not this rule is satisfied. This means that multiple rules may be satisfied (and
multiple actions taken) by a single SQL statement or exception. If not marked (the
default), no additional rules will be tested for the current transaction when this
rule is satisfied.

Data Pattern
Every type of rule (Access, Exception, Extrusion) can have a Data Pattern, but it is
required for Extrusion rules.

For use in defining Extrusion Rules - A regular expression to be matched, in the
Data Pattern box. Click the Regex button to open the Build Regular Expression
tool, which allows you to enter and test regular expressions. This enables more
complex masking patterns. Put parentheses around the section that should be
masked. Use this function to mask data retrieved from the database.

For example, a credit card number is expressed as
[0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4}. The parentheses around the first
three groups in this version of the expression,
([0-9]{4}[-, ]?[0-9]{4}[-, ]?[0-9]{4})[-, ]?[0-9]{4}, mask all but the last four digits.

Additional regular expressions (Regex) are provided for use only in Data Patterns
with an action of Redact (Scrub). (The predefined patterns, such as
SCRUB_SSN_ANSI, are listed with the regular expression to use, the result it
matches, and the output it is turned into.)

Regex with Redact - Use of Regular expressions (regex) in the IBM Security
Guardium solution (including the masking in the policy) are executed on the
appliance, and allow advanced regexp capabilities.

However, the regex library for use with Redaction is executed in the kernel of the
database server and is limited to most basic regex. Only basic regex patterns can
be used with Redaction.

For example, the regular expression nomenclature [0-9]* cannot be used to
indicate any number of digits. It is necessary to use basic regular expression
nomenclature [0-9]-[0-9]-[0-9]... to specify a sequence of digits.
Note: S-TAP will only accept the predefined SCRUB pattern names; ignoring any
other name.

Access rule, data pattern and replacement character - Using a data pattern, for
example, [a-z,2]{3}([_][0-9]{1,2}) with a replacement character of * will change the
values between the parentheses in the data pattern to ***. Use this function to
mask values.
User Defined Character Sets
Available for Oracle, Sybase, MySQL, & MSSQL and for extrusion rules
only, users may influence the character set used by defining special
extrusion rules. These character set policy rules are only used to set the
character set a user would like to convert traffic to, setting an action is
irrelevant. In order to have an action for that traffic the user needs to
define additional rules after that character set rule. Two examples of
setting a character set rule are possible (hint or force) as defined in the
following examples:
Example of extrusion rule (with hint)
Will convert the traffic by character set as defined in the extrusion rule of
the installed policy ONLY if the regular conversion failed.
Character set EUC-JP (code 274).
Extrusion rule pattern: guardium://char_set?hint=274
Example of extrusion rule (with force)
Will convert the traffic by character set as defined in the extrusion rule of
the installed policy for ALL data.
Character set EUC-JP (code 274).
Extrusion rule pattern: guardium://char_set?force=274
See List of possible character set codes at end of this topic.
Note: Keep in mind that extrusion rules are usually attached to the session with
some delay. Therefore short sessions, or the beginning of a session, may not be
immediately affected by a character set change.
DB Name The database name. See Specify Values and/or Groups of Values in Rules.
DB Type
Supported DB Types

For access rules: Cassandra, CIFS, CouchDB, DB2, DB2 COLLECTION PROFILE*
(only for use with z/OS), FTP, GreenPlumDB, Hadoop, HTTP, IBM INFORMIX
(DRDA®), IBM iSeries, IMS™, IMS COLLECTION PROFILE (only for use with
z/OS), Informix, MongoDB, MS SQL SERVER, MYSQL, NETEZZA, Oracle,
PostgreSQL, Sybase, TERADATA, VSAM or VSAM COLLECTION PROFILE*
(only for use with z/OS).

For exception and extrusion rules: Cassandra, CIFS, CouchDB, DB2, FTP,
GreenPlumDB, Hadoop, IBM INFORMIX (DRDA), IBM iSeries, Informix,
MongoDB, MS SQL SERVER, MYSQL, NETEZZA, Oracle, PostgreSQL, Sybase, or
TERADATA. Note: Informix supports two protocols, SQLEXEC (native Informix
protocol) and DRDA (IBM protocol). These protocols are automatically identified
for Informix traffic with no additional settings. The Server Type attribute will show
INFORMIX (for the SQLEXEC protocol) and IBM INFORMIX (DRDA) (for the
DRDA protocol).
Note: TERADATA has a silent login and allows clients to auto-reconnect. To block
Teradata statements in a policy, use the S-TAP firewall function with default state
ON and un-watch safe users.
DB User The database user. See Specify Values and/or Groups of Values in Rules.
Error Code The error code (for an exception). See Specify Values and/or Groups of Values in
Rules.
Exception Type
The type of exception (selected from the list).
Note: A session closed by GUI timeout, in an Exception rule, will not produce a
Session Error (Session_Error).
Field Name
The field name. See Specify Values and/or Groups of Values in Rules.

Click on the Every box to select all the fields shown in Groups.
Min. Ct. The minimum number of times the condition contained in the rule must be
matched before the rule will be satisfied (subject to the Reset interval).
Net. Protocol The network protocol. See Specify Values and/or Groups of Values in Rules.
Object
The object name. See Specify Values and/or Groups of Values in Rules.

For Sybase and MS SQL Server, there are two groups,
MASKED_SP_EXECUTIONS_SYBASE and
MASKED_SP_EXECUTIONS_MS_SQL_SERVER respectively, that include names of
stored procedures. If an included procedure is executed, then
everything will be masked.

Click on the Every box to select all the objects shown in Groups.
Object/Command Group Match a member of the selected Object/Command group.
Object/Field Group Match a member of the selected Object/Field group.
OS User Operating system user. See Specify Values and/or Groups of Values in Rules.
Pattern A regular expression to be matched, in the Pattern box. You can enter a regular
expression manually, or click the (Regex) button to open the Build Regular
Expression tool, which allows you to enter and test regular expressions.

Time Period To make the rule sensitive to a single time period, select a pre-defined time period
from the Period list or click the (Period) button to define a new time period.
Rec. Vals. When marked, the actual construct causing the rule to be satisfied will be logged,
and available in reports, in the SQL String attribute. For a policy violation only, if
not marked, no SQL statements will be logged.
Records Affected Threshold
Access rule only. Set a threshold value for matched records. Example: Let 1000
instances take place before taking action.

This field affects the output of the rule rather than the definition of the rule
(that is, what happens when the rule is triggered, rather than when it should trigger).

The records affected threshold is based on rule and session. It accumulates the
returned rows from all queries that meet the rule condition. Once the accumulated
records affected reach the threshold, the rule triggers, and the records affected
on the statement (if the action is log full details) will be the accumulated value of
the records affected.
Replacement Character
Define a masking character.

Should the output produced by the extrusion rule match the regular expression,
the portions that match sub-expressions between parentheses '(' and ')' will be
replaced by the masking character.
Reset Interval Used only if the Min. Ct. field is greater than zero. This value is the number of
minutes after which the condition met counter will be reset to zero.
Revoke This checkbox appears on extrusion rules only. It allows you to exclude from
logging a response that has already been selected for logging by a previous rule
in the policy. In most cases you can accomplish the same result more simply by
defining a single rule with one or more NOT conditions to exclude the responses
you do not want, while logging the remaining ones that satisfy the rule. (The
Revoke checkbox pre-dates NOT conditions, and is provided mainly for backward
compatibility to support existing policies.)
Rule Description
The name of the rule. To use a special pattern test in the rule, enter the special
pattern test name followed by a space and one or more additional characters to
make the rule name unique, for example: guardium://SSEC_NUMBER employee.
(See Special Pattern Tests for more information.)

When displayed, the name will be prefaced with the rule number and the label
Access Rule, Exception Rule, or Extrusion Rule, to identify the rule type. If the
rule was generated using the Suggest Rules (from a baseline) function or the
Suggest From DB function, the generated name is in the form: Suggested Rule
<n>_mm-dd hh:mm, consisting of the following components:

n A sequence number for the generated rule

mm-dd The month and day the rule was generated

hh:mm The time the rule was generated
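For example, the third rule generated on May 14 at 10:22 (a hypothetical date and sequence number) would be named: Suggested Rule 3_05-14 10:22.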

Server IP
Clear the Not box to include, or mark the Not box to exclude:
v Any server: Leave all server fields blank. The count will be incremented every
time any server satisfies the rule. (You cannot leave all fields blank if the Not
box is marked.)
v All servers selected by an IP address and mask: Enter a server IP address in the
first box, and network mask in the second box. The count will be incremented
each time that any of the specified servers satisfies the rule. For example, to
select all servers in subnet 192.168.3.x, enter 192.168.3.1 in the first box, and
255.255.255.0 in the second box.
v A group of servers: Select a group of server IP addresses from the Group
drop-down list or click the Groups button to define a new group and then
select that group. The count will be incremented each time that any member of
the specified group satisfies the rule.
v All servers selected by an IP address and mask AND a group of servers: Use
both the Server IP and Group fields. The count will be incremented each time
that any server specified using either method satisfies the rule.

Allow wildcard in IP address. Wildcard % is permitted in a policy for Server IP


group.
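For example (an illustrative group member that reuses the subnet above), a Server IP group containing the member 192.168.3.% would match any server in the 192.168.3.x subnet.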

Service Name The service name. See Specify Values and/or Groups of Values in Rules.
Severity Select a severity code from the list: INFO, LOW, NONE, MED or HIGH. If HIGH
is selected and email alerts are sent by this rule, the email will be flagged Urgent.
SQL Pattern A regular expression to be matched, in the Pattern box. You can enter a regular
expression manually, or click Regex to open the Build Regular Expression tool,
which allows you to enter and test regular expressions.
Src app Application source program. See Specify Values and/or Groups of Values in
Rules.
Trigger Once Per Session
Do not analyze session for same rule after first match. Especially effective for
“Selective Audit” policies.
XML Pattern
A regular expression to be matched, in the Pattern box. You can enter a regular
expression manually, or click Regex to open the Build Regular Expression tool,
which allows you to enter and test regular expressions.

A regular expression to be matched can be used in this box. The regular
expression must be entered manually.
Full_SQL return values using MSSQL
In MSSQL, the sp_cursoropen and sp_cursorfetch stored procedures are used for
SELECT database queries.

sp_cursoropen holds the original statement, while the FULL_SQL return value in
an Extrusion rule will appear as sp_cursorfetch instead of Select * from
___________.
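As a simplified illustration of that behavior (the table name and parameter values are hypothetical), the client-side cursor calls look roughly like this, so the statement text travels only in the sp_cursoropen call while the rows are returned on subsequent sp_cursorfetch calls:

exec sp_cursoropen @cur output, N'SELECT * FROM employees', 1, 1, @rows output
exec sp_cursorfetch @cur, 2, 0, 20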

How to integrate custom rules with Guardium policy


Shows how to automatically modify or derive a Guardium policy from a custom
entitlement system.



This example takes a sample Oracle table (CUSTOM_ENTITLEMENT) as an
example of custom entitlement data, uses an Oracle script to select data from this
table, and then generates a file with GuardAPI commands. The file includes
commands for creating new or modifying existing policy rules, changing a policy
rule's order, and reinstalling the policy. We'll then show you how to
execute the generated script and then view the policy changes in the Guardium
GUI.

Value-added: Guardium API provides access to Guardium functionality from the


command line or script. This allows for the automation of repetitive tasks which is
especially valuable in larger implementations. Calling these GuardAPI functions
enables a user to quickly perform operations such as maintenance of the Guardium
policy.

Follow these steps.


1. Define a rule structure which logs full details for all database manipulation
(DML) commands. It will be used as a template for creating new rules.
The rule will belong to the template policy.
2. Create the Oracle script that will generate a file with the following GuardAPI
commands:
v copy_rule - add new rules to the installed policies as a copy of rule template
v update_rule - update the copied rules with the relevant data from
CUSTOM_ENTITLEMENT Oracle table
v update_rule - update the existing rule with the data from that table
v change_rule_order - change rule position
v policy_install, reinstall_policy – install/re-install the policy
3. Run the generated script.
4. View installed policy changes.

Steps:
1. Define a rule template
As many actions are permitted for a given policy rule, it becomes very difficult
to define the complex hierarchical structure that a rule has using the Guard
API. However, in most cases rules differ by the conditions and the
action/receiver structures usually fall into a small set of different options.
Therefore, the APIs are based on cloning an existing rule which acts as a rule
template – this defines the actions/receiver structure and then conditions are
changed using APIs.
Here we create a rule template (HowToTemplate), which includes rule action
definition and will then be cloned and updated each time a new rule of that
kind has to be added to a policy.
Click Protect > Security Policies > Policy Builder to open the Policy Finder
and create a template policy.
Click New to create the template policy; enter a Policy description, check
the Selective audit trail check-box, and click the Save button.

Click the Edit Rules button to add a template rule to this policy.

Click the Add Access Rule button to display the Access Rule Definition
panel and add a rule.

To add the rule, enter DML Command - Log Full Details Template in the
Description box; choose (Public) DML Commands from the Commands box;
highlight LOG FULL DETAILS WITH VALUES in the Action section; and then
click the Save button.

2. Create the Oracle script that will generate a file with GuardAPI commands.
Key items to know before writing the script:
v GuardAPI is a set of CLI commands, all of which begin with the keyword
grdapi. To list all GuardAPI commands available, enter the command
'grdapi' with no arguments. To display the parameters for a particular
command, enter the command followed by '--help=yes'.
For example
CLI>grdapi copy_rule --help=yes
ID=0
function parameters :
fromPolicy - required
ruleDesc - required
toPolicy - required
ok
v Both the keyword and value components of parameters are case sensitive.
v If a parameter value contains one or more spaces, it must be enclosed in
double quote characters. For example:
grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Template" ...
v There is no need to use all available parameters that a function supports. In
addition to the required parameters, use the parameters that you want to
change.
v Scripts, which invoke GuardAPI, may contain sensitive information, such as
passwords for datasources. To ensure that sensitive information is kept
encrypted at all times, the grdapi command supports passing of one
encrypted parameter to an API Function. This encryption is done using the
System's Shared Secret which is set by the administrator and can be shared
by many systems, and between all units of a central management and/or
aggregation cluster; allowing scripts with encrypted parameters to run on
machines that have the same shared secret. For more details about this issue
please see Guardium Help.

v If multiple policies are installed, then install policy command (policy_install)
must include all installed policies descriptions delimited by pipe character.
This must be done even if only one policy has changes. The policy
descriptions should be in the order you want the policies to be installed.
Example of the command for installation of policies HowTo 1 and HowTo 2:
grdapi policy_install policy="HowTo 1|HowTo 2"

The logic behind the script is to change the currently installed policy
HowTo in the following way:
a. For each record in the CUSTOM_ENTITLEMENT table with IS_NEW_RULE
equal to '1', a new access rule with the description saved in the RULE_DESC column
will be added to the "HowTo" policy. The rule logs full details for all DML
Commands from the OS user (OS_USER field value), client IP (CLIENT_IP), and
server IP (SERVER_IP) with service name (SERVICE_NAME).
b. If the IS_NEW_RULE value is '0', the rule whose description equals the value
of the RULE_DESC column will be changed based on the relevant data from
this record of the table.
c. Rule3 will be set as the first rule – to show how to use change_rule_order
function.
d. In order to apply all of the changes, the policy will be reinstalled.
Data in custom_entitlement table
Table 9. Custom entitlement
os_user client_ip server_ip rule_desc service_name is_new_rule seq
User1 192.168.7.101 192.168.7.201 Rule1 PROD1 1 1
User2 192.168.7.102 192.168.7.202 Rule2 PROD2 1 2
User3 192.168.7.103 192.168.7.203 Rule3 PROD3 1 3
User4 192.168.7.104 192.168.7.204 Rule2 PROD4 0 4

Changes, based on logic and table data, can be described as:


a. Add a new access rule: Rule1. The rule logs full details for all DML
Commands from user “user1”, client IP “192.168.7.101” to Oracle database
on “192.168.7.201” server with service name “PROD1”.
b. Add a new access rule: Rule2. The rule logs full details for all DML
Commands from user “user2”, client IP “192.168.7.102” to Oracle database
on “192.168.7.202” server with service name “PROD2”.
c. Add a new access rule: Rule3. The rule logs full details for all DML
Commands from user “user3”, client IP “192.168.7.103” to Oracle database
on “192.168.7.203” server with service name “PROD3”.
d. Change Rule2 – set OS user to “user4”, client IP to “192.168.7.104”, server IP
to “192.168.7.204”, service name to “PROD4”.
e. Set Rule3 to be the first rule in the policy.
f. In order to apply all of the changes, re-install the policy
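For reference, the pair of generated commands for Rule1 would look roughly like the following (shown here fully expanded for readability; the serverNetMask value is an assumption, while the other values come from the table above):

grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowToTemplate toPolicy=HowTo
grdapi update_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowTo newDesc="Rule1" clientIP=192.168.7.101 clientNetMask=255.255.255.0 serverIP=192.168.7.201 serverNetMask=255.255.255.0 serviceName=PROD1 osUser=User1 dbType=ORACLE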
Oracle script
SET LINESIZE 2000
SET TERMOUT OFF
SET FEEDBACK OFF

SET SERVEROUTPUT ON SIZE 1000000


spool update_policy.txt

declare cursor CUSTOM_TABLE is

select OS_USER,CLIENT_IP,SERVER_IP,SERVICE_NAME,RULE_DESC,IS_NEW_RULE
from CUSTOM_ENTITLEMENT order by SEQ;
S_RULE_DESC VARCHAR2(100);
BEGIN
FOR CUR_W IN CUSTOM_TABLE
LOOP
IF NVL(CUR_W.IS_NEW_RULE,'0') = '1' THEN
-- copy rule
DBMS_OUTPUT.PUT_LINE('grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Templ
S_RULE_DESC := 'DMLCommand - Log Full Details Template';
ELSE
S_RULE_DESC := CUR_W.RULE_DESC;
END IF;
-- update rule
DBMS_OUTPUT.PUT_LINE(
'grdapi update_rule ruleDesc="'||S_RULE_DESC||'"'||
' fromPolicy=HowTo newDesc="'|| CUR_W.RULE_DESC ||'" clientIP='||CUR_W.CLIENT_IP ||
' clientNetMask=255.255.255.0 serverIP='||CUR_W.SERVER_IP||' serverNetMask=255.255.255.
' serviceName='||CUR_W.SERVICE_NAME ||' osUser='||CUR_W.OS_USER||' dbType=ORACLE');
END LOOP;
-- set Rule3 to be the first one
DBMS_OUTPUT.PUT_LINE('grdapi change_rule_order ruleDesc=Rule3 fromPolicy=HowTo order=1');
-- reinstall policy
DBMS_OUTPUT.PUT_LINE('grdapi policy_install policy=HowTo');
END;
/
spool off
Generated script with GuardAPI commands
When the Oracle script is run within SQL*Plus, and spooled accordingly, it
produces a file (update_policy.txt) that looks like:
grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowToTemplate toP
grdapi update_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowTo newDesc="
grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowToTemplate toP
grdapi update_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowTo newDesc="
grdapi copy_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowToTemplate toP
grdapi update_rule ruleDesc="DMLCommand - Log Full Details Template" fromPolicy=HowTo newDesc="
grdapi update_rule ruleDesc="Rule2" fromPolicy=HowTo newDesc="Rule2" clientIP=192.168.7.104 cli
grdapi change_rule_order ruleDesc=Rule3 fromPolicy=HowTo order=1
grdapi policy_install policy=HowTo

Note: The last grdapi command re-installs the policy to apply the rules
to the system.
3. Run the generated script.
To run this script use the following command structure:
ssh cli@[Guardium appliance name] < [script name]
For example, to run update_policy.txt script on host 192.168.12.5 (password will
be prompted for)
ssh cli@192.168.12.5 <update_policy.txt
Sample output:
192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20015
192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20016

192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20017
192.168.12.5> 192.168.12.5> ok
ID=20016
192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5>

4. View installed policy changes.


Before running the script, there were no rules defined in the HowTo policy as
shown in this preview

After running the script:


As a result of the copy_rule, the HowTo policy now has three Access Rules.
As a result of the change_rule_order command, Rule3 is now first.

Expanding any of the policy rules (Rule1 here), we can validate the various
fields that have been altered with the update_rule commands.

As a result of the update_rule command, Rule2 has been changed.

And as a result of the policy_install command, the currently installed policy is
now the HowTo policy with three installed rules.

How to use the appropriate Ignore Action


Details how the data is handled when using Ignore actions in Policy Rules.

Value-added: Make clearer what happens when certain choices are made in Policy
Rules for log or ignore actions, which control the level of logging, based on
observed traffic.

For further information, see Policies.

Ignore session

The current request and the remainder of the session will be ignored. This action
does log a policy violation, but it stops the logging of constructs and will not test
for policy violations of any type for the remainder of the session. This action might

be useful if, for example, the database includes a test region, and there is no need
to apply policy rules against that region of the database.
Table 10. Ignore session
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands, SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Ignore SQL commands, SQL errors, and Result Sets. Sniffer to S-TAP - one signal is sent to S-TAP to stop sending activity for this session; if additional activity is sent by S-TAP, it is ignored at the sniffer level only.
Data from Span Port/Network TAP to Collector: Ignore - SQL commands, SQL errors, Result Sets. SQL commands and errors coming from a Span Port or Network TAP are filtered at the Sniffer.

Ignore S-TAP session

The current request and the remainder of the S-TAP session will be ignored. This
action is used in combination with specifying, in the policy builder menu screen,
certain machines, users, or applications that are producing a high volume of
network traffic. This action is useful in cases where you know the database
response from the S-TAP session will be of no interest.
Table 11. Ignore S-TAP session
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands, SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Sniffer to S-TAP - one signal is sent to S-TAP to stop sending activity for this session, and additional signals are sent to S-TAP to stop sending activity for this session.
Data from Span Port/Network TAP to Collector: Not applicable. If there is a need to ignore traffic from a Span Port/Network TAP, use Ignore session instead.

Ignore responses per session

Responses for the remainder of the session will be ignored. This action logs a
policy violation, but it stops analyzing responses for the remainder of the session.
This action is useful in cases where you know the database response will be of no
interest.

Note: For ignore response per session, since the sniffer does not receive any
response for the query or it is ignored, then the values for COUNT_FAILED and
SUCCESS are whatever the default for the table says they are, in this case
COUNT_FAILED=0 and SUCCESS=1.

Table 12. Ignore responses per session
Data logged or ignored between client and DB Server/S-TAP: Log - SQL commands. Ignore - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out, SQL commands. Sniffer to S-TAP - one signal is sent to S-TAP to stop sending activity for this session, and additional signals are sent to S-TAP to stop sending activity for this session.
Data from Span Port/Network TAP to Collector: Not applicable. This rule action is for S-TAP-only implementations.

Ignore SQL per session

No SQL will be logged for the remainder of the session. Exceptions will continue
to be logged, but the system may not capture the SQL strings that correspond to
the exceptions.
Table 13. Ignore SQL per session
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands. Log - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Log SQL commands, Log SQL errors, Log Result Sets (if using extrusion rules). Sniffer to S-TAP - one signal is sent to S-TAP to stop sending activity for this session; if additional activity is sent by S-TAP, it is ignored at the sniffer level only.
Data from Span Port/Network TAP to Collector: Ignore - SQL commands. Log - SQL errors, Result Sets. SQL commands are filtered at the Sniffer.

Selective Audit Trail

Use a Selective Audit Trail policy to limit the amount of logging on the appliance.
This is appropriate when the traffic of interest is a relatively small percentage of
the traffic being accepted by the inspection engines, or when all of the traffic you
might ever want to report upon can be completely identified.

It is important to note that Ignore Session rules are still very important to include
in the policy even if using a Selective Audit Trail. Ignore Session rules decrease the
load on a collector considerably because by filtering the information at the S-TAP
level, the collector never receives it and does not have to consume resources
analyzing traffic that will not ultimately be logged. A Selective Audit Trail policy
with no Ignore Session rules would mean that all traffic would be sent from the
database server to the collector, causing the collector to analyze every command
and result set generated by the database server.

Table 14. Selective Audit Trail
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands. Log - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Ignore SQL commands, except for those defined by Audit-Only or Log Full Details rules. Log SQL errors. Log Result Sets, if using extrusion rules.
Data from Span Port/Network TAP to Collector: Ignore - SQL commands. Log - SQL errors, Result Sets. SQL commands are filtered at the Sniffer.

Character sets
You can use character set codes in extrusion rules.

List of possible character set codes


ANSI_X3.4-1968 - 1
ANSI_X3.4-1986 - 2
ASCII - 3
CP367 - 4
IBM367 - 5
ISO-IR-6 - 6
ISO646-US - 7
ISO_646.IRV:1991 - 8
US - 9
US-ASCII - 10
CSASCII - 11
UTF-8 - 12
ISO-10646/UCS2 - 13
UCS-2 - 14
CSUNICODE - 15
UCS-2BE - 16
UNICODE - 17
UNICODEBIG - 18
TSCII - 19
UCS-2LE - 20
UNICODELITTLE - 21
ISO-10646/UCS4 - 22
UCS-4 - 23
CSUCS4 - 24
UCS-4BE - 25
UCS-4LE - 26
UTF-16 - 27
UTF-16BE - 28
UTF-16LE - 29
UTF-32 - 30
UTF-32BE - 31
UTF-32LE - 32
UTF7 - 33
UTF-7 - 34
UTF-8 - 35
UCS2 - 36

UCS2 - 37
UCS4 - 38
UCS4 - 39
UTF8 - 40
UTF8 - 41
CP819 - 42
IBM819 - 43
ISO-8859-1 - 44
ISO-IR-100 - 45
ISO8859-1 - 46
ISO_8859-1 - 47
ISO_8859-1:1987 - 48
L1 - 49
LATIN1 - 50
CSISOLATIN1 - 51
ISO-8859-2 - 52
ISO-IR-101 - 53
ISO8859-2 - 54
ISO_8859-2 - 55
ISO_8859-2:1987 - 56
L2 - 57
LATIN2 - 58
CSISOLATIN2 - 59
ISO-8859-3 - 60
ISO-IR-109 - 61
ISO8859-3 - 62
ISO_8859-3 - 63
ISO_8859-3:1988 - 64
L3 - 65
LATIN3 - 66
CSISOLATIN3 - 67
ISO-8859-4 - 68
ISO-IR-110 - 69
ISO8859-4 - 70
ISO_8859-4 - 71
ISO_8859-4:1988 - 72
L4 - 73
LATIN4 - 74
CSISOLATIN4 - 75
CYRILLIC - 76
ISO-8859-5 - 77
ISO-IR-144 - 78
ISO8859-5 - 79
ISO_8859-5 - 80
ISO_8859-5:1988 - 81
CSISOLATINCYRILLIC - 82
ARABIC - 83
ASMO-708 - 84
ECMA-114 - 85
ISO-8859-6 - 86
ISO-IR-127 - 87
ISO8859-6 - 88
ISO_8859-6 - 89
ISO_8859-6:1987 - 90
CSISOLATINARABIC - 91
ECMA-118 - 92

ELOT_928 - 93
GREEK - 94
GREEK8 - 95
ISO-8859-7 - 96
ISO-IR-126 - 97
ISO8859-7 - 98
ISO_8859-7 - 99
ISO_8859-7:1987 - 100
CSISOLATINGREEK - 101
HEBREW - 102
ISO-8859-8 - 103
ISO-IR-138 - 104
ISO8859-8 - 105
ISO_8859-8 - 106
ISO_8859-8:1988 - 107
CSISOLATINHEBREW - 108
ISO-8859-9 - 109
ISO-IR-148 - 110
ISO8859-9 - 111
ISO_8859-9 - 112
ISO_8859-9:1989 - 113
L5 - 114
LATIN5 - 115
CSISOLATIN5 - 116
ISO-8859-10 - 117
ISO-IR-157 - 118
ISO8859-10 - 119
ISO_8859-10 - 120
ISO_8859-10:1992 - 121
L6 - 122
LATIN6 - 123
CSISOLATIN6 - 124
ISO-8859-13 - 125
ISO-8859-13 - 126
ISO-8859-13 - 127
ISO-8859-13 - 128
L7 - 129
LATIN7 - 130
ISO-8859-14 - 131
ISO-CELTIC - 132
ISO-IR-199 - 133
ISO8859-14 - 134
ISO_8859-14 - 135
ISO_8859-14:1998 - 136
L8 - 137
LATIN8 - 138
ISO-8859-15 - 139
ISO-IR-203 - 140
ISO8859-15 - 141
ISO_8859-15 - 142
ISO_8859-15:1998 - 143
ISO-8859-16 - 144
ISO-IR-226 - 145
ISO8859-16 - 146
ISO_8859-16 - 147
ISO_8859-16:2000 - 148

KOI8-R - 149
CSKOI8R? - 150
KOI8U? - 151
KOI8R? - 152
CP1250 - 153
MS-EE - 154
WINDOWS-1250 - 155
CP1251 - 156
MS-CYRL - 157
WINDOWS-1251 - 158
CP1252 - 159
MS-ANSI - 160
WINDOWS-1252 - 161
CP1253 - 162
MS-GREEK - 163
WINDOWS-1253 - 164
CP1254 - 165
MS-TURK - 166
WINDOWS-1254 - 167
CP1255 - 168
MS-HEBR - 169
WINDOWS-1255 - 170
CP1256 - 171
MS-ARAB - 172
WINDOWS-1256 - 173
CP1257 - 174
WINBALTRIM - 175
WINDOWS-1257 - 176
CP1258 - 177
WINDOWS-1258 - 178
850 - 179
CP850 - 180
IBM850 - 181
CSPC850MULTILINGUAL? - 182
862 - 183
CP862 - 184
IBM862 - 185
CSPC862LATINHEBREW? - 186
866 - 187
CP866 - 188
IBM866 - 189
CSIBM866 - 190
MAC - 191
MACINTOSH - 192
MACUK - 193
CSMACINTOSH - 194
MACIS - 195
MAC - 196
MAC - 197
MAC - 198
MAC - 199
MACUKRAINIAN - 200
MAC - 201
MAC - 202
MAC - 203
MAC - 204

MAC - 205
HP-ROMAN8 - 206
R8 - 207
ROMAN8 - 208
HPROMAN8 - 209
ROMAN8 - 210
ARMSCII-8 - 211
GEORGIAN-ACADEMY - 212
GEORGIAN-PS - 213
KOI8-T - 214
KOI8-T - 215
CP1133 - 216
IBM-CP1133 - 217
ISO-IR-166 - 218
TIS-620 - 219
TIS620 - 220
TIS620-0 - 221
TIS620.2529-1 - 222
TIS620.2533-0 - 223
TIS620.2533-1 - 224
CP874 - 225
WINDOWS-874 - 226
VISCII - 227
VISCII - 228
VISCII - 229
TCVN - 230
TCVN-5712 - 231
TCVN5712-1 - 232
TCVN5712-1:1993 - 233
ISO-IR-14 - 234
ISO646-JP - 235
JIS_C6220-1969-RO - 236
JP - 237
CSISO14JISC6220RO? - 238
JISX0201-1976 - 239
JIS_X0201 - 240
X0201 - 241
CSHALFWIDTHKATAKANA - 242
ISO-IR-87 - 243
JIS0208 - 244
JIS_C6226-1983 - 245
JIS_X0208 - 246
JIS_X0208-1983 - 247
JIS_X0208-1990 - 248
X0208 - 249
CSISO87JISX0208? - 250
ISO-IR-159 - 251
JIS_X0212 - 252
JIS_X0212-1990 - 253
JIS_X0212.1990-0 - 254
X0212 - 255
CSISO159JISX02121990? - 256
CN - 257
GB_1988-80 - 258
ISO-IR-57 - 259
ISO646-CN - 260

CSISO57GB1988? - 261
CHINESE - 262
GB_2312-80 - 263
ISO-IR-58 - 264
CSISO58GB231280? - 265
CN-GB-ISOIR165 - 266
ISO-IR-165 - 267
ISO-IR-149 - 268
KOREAN - 269
KSC_5601 - 270
KS_C_5601-1987 - 271
KS_C_5601-1989 - 272
CSKSC56011987 - 273
EUC-JP - 274
EUCJP - 275
EXTENDED_UNIX_CODE_PACKED_FORMAT_FOR_JAPANESE - 276
CSEUCPKDFMTJAPANESE - 277
MS_KANJI - 278
SHIFT-JIS - 279
SHIFT_JIS - 280
SJIS - 281
CSSHIFTJIS - 282
CP932 - 283
ISO-2022-JP - 284
CSISO2022JP? - 285
ISO-2022-JP-1 - 286
ISO-2022-JP-2 - 287
CSISO2022JP2? - 288
CN-GB - 289
EUC-CN - 290
EUCCN - 291
GB2312 - 292
CSGB2312 - 293
CP936 - 294
GBK - 295
GB18030 - 296
ISO-2022-CN - 297
CSISO2022CN? - 298
ISO-2022-CN-EXT - 299
HZ - 300
HZ-GB-2312 - 301
EUC-TW - 302
EUCTW - 303
CSEUCTW - 304
BIG-5 - 305
BIG-FIVE - 306
BIG5 - 307
BIGFIVE - 308
CN-BIG5 - 309
CSBIG5 - 310
CP950 - 311
BIG5-HKSCS - 312
BIG5HKSCS? - 313
EUC-KR - 314
EUCKR - 315
CSEUCKR - 316

CP949 - 317
UHC - 318
CP1361 - 319
JOHAB - 320
ISO-2022-KR - 321
CSISO2022KR? - 322
IBM037 - 323
IBM038 - 324
IBM256 - 325
IBM273 - 326
IBM274 - 327
IBM275 - 328
IBM277 - 329
IBM278 - 330
IBM280 - 331
IBM281 - 332
IBM284 - 333
IBM285 - 334
IBM290 - 335
IBM297 - 336
IBM367 - 337
IBM420 - 338
IBM423 - 339
IBM424 - 340
IBM437 - 341
IBM500 - 342
IBM775 - 343
IBM813 - 344
IBM819 - 345
IBM848 - 346
IBM850 - 347
IBM851 - 348
IBM852 - 349
IBM855 - 350
IBM856 - 351
IBM857 - 352
IBM860 - 353
IBM861 - 354
IBM862 - 355
IBM863 - 356
IBM864 - 357
IBM865 - 358
IBM866 - 359
IBM866NAV? - 360
IBM868 - 361
IBM869 - 362
IBM870 - 363
IBM871 - 364
IBM874 - 365
IBM875 - 366
IBM880 - 367
IBM891 - 368
IBM903 - 369
IBM904 - 370
IBM905 - 371
IBM912 - 372

IBM915 - 373
IBM916 - 374
IBM918 - 375
IBM920 - 376
IBM922 - 377
IBM930 - 378
IBM932 - 379
IBM933 - 380
IBM935 - 381
IBM937 - 382
IBM939 - 383
IBM943 - 384
IBM1004 - 385
IBM1026 - 386
IBM1046 - 387
IBM1047 - 388
IBM1089 - 389
IBM1124 - 390
IBM1129 - 391
IBM1132 - 392
IBM1133 - 393
IBM1160 - 394
IBM1161 - 395
IBM1162 - 396
IBM1163 - 397
IBM1164 - 398
MSCP949 - 399
EUC-JISX0213 - 400
UJIS - 401
CP852 - 402
EUCJP-MS - 403
IBM902 - 404
IBM921 - 405
WINDOWS-31J - 406
IBM1025 - 407
IBM1140 - 408
IBM1137 - 409
IBM1122 - 410
IBM1141 - 411
IBM1142 - 412
IBM1143 - 413
IBM1144 - 414
IBM1145 - 415
IBM1146 - 416
IBM1147 - 417
IBM1148 - 418
IBM1149 - 419
IBM1153 - 420
IBM1155 - 421
IBM1157 - 422
EBCDICUS - 423
IBM1112 - 424
IBM1158 - 425
437 - 426
500g - 427
500V1g - 428

851g - 429
852g - 430
855g - 431
856g - 432
857g - 433
860g - 434
861g - 435
863g - 436
864g - 437
865g - 438
866NAVg - 439
869g - 440
874g - 441
904g - 442
1026g - 443
1046g - 444
1047g - 445
8859_1g - 446
8859_2g - 447
8859_3g - 448
8859_4g - 449
8859_5g - 450
8859_6g - 451
8859_7g - 452
8859_8g - 453
8859_9g - 454
10646-1:1993g - 455
10646-1:1993/UCS4/ - 456
ANSI_X3.4g - 457
ANSI_X3.110-1983g - 458
ANSI_X3.110g - 459
ARABIC7g - 460
ASMO_449g - 461
BALTICg - 462
BIG-5g - 463
BIG-FIVEg - 464
BIG5-HKSCSg - 465
BIG5g - 466
BIG5HKSCSg? - 467
BIGFIVEg - 468
BS_4730g - 469
CAg - 470
CN-BIG5g - 471
CN-GBg - 472
CNg - 473
CP-ARg - 474
CP-GRg - 475
CP-HUg - 476
CP037g - 477
CP038g - 478
CP273g - 479
CP274g - 480
CP275g - 481
CP278g - 482
CP280g - 483
CP281g - 484

CP282g - 485
CP284g - 486
CP285g - 487
CP290g - 488
CP297g - 489
CP420g - 490
CP423g - 491
CP424g - 492
CP437g - 493
CP500g - 494
CP737g - 495
CP775g - 496
CP803g - 497
CP813g - 498
CP851g - 499
CP852g - 500
CP855g - 501
CP856g - 502
CP857g - 503
CP860g - 504
CP861g - 505
CP863g - 506
CP864g - 507
CP865g - 508
CP866NAVg? - 509
CP868g - 510
CP869g - 511
CP870g - 512
CP871g - 513
CP875g - 514
CP880g - 515
CP891g - 516
CP901g - 517
CP902g - 518
CP903g - 519
CP904g - 520
CP905g - 521
CP912g - 522
CP915g - 523
CP916g - 524
CP918g - 525
CP920g - 526
CP921g - 527
CP922g - 528
CP930g - 529
CP932g - 530
CP933g - 531
CP935g - 532
CP936g - 533
CP937g - 534
CP939g - 535
CP949g - 536
CP950g - 537
CP1004g - 538
CP1008g - 539
CP1025g - 540

CP1026g - 541
CP1046g - 542
CP1047g - 543
CP1070g - 544
CP1079g - 545
CP1081g - 546
CP1084g - 547
CP1089g - 548
CP1097g - 549
CP1112g - 550
CP1122g - 551
CP1123g - 552
CP1124g - 553
CP1125g - 554
CP1129g - 555
CP1130g - 556
CP1132g - 557
CP1137g - 558
CP1140g - 559
CP1141g - 560
CP1142g - 561
CP1143g - 562
CP1144g - 563
CP1145g - 564
CP1146g - 565
CP1147g - 566
CP1148g - 567
CP1149g - 568
CP1153g - 569
CP1154g - 570
CP1155g - 571
CP1156g - 572
CP1157g - 573
CP1158g - 574
CP1160g - 575
CP1161g - 576
CP1162g - 577
CP1163g - 578
CP1164g - 579
CP1166g - 580
CP1167g - 581
CP1361g - 582
CP1364g - 583
CP1371g - 584
CP1388g - 585
CP1390g - 586
CP1399g - 587
CP4517g - 588
CP4899g - 589
CP4909g - 590
CP4971g - 591
CP5347g - 592
CP9030g - 593
CP9066g - 594
CP9448g - 595
CP10007g - 596

CP12712g - 597
CP16804g - 598
CPIBM861g - 599
CSA7-1g - 600
CSA7-2g - 601
CSA_T500-1983g - 602
CSA_T500g - 603
CSA_Z243.4-1985-1g - 604
CSA_Z243.4-1985-2g - 605
CSA_Z243.419851g - 606
CSA_Z243.419852g - 607
CSDECMCSg - 608
CSEBCDICATDEg - 609
CSEBCDICATDEAg - 610
CSEBCDICCAFRg - 611
CSEBCDICDKNOg - 612
CSEBCDICDKNOAg - 613
CSEBCDICESg - 614
CSEBCDICESAg - 615
CSEBCDICESSg - 616
CSEBCDICFISEg - 617
CSEBCDICFISEAg - 618
CSEBCDICFRg - 619
CSEBCDICITg - 620
CSEBCDICPTg - 621
CSEBCDICUKg - 622
CSEBCDICUSg - 623
CSEUCKRg - 624
CSEUCPKDFMTJAPANESEg - 625
CSGB2312g - 626
CSIBM037g - 627
CSIBM038g - 628
CSIBM273g - 629
CSIBM274g - 630
CSIBM275g - 631
CSIBM277g - 632
CSIBM278g - 633
CSIBM280g - 634
CSIBM281g - 635
CSIBM284g - 636
CSIBM285g - 637
CSIBM290g - 638
CSIBM297g - 639
CSIBM420g - 640
CSIBM423g - 641
CSIBM424g - 642
CSIBM500g - 643
CSIBM803g - 644
CSIBM851g - 645
CSIBM855g - 646
CSIBM856g - 647
CSIBM857g - 648
CSIBM860g - 649
CSIBM863g - 650
CSIBM864g - 651
CSIBM865g - 652

CSIBM868g - 653
CSIBM869g - 654
CSIBM870g - 655
CSIBM871g - 656
CSIBM880g - 657
CSIBM891g - 658
CSIBM901g - 659
CSIBM902g - 660
CSIBM903g - 661
CSIBM904g - 662
CSIBM905g - 663
CSIBM918g - 664
CSIBM921g - 665
CSIBM922g - 666
CSIBM930g - 667
CSIBM932g - 668
CSIBM933g - 669
CSIBM935g - 670
CSIBM937g - 671
CSIBM939g - 672
CSIBM943g - 673
CSIBM1008g - 674
CSIBM1025g - 675
CSIBM1026g - 676
CSIBM1097g - 677
CSIBM1112g - 678
CSIBM1122g - 679
CSIBM1123g - 680
CSIBM1124g - 681
CSIBM1129g - 682
CSIBM1130g - 683
CSIBM1132g - 684
CSIBM1133g - 685
CSIBM1137g - 686
CSIBM1140g - 687
CSIBM1141g - 688
CSIBM1142g - 689
CSIBM1143g - 690
CSIBM1144g - 691
CSIBM1145g - 692
CSIBM1146g - 693
CSIBM1147g - 694
CSIBM1148g - 695
CSIBM1149g - 696
CSIBM1153g - 697
CSIBM1154g - 698
CSIBM1155g - 699
CSIBM1156g - 700
CSIBM1157g - 701
CSIBM1158g - 702
CSIBM1160g - 703
CSIBM1161g - 704
CSIBM1163g - 705
CSIBM1164g - 706
CSIBM1166g - 707
CSIBM1167g - 708

CSIBM1364g - 709
CSIBM1371g - 710
CSIBM1388g - 711
CSIBM1390g - 712
CSIBM1399g - 713
CSIBM4517g - 714
CSIBM4899g - 715
CSIBM4909g - 716
CSIBM4971g - 717
CSIBM5347g - 718
CSIBM9030g - 719
CSIBM9066g - 720
CSIBM9448g - 721
CSIBM12712g - 722
CSIBM16804g - 723
CSIBM11621162g - 724
CSISO4UNITEDKINGDOMg? - 725
CSISO10SWEDISHg? - 726
CSISO11SWEDISHFORNAMESg? - 727
CSISO15ITALIANg? - 728
CSISO16PORTUGESEg? - 729
CSISO17SPANISHg? - 730
CSISO18GREEK7OLDg? - 731
CSISO19LATINGREEKg? - 732
CSISO21GERMANg? - 733
CSISO25FRENCHg? - 734
CSISO27LATINGREEK1g? - 735
CSISO49INISg? - 736
CSISO50INIS8g? - 737
CSISO51INISCYRILLICg? - 738
CSISO58GB1988g? - 739
CSISO60DANISHNORWEGIANg? - 740
CSISO60NORWEGIAN1g? - 741
CSISO61NORWEGIAN2g? - 742
CSISO69FRENCHg? - 743
CSISO84PORTUGUESE2g? - 744
CSISO85SPANISH2g? - 745
CSISO86HUNGARIANg? - 746
CSISO88GREEK7g? - 747
CSISO89ASMO449g? - 748
CSISO90g - 749
CSISO92JISC62991984Bg? - 750
CSISO99NAPLPSg? - 751
CSISO103T618BITg? - 752
CSISO111ECMACYRILLICg? - 753
CSISO121CANADIAN1g? - 754
CSISO122CANADIAN2g? - 755
CSISO139CSN369103g? - 756
CSISO141JUSIB1002g? - 757
CSISO143IECP271g? - 758
CSISO150g - 759
CSISO150GREEKCCITTg? - 760
CSISO151CUBAg? - 761
CSISO153GOST1976874g? - 762
CSISO646DANISHg? - 763
CSISO2022CNg? - 764

CSISO2022JPg? - 765
CSISO2022JP2g? - 766
CSISO2022KRg? - 767
CSISO2033g - 768
CSISO5427CYRILLICg? - 769
CSISO5427CYRILLIC1981g? - 770
CSISO5428GREEKg? - 771
CSISO10367BOXg? - 772
CSKSC5636g - 773
CSNATSDANOg - 774
CSNATSSEFIg - 775
CSN_369103g - 776
CSPC8CODEPAGE437g? - 777
CSPC775BALTICg? - 778
CSPCP852g - 779
CSSHIFTJISg - 780
CSUCS4g - 781
CSWINDOWS31Jg? - 782
CUBAg - 783
CWI-2g - 784
CWIg - 785
DEg - 786
DEC-MCSg - 787
DECg - 788
DECMCSg - 789
DIN_66003g - 790
DKg - 791
DS2089g - 792
DS_2089g - 793
E13Bg? - 794
EBCDIC-AT-DE-Ag - 795
EBCDIC-AT-DEg - 796
EBCDIC-BEg - 797
EBCDIC-BRg - 798
EBCDIC-CA-FRg - 799
EBCDIC-CP-AR1g - 800
EBCDIC-CP-AR2g - 801
EBCDIC-CP-BEg - 802
EBCDIC-CP-CAg - 803
EBCDIC-CP-CHg - 804
EBCDIC-CP-DKg - 805
EBCDIC-CP-ESg - 806
EBCDIC-CP-FIg - 807
EBCDIC-CP-FRg - 808
EBCDIC-CP-GBg - 809
EBCDIC-CP-GRg - 810
EBCDIC-CP-HEg - 811
EBCDIC-CP-ISg - 812
EBCDIC-CP-ITg - 813
EBCDIC-CP-NLg - 814
EBCDIC-CP-NOg - 815
EBCDIC-CP-ROECEg - 816
EBCDIC-CP-SEg - 817
EBCDIC-CP-TRg - 818
EBCDIC-CP-USg - 819
EBCDIC-CP-WTg - 820

EBCDIC-CP-YUg - 821
EBCDIC-CYRILLICg - 822
EBCDIC-DK-NO-Ag - 823
EBCDIC-DK-NOg - 824
EBCDIC-ES-Ag - 825
EBCDIC-ES-Sg - 826
EBCDIC-ESg - 827
EBCDIC-FI-SE-Ag - 828
EBCDIC-FI-SEg - 829
EBCDIC-FRg - 830
EBCDIC-GREEKg - 831
EBCDIC-INTg - 832
EBCDIC-INT1g - 833
EBCDIC-IS-FRISSg - 834
EBCDIC-ITg - 835
EBCDIC-JP-Eg - 836
EBCDIC-JP-KANAg - 837
EBCDIC-PTg - 838
EBCDIC-UKg - 839
EBCDIC-USg - 840
EBCDICATDEg - 841
EBCDICATDEAg - 842
EBCDICCAFRg - 843
EBCDICDKNOg - 844
EBCDICDKNOAg - 845
EBCDICESg - 846
EBCDICESAg - 847
EBCDICESSg - 848
EBCDICFISEg - 849
EBCDICFISEAg - 850
EBCDICFRg - 851
EBCDICISFRISSg - 852
EBCDICITg - 853
EBCDICPTg - 854
EBCDICUKg - 855
EBCDICUSg - 856
ECMA-128g - 857
ECMA-CYRILLICg - 858
ECMACYRILLICg - 859
ESg - 860
ES2g - 861
EUC-CNg - 862
EUC-JISX0213g - 863
EUC-JP-MSg - 864
EUC-JPg - 865
EUC-KRg - 866
EUC-TWg - 867
EUCCNg - 868
EUCJP-MSg - 869
EUCJP-OPENg - 870
EUCJP-WINg - 871
EUCJPg - 872
EUCKRg - 873
EUCTWg - 874
FIg - 875
FRg - 876

GBg - 877
GB2312g - 878
GB13000g - 879
GB18030g - 880
GBKg - 881
GB_1988-80g - 882
GB_198880g - 883
GOST_19768-74g - 884
GOST_19768g - 885
GOST_1976874g - 886
GREEK-CCITTg - 887
GREEK7-OLDg - 888
GREEK7g - 889
GREEK7OLDg? - 890
GREEKCCITTg - 891
HUg - 892
IBM-803g - 893
IBM-856g - 894
IBM-901g - 895
IBM-902g - 896
IBM-921g - 897
IBM-922g - 898
IBM-930g - 899
IBM-932g - 900
IBM-933g - 901
IBM-935g - 902
IBM-937g - 903
IBM-939g - 904
IBM-943g - 905
IBM-1008g - 906
IBM-1025g - 907
IBM-1046g - 908
IBM-1047g - 909
IBM-1097g - 910
IBM-1112g - 911
IBM-1122g - 912
IBM-1123g - 913
IBM-1124g - 914
IBM-1129g - 915
IBM-1130g - 916
IBM-1132g - 917
IBM-1133g - 918
IBM-1137g - 919
IBM-1140g - 920
IBM-1141g - 921
IBM-1142g - 922
IBM-1143g - 923
IBM-1144g - 924
IBM-1145g - 925
IBM-1146g - 926
IBM-1147g - 927
IBM-1148g - 928
IBM-1149g - 929
IBM-1153g - 930
IBM-1154g - 931
IBM-1155g - 932

IBM-1156g - 933
IBM-1157g - 934
IBM-1158g - 935
IBM-1160g - 936
IBM-1161g - 937
IBM-1162g - 938
IBM-1163g - 939
IBM-1164g - 940
IBM-1166g - 941
IBM-1167g - 942
IBM-1364g - 943
IBM-1371g - 944
IBM-1388g - 945
IBM-1390g - 946
IBM-1399g - 947
IBM-4517g - 948
IBM-4899g - 949
IBM-4909g - 950
IBM-4971g - 951
IBM-5347g - 952
IBM-9030g - 953
IBM-9066g - 954
IBM-9448g - 955
IBM-12712g - 956
IBM-16804g - 957
IBM037g - 958
IBM038g - 959
IBM256g - 960
IBM273g - 961
IBM274g - 962
IBM275g - 963
IBM277g - 964
IBM278g - 965
IBM280g - 966
IBM281g - 967
IBM284g - 968
IBM285g - 969
IBM290g - 970
IBM297g - 971
IBM420g - 972
IBM423g - 973
IBM424g - 974
IBM437g - 975
IBM500g - 976
IBM775g - 977
IBM803g - 978
IBM813g - 979
IBM848g - 980
IBM851g - 981
IBM852g - 982
IBM855g - 983
IBM856g - 984
IBM857g - 985
IBM860g - 986
IBM861g - 987
IBM863g - 988

IBM864g - 989
IBM865g - 990
IBM866NAVg? - 991
IBM868g - 992
IBM869g - 993
IBM870g - 994
IBM871g - 995
IBM874g - 996
IBM875g - 997
IBM880g - 998
IBM891g - 999
IBM901g - 1000
IBM902g - 1001
IBM903g - 1002
IBM904g - 1003
IBM905g - 1004
IBM912g - 1005
IBM915g - 1006
IBM916g - 1007
IBM918g - 1008
IBM920g - 1009
IBM921g - 1010
IBM922g - 1011
IBM930g - 1012
IBM932g - 1013
IBM933g - 1014
IBM935g - 1015
IBM937g - 1016
IBM939g - 1017
IBM943g - 1018
IBM1004g - 1019
IBM1008g - 1020
IBM1025g - 1021
IBM1026g - 1022
IBM1046g - 1023
IBM1047g - 1024
IBM1089g - 1025
IBM1097g - 1026
IBM1112g - 1027
IBM1122g - 1028
IBM1123g - 1029
IBM1124g - 1030
IBM1129g - 1031
IBM1130g - 1032
IBM1132g - 1033
IBM1133g - 1034
IBM1137g - 1035
IBM1140g - 1036
IBM1141g - 1037
IBM1142g - 1038
IBM1143g - 1039
IBM1144g - 1040
IBM1145g - 1041
IBM1146g - 1042
IBM1147g - 1043
IBM1148g - 1044

IBM1149g - 1045
IBM1153g - 1046
IBM1154g - 1047
IBM1155g - 1048
IBM1156g - 1049
IBM1157g - 1050
IBM1158g - 1051
IBM1160g - 1052
IBM1161g - 1053
IBM1162g - 1054
IBM1163g - 1055
IBM1164g - 1056
IBM1166g - 1057
IBM1167g - 1058
IBM1364g - 1059
IBM1371g - 1060
IBM1388g - 1061
IBM1390g - 1062
IBM1399g - 1063
IBM4517g - 1064
IBM4899g - 1065
IBM4909g - 1066
IBM4971g - 1067
IBM5347g - 1068
IBM9030g - 1069
IBM9066g - 1070
IBM9448g - 1071
IBM12712g - 1072
IBM16804g - 1073
IEC_P27-1g - 1074
IEC_P271g - 1075
INIS-8g - 1076
INIS-CYRILLICg - 1077
INISg - 1078
INIS8g - 1079
INISCYRILLICg - 1080
ISIRI-3342g - 1081
ISIRI3342g - 1082
ISO-2022-CN-EXTg - 1083
ISO-2022-CNg - 1084
ISO-2022-JP-2g - 1085
ISO-2022-JP-3g - 1086
ISO-2022-JPg - 1087
ISO-2022-KRg - 1088
ISO-8859-9g - 1089
ISO-8859-10g - 1090
ISO-8859-11g - 1091
ISO-8859-16g - 1092
ISO-10646g - 1093
ISO-10646/UTF-8/ - 1094
ISO-10646/UTF8/ - 1095
ISO-IR-4g - 1096
ISO-IR-8-1g - 1097
ISO-IR-9-1g - 1098
ISO-IR-10g - 1099
ISO-IR-11g - 1100

ISO-IR-15g - 1101
ISO-IR-16g - 1102
ISO-IR-17g - 1103
ISO-IR-18g - 1104
ISO-IR-19g - 1105
ISO-IR-21g - 1106
ISO-IR-25g - 1107
ISO-IR-27g - 1108
ISO-IR-37g - 1109
ISO-IR-49g - 1110
ISO-IR-50g - 1111
ISO-IR-51g - 1112
ISO-IR-54g - 1113
ISO-IR-55g - 1114
ISO-IR-57g - 1115
ISO-IR-60g - 1116
ISO-IR-61g - 1117
ISO-IR-69g - 1118
ISO-IR-84g - 1119
ISO-IR-85g - 1120
ISO-IR-86g - 1121
ISO-IR-88g - 1122
ISO-IR-89g - 1123
ISO-IR-90g - 1124
ISO-IR-92g - 1125
ISO-IR-98g - 1126
ISO-IR-99g - 1127
ISO-IR-103g - 1128
ISO-IR-111g - 1129
ISO-IR-121g - 1130
ISO-IR-122g - 1131
ISO-IR-127g - 1132
ISO-IR-139g - 1133
ISO-IR-141g - 1134
ISO-IR-143g - 1135
ISO-IR-150g - 1136
ISO-IR-151g - 1137
ISO-IR-153g - 1138
ISO-IR-155g - 1139
ISO-IR-156g - 1140
ISO-IR-166g - 1141
ISO-IR-193g - 1142
ISO-IR-197g - 1143
ISO-IR-209g - 1144
ISO/TR_11548-1/ - 1145
ISO646-CAg - 1146
ISO646-CA2g - 1147
ISO646-CNg - 1148
ISO646-CUg - 1149
ISO646-DEg - 1150
ISO646-DKg - 1151
ISO646-ESg - 1152
ISO646-ES2g - 1153
ISO646-FIg - 1154
ISO646-FRg - 1155
ISO646-FR1g - 1156

ISO646-GBg - 1157
ISO646-HUg - 1158
ISO646-ITg - 1159
ISO646-JP-OCR-Bg - 1160
ISO646-KRg - 1161
ISO646-NOg - 1162
ISO646-NO2g - 1163
ISO646-PTg - 1164
ISO646-PT2g - 1165
ISO646-SEg - 1166
ISO646-SE2g - 1167
ISO646-YUg - 1168
ISO2022CNg? - 1169
ISO2022CNEXTg? - 1170
ISO2022JPg? - 1171
ISO2022JP2g? - 1172
ISO2022KRg? - 1173
ISO6937g - 1174
ISO8859-11g - 1175
ISO11548-1g - 1176
ISO88591g - 1177
ISO88592g - 1178
ISO88593g - 1179
ISO88594g - 1180
ISO88595g - 1181
ISO88596g - 1182
ISO88597g - 1183
ISO88598g - 1184
ISO88599g - 1185
ISO885910g - 1186
ISO885911g - 1187
ISO885913g - 1188
ISO885914g - 1189
ISO885915g - 1190
ISO885916g - 1191
ISO_2033-1983g - 1192
ISO_2033g - 1193
ISO_5427-EXTg - 1194
ISO_5427g - 1195
ISO_5427:1981g - 1196
ISO_5427EXTg - 1197
ISO_5428g - 1198
ISO_5428:1980g - 1199
ISO_6937-2g - 1200
ISO_6937-2:1983g - 1201
ISO_6937g - 1202
ISO_6937:1992g - 1203
ISO_8859-7:2003g - 1204
ISO_8859-16:2001g - 1205
ISO_9036g - 1206
ISO_10367-BOXg - 1207
ISO_10367BOXg - 1208
ISO_11548-1g - 1209
ISO_69372g - 1210
ITg - 1211
JIS_C6229-1984-Bg - 1212

JIS_C62201969ROg - 1213
JIS_C62291984Bg - 1214
JOHABg - 1215
JP-OCR-Bg - 1216
JSg - 1217
JUS_I.B1.002g - 1218
KOI-7g - 1219
KOI-8g - 1220
KOI8g - 1221
KSC5636g - 1222
L10g - 1223
LATIN-9g - 1224
LATIN-GREEK-1g - 1225
LATIN-GREEKg - 1226
LATIN10g - 1227
LATINGREEKg - 1228
LATINGREEK1g - 1229
MAC-CYRILLICg - 1230
MAC-ISg - 1231
MAC-SAMIg - 1232
MAC-UKg - 1233
MACCYRILLICg - 1234
MIKg - 1235
MS-MAC-CYRILLICg - 1236
MS932g - 1237
MS936g - 1238
MSCP949g - 1239
MSCP1361g - 1240
MSMACCYRILLICg - 1241
MSZ_7795.3g - 1242
MS_KANJIg - 1243
NAPLPSg - 1244
NATS-DANOg - 1245
NATS-SEFIg - 1246
NATSDANOg - 1247
NATSSEFIg - 1248
NC_NC0010g - 1249
NC_NC00-10g - 1250
NC_NC00-10:81g - 1251
NF_Z_62-010g - 1252
NF_Z_62-010_(1973)g - 1253
NF_Z_62-010_1973g - 1254
NF_Z_62010g - 1255
NF_Z_62010_1973g - 1256
NOg - 1257
NO2g - 1258
NS_4551-1g - 1259
NS_4551-2g - 1260
NS_45511g - 1261
NS_45512g - 1262
OS2LATIN1g? - 1263
OSF00010001g - 1264
OSF00010002g - 1265
OSF00010003g - 1266
OSF00010004g - 1267
OSF00010005g - 1268

OSF00010006g - 1269
OSF00010007g - 1270
OSF00010008g - 1271
OSF00010009g - 1272
OSF0001000Ag? - 1273
OSF00010020g - 1274
OSF00010100g - 1275
OSF00010101g - 1276
OSF00010102g - 1277
OSF00010104g - 1278
OSF00010105g - 1279
OSF00010106g - 1280
OSF00030010g - 1281
OSF0004000Ag? - 1282
OSF0005000Ag? - 1283
OSF05010001g - 1284
OSF100201A4g? - 1285
OSF100201A8g? - 1286
OSF100201B5g? - 1287
OSF100201F4g? - 1288
OSF100203B5g? - 1289
OSF1002011Cg? - 1290
OSF1002011Dg? - 1291
OSF1002035Dg? - 1292
OSF1002035Eg? - 1293
OSF1002035Fg? - 1294
OSF1002036Bg? - 1295
OSF1002037Bg? - 1296
OSF10010001g - 1297
OSF10020025g - 1298
OSF10020111g - 1299
OSF10020115g - 1300
OSF10020116g - 1301
OSF10020118g - 1302
OSF10020122g - 1303
OSF10020129g - 1304
OSF10020352g - 1305
OSF10020354g - 1306
OSF10020357g - 1307
OSF10020359g - 1308
OSF10020360g - 1309
OSF10020364g - 1310
OSF10020365g - 1311
OSF10020366g - 1312
OSF10020367g - 1313
OSF10020370g - 1314
OSF10020387g - 1315
OSF10020388g - 1316
OSF10020396g - 1317
OSF10020402g - 1318
OSF10020417g - 1319
PTg - 1320
PT2g - 1321
PT154g - 1322
RK1048g - 1323
RUSCIIg - 1324

SEg - 1325
SE2g - 1326
SEN_850200_Bg - 1327
SEN_850200_Cg - 1328
SHIFT-JISg - 1329
SHIFT_JISg - 1330
SHIFT_JISX0213g - 1331
SJIS-OPENg - 1332
SJIS-WINg - 1333
SJISg - 1334
SS636127g - 1335
STRK1048-2002g - 1336
ST_SEV_358-88g - 1337
T.61-8BITg - 1338
T.61g - 1339
T.618BITg - 1340
TS-5881g - 1341
UHCg - 1342
UJISg - 1343
UKg - 1344
UTF8g - 1345
UTF16g - 1346
UTF16BEg? - 1347
UTF16LEg? - 1348
UTF32g - 1349
UTF32BEg? - 1350
UTF32LEg? - 1351
WCHAR_Tg - 1352
WIN-SAMI-2g - 1353
WINDOWS-31Jg - 1354
WINDOWS-936g - 1355
WINSAMI2g - 1356
WS2g - 1357
YUg - 1358

Correlation Alerts
An alert is a message indicating that an exception or policy rule violation was
detected.

Alerts are triggered in two ways:


v A correlation alert is triggered by a query that looks back over a specified time
period to determine if an alert threshold has been met. The Guardium Anomaly
Detection Engine runs correlation queries on a scheduled basis. By default,
correlation alerts do not log policy violations, but they can be configured to do
that.
v A real-time alert is triggered by a security policy rule. The Guardium Inspection
Engine component runs the security policy as it collects and analyzes database
traffic in real time.

Regardless of how they are triggered, Guardium logs all alerts the same way: the
alert information is logged in the Guardium internal database. The amount and
type of information logged depends on the specific alert type. The Guardium

Alerter component, which also runs on a scheduled basis, processes each new alert,
passing the logged information for each alert to any combination of the following
notification mechanisms:
v SMTP – The SMTP (outgoing e-mail) server. The Alerter passes standard email
messages to the SMTP server for which it has been configured.
v SNMP – The SNMP (network information and control) server. When SNMP is
selected for an alert notification, the Alerter passes all alert messages of that type
to the single trap community for which the Alerter has been configured.
v Syslog – The alert is written to syslog on the Guardium appliance (which may
be configured by the Guardium Administrator to write syslog messages to a
remote system).

Note: For SNMP or SYSLOG, the maximum message length is 3000 characters.
Any messages longer than that will be truncated.
v Custom – A user written Java class to handle alerts. The Alerter passes an alert
message and timestamp to the custom alerting class. There can be multiple
custom alerting classes, and one custom alerting class can be an extension of
another custom alerting class.

Note: Alert definition and notification are not subject to Data Level Security.
Reasons for this include the following: alerts are not evaluated in the context of a user;
an alert may relate to databases associated with multiple users; and this avoids situations
where no one receives the alert notification.

Note: If an alert uses a query that contains 30 or more fields (including
counters), anomaly detection will fail with an "Array out of bound exception"
error message. Queries with 30 or more columns cannot be used for alerts; such
queries do not appear in the list of available queries for threshold alerts.

Alerting Tasks for Administrators

Guardium administrators perform the following tasks:


v Customize the Alert Message Template, using the Global Profile.
v Configure and start the Alerter, which delivers messages to SMTP, SNMP,
Syslog, or Custom alerting classes
v Start and stop the Anomaly Detection Engine, which runs the correlation alerts
according to the schedules defined.
v Upload Custom Alerting Classes to the Guardium system.

Alerting Tasks for Users


Guardium users (and administrators) can perform these correlation alerting tasks:
v Define queries that can be used for correlation alerts
v Define correlation alerts
v Write custom alerting classes

About Correlation Alert Queries


A correlation alert is based on a query in any of the reporting domains. That query
must be defined before the alert can be defined. To be available for use by a
correlation alert, the query must contain at least one date field.

Create a Correlation Alert
1. Click Protect > Database Intrusion Detection > Alert Builder to open the
Alert Finder.
2. Click New in the Alerts Finder panel to display the Add Alert panel.
3. Enter a unique name for the alert in the Name box. Do not include apostrophe
characters in the alert name.
4. Enter a short sentence that describes the alert in the Description box.
5. Enter an optional category in the Category box.
6. Enter an optional classification in the Classification box.
7. For Recommended Action, the user can add free text as the recommended
action for the specific alert.
8. As in real-time alerts, the user can choose a template for the message that is
sent in case the threshold alert fires. The template uses a predefined list of
variables that are replaced with the appropriate value for the specific alert.
The list of variables and a default template are detailed in the Named
Templates section of the Global Profile help topic.
9. Select a severity level from the Severity list. For an email alert, a setting of
HIGH results in the email being flagged as HIGH.
10. Enter the number of minutes between runs of the query in the Run Frequency
field.
11. Mark the Active box to activate the alert, or clear the box to save the alert
definition without starting it running (it can be activated later). In a Central
Manager environment, the alert will be activated (or stopped) on all managed
units when this box is marked (or cleared). To disable the alert on a specific
appliance in a Central Manager environment, use the Anomaly Detection
panel of the Administrator Console.
12. Mark the Log Policy Violation box to log a policy violation when this alert is
triggered. By default, correlation alerts are logged in the Alert Tracking
domain only. By marking this box, correlation alerts and real-time alerts
(issued by the data access security policy) can be viewed together, in the
Policy Violations domain.
13. From the Query list in the Alert Definition panel, select the query to run for
this alert. The list of queries displayed will include all queries defined that:
v Contain at least one date field (timestamp) - a timestamp field is required
v Contain a Count field - a count field is required
v Can be accessed by your Guardium user account
Troubleshooting tips
v If a custom query has been created in any Query Builder in Report
Building, and it does not appear in the Query list, then make sure that the
custom query has a timestamp (date field).
v If, after selecting a query from the Query list in the Alert Definition panel of
the Add Alert screen, you need to edit the query (Edit icon) and
the query cannot be edited, go to the Query Builder (Tools > Report
Building) to edit the query.
14. If the selected query contains run-time parameters, a Query Parameters panel
will appear in the Alert Definition pane. Supply parameter values as
appropriate for your application.
15. In the Accumulation Interval box, enter the length of the time interval (in
minutes) that the query should examine in the audit repository, counting back
from the current time (for example, enter 10 to examine the last 10 minutes of
data).

Note: Alerts that run on aggregators are based only on data within the defined
merge period.
16. Check the Log Full Query Results box to have the full report logged with the
alert.
17. If the selected query contains one or more columns of numeric data, select one
of those columns to use for the test. The default, which will be the last item
listed, is the last column for the query, which is always the count of
occurrences aggregated in that row.
18. In the Alert Threshold pane, define the threshold at which a correlation alert
is to be generated, as follows:
v In the Threshold field, enter a threshold number that will apply as
described by the remaining fields in the panel.
v From the Alert when value is list, select an operator indicating how the
report value is to relate to the threshold to produce an alert (greater than,
greater than or equal to, less than, etc.).
v Select per report if the threshold number applies to a report total, or select
per line if the threshold applies to a single line of the report (the report
being the output of the selected query, run by looking back over the
specified accumulation time).
If there is no data during the specified Accumulation Interval:
If the threshold is per report, the value for that interval is 0 (zero), and an
alert will be generated if the threshold condition is met (for example, if the
condition specified is “Alert when value is < 1”).
If the threshold is per line, no alert will be generated, regardless of the
specified condition (this is because there are no lines of output).
v Select As absolute limit to indicate that the threshold entered is an absolute
number or select As a percentage change within period to indicate that the
threshold represents a percentage of change within the time period
identified in the From and To fields.
If the As percentage change within period option is selected, use the date
picker controls to select the From and To dates.
If the As percentage change for the same "Accumulation Period" on a
relative time is selected, one relative date will be entered and the alert will
execute the query for the current period and for the relative period (using
the same interval), and will check the values as a percentage of the base
period value.

Note: If relative period is used, each time the alert is checked it will
execute the query twice, once for the current period and once for the
relative period.
19. Indicate in the Notification Frequency box how often (in minutes) the Alert
Receivers should be notified when the alert condition has been satisfied.
20. Click Save to save the alert definition.

Note: You cannot assign receivers or roles, or enter comments until the
definition has been saved.
21. In the Alert Receivers panel, optionally designate one or more persons or
groups to be notified when this alert condition is satisfied. To add a receiver,
click the Add Receiver button to open the Add Receiver Selection panel.

Note: If the receiver of an alert is the admin user then admin needs to be
assigned an email for the alert to fire.

Note: An additional receiver for threshold alerts is Owner (the owner/s of the
database). If the query associated with the alert contains Server IP and Service
name and if the alert is evaluated Per Row, then the receiver can be Owner.
The alert notification must have: Alert Notification Type: Mail, Alert User ID:
0, Alert Destination: Owner. See Alerting Actions in "Policies" on page 57 for
additional receivers for real-time alerts.
22. Optionally click Roles to assign roles for the alert.
23. Optionally click Comments to add comments to the definition.
24. Click Apply and then Done when you have finished.
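
Worked example for the percentage-change threshold option in the Alert Threshold
pane (the numbers are illustrative only, not taken from a real report): with Threshold
set to 50, Alert when value is set to greater than, and As a percentage change within
period selected, a base-period value of 200 and a current-period value of 320 represent
a change of (320 - 200) / 200 = 60%, which exceeds 50, so the alert fires; a
current-period value of 280 represents a 40% change and does not trigger the alert.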

Modify a Correlation Alert


1. Click Protect > Database Intrusion Detection > Alert Builder to open the Alert
Finder.
2. Select the correlation alert you want to modify, in the Alert Finder panel.
3. Click Modify to open the Modify Alert panel.
4. Referring to Create a Correlation Alert topic, make changes to the alert
definition.
5. Click Save.

Remove a Correlation Alert


1. Click Protect > Database Intrusion Detection > Alert Builder to open the Alert
Finder.
2. Select the correlation alert you want to remove, in the Alerts Finder panel.
3. Click the Delete button. You will be prompted to confirm the action.

How to signify events through Correlation Alerts


Trigger a correlation alert if there are more than fifteen SQL Errors in the last three
hours from any individual user of the application.

About this task

Use correlation alerts to inform about events accumulated over time. Applications
do not normally have SQL errors. An increase in SQL Errors in an application is a
warning sign that a possible SQL Injection is being attempted. See the online help
topics, Correlation Alerts and Queries for further information.

Prerequisites
v Configure email (SMTP) server (Setup > Tools and Views > Alerter)
v After fully configuring the correlation alert, make sure it is active and running
(Setup > Tools and Views > Anomaly Detection)

An alert is a message indicating that an exception (correlation alert) or policy rule
violation (real-time alert) was detected.

A correlation alert is triggered by a query that looks back over a specified time
period to determine if an alert threshold has been met.

Overview of correlation alert steps


1. Create a custom query from Exceptions Tracking with a field of SQL Errors
(with a count) and a condition of application users. In order to use this custom
query in the Alert Builder, a date field (timestamp) is required.

2. Click Protect > Database Intrusion Detection > Alert Builder to open the Alert
Finder.
3. Click on New. Complete the fields per the instructions after the Alert Builder
menu screen.
4. Add Receiver.

Exceptions Tracking, SQL Errors query

Procedure
1. Exceptions Tracking - Open the Query Finder
v Users with the admin role: Select Tools > Report Building, and then select
the Exceptions Tracking domain only.
v All Others: Select Monitor/Audit > Build Reports, and select Exceptions
Tracking Builder.
2. Open the drop-down choices for Query. Select SQL Errors. This will open a
configuration screen with SQL Errors as the main title.
3. Clone this selection, typing in a unique name in the text box for the query. Do
not include apostrophe characters in the query name.
4. In your custom query, under Query fields, add a date field (timestamp) and
change the database error text field to count field mode. Under Query
conditions, change the run time parameters of exception types to attribute and
choose Exception.App. User Name.
5. Click Save. This custom query for SQL Errors from any application user is
now available for use in the Alert Builder.

Alert Builder menu screen
6. Alert Builder - Create a Correlation Alert
7. Click Protect > Database Intrusion Detection > Alert Builder to open the
Alert Finder.
8. Click the New button in the Alerts Finder panel to display the Add Alert
panel.
9. Enter a unique name for the alert in the Name box. Do not include apostrophe
characters in the alert name.
10. Enter a short sentence that describes the alert in the Description box.
11. Enter an optional category in the Category box. In this instance, Self
Monitoring was used.
12. Enter an optional classification in the Classification box.
13. Select a severity level from the Severity list. For an email alert, a setting of
HIGH results in the email being flagged as urgent.
14. Enter the number of minutes between runs of the query in the Run Frequency
field.
15. Mark the Active box to activate the alert.
16. Mark the Log Policy Violation box to log a policy violation when this alert is
triggered. By default, correlation alerts are logged in the Alert Tracking
domain only. By marking this box, correlation alerts and real-time alerts
(issued by the data access security policy) can be viewed together, in the
Policy Violations domain.

17. From the Query list in the Alert Definition panel, select the query to run for
this alert. The list of queries displayed will include all queries defined that:
v Contain at least one date field (timestamp) - a timestamp field is required
v Contain a Count field - a count field is required
v Can be accessed by your Guardium user account
Troubleshooting tip: If a custom query has been created in any Query Builder
in Report Building, and it does not appear in the Query list, then make sure
that the custom query has a timestamp (date field).
Troubleshooting tip: If, after selecting a query from the Query list in the Alert
Definition panel of the Add Alert screen, you need to edit the query (Edit icon)
but the query cannot be edited, go to the Query Builder (Tools >
Report Building) to edit the query.
18. If the selected query contains run-time parameters, a Query Parameters panel
will appear in the Alert Definition pane. Supply parameter values as
appropriate for your application.
19. In the Accumulation Interval box, enter the length of the time interval (in
minutes) that the query should examine in the audit repository, counting back
from the current time (for example, enter 10 to examine the last 10 minutes of
data).
20. Mark the Log Full Query results box to have the full report logged with the
alert.
21. If the selected query contains one or more columns of numeric data, select one
of those columns to use for the test. The default, which will be the last item
listed, is the last column for the query, which is always the count of
occurrences aggregated in that row.
22. In the Alert Threshold pane, define the threshold at which a correlation alert
is to be generated, as follows:
v In the Threshold field, enter a threshold number that will apply as
described by the remaining fields in the panel.
v From the Alert when value is list, select an operator indicating how the
report value is to relate to the threshold to produce an alert (greater than,
greater than or equal to, less than, etc.).
v Select per report if the threshold number applies to a report total.
If there is no data during the specified Accumulation Interval: If the threshold
is per report, the value for that interval is 0 (zero), and an alert will be
generated if the threshold condition is met (for example, if the condition
specified is “Alert when value is < 1”).
23. Indicate in the Notification Frequency box how often (in minutes) the Alert
Receivers should be notified when the alert condition has been satisfied.
24. Click the Apply button to save the alert definition.

Note: You cannot assign receivers or roles, or enter comments until the
definition has been saved.
25. In the Alert Receivers panel, optionally designate one or more persons or
groups to be notified when this alert condition is satisfied. To add a receiver,
click the Add Receiver button to open the Add Receiver Selection panel. For
information about adding receivers, see notifications.
26. Optionally click the Roles button to assign roles for the alert. See Security
Roles.
27. Optionally click the Comments button to add comments to the definition.
28. Click the Apply button and then the Done button when you have finished.

If there are more than fifteen SQL errors in the last three hours by any
application user, then an alert will be sent to the designated receiver.
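
As a hedged illustration only (the exact panel options can vary by version), one
plausible set of values that implements this scenario is: Accumulation Interval = 180
(the last three hours of data), Run Frequency = 10 (re-evaluate every 10 minutes),
Threshold = 15 with Alert when value is set to greater than, and the threshold
evaluated per line (as described in the general correlation-alert steps earlier in this
section) so that each application user returned by the query is tested separately.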

How to terminate connections via threshold alerts


Terminate or quarantine connections in real time using threshold alerts.

Move from a purely passive alerting system to an active prevention system, even
in cases where complex conditions need to be expressed with real-time rules.

Guardium policies and policy rules can be used to terminate or quarantine
connections in real time. The Guardium system has threshold alerts (also called
correlation alerts) that can be used for defining complex conditions, which are
based on historical information. Such alerts are based on queries – the same
queries that also contribute to reports and audit processes. In this how-to topic,
you learn how to use such queries to terminate and quarantine connections.

Summary of procedure

The scenario that you will implement involves data leak prevention. You will
terminate a user connection if that connection extracted a defined number of Social
Security records, for example, more than 100. Most monitoring systems can tell
when a user extracts many records in a request – as can policies in Guardium
systems.

However, an attacker who knows that the monitoring system alerts the security
administrator will avoid requesting a large result set, something like
select * from CC_DATA. Instead, the approach is to “sip” data slowly. A way to do
this is to make 100,000 separate requests with each one extracting one record. This
approach does not trigger the monitoring system limits and does not trigger a rule
that looks for anyone extracting more than 100 records. Additionally, it is possible
to put a “sleep 10” value between each request so that if the monitoring system is
looking at the streams from a volumetric perspective, the monitoring system limits
are still not triggered.

To prevent this kind of data leak, three distinct capabilities of the Guardium system
are used:
1. The ability to define threshold-based alerts.
2. The ability to automatically invoke GuardAPI functions as a result of query
lines.
3. The ability to quarantine a user and terminate a connection using a GuardAPI
command.

Follow these steps:


1. Define a Guardium policy that logs the number of Social Security numbers that
are extruded from the database.
2. Install this policy.
3. Define a query (question) that indicates a problematic condition. A line that is
returned by the query implies that this session should be terminated and the
user quarantined until further notice.
4. Run the query every few minutes and automate the quarantine by using an
audit process. Use quarantine because if a user did something wrong, you want
not only to terminate the misbehaving session but also to disallow that
user from making further connections until a security operator can look into it.

5. Use the auto-API-invocation capability of the audit process to activate the
quarantine.

Step 1: Define the appropriate Guardium policy


The policy defines the Social Security number as an extrusion pattern and logs full
details for such extrusions. Do this from Tools > Config and Control > Policy
Builder.

Step 2: Install the policy
Do this from Setup > Tools and Views > Install Policy.

Step 3: Define the query and test it

The query lists server/instance/user information for sessions that have a sum
of returned record counters larger than some number – say, 100.

Do this from Tools > Report Building > Access Tracking.

Since the query should return sessions that need to be quarantined, the main entity
is Session.

Add the three required quarantine attributes (Server IP, Service Name, and DB User).

Add a condition with the HAVING qualifier to count the number of returned
matches, and set the condition to be larger than 100. If you want to use a HAVING
clause, you must add a count so that the correct GROUP BY clause is added.
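
Conceptually, the query that the builder produces behaves like the following SQL
sketch (the table and column names here are purely illustrative and are not the
actual Guardium schema):

SELECT server_ip, service_name, db_user
FROM session_data
GROUP BY server_ip, service_name, db_user
HAVING SUM(records_returned) > 100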

As an example, the output of the report, once a user extracts more than 100 records,
looks like this:

This report is almost good enough but not quite. This is because the GuardAPI for
quarantining users requires a time stamp for how long to quarantine the user. In
this case, it is a constant of 2099-12-31 23:59:59.

Step 4: Use the Guardium API


To generate a field that has a constant value, you need to know a little
more about GuardAPIs. Most attributes and entities that are defined in the
Guardium system are pre-defined. However, sometimes it is necessary to provide
constants as columns in reports. One of the Guardium APIs does precisely that - it
adds a constant attribute.

To add this to the Client/Server entity, log in as an admin user and click on the
Guardium Monitor tab (if you have a centrally-managed environment, do this on
the Central Manager).

Click Query Entities & Attributes menu item and open the Customizer.

Select Logger% for DOMAIN_LIKE and Client% for ENTITY_LIKE.

Then, double-click one of the lines, click “invoke” and then click
create_constant_attribute.

Enter the constant value and the name of the attribute (in this example, the name
is Forever):

This creates a new constant attribute:

Now go back to the query editor and edit your report. A new attribute (Forever)
will now be present.

Add this new attribute to your report:

Generate the report and add it to a pane (you can click Add to Pane; this performs
all operations on your behalf).

Now look at the quarantine GuardAPI that you will use:


a1.corp.com> grdapi create_quarantine_until --help=true
ID=0
function parameters :
dbUser - required
quarantineUntil - required
serverIp - required
serviceName - required
ok
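
As a hedged illustration only, a manual invocation of this command from the CLI
might look like the following (the parameter names come from the help output
above; the values, the quoting of the timestamp, and the prompt are hypothetical):

a1.corp.com> grdapi create_quarantine_until dbUser=JOE serverIp=192.0.2.15 serviceName=ORCL quarantineUntil="2099-12-31 23:59:59"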

Now you can see why you had to add a new attribute. The quarantine GuardAPI
requires a time stamp that defines how long the user on the instance is
quarantined. However, since this is a new attribute that the system does not yet
know about, this attribute needs to be mapped to the quarantine GuardAPI. The
system must know that it can use this new attribute as the time stamp for the
quarantine.

Go back to the Query Entities and Attributes menu screen (on the Guardium
Monitor tab), double-click the line for the Forever attribute, and click invoke.

Then, click create_api_parameter_mapping. This is a GuardAPI command that
maps attributes.

Add the quarantine GuardAPI name and the parameter to which to map:

Finally, you need to enable the GuardAPI call to create_quarantine_until from
your report. Open the Report Builder and find your report.

Click API assignment and move the GuardAPI command from one list box to
the other.

Click Apply.

Your report is now quarantine-enabled.

From each line on the report, you can now double-click and invoke a quarantine
GuardAPI command:

Click on create_quarantine_until:

If you invoke this, you will quarantine the user forever. You see this quarantine on
the Daily Monitor tab on the quarantined connections report:

Connections Quarantined report

You can invoke a GuardAPI command on this report to remove a user from
quarantine.

Step 5: Automate quarantine procedure

The last part of this how-to topic involves automating this procedure. Running a
report and invoking a GuardAPI manually works, but only if there is a user who monitors
such reports. Instead, what you might want to do is run this report every period
(for example, every 10 minutes) and for each line invoke the GuardAPI. This can
be done automatically using an audit process.

Create a new audit process (Tools > Config and Control > Audit Process Builder)
and name it.

Add a task by choosing the report that you created. Since you run the report every
10 minutes, pick an appropriate from/to period. Pick the GuardAPI from the pull
down menu:

Make the audit process active. You do not need any receivers because you only
need each line in the report to trigger a GuardAPI invocation.

Save your audit process.

Test it using Run once now – you should see that a quarantine record is added to
the report under the Daily Monitor tab.

Once everything works you can schedule this audit process for continuous
running.

File Activity Monitoring


You can use Guardium file activity monitoring to extend monitoring capabilities to
file servers.

File activity monitoring is similar to database activity monitoring in many respects.


In both cases, you discover the sensitive data on your servers and configure
policies to create rules about data access and actions to be taken when rules are
met.

File activity monitoring consists of the following capabilities:
v Discovery to inventory files and metadata.
v Classification to crawl through the files to look for potentially sensitive data, such
as credit card information or personally identifiable information.
v Monitoring, which can be used without discovery and classification, to monitor
access to files and, based on policy rules, audit and alert on inappropriate
access, or even block access to the files to prevent data leakage.

File activity monitoring (FAM) uses a discovery agent called a file crawler to
inventory the files on each server and identify sensitive data within the files. The
discovery agent/ file crawler gathers the list of folders and files, their owner,
access permissions, size, and the date and time of the last update.

FAM uses decision plans to identify sensitive data within files. Each decision plan
contains rules for recognizing a certain type of data. By default, FAM uses decision
plans that identify data for SOX, PCI, HIPAA, and source code. You can create
your own decision plans, and you can activate and deactivate decision plans to
focus on the types of sensitive data about which you are concerned. Think of this
as analogous to the classification process used with databases. Decision plans are
analogous to classification policies.

The discovery agent/ file crawler sends file metadata and data from its
classification process to the Guardium system. You can view that data in reports or
in the File version of the enterprise search function.

If classification is required, see the requirements for IBM Content Classification
Version 8.8, in this IBM technote: http://www-01.ibm.com/support/
docview.wss?uid=swg27020838

Note: FAM discovery and classification cannot be installed if there is no S-TAP
installed on the Guardium system.
File Activity Monitoring Value Proposition
Ensure integrity and protection of structured and unstructured sensitive
data
v Discover where your sensitive data resides (through metadata collection
and classification).
v Prevent unauthorized access to your files and documents (Continuous,
policy-based, real-time monitoring of all file access activities, including
actions by privileged users.)
Meet regulatory compliance in a cost effective way
v Automate and centralize controls, provide audit trail.
v Achieve compliance with diverse regulations such as HIPAA, PCI DSS,
various state-level and national privacy regulations.
Scales with growing data volumes and expanding enterprise requirements
v Extensive heterogeneous support across all popular systems
Use case 1
Critical application files can be accessed, modified, or even destroyed
through back-end access to the application or database server

Solution: File Activity Monitoring can discover and monitor your
configuration files, log files, source code, and many other critical
application files and alert or block when unauthorized users or processes
attempt access.
Use case 2
Need to protect files containing Personally Identifiable Information (PII) or
proprietary information while not impacting day-to-day business.
Solution: File Activity Monitoring can discover and monitor access to your
sensitive documents stored on many file systems. It will aggregate the
data, give you a view into the activity, alert you in case of suspicious
access, and allow you to block access to select files and folders and from
select users.
Use case 3
Need to block back-end access to documents managed by your application.
Solution: File Activity Monitoring can discover, monitor, and block
back-end access to your documents, which are normally accessed through
an application front-end (for example, web portal).
File Activity Monitoring Workflow
1. Discover
Collect and store metadata. File Name, File Size, Date Created, Owner,
Read User, Write User, etc.
Identify user rights, privileges, permissions.
Supports all file types.
2. Classify
Scan file content.
OOTB Classification: PCI, HIPAA, SOX, Source Code
Customize classification rules.
Supports: Text, HTML, XML, CSV, Office, PDF, Log Files, Source Code
3. Monitor/Audit/Block
Create and apply policies for ongoing monitoring and protection.
Operations monitored: Read, Write, Execute, Delete, Change Owner,
Permissions, Properties
Block file access to unauthorized parties.
View and search your data.
Predefined reports: Users privileges, File privileges, Count of activity
per user, Count of activity per client, Files open to “public”, Dormant
users, Dormant Files, etc.
Components and architecture
The main components of the file activity monitoring solution are:
On the file server:
v A FAM Discovery agent. This agent is required if you want to do file
discovery and classification. It is not required for monitoring activity.
Install this agent using the Guardium Installation Manager (GIM) just as
you would any other bundle. Using GIM, you configure this agent to do
file discovery (a basic scan) and optional classification. To minimize
impact, the scan runs in offline mode, and not in real time. Also, after
the initial scan, subsequent scans, running on scheduled time, will only
pick up new and changed files.
– The basic scan includes Owner, Size, Last Change, Creation time, and
Access Privileges to user or group.
– For classification, you use sets of classifier rules known as decision
plans. Default decision plans exist for HIPAA, PCI, SOX, and source
code and can be activated and configured through the Guardium
Installation Manager (GIM).
v A file monitoring agent called an S-TAP (for “software TAP”). This agent
is used for ongoing monitoring, alerting, and blocking of file access. Use
Guardium policy rules to specify which file servers and files to monitor
and what actions to take if policy rules are violated, such as logging the
violation, alerting, or blocking access.
This capability is embedded in the same S-TAP agent that is used for database
activity monitoring; if you have licensed both capabilities, you can use
the same S-TAP agent for both file and database activity monitoring.
v The Guardium Installation Manager (GIM) client, which picks up agent
configuration changes you specify on the GIM GUI on the Guardium
appliance (collector or central manager).
v IBM Classification Workbench is a Windows application you can use to
create your own decision plans, which you can upload to the Guardium
collector appliance. Also use IBM Classification Workbench to edit an
existing decision plan.
On the Guardium appliance:
v Guardium installation manager (GIM) server and user interface, which is
used to install the discovery and S-TAP agents on the file server and for
configuring those agents.
v The file activity monitoring policy builder, with which you create policy
rules and actions for file activity monitoring.
v File enterprise search, which includes an Entitlement tab to see the
results of the FAM discovery (file metadata, activity, violation, error,
classifications).
v Report builder, which you can use to create reports on audited file
activity.
Notes - UNIX/Linux
v New device node and link
/dev/fsmon_${rev}
/dev/guard_fsmon
v Runs independently from its own thread inside UNIX S-TAP
v Has an entirely separate connection to the primary Guardium system
than the rest of UNIX S-TAP
Feed connection, typically on port 16022 (clear) or port 16023 (TLS)
v Logs to a different file than UNIX S-TAP
/tmp/guard_stap.fam.txt
Mirrors the debug level of S-TAP, but it can be changed only when S-TAP starts;
set the debug level to 4, either in the .ini file or by running S-TAP by hand, to
get FAM debug output.

FAM logs and utility output are collected in the V10 guard_diag script.
v Policies are pushed to KTAP and do not require a constant connection to
the Guardium system to enforce, but a connection is required to send
alerts based on violations
v Only supported on Linux, AIX 6, and AIX 7 for V10.
v A block enforced by the operating system and not FAM on Linux is not
logged in the GDM_EXCEPTION table.
Notes - Windows
v New Windows service (STAPat) and new driver (fsmonitor) to support
FAM.
v Service and driver implement file monitoring and blocking and come as
part of default Windows S-TAP install.
v Note that discovery and classification for FAM are done by the FAM
discovery agent, which is an optional GIM package that needs to be
installed separately.
v All file activity events are sent to the Guardium system over ports
16022/16023(TLS) – universal feed, from the STAPat service.
v File activity rules (as defined in policy) are sent to and saved locally by
Windows S-TAP/driver, unlike DAM policy and rules.
v Configuration (FAM rules) is saved in the registry:
HKLM\SYSTEM\CurrentControlSet\services\Fsmonitor\Parameters
v A block enforced by the Windows operating system is logged in the
GDM_EXCEPTION table.

Installing FAM components


To monitor files on a file server, begin by installing the S-TAP bundle.

Before you begin

This section describes using GIM to install the FAM components. After the GIM
client is installed on the file server, you can easily use GIM to install the modules
you need on the file server:
v FAM discovery agent (also known as FAM bundle or FAM agent): Required for
file discovery and optional classification.
v S-TAP: Required for file monitoring and policy enforcement

On a Windows workstation or file server, install the IBM Content Classification
Module if you want to create your own classification rules (decision plans) and
package them into a custom decision plan.

About this task


You can install the FAM discovery agent/ file crawler together with the Guardium
Installation Manager (GIM) client, and with an S-TAP if you wish.

Procedure
1. On the collector, enable the population of discovery and classification results
into enterprise search by running the following GuardAPI command at the CLI
prompt:
grdapi enable_fam_crawler [extraction_start] [schedule_start]
[activity_schedule_interval] [activity_schedule_units]
[entitlement_schedule_interval] [entitlement_schedule_units]
Example: The following command sends updated discovery and classification
results to enterprise search every 2 minutes for classification data and every day
for entitlement information:
grdapi enable_fam_crawler activity_schedule_interval=2 activity_schedule_units=MINUTE
entitlement_schedule_interval=1 entitlement_schedule_units=DAY
To enable the violations in enterprise search, run:
grdapi set_enterprise_search_options
2. Install the GIM client on the file server. Instructions are included in the online
help. Click on the question mark in the banner and select Guardium Help.
Navigate to the Guardium Installation Manager section.
3. Install S-TAP for file monitoring by following steps:
a. Install the GIM client on the file server using the instructions in the
Guardium online documentation. Click on the question mark in the banner
and select Guardium Help.
b. In the Guardium UI, upload the S-TAP module into the appliance by
navigating to Manage > Module Installation > Upload Modules. Choose
the correct S-TAP module for your platform.
c. In the Guardium UI, import the S-TAP module. Go to Manage > Module
Installation > Import modules.
d. Using the GIM GUI, install the S-TAP module onto the file server.
Navigate to Manage > Module Installation > Setup by Client. To see all
registered clients, click Search.
e. Select your file server and then click Next.
f. There are no S-TAP parameters specifically for file activity monitoring. You
may want to ensure that STAP_SQLGUARD_IP is pointing to the correct
collector.
g. Click Apply to Selected then click Install/Update. You can install now or
schedule this for a later installation.
h. Verify that the S-TAP installed successfully by viewing the Guardium
report, S-TAP Status Monitor (add the report from My Dashboards). Look
for the S-TAP host to have the :FAM suffix.
4. In the Guardium UI, upload the FAM module into the appliance by
navigating to Manage > Module Installation > Upload Modules. For UNIX, the
FAM Discovery bundle will have a name like: guard-bundle-FAM-
foxhound_r*****_trunk_*****.gim For Windows, the name will be something
like: guard-FAM-guardium_r*****Windows-Server-x86_x64_ia64.gim Choose
the correct module for your file server OS.
5. In the Guardium UI, import the FAM module. Go to Manage > Module
Installation > Import Modules
6. Using the GIM GUI, install the module onto the file server. Navigate to
Manage > Module Installation > Setup by Client. To see all registered
clients, click Search.
7. Select your file server and then click Next.
8. Choose the FAM module you uploaded. (For Windows, you may need to
uncheck the Display Only Bundles checkbox.)
9. At this point, you will need to configure parameters for the FAM discovery agent,
including FAM_SOURCE_DIRECTORIES for the directories you want to scan (UNIX
only) and STAP_SQLGUARD_IP for the IP or host name of the Guardium
collector (an illustrative set of values is shown after this procedure). By default,
the agent will only do basic scanning for entitlement information. To enable
scanning based on decision plans, such as for SOX or HIPAA, you need to set
FAM_IS_DEEP_ANALYSIS to true and specify which decision plans you want it
to use. By default, it will use all of the default decision plans. The default
schedule for the scanning is every 12 hours, but
you can change this and change the start time using GIM parameters
FAM_SCHEDULER_HOUR_TIME_INTERVAL, FAM_SCHEDULER_START,
and FAM_SCHEDULER_REPEAT.
10. Click Apply to Selected then click Install/Update, where you can install
immediately or schedule a later time.
11. Verify that the FAM discovery agent installed successfully by viewing the
Guardium report, S-TAP Status Monitor (add the report from My Dashboards).
Look for the FAM_Agent suffix in the IP address of the S-TAP host.
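
The following is a hedged, illustrative set of FAM discovery agent GIM parameter
values for step 9 (the directory, collector address, and decision-plan selection are
hypothetical; the parameter names are listed in the GIM parameter table later in
this section):

FAM_ENABLED=1
FAM_SOURCE_DIRECTORIES=/data/shared
STAP_SQLGUARD_IP=collector.example.com
FAM_IS_DEEP_ANALYSIS=true
FAM_ICM_CLASS_DECISION_PLANS=PCI{CreditCard,PCI_match}
FAM_SCHEDULER_HOUR_TIME_INTERVAL=12
FAM_SCHEDULER_REPEAT=true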

Results

Discovery and Classification results - when the installation of the FAM discovery
agent/ file crawler is complete, a basic run of the file crawler begins, using the
initial path that you specified during the installation. This process gathers the list
of folders and files, their owner, access permissions, size, and the date and time of
the last update.

Viewing file data


You can use Guardium enterprise search to view data about file access.

To view file access data, choose File in the dropdown list in the banner. This action
opens the enterprise search function and displays file data. Enterprise search must
be enabled on your Guardium system in order to view this data. The FAM
component on the Guardium system must also be enabled.

To view data that is sent by the File Access Monitoring (FAM) discovery agent,
open the Entitlement tab. This tab displays entries that are based on the decision
plans that are being used by the FAM classifier to identify sensitive data. The
Classification Entities column shows the decision plan that caused this file to be
identified as sensitive. From this view, you can choose an entry and add it to a
group, which you will use in one or more policies. You can also create a policy that
uses an entry to form its first access rule. You can create a new policy, or create a
rule and add it to an existing policy.

To view data that is generated based on file access policies that you have created,
after following the step procedure in the Creating a FAM policy rule section, open
the Activity tab. This tab displays entries that show the file name and path and the
type of access that was detected.
Viewing discovery and classification results
To obtain file metadata and optional classification results, the FAM
Discovery agent must be installed and configured on the file server and
actively sending data to the Collector. On the collector, you can see the
results in two ways:
v From the enterprise search UI. To populate enterprise search, you must
use the following command on the Guardium collector: grdapi
enable_fam_crawler. The enterprise search UI is ideal for ad-hoc
browsing and searching. You can also use enterprise search results to
automatically create file activity monitoring policy rules.
v From the FAM – Entitlement report. Reports are ideal for creating
auditable records. Using the audit process builder (Under the Comply
tab), you can schedule reports to run periodically and send report results
to reviewers.
File activity monitoring

File activity monitoring is a separate task from file discovery. It relies on an
S-TAP that runs on the file server. For NFS volumes, it is important to have
an S-TAP installed and configured on all machines that access those
volumes.
The S-TAP is dynamically configured with security policies that you create
using the Guardium UI. The S-TAP monitors file accesses and sends
activity that matches the security policy rules criteria to the Guardium
collector where it is stored in the Guardium repository. This is different
than the way typical database activity monitoring works in that only
activity that is specified in the security policy is sent to the collector.
When an event is recorded in the Guardium repository, it means it has
been audited. It is possible to raise the level of awareness for certain
activities by alerting on them or logging them also as policy violations,
which can be seen in a separate report, FAM – Access, or in a separate tab
in the enterprise search UI.
Access to files can also be blocked, even if the operating system
permissions allow access. Because the file monitoring rules are activated in
the S-TAP, blocking occurs immediately. The data requested by the user is
simply never read from disk; the S-TAP blocks and prevents the operation.
Important: Windows and Linux: Linux ROOT user and Windows
Administrator activities are not monitored or blocked by File Activity
Monitoring.
File activity monitoring policy
Use a file activity monitoring policy to specify how Guardium should
handle different file activity events.

Note: FAM is limited to one action per rule.


A policy consists of a set of ordered rules, each of which contains criteria
and actions. For example, you may want a policy that does the following:
1. Log a policy violation if John writes into the CONFIDENTIAL folder
2. Block a group of users from deleting the file SALARIES.XLS
3. Send an e-mail to Krishna if JENNY reads from any files that begin
with the name sample*
4. Audit all accesses to any files that have been classified as containing
sensitive data related to PCI
Groups: Guardium uses the concept of groups for policy and report
creation.
Guardium groups are created and maintained on the Guardium collector
or Central Manager. Do not confuse Guardium groups with file system
groups.
It is recommended that you consider a naming strategy for your groups,
including groups of data sources (file servers), groups of files (such as by
sensitivity level or a combination of sensitivity level and application), and groups
of users (for example, a list of all known “authorized” users, or users with special
privileges).
FAM policy rule actions
To access the FAM policy builder, click Protect > End-to-End Scenarios >
File Activity Monitoring.

Create a new policy by providing the following information:
v Use the New tool bar icon to create a new rule.
v Provide a rule name.
v Choose datasource.
v Define rule action.
v Define rule criteria.
The following policy rule actions are available:
Table 15. FAM policy rule actions
Alert and audit - Log the event and send a notification to SYSLOG, email, a custom alerter, or SNMP.
Audit only - Log the event.
Block, log violation, and audit - Block access to the object, log a policy violation, and log the event. A blocking action requires an alert configuration as well.
Ignore - Do not log this event.
Log as violation and audit - Log this as a policy violation and log the event.

Rule order
The ordering of rules in the security policy is very important. The rules are
sent to the S-TAP as a set and are processed strictly in order. Any given
user access is checked against each rule in the policy in order. The first rule
that meets the criteria of this file access is applied and subsequent rules are
ignored. Let us say you have two rules:
v Rule A: audit only all access to /data/*
v Rule B: block, log violation and audit user 'joe' from accessing
/data/salaries
If you put Rule A first, and Joe tries to read /data/salaries, there is no
need to go to the next rule, and Joe will be audited. If you put Rule B first,
Joe is blocked from accessing /data/salaries and there is no need to go to
the next rule.
In most cases, put the most specific rule first and the most general rule
last.
Rule criteria
For any given file access, rule criteria are used to evaluate whether a
particular action should be taken. For any datasource or group of
datasources (file servers), the rule criteria that you can specify include:
User: This is the OS user who is accessing files. This can also be a group of
users, as defined in a Guardium group. If this is left blank then the rule
applies to all users (except root).
File Path: This can be a Windows or UNIX file path or even an individual
file or group of files. This cannot be blank (except when removable media
is selected).
You can use wild cards in the name specification:

v The '*' character matches any number of any characters
v The '?' character matches one single character
v For UNIX, use back slash to escape * and ?
UNIX examples: To match all files on disk, enter /*.
To match /tmp/My*File.txt exactly, use /tmp/My\*File.txt
To match any file with a .txt extension in /tmp, use /tmp/*.txt
Windows: For Windows, you must specify the drive, such as C:\.
Examples:
To monitor all files on the C drive, enter C:\ and mark the Monitor
subdirectories checkbox.
To match any file with a .txt extension in C:\tmp, use C:\tmp\*.txt
Hint: Wild cards take extra processing. Excessive use of wild cards will
impact performance.
Access commands: Because there are hundreds of file system commands,
they are grouped into the following categories:
READ
WRITE
EXECUTE
DELETE
FILEOP – This includes any calls that affect file metadata such as change
file ownership, change file permissions, and similar calls.
Some calls, such as get system time, do not affect files at all and are
ignored.
These categories are fixed in the system and cannot be changed. It is
possible to create a Guardium group that contains these categories in any
combination you want, however, and use that group in the security policy.
For example, you can create a Guardium group that contains WRITE and
EXECUTE as members.
If you leave this blank, it means all file system commands are counted as a
match.
Example:
The FAM rule pattern is:
/FAM*
In other words:
Directory: /
File name: FAM*
The rule in place has subdirectories selected:
Subdirs: Yes
The file accessed is:
/guardium/modules/SUPERVISOR/10.0.0/FAM.output

This matches: the file name, FAM.output, matches the pattern FAM*, and the file is
in a subdirectory of the given directory '/'.

Creating a FAM policy rule


You can create a new rule from the list of enterprise search results, or from the
FAM policy builder, and use values from the results to prefill rule values.

Before you begin

To create rules from the quick search results, you must install FAM on one or more
file servers. FAM must send information to your Guardium server, so that it
appears in the quick search results.

About this task

FAM applies policy rules to data that is sent from your file servers. You can use
values, such as datasource names, user names, actions, and file paths, from your
quick search data to create policy rules.

Procedure
1. Choose File from the dropdown list in the product banner and click the search
icon to open the Quick Search results page for file data.
2. Open the Entitlement tab. Click Details to see individual entries.
3. Choose one or more entries in the results that you want to use to populate a
rule. You can use the Select all check box to include all the entries that are
currently displayed (not all the entries in the database).
4. Right-click and choose Add Policy Rule. The Build Rule dialog is displayed.
Fields in this dialog are pre-filled with values from the entry that you selected.
If you selected multiple entries, a group is created that contains the values from
those entries. You can create a rule that is to be added to an existing policy, or
create a new policy that includes your new rule.

Note: An overly broad rule (a rule that monitors too many files) will overload
the system and increase processing and response time.

Note: A FAM rule can have more than one pattern in it. To protect both a
directory and its contents, define a rule with two patterns /FAMtest/* and
/FAMtest.

Note: All FAM rules must be explicitly specified.


5. Choose datasources, actions, and criteria. Overwrite any values that you want
to change. Click Edit to modify each field.
6. To create a new policy and install it, click Create and Install. To create the
policy but not install it, click OK.

Creating a decision plan


Decision plans are used to identify sensitive content in files. The Guardium FAM
discovery agent provides default decision plans, and you can also create your own
or edit existing ones.

Before you begin
Install IBM Content Classification 8.8 on a Windows workstation that can be
connected to your Guardium environment.

During File Activity Monitoring installation, the GIM installation user must configure
the ICM Decision Plan setting on the File Activity Monitoring GIM configuration page.

The user must configure the list of Decision Plans (categories) with entities (NVP
fields) for each Decision Plan, delimited by colons.

This configuration is used by File Activity Monitoring for content classification.

All possible entities for each Decision Plan template can be configured during the
File Activity Monitoring installation.

After File Activity Monitoring installation, there are four Decision Plan templates
available: HIPAA, PCI, SOX, and Source.
v HIPAA - Decision Plan used for finding medical information
v PCI - for finding Credit Card Numbers
v SOX - for financial documents

The "Source" decision plan refers to two knowledge bases (CodeKB and
DocumentTypeKB) which are loaded by default once the Source decision plan is
configured.

Here is the list of possible entities for each Decision Plan that is supplied out of the
box with File Activity Monitoring; these can be configured via GIM.

HIPAA

SSN, Name, License, GovermentID, PassportContext, BankAccount, Address,
IPAddress, EmailAddress, URL, Phone, CreditCard, possibleHealthPlan,
Confidential_match, HIPAA_match

PCI

SSN, Name, License, GovermentID, PassportContext, BankAccount, Address,
IPAddress, EmailAddress, URL, Phone, BankAccountContext, CreditCard,
CreditContext, containCardIssuer, PCI_match, Confidential

SOX

SSN, Name, License, GovermentID, PassportContext, BankAccount, Address,
IPAddress, EmailAddress, URL, Phone, BankAccountContext, CreditCard,
CreditContext, containCardIssuer, piiMatch, Confidential, SOXContext, SOX_match

Source

containDate, hasSSN, hasBirthDate, containCardIssuer, hasCreditCard, PCIViolation,
HIPAA_Match, ConfidentialMatch, Source_match

A decision plan is a collection of rules that you configure to determine how IBM
Classification Module classifies content items. Rules consist of triggers and actions.
A trigger determines the conditions that must be met to initiate an action. An
action determines how the document is to be classified. A decision plan can also
refer to one or more knowledge bases to combine rule, keyword-based
classification with statistical, text-based classification.

A Knowledge base is a set of collected data that is used to analyze and categorize
content items. The knowledge base reflects the kinds of data that the system is
expected to handle. Before the knowledge base can analyze text, it must be trained
with a sufficient number of sample content items that are properly classified into
categories. A trained knowledge base can compute a numerical measure of an
item's relevancy to each category.

Note: ICM is not able to work with Decision Plans that have Chinese names. Content
documents in Chinese and Decision Plan rules in Chinese are supported, but
Decision Plan names in Chinese are not.

Note: Distribution of decision plans from the Central Manager to managed units is
unsupported.

About this task

For the purpose of this description, we’ll assume that your company has a
confidential project named "ProjectA." You want to identify and monitor all files
that contain this string.

Procedure
1. Use the Windows Start menu to open the IBM Content Classification 8.8
Classification Workbench.
2. In the Open Project dialog, click New....
3. In the New Project dialog, choose Decision Plan for the project type. Enter a
name for this decision plan, such as ProjectA_DP. Enter a description if you
want one.
4. In the New Project Options dialog, select Create an empty project.
5. In Project Explorer click Word and string list files. In the Word and string list
files dialog, click New... to create a new file. In the New File dialog, choose
Word list for the file type and choose a name for the file. In this example we
call the file Names. Wordlist_Names.txt appears in the list of files.
6. Double-click the file name to edit the file. Insert a single line with the string
~ProjectA~ and save the file.
7. In Project Explorer click DecisionPlan > New Group > New Rule. Change the
name of the rule to ProjectA.
8. In the New Rule dialog, open the Trigger tab. Click condition.
9. Choose Trigger when fields contains specific words or phrases. Choose
Word list file. Click OK.
10. Open the Action tab. Click Add new rule.
11. Select Advanced Actions from the Action Type list. Choose the Set content
field action. This content field is created when the specified trigger fires. The
content field can be viewed in FAM reports.

12. In the Add action dialog, enter ProjectA_match as the content field name and
enter found in the Value field.
13. Import the content set into the decision plan project.
a. Create a text document that contains the string "ProjectA."
b. In the Project Explorer, expand the ProjectA_DP project. Right-click
Content Set and choose Import Content Set.
c. Click Files from a file system folder. Browse to the file that you created in
step a. Click Next, then Next, then Next, then Finish.
14. Verify that your definition is successful.
a. In the Project Explorer, open the Content Set tab. Right-click your file and
choose Run Item through Decision Plan.
b. In the Analyzed item dialog, expand Decision Plan and the group. Verify
that Rule:ProjectA is marked [Triggered].
c. Click Content Fields.... In the Select Content Fields dialog, verify that
“ProjectA_match” is displayed in the Changed fields box, and “found” is
displayed in the content box.
15. In the Project Explorer, click Project > Save to save the ProjectA_DP project.
16. In the Project Explorer, click Project > Export to export the ProjectA_DP
project to a dpn file.
17. Use GIM to push the dpn file to the file servers where you want to use the
decision plan.

FAM configuration with GIM Parameters


Use these GIM parameters to configure FAM file discovery and classification.
Table 16. GIM Parameters with FAM
FAM_CLASSIFICATION_LANGUAGES - Internal.
FAM_CLIENT_PASSWORD - Internal.
FAM_CLIENT_SECRET - Internal.
FAM_DEBUG - 0 = OFF, 1 = ON.
FAM_DISKSPACE - The amount of disk space required to install the FAM Discovery agent. This is set to 20000 KB and is not editable.
FAM_ENABLED - 0 = FAM Discovery agent is disabled. 1 = FAM Discovery agent is enabled. 2 = restart the FAM Discovery agent.
FAM_ICM_CLASS_DECISION_PLANS - Enable the decision plans by including their plan names and their classification entities, in the form DecisionPlanName1{Entity1.1,Entity1.2,..}:DecisionPlanName2{Entity2.1,Entity2.2,..} (see the example after this table).
FAM_ICM_CLASS_THREAD_COUNT - Number of threads for the classifier to use. The default is 5 and is the recommended value.
FAM_ICM_URL - The URL of the IBM Content Classification Workbench. The default is http://localhost:18087
FAM_INSTALLER - Windows only.
FAM_INSTALL_DIR - Windows only.
FAM_IS_DEEP_ANALYSIS - False = classification is disabled; only a basic scan for entitlements will be run. True = classification is enabled; if no decision plans are enabled, only a basic scan will be performed.
FAM_SCAN_EXCLUDE_DIRECTORIES - UNIX only. Directories to exclude from discovery and classification. Wildcards are not supported.
FAM_SCAN_EXCLUDE_EXTENSIONS - Excludes file extensions, or documents without extensions, from the FAM scan. Relevant for both Windows and Linux. Configured via the FAM GIM configuration screen. The setting is case sensitive. Example of excluded extensions: pdf;txt;doc (semicolon-delimited list). To exclude documents without an extension, use the constant value "NO_EXTENSION".
FAM_SCAN_EXCLUDE_FILES - UNIX only. Files to exclude from discovery and classification. Wildcards are not supported.
FAM_SCAN_MAX_DEPTH - UNIX only. Limits the depth of the scan relative to the directories the agent is configured to start with (FAM_SOURCE_DIRECTORIES).
FAM_SCHEDULER_HOUR_TIME_INTERVAL - The interval between runs of the scan. The default is 12 hours.
FAM_SCHEDULER_MINUTE_TIME_INTERVAL - Along with the hour interval, this is the time interval between scans. For example, if you want scans to occur 12 hours and 30 minutes apart, specify 12 for the hour and 30 here for the minute.
FAM_SCHEDULER_REPEAT - True = repeat the discovery process at the specified time interval. False = do not repeat.
FAM_SCHEDULER_START_TIME - Time to activate the schedule for the discovery and classification processes. Format: dd-MM-yyyy HH:mm. For example, if you enter 01-02-2015 18:00, the scan will start at 6 PM on February 1st. If the time interval is 12 hours, the process will run every day at 6 PM and 6 AM.
FAM_SERVER_PORT - The Guardium collector port, 16022.
FAM_SOURCE_DIRECTORIES - The directory or directories to start scanning from. Wildcards are not supported. Example: /home/soonnee. There is also an option to use FILE_SYSTEM_ROOT.
STAP_SQLGUARD_IP - The IP or host name of the Guardium collector. Do not edit this value.
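
For example, a hypothetical FAM_ICM_CLASS_DECISION_PLANS value that enables
the HIPAA and PCI decision plans with a subset of their entities (the entity names
are taken from the lists earlier in this section; the selection itself is illustrative)
would be:

HIPAA{SSN,CreditCard,HIPAA_match}:PCI{CreditCard,CreditContext,PCI_match}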

Concepts for Guardium for Applications


IBM Guardium for Applications uses the Guardium infrastructure to monitor and
mask data as it travels between the application server and the client computers.
Many Guardium concepts and entities therefore relate to data masking with
Guardium for Applications.

To mask data, you must create and configure Guardium data masking polices that
specify the data that is to be masked. The procedure that you use to create data
masking policies is similar to the procedure that you use to create other types of
Guardium policies.

Policy

A policy is a set of rules and actions that are required to be performed when certain
events or status conditions occur in an environment.

A policy specifies how data is to be masked. After you create and install a policy,
application data is masked according to the rules that are specified in the active
policy.

Rule

A rule is a list of conditions and actions that are triggered when certain conditions
are met.

Screen masking rules mask data from the application before the application is
displayed on the client computer.

You can use conditions to limit the cases in which a rule is used. For example, if
you specify a set of client IP addresses for a screen masking rule, the screen
masking rule applies only to clients with the specified IP addresses.

For best performance, apply the strictest possible filters to the rules that you create.

Masking rules - Conditions fields:


v Server IP - The application server IP
v URL Prefix - The target application URL prefix
v Client IP – The client that requests the screen
v Application User name – The user that logged into the application.

Action

An action is a defined task that an application performs on an object as a result of
an event. An action is the part of a rule that defines what data is masked and how
the data is masked.

There are two types of data masking actions:


v A mask in context action masks data according to the location of the data within
the application. For example, a mask in context action can mask data in a
specific field or column on a specific page of the application. A context can be
defined either by using the selection tool to select a field or column within the
application or by manually defining the context. You can manually define the
context by using a Guardium masking script, which is entered into an action
item definition. It is easier to define a context by using the selection tool than by
manually defining the context. However, only a limited number of contexts can
be selected by using the selection tool, whereas you can use manual definitions
to select any context between elements.

Note: It is not permitted to mask an element more than once; that is,
active rules should apply to mutually exclusive sets of elements.

v A mask by content action masks data according to the structure of the data. For
example, a mask by content action can mask data that is structured like Social
Security numbers. To mask data with a mask by content action, you must define
the data to be masked by specifying a data classifier or by using one of the
predefined data classifiers. A data classifier can be a regular expression or group
of regular expressions, a specific text or group of texts, or a Guardium masking
script. You can choose to mask by content only within a defined context. For
example, you can choose to mask Social Security numbers within a notes field
on a page of the application. If you do not choose a context for a mask by
content action that is set to mask Social Security numbers, the action masks
Social Security numbers throughout the application.

When you identify content that is to be masked, you can optionally use a regular
expression to specify which part of the identified content is to be masked. For
example, you can specify that only the prefixes of email are to be masked, or you
can specify that all digits but the last four digits of a Social Security number are to
be masked.

Masking methods

When you create a data masking action, you must specify the masking method
that is to be used to mask the data.

The following masking methods can be used in data masking actions:


v Encrypt encrypts the data and displays the encrypted data in the application in
the place of the data. Encryption algorithms that use larger keys are more secure
but take longer to encrypt.
– AES 128 encrypts the data by using Advanced Encryption Standard (AES)
encryption with a 128-bit key.
– AES 196 encrypts the data by using AES encryption with a 196-bit key.
– AES 256 encrypts the data by using AES encryption with a 256-bit key.
– DES 64 encrypts the data by using Data Encryption Standard (DES)
encryption with a 64-bit key.
– Format preserving replaces the data with text that contains the basic format of
the data. For example, the email address user@example.com can be replaced with
the string jke.fds@asdfdas.ade. As in all other encryption methods, this can be
decrypted back to the original value.
v Redact removes the data from the application.
– Remove removes the data and displays blank space in the place of the data.
– Replace removes the data and displays defined replacement characters in the
place of the data.
– Cover removes the data and displays a colored rectangle in the place of the
data. Cover is available only for masking in context actions.
– Highlight highlights the data that is covered by the action. Use highlight to
test data masking policies, and then set the action to a different masking
method when you are done testing. Highlight is available only for masking in
context actions.
v Tokenize creates a token that can be matched with the masked data and
displays the token in the application in the place of the data.
– Format preserving replaces the data with a token that contains the basic
format of the data. For example, email address user@example.com can be
replaced with the token jke_fds@asdfdas.ade. The token can be matched with
the masked data.



– Serial replaces the data with a token that contains a serial number. The serial
number can be matched with the masked data.
Format preserving transformation masking capabilities
v Format preserving encryption masking capability
– Encryption - a reversible process that uses a key to transform
plaintext to cipher text. Format preserving encryption not only
encrypts the data but also preserves the original value format.
v Format preserving tokenization works by the same basic method as
format preserving encryption, only the core transformation is based on a
hash. It does not require an encryption key.
– Tokenization - a non-reversible process that substitutes a surrogate
value for the original sensitive data.
– This surrogate value is called a “token”. The token value does not
contain sensitive information - it replaces it while maintaining the
original value format. A simple example of tokenization is to generate a
serial token, for example, token_1, token_2, and so on.
v Because the tokenization map can potentially expose sensitive
information, it is masked before it is saved to the Guardium system
database.
– Tokens and their mapping to the original value are saved per session
for the sake of referential integrity; an automatic request
reconstruction mechanism unmasks traffic when needed.
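
The serial tokenization idea can be pictured with the following sketch. It is
illustrative only and is not the product implementation; the token_1, token_2
format follows the example above, and the function name tokenizeSerial is an
assumption made for the illustration.
// Illustrative sketch of serial tokenization: each distinct sensitive value is
// replaced by a serial token, and the mapping is kept so that the token can
// later be matched back to the original value.
var tokenMap = {}; // original value -> token
var nextSerial = 1;
function tokenizeSerial(value) {
    if (!(value in tokenMap)) {
        tokenMap[value] = 'token_' + nextSerial;
        nextSerial = nextSerial + 1;
    }
    return tokenMap[value];
}
// tokenizeSerial('jsmith@example.com') -> "token_1"
// tokenizeSerial('mlee@example.org')   -> "token_2"
// tokenizeSerial('jsmith@example.com') -> "token_1" (same value, same token)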

Encrypt and tokenize masking methods are reversible. Redact masking methods
are not reversible.

Group

A group is a set of same-type elements that are used to simplify the processing and
reporting of those elements. For example, you can define a group of city names or
a group of IP addresses.

A group defines objects that are to be acted upon by a rule or action. For example,
to mask city names, first create a group of city names, and then create an action
that is set to mask the city names in the group.

You can also use groups to create whitelist or blacklist access from context
rules (masking scripts).

Data classifier

A data classifier is a script, pattern, text entity, or group that defines the data to be
processed by an action. For example, to mask city names, you can create a group
of city names. You can then create a data classifier to mask city names within the
group.

Data classifiers for common classes of data elements are predefined in the system.
For example, a data classifier for email addresses is predefined. You can create
more data classifiers to meet your needs. You can define data classifiers by using
the following methods:
v Regular expression
v Guardium masking script
v Text entity
v Group of regular expressions or text entities
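
For example, a regular-expression data classifier for email addresses might
resemble the following sketch. The pattern is an illustration only; the
predefined email classifier that ships with the product may use a different
expression.
// Illustrative regular expression for classifying email addresses.
var emailClassifier = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
var notes = 'Contact jsmith@example.com or mlee@example.org for details.';
// the values matched by the classifier are the candidates for masking
var candidates = notes.match(emailClassifier);
// candidates: ["jsmith@example.com", "mlee@example.org"]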



Configure data masking policy
After the Guardium for Applications solution components are installed and
configured to work together, you can configure a data masking policy to mask
application data.

Workflow to configure a data masking policy


The basic workflow to configure a data masking policy is:
1. Access Guardium
2. Create a policy
3. Test the policy with the policy simulator and add or edit rules as needed
4. Install or reinstall the policy to enable data masking

Access Guardium

To configure masking, you must first use a browser to access Guardium.

To access Guardium from a browser, enter https://<hostname>.<domain>:<port>
into the address box, where hostname, domain, and port are the host name,
domain, and port of the application server on which the application is
deployed. The default port is 8443. Your browser might warn you of a problem
with the website's security certificate. If you use SSL to access an
application that uses a self-signed certificate, choose to continue to the
website.

If you cannot access Guardium, check these connection-related items:


v Guardium and the other solution components are started.
v You can access the Guardium computer from your computer. (That is, your
computer has a network connection to the Guardium computer. Also, if there is
a firewall between your computer and the Guardium computer, you must be
able to connect through the firewall.)
v The browser that you use to access Guardium is configured to use the proxy
settings for the network on which the Guardium computer is located.

Create a policy
A policy is a set of rules and actions that are required to be performed when certain
events or status conditions occur in an environment. A policy specifies how data
within an application is to be masked. After you install the policy, data is masked
according to the policy specifications.

Before you begin, access Guardium from a browser. If you use the selection tool,
you must have a user account and the URL for the application whose data you are
to mask. The user account must have access to the fields and columns that are to
be masked within the application.

To access the policy builder, click Protect > Security Policies > Policy Builder.

Click the Add icon.

Enter a description to identify the policy and click Apply. Click Edit Rules and
wait for Add Rules to display in the last button row.



Add at least one rule to the policy. Repeat this step for each rule that you
add. Click Add Rules > Add Screen Mask Rule. Enter a description to identify the
rule. Specify the conditions that apply to the rule. Conditions limit the
information that can be masked by the rule. Conditions can include any or all of
the following attributes:
v Application server subnet as identified by IP address and subnet mask (for
example, to specify all computers in the 192.0.2.* subnet, enter 192.0.2.0 into
the first box and 255.255.255.0 into the second box)
v Application URL prefix, either a literal value or a regular expression (for
example, http://example.com/index.php)
v Subnet of client computers that access the application as identified by IP address
and subnet mask (for example, to specify all computers in the 192.0.2.* subnet,
enter 192.0.2.0 into the first box and 255.255.255.0 into the second box)
v User IDs that are used to access the application

You can use groups in addition to or in place of a single value. You can also
select Not for an attribute to specify that the rule applies to all information
except information that is associated with the specified attribute values.

Add at least one action to the rule.


v To add an action that applies to data with a specific structure (for example, data
that is structured like a credit card number):
1. Click Manually.
2. Select MASK BY CONTENT as the action, enter a description to identify the
action, and identify the content to mask by selecting a classifier or entering a
regular expression.
3. To mask only part of the identified content, select Partial mask and enter a
regular expression that identifies the part of the content to mask.
4. To mask the identified content only in a specific part of the application, select
Add Context, click Selection Tool, and select the field or column to mask by
using the selection tool.
5. Select the masking method to use. If applicable to the masking method, also
specify the replacement character to use.
6. Click Apply.
v To add an action that applies to data within a specific part of the application,
complete one of the following tasks:
– Select fields and columns to mask with the selection tool.
– Define the context manually by using a Guardium masking script. To define
the context manually by using a Guardium masking script:
1. Click Manually.
2. Select MASK IN CONTEXT as the action and enter a description to
identify the action.
3. Click New and enter a description to identify the action item.
4. Specify the application URL prefix, either a literal value or a regular
expression.
5. Select the desired HTTP method, content type, and parsing format. For
example, select GET as the HTTP method, text/html as the content type,
and html as the parsing format.
6. Enter the script and click OK.



7. Select the masking method to use. If applicable to the masking method,
also specify the replacement character or color to use.
8. Click Apply.

Click Save.

When done, test the policy with the policy simulator to ensure that the data is
masked correctly.
Use case
Call Center outsourcing - a health insurance company outsources its call
center. Customer Service Representatives (CSRs) access company
applications remotely. Guardium for Applications is installed between the
CSRs and the applications to guarantee that application screens undergo the
masking process. CSRs use the application as usual. Sensitive information
that is not essential for CSR operations is masked out.

Select fields and columns


The selection tool is a user interface that simplifies the definition of contexts to
mask. You can use the selection tool to select a column, field, or labeled field
within an application. After you make your selection, the selection tool generates a
Guardium masking script that the masking engine uses to mask data in the
selected context.

If you use the selection tool, you must have a user account and the URL for the
application whose data you are to mask. The user account must have access to the
fields and columns that are to be masked within the application.

You can use the selection tool to define contexts in the following cases:
v To define a context for a new mask in context action.
v To define a context that limits the scope of a mask by content action.

The selection tool takes the following attributes into account when you use the
selection tool to define a context:
v The position of the selected column, field, or labeled field within the page.
v The label that is associated with a labeled field. For a more accurate context
definition, select a labeled field instead of a field where possible.
v The URL suffix and the application URL parameters that you specify as
context-significant. The selected field or column is masked only when the
application URL parameters are equal to the values that are defined in the
context. The masking engine considers only the values of the parameters and
not the order of the parameters within the URL.

To select fields and columns with the selection tool:


1. Click Selection Tool.
2. Enter the URL for the application to be masked.
3. Navigate to the page to which you are to apply the new action.
4. Select the type of context (table column, labeled field, or field).
5. Click Start Selection, move the mouse to the application, position the mouse so
that the field or column to select is highlighted, and click the field or column to
select. Selecting a labeled field is a two-step process: click the label, and then
click the field.



6. Select the masking method to use. If applicable to the masking method, also
specify the replacement character or color to use.
7. Use the check boxes after URL suffix filter to select the context-significant
parameters in the application URL. The selected field or column is masked only
when the context-significant parameters equal the values for these parameters
in the URL. Parameters that are not selected as context-significant parameters
are ignored for masking purposes. For example, the following URL is for the
Contacts module of the Sales tab of a web application. The scroll=1 parameter
specifies that the list of contacts starts with the first contact.
http://example.com/index.php?module=Contacts&parentTab=Sales&scroll=1
You use the selection tool to select the Email column in the Contacts module of
the Sales tab.
When you select the Email column, index.php?module=Contacts
&parentTab=Sales&scroll=1 is placed in URL suffix filter. Also, check boxes for
module, parentTab, and scroll are displayed after URL suffix filter but are not
selected. To specify that the Email is to be masked only in the Contacts module
of the Sales tab, select module and parentTab under URL suffix filter. Do not
select scroll, because you do not want the starting row of the list of contacts to
determine whether the Email column is masked.
8. Enter a description for the new action.
9. Click Apply.

Test a policy with policy simulator

After you add rules to a policy or edit rules in a policy, test the policy with the
policy simulator to ensure that the data is masked correctly.

Before you begin, access Guardium from a browser. You must also have a user
account for the application whose data you are to mask. The user account must
have access to the fields and columns that are to be masked within the application.

Note: The rules in installed policies are always active, even in the policy simulator.
To limit masking in the policy simulator only to the policy that is to be simulated,
uninstall all active policies before testing.

To test a policy with the policy simulator:


1. Access the Policy Rules list for the policy if necessary. Click Protect > Security
Policies > Policy Builder. Select the policy and click Edit Rules.
2. Click Policy Simulator.
3. Enter the URL for the application to which you want to apply the policy.
4. Sign into the application if necessary and navigate through the application to
ensure that the data is masked correctly.

When you are done testing the policy, you can add rules to the policy or edit rules
in the policy as necessary. If you are done adding and editing rules, you can install
or reinstall the policy.

Install a policy

When you are done adding rules to a policy or editing rules in a policy, install or
reinstall the policy to enable data masking.

Before you begin, access Guardium from a browser.



To install or reinstall a policy:
1. Access the Policy Finder list if necessary by clicking Protect > Security Policies
> Policy Builder.
2. Select the policy, click Select an installation action, and select Install to install
the policy or Install and Override to reinstall the policy.
3. Click OK to confirm.

Data masking is now enabled.

Add a rule to a policy


After you create a policy, you can add rules to the policy as needed to mask more
data.

Before you begin, access Guardium from a browser. If you use the selection tool,
you must have a user account and the URL for the application whose data you are
to mask. The user account must have access to the fields and columns that are to
be masked within the application.

To add a rule to a policy:


1. Access the Policy Rules list for the policy if necessary:
v Click Protect > Security Policies > Policy Builder.
v Select the policy, click Edit Rules, and wait for Add Rules to display in the
last button row.
2. Click Add Rules > Add Screen Mask Rule.
3. Enter a description to identify the rule.
4. Specify the conditions that apply to the rule. Conditions limit the information
that can be masked by the rule. Conditions can include any or all of the
following attributes:
v Application server subnet as identified by IP address and subnet mask (for
example, to specify all computers in the 192.0.2.* subnet, enter 192.0.2.0 into
the first box and 255.255.255.0 into the second box)
v Application URL prefix, either a literal value or a regular expression (for
example, http://example.com/index.php)
v Subnet of client computers that access the application as identified by IP
address and subnet mask (for example, to specify all computers in the
192.0.2.* subnet, enter 192.0.2.0 into the first box and 255.255.255.0 into the
second box)
v User IDs that are used to access the application
You can use groups in addition to or in place of a single value. You can also
select Not for an attribute to specify that the rule applies to all information
except information that is associated with the specified attribute values.
5. Add at least one action to the rule.
v To add an action that applies to data with a specific structure (for example,
data that is structured like a credit card number):
a. Click Manually.
b. Select MASK BY CONTENT as the action, enter a description to identify
the action, and identify the content to mask by selecting a classifier or
entering a regular expression.
c. To mask only part of the identified content, select Partial mask and enter
a regular expression that identifies the part of the content to mask.



d. To mask the identified content only in a specific part of the application,
select Add Context, click Selection Tool, and select the field or column to
mask by using the selection tool. For information on how to use the
selection tool, see Selecting fields and columns.
e. Select the masking method to use. If applicable to the masking method,
also specify the replacement character to use.
f. Click Apply.
v To add an action that applies to data within a specific part of the application,
complete one of the following tasks:
– Select fields and columns to mask with the selection tool. For information
on how to use the selection tool, see Selecting fields and columns with the
selection tool.
– Define the context manually by using a Guardium masking script. To
define the context manually by using a Guardium masking script:
a. Click Manually.
b. Select MASK IN CONTEXT as the action and enter a description to
identify the action.
c. Click New and enter a description to identify the action item.
d. Specify the application URL prefix, either a literal value or a regular
expression.
e. Select the desired HTTP method, content type, and parsing format. For
example, select GET as the HTTP method, text/html as the content
type, and html as the parsing format.
f. Enter the script and click OK.
g. Select the masking method to use. If applicable to the masking
method, also specify the replacement character or color to use.
h. Click Apply.
6. Click Save.

Edit a rule in a data masking policy

If you discover through testing that the rules in your data masking policy do not
mask data correctly, edit the rules as necessary.

Before you begin, access Guardium from a browser. If you use the selection tool,
you must have a user account for the application whose data you are to mask. The
user account must have access to the fields and columns that are to be masked
within the application.

To edit a rule in a policy:


1. Access the Policy Rules list for the policy if necessary:
v Click Protect > Security Policies > Policy Builder.
v Select the policy and click Edit Rules.
2. Click the Edit this rule individually icon for the rule to edit.
3. Edit the conditions that apply to the rule if needed. Conditions limit the
information that can be masked by the rule. Conditions can include any or all
of the following attributes:
v Application server subnet as identified by IP address and subnet mask (for
example, to specify all computers in the 192.0.2.* subnet, enter 192.0.2.0 into
the first box and 255.255.255.0 into the second box)



v Application URL prefix, either a literal value or a regular expression (for
example, http://example.com/index.php)
v Subnet of client computers that access the application as identified by IP
address and subnet mask (for example, to specify all computers in the
192.0.2.* subnet, enter 192.0.2.0 into the first box and 255.255.255.0 into the
second box)
v User IDs that are used to access the application
You can use groups in addition to or in place of a single value. You can also
select Not for an attribute to specify that the rule applies to all information
except information that is associated with the specified attribute values.
4. Edit the actions in the rule if needed. Rules that are created with the selection
tool can be edited only by changing the action items outside of the selection
tool. Therefore, if you used the selection tool to create an action, it might be
easier to delete the action and add the action again by using the selection tool.
To edit an action:
a. Click the Expand this action icon.
b. Select an item from the Action Item list.
c. Click Modify.
d. Change the properties of the action as needed and click OK.
5. Add actions to the rule if needed.
v To add an action that applies to data with a specific structure (for example,
data that is structured like a credit card number):
a. Click Manually.
b. Select MASK BY CONTENT as the action, enter a description to identify
the action, and identify the content to mask by selecting a classifier or
entering a regular expression.
c. To mask only part of the identified content, select Partial mask and enter
a regular expression that identifies the part of the content to mask.
d. To mask the identified content only in a specific part of the application,
select Add Context, click Selection Tool, and select the field or column to
mask by using the selection tool. For information on how to use the
selection tool, see Selecting fields and columns with the selection tool.
e. Select the masking method to use. If applicable to the masking method,
also specify the replacement character to use.
f. Click Apply.
v To add an action that applies to data within a specific part of the application,
complete one of the following tasks:
– Select fields and columns to mask with the selection tool. For information
on how to use the selection tool, see Selecting fields and columns with the
selection tool.
– Define the context manually by using a Guardium masking script. To
define the context manually by using a Guardium masking script:
a. Click Manually.
b. Select MASK IN CONTEXT as the action and enter a description to
identify the action.
c. Click New and enter a description to identify the action item.
d. Specify the application URL prefix, either a literal value or a regular
expression.



e. Select the desired HTTP method, content type, and parsing format. For
example, select GET as the HTTP method, text/html as the content
type, and html as the parsing format.
f. Enter the script and click OK.
g. Select the masking method to use. If applicable to the masking
method, also specify the replacement character or color to use.
h. Click Apply.
6. Delete actions from the rule if needed. Click the Delete action icon.
7. Click Save.

Limitations

Review the limitations of IBM Guardium for Applications if you encounter issues
with your data masking policies.

For better performance, ensure that policies use only one masking method for each
value to be masked. The use of only one masking method for each value also
prevents encrypted or tokenized data from being overwritten and lost by
subsequent masking. For example, if you mask a value with both format
preserving and redaction, the original value is not recoverable. If you use regular
expressions to define which data is to be masked, ensure that the regular
expressions do not overlap each other.

If a field is validated for a specific format, mask the field with the format
preserving masking method so that the masked data does not fail validation. For
example, an email address field is validated for valid email addresses. If you mask
the email address with the redact masking method, the redacted data will fail
validation, which can result in unexpected behavior or results.

Application compatibility requirements

For Guardium for Applications to process application HTTP requests without
changing the application itself, the application must conform to the following
compatibility requirements.
v Requirements for runtime masking and manual rule authoring:
– The following data formats are supported: html, text, xml, or json over http or
https. Nested formats are not supported. For example, html within json is not
supported.
– HTML data must be valid HTML 4.01 (DOCTYPE tag is optional).
v Requirements for rule authoring with the selection tool and preview mode:
– The following data formats are supported: html, text, xml, or json over http or
https. Nested formats are not supported. For example, html within json is not
supported.
– The rule administrator must use one of the following browsers to use the
selection tool and preview mode:
- Microsoft Internet Explorer 9 and later
- Mozilla Firefox 24 and later
- Google Chrome 28 and later
Turn off compatibility mode if you use Internet Explorer. If the application
requires compatibility mode to display or run correctly, you might experience
compatibility issues when you use the selection tool.
– HTML data must be valid HTML 4.01.



– The DOCTYPE tag must be present in every HTML page. Quirks mode is not
supported.
– Each tag that uses VBScript must contain the language="vbscript" attribute.
– Columns in tables that are not defined with the HTML elements <table>, <tr>
and <td> cannot be selected as table columns in the selection tool.
– A labeled field must be in the same frame as its related label to be selected by
the selection tool.

In cases where masked data might be used in subsequent requests to the server,
Guardium for Applications forces masking in a reversible and format-preserving
manner in order to preserve referential integrity.

Guardium for Applications does not support applications that are presented in
languages other than English.

Masking Engine Configurations

If your Guardium system seems to be stuck, you can restart the masking engines.

The Restart Masking Engines button is used only for troubleshooting situations. If
your Guardium system is not responding but has not completely failed, click this
button to restart certain processes. This restart might clear the problem.

Guardium for Applications Masking Script JavaScript API


The Guardium for Applications Masking Script is based on JavaScript with API
extensions. Our JavaScript API exposes a set of objects and classes for use in
JavaScript programs that run in the JavaScript engine to manipulate captured
messages. You can use this API if the selection tool does not meet all your needs
for some pages. By using this API, you can modify the content of HTTP messages
as well as other message parameters, such as the URL.

When you use the selection tool to define masking actions, it creates scripts that
are run when rule conditions are met. These scripts modify the HTTP messages
that occur with the use of the application. If this process does not give you the
results that you require, you can create your own scripts to manipulate the
contents and properties of the HTTP messages. Designing these scripts requires
that you understand the messages that are exchanged when users interact with the
applications that you want to mask.

To use your custom scripts, identify the conditions for running the scripts, then
create a mask in context action, and add one or more action items that invoke your
custom scripts. In these scripts, you can use the objects and classes that are
described here.
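
For example, a minimal custom script for a mask in context action item might
look like the following sketch. The XPath expression and the page structure
that it assumes (an HTML table whose rows carry the class "contact" and hold
email addresses in the third cell) are hypothetical; only html.xpath, html.mask,
and the dbgm debug function described below are taken from this API.
// A minimal sketch of a custom masking script for a mask in context action item.
// The XPath expression is hypothetical and must be adapted to the real page.
var cells = html.xpath('//tr[@class="contact"]/td[3]/text()');
dbgm('email cells found: ' + cells.length); // debug output to stdout
for (var i = 0; i < cells.length; i++) {
    // mask each text node according to the method stored in the current action
    html.mask(cells[i]);
}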

In addition to the objects and classes, the API provides a function that can be used
for debug purposes:
dbgm(...); //prints the supplied arguments to stdout.

For example,
dbgm('this ' + 'is' + ' a debug output'); // prints "this is a debug output"

You can insert values from the current class or object into the output string. For an
example, see the json global object.



The following notation is used in describing the objects and classes:
v [r] indicates that a property is read-only
v [rw] indicates that a property is read/write
v [] indicates that a parameter is optional.
v nnn:ttt is a property definition where nnn is a property name and ttt is its type
v "any property" means any nnn
v mmm(nnn:ttt[, ...]) is a method definition where mmm is the method name, nnn
is a parameter name, and ttt is the parameter type. The [, ...] indicates that
additional parameter:type pairs can be specified.

The Guardium for Applications JavaScript API defines objects and classes.

Javascript API objects

html

There is only one html object as a property of a global JS object.


Properties
none
Methods
v xpath(expression: String) : XmlNodeSet - run XPath query on HTML text
v mask(n: XmlNode[, attribute: String]) - mask the node or its specified
attribute according to the method stored in the current action
v remove(n:XmlNode) : void

Note: The only way to get to a specific node in an HTML document is to use an
XPath expression.

Example:
var ns = html.xpath('some xpath expression returning text nodes');
// "ns" is an object of JS class XmlNodeSet (see the classes section for more details)
// provided the node set is not empty, we can now mask text node contents according to the
// information stored in the current action
// the following lines mask the contents of the first node in the set
if (ns.length > 0)
    html.mask(ns[0]);
// the following code masks the 'a1' attribute of the second node in the set
if (ns.length > 1)
    html.mask(ns[1], 'a1');

xml
A global object representing a parsed XML message.
Properties
none
Methods
v xpath(expression: String) : XmlNodeSet - run XPath query on the XML
tree
v mask(n: XmlNode[, attribute: String]) - mask the node or its specified
attribute according to the method stored in the current action

Note: The only way to get to a specific node in an XML document is to use an
XPath expression.
Example: similar to the example for the html object.

json

A global object representing a parsed JSON message.


Properties
data: JsonNode - root node for the parsed JSON message
Methods
v mask(n: JsonNode, p: String) - mask string value in "n.p" according to a
method stored in the current action. "n" can be either a JS object or an
array. In the latter case, "p" must be a string representing an array index.
v mask(n: JsonNode, i: int) - mask string value in "n[i]" according to a
method stored in the current action. "n" must be a JSON array (not an
object)

Example:
json.data = {"p1": "v1", "p2": "v2"}; // this would entirely replace JSON in the message
json.data.p1 = {};
json.data.p2 = null;
json.data.a1 = [1, 2, "aasdf"];
json.data.a1[0] = false; // 1 -> false
json.mask(json.data.a1, 2); // "aasdf" will be masked with "*****" if the parent action
// defines "replace" masking method
dbgm(JSON.stringify(json.data)); // should print:
// {"p1": {}, "p2": null, "a1": [false, 2, "*****"]}

form

A global object representing parsed form data, typically in POST requests.


Properties
data: FormData - provides access to the actual form data (parsed
name/value list)
Methods
mask(n: String) - mask form value with name "n" according to a method
stored in the current action.

Example:
// set value in form field "p1"
form.data["p1"] = "v1";
// mask form field "p2"
form.mask("p2");
// mask all fields in the form
for (var f in form.data)
    form.mask(f);

query

A global object representing a parsed URL query part, as it appears in the browser.
Properties
data: QueryData - provides access to the actual URL query data (parsed
name/value list)
Methods
mask(n: String) - mask query value with name "n".



Example:
// set value in query field "p1"
query.data["p1"] = "v1";
// mask query field "p2"
query.mask("p2");
// mask all fields in the query
for (var f in query.data)
    query.mask(f);

text

A property of the global object of type String. Assignments to this property directly
modify the message body. However, if both the HTML tree structure and the plain
message text are modified during message processing, only modifications that are
applied to the HTML tree hold, because the modified tree is serialized back to the
message buffer and replaces its content.
Properties
none
Methods
none

Example:
text = 'this string will replace content in the message buffer';

Javascript API Classes

XmlNodeSet

Instances of this class are created by xpath() methods of html and xml global
objects. These are actually the standard JavaScript Array objects containing
XmlNode objects as their elements. Access to these elements is provided through
the [] operator as it would normally be for JS arrays.
Properties
none
Methods
none

Example:
var ns = html.xpath('some xpath expression'); // ns: Array of XmlNode objects
dbgm('number of nodes in set: ' + ns.length); // print the number of nodes in the set "ns"
var node = ns[0]; // node: XmlNode

XmlNode
Instances of this class are also created by xpath() methods of html and xml global
objects.
Properties
v name: String [r] - get node name
v text: String [rw] - get/set inner text for text nodes only
v attributes: XmlAttributeSet [r] - access node attributes
Methods
none



Example:
node.attributes['a1'] = 'attribute one'; // node is of type XmlNode; setting the 'a1' attribute value
var a2 = node.attributes['a2']; // getting the 'a2' attribute value (of type string)

XmlAttributeSet

Instances of this class are used to access the XmlNode attributes through the
attributes property of XmlNode objects. The class behaves as a regular JS Array. All
the array elements are of type String.
Properties
any property [rw] - get/set the respective attribute value for given
XmlNode object
Methods
none

Example: similar to class XmlNode.

JsonNode

This is a dummy object, completely transparent to the calling script. It serves as a
bridge between the JS JSON interface and the native JSON parser and allows
manipulation of native JSON objects from within scripts as if they were normal JS
objects.
Properties
any property [rw] - get/set property value of the underlying JS object
Methods
none

Note: JSON.stringify shows a JsonNode containing an array as if it were a regular
JS object.

FormData
Provides read/write access to the parsed form data represented as a name/value
list.
Properties
any property [rw] - get/set property value, which directly affects
associated native NameValueList object
Methods
none

Example: see object form.

QueryData

Provides read/write access to the parsed URL query data represented as a
name/value list.
Properties
any property [rw] - get/set property value, which directly affects
associated native NameValueList object
Methods
none



Example: see object query.

Incident Management
The Integrated Incident Management (IIM) application provides a business-user
interface with workflow automation for tracking and resolving database security
incidents.

It simplifies incident management by allowing administrators to group a series of
related policy violations into a single incident and assign them to specific
individuals. This reduces the number of separate policy violations that oversight
teams need to review.

Incident generation processes can be defined and scheduled to read the policy
violations log and generate new incidents. From an incident generation process,
each selected incident is:
v Assigned a unique incident number
v Assigned to a user
v Assigned a severity code
v Assigned to a category

In addition, policy violations can be assigned manually (by authorized users) to
new incidents or existing incidents from the Policy Violations / Incident
Management report.

Once an incident has been generated, administrators and other users work with
incidents from the Incident Management tab, which is included on both the admin
and user portals. From there, all other tasks can be performed (assign incidents,
send notifications, assign status, and so forth).

The Incident Management functions can be accessed from the drill-down menus of
the Incident Management reports. Each user may only have a subset of reports or
functions available, depending on the security roles assigned to the user account.

You can create your own copies of the Incident Management reports, but those
copies will not have all of the capabilities available from the pre-configured reports
on the Incident Management tab. To assign incidents, severity codes, and so forth,
use the reports on the Incident Management tab.

Define an Incident Generation Process


An incident generation process executes a query against the policy violations log,
and generates incidents based on that query. By default, the definition and
scheduling of incident generation processes is restricted to users with the admin
role.
1. Click Comply > Tools and Views > Incident Generation to open Incident
Generation Processes.
2. Click Add Process to open the Edit Incident Generation Process panel.
3. Select a query from the Query list. There are several restrictions that apply to
queries used in an incident generation process. We suggest that you open the
query in the Query Builder to verify that it satisfies the following criteria:
v The query must be from the Policy Violations domain.
v The query must have the Add Count check box checked.



v The main entity for the query must be the Policy Rule Violation entity.
v The query fields for the query must not include a SQL string (from either
the SQL entity or the Full SQL String attribute of the Policy Rule Violation
entity).
4. Select a Severity for the incident (defaults to Info).
5. Optionally enter a Category for the incident (defaults to none).
6. Optionally enter a Threshold for generating the incident. The default is one,
meaning every row returned by the query will generate an incident.
7. From the Assign to User list, select the user to whom the incident will be
assigned.
8. Enter the From and To Dates for the query. For a scheduled query, use relative
dates (for example: now -1 day and now).
9. Click Save to save the process definition. You cannot run or schedule the
process until it has been saved.
10. To run the query now, click Run Once Now.
11. To schedule the query, click Modify Schedule to open the general-purpose
scheduling utility.

Assign/Reassign to Incident
1. Double-click the policy violation to be assigned or reassigned, in one of the
Incident Management reports.
2. Select Assign/Reassign to incident from the drill-down menu. When selected,
this menu will be replaced by a new menu containing a list of open incidents
(for example, Assign to Incident #123), and one additional option: Assign to a
new incident.
3. Select an incident to assign this violation to, or select Assign to a new incident
to assign this Policy Violation to the next incident number available (they are
numbered in sequence).
A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed. If a new incident has been created, it will
be listed first in the Open Incidents report.

Assign to User
1. Double-click the incident to be assigned to another user, in one of the Incident
Management reports.
2. Select Assign to user from the drill-down menu. When selected, this menu will
be replaced by a new menu containing a list of users, and one additional
option: Unassign.
3. Select a user, or select Unassign to remove the current user assigned. When a
user is assigned, the Status Description will be Assigned, and when unassigned
the Status Description will be Open.
A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed.

Change Severity
1. Double-click the incident on which the severity is to be changed, in one of the
Incident Management reports.
2. Select Change Severity from the drill-down menu. When selected, this menu
will be replaced by a new menu containing a list of severity codes: Info, Low,
Med, and High.
3. Select the new severity code.



A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed.

Notify
1. Double-click the incident a user is to be notified about, in one of the Incident
Management reports.
2. Select Notify from the drill-down menu. When selected, this menu will be
replaced by a new menu containing a list of users.
3. Select a user.
A message is displayed when the notification has been sent to the user.

Change Status
1. Double-click the incident on which the status is to be changed, in one of the
Incident Management reports.
2. Select Change Status from the drill-down menu. When selected, this menu will
be replaced by a new menu containing a list of status codes:
v ASSIGNED - Once an incident has this status, it cannot have additional
policy violations added to it. To add policy violations, change the incident
status back to Open, add the violations, and then change the status back to
Assigned.
v CLOSED - Once an incident is marked Closed it cannot be modified, and is
no longer listed.
v OPEN - This is the initial status for a new incident.
3. Select the new status code.
A message is displayed when the change has been completed, and the Incident
Management panel will be refreshed.

Add Comments
1. Double-click the incident to which comments are to be added, in one of the
Incident Management reports.
2. Select Comments from the drill-down menu, to open the User Comment
window. For instructions on how to add comments, see Commenting in the
Common Tools book.

How to manage the review of multiple database security incidents


Incident management - track and resolve database security incidents.

About this task


Administrators can group a series of related policy violations into a single incident
and assign to specific individuals. This reduces the number of separate policy
violations that oversight teams need to review.

Prerequisites
v Create a Policy (See Policies).
v Start inspection engines (See Inspection Engine Configuration).

A security policy contains an ordered set of rules to be applied to the observed
traffic between database clients and servers.



A policy violation is logged each time that a rule is triggered. Policy violations can
be assigned to incidents, either automatically by a process, or manually by
authorized users (see Incident Management).

Summary of Steps
1. Click Comply > Tools and Views > Incident Generation to open Incident
Generation Processes.
2. Edit Incident Generation Process (Query, Severity, Threshold, Scheduling).
3. Go to Incident Management tab for reports.

Incident Management

The Incident Management application provides a business-user interface with
workflow automation for tracking and resolving database security incidents.

Incident generation processes can be defined and scheduled to read the policy
violations log and generate new incidents. From an incident generation process,
each selected incident is:
v Assigned a unique incident number.
v Assigned to a user.
v Assigned a severity code.
v Assigned to a category.

In addition, policy violations can be assigned manually (by authorized users) to
new incidents or existing incidents from the Policy Violations / Incident
Management report.

Once an incident has been generated, administrators and other users work with
incidents from the Incident Management tab, which is included on both the admin
and user portals. From there, all other tasks can be performed (assign incidents,
send notifications, assign status, and so forth).

The Incident Management functions can be accessed from the drill-down menus of
the Incident Management reports. Each user may only have a subset of reports or
functions available, depending on the security roles assigned to the user account.

Define an Incident Generation Process

An incident generation process executes a query against the policy violations log,
and generates incidents based on that query. By default, the definition and
scheduling of incident generation processes is restricted to users with the admin
role.

Procedure
1. Click Comply > Tools and Views > Incident Generation to open Incident
Generation Processes.
2. Click the Add Process button to open the Edit Incident Generation Process
panel.
3. Select a query from the Query list. There are several restrictions that apply to
queries used in an incident generation process. Open the query in the Query
Builder to verify that it satisfies the following criteria:
v The query must be from the Policy Violations domain.



v The query must have the Add Count checkbox checked. See Query Builder
Overview (Queries) for more information.
v The main entity for the query must be the Policy Rule Violation entity.
v The query fields for the query must not include a SQL string (from either
the SQL entity or the Full SQL String attribute of the Policy Rule Violation
entity).
4. Select a Severity for the incident (defaults to Info).
5. Optionally enter a Category for the incident (defaults to none).
6. Optionally enter a Threshold for generating the incident. The default is one,
meaning every "row" returned by the query will generate an incident.
7. From the Assign to User list, select the user to whom the incident will be
assigned.
8. Enter the From and To Dates for the query. For a scheduled query, use relative
dates (for example: now -1 day and now).
9. Click Save to save the process definition. You cannot run or schedule the
process until it has been saved.
10. To run the query now, click Run Once Now.
11. To schedule the query, click Modify Schedule to open the general-purpose
scheduling utility. For instructions on how to use the scheduler, see
Scheduling in the Common Tools book.

12. Assign/Reassign to Incident - Double-click on the policy violation to be
assigned or reassigned, in one of the Incident Management reports.
13. Select Assign/Reassign to Incident from the drill-down menu. When selected,
this menu will be replaced by a new menu containing a list of open incidents
(for example, Assign to Incident #123), and one additional option: Assign to a
new incident.
14. Select an incident to assign this violation to, or select Assign to a new incident
to assign this Policy Violation to the next incident number available (they are
numbered in sequence).
A message displays when the change has been completed, and the Incident
Management panel will be refreshed. If a new incident has been created, it
will be listed first on the Open Incidents report.



From the Incident Policy Violations / Incident Management report, users can:
v Assign/Reassign to Incident (create an incident from this policy violation).
v Change the severity of the incident.
v Notify one or more users about the incident.
v View reports of Client IP Activity, User Activity, or SQL from the incident.

15. Assign to User - Double-click on the incident to be assigned to another user,
in one of the Incident Management reports.
16. Select Assign to user from the drill-down menu. When selected, this menu
will be replaced by a new menu containing a list of users, and one additional
option: Unassign.
17. Select a user, or select Unassign to remove the current user assigned. When a
user is assigned, the Status Description will be Assigned, and when
unassigned the Status Description will be Open.
A message displays when the change has been completed, and the Incident
Management panel will be refreshed.
18. Change Severity - Double-click on the incident on which the severity is to be
changed, in one of the Incident Management reports.
19. Select Change Severity from the drill-down menu. When selected, this menu
will be replaced by a new menu containing a list of severity codes: Info, Low,
Med, and High.
20. Select the desired severity code.
A message displays when the change has been completed, and the Incident
Management panel will be refreshed.
Once a policy violation has been assigned to an incident, the incident is
displayed in the Open Incidents report. From the Open Incidents report, users
can perform the actions described in the following steps.

21. Notify - Double-click on the incident a user is to be notified about, in one of
the Incident Management reports.
22. Select Notify from the drill-down menu. When selected, this menu will be
replaced by a new menu containing a list of users.
23. Select a user.



When the user gets the notification, a message will be displayed.
24. Change Status - Double-click on the incident on which the status is to be
changed, in one of the Incident Management reports.
25. Select Change Status from the drill-down menu. When selected, this menu
will be replaced by a new menu containing a list of status codes:
v ASSIGNED - Once an incident has this status, it cannot have additional
policy violations added to it. To add policy violations, change the incident
status back to Open, add the violations, and then change the status back to
Assigned.
v CLOSED - Once an incident is marked Closed it cannot be modified, and is
no longer listed.
v OPEN - This is the initial status for a new incident.
26. Select the desired status code.
A message displays when the change has been completed, and the Incident
Management panel will be refreshed.
27. Add Comments - Double-click on the incident to which comments are to be
added, in one of the Incident Management reports.
28. Select Comments from the drill-down menu, to open the User Comment
window. For instructions on how to add comments, see Commenting in the
Common Tools book.
Each user portal displays a My Open Incidents report for that user. From the
My Open Incidents report, users can perform these actions on their own open
incidents.

Query rewrite
Query rewrite functionality provides fine-grained access control for databases by
intercepting database queries and rewriting them based on criteria defined in
security policies.

The modification of queries happens transparently and on-the-fly, such that a user
issuing queries seamlessly receives results based on rewritten SQL statements.

Query rewrite functionality is implemented through a combination of query
rewrite definitions indicating how queries should be changed or augmented and a
run-time context indicating the specific circumstances where the query rewrite
definitions should be applied.

Rewriting database queries on the fly allows administrators to implement several
types of access control, as illustrated by the following examples.



Table 17. Examples of access control with query rewrite.
Access control                       Original SQL          Rewritten SQL
Limiting access to rows by adding    SELECT C from T       SELECT C from T WHERE [values]
a WHERE clause
Limiting access to columns by        SELECT C1 from T      SELECT C2 from T
modifying the SELECT list            SELECT C1,C2 from T   SELECT C2 from T
Restricting database activities by   SELECT EMAIL from T   SELECT++ EMAIL from T
rewriting SQL statements to do
nothing
Restricting what users can do by     DROP TABLE T          UPDATE T SET [values]
modifying query verbs (SELECT,
INSERT, UPDATE, etc.)
Restricting what users can do by     SELECT C from T1      SELECT C from T2
modifying query objects (TABLE,
VIEW, COLUMN, etc.)

The ability to seamlessly rewrite database queries provides an extremely powerful
and flexible form of access control that allows organizations to quickly address a
wide range of security concerns. For example, query rewrite definitions can be
developed to accomplish any of the following:
v enforcing security in multi-tenancy scenarios where multiple users and
applications share a single database, but where not all users and applications
should have access to all data
v exposing a database to a production environment for testing purposes without
exposing the entire database
v rapidly correcting critical security vulnerabilities while permanent solutions are
developed at the database or application level

Please review the following sections to learn more about how query rewrite works
and how to configure it for use within your Guardium environment.

How query rewrite works


Learn how Guardium implements query rewrite functionality.

Overview
Once query rewrite has been enabled on the S-TAP for supported database servers
(see “Enabling query rewrite” on page 187), query rewrite functionality is
implemented through three policy rule actions:
v QUERY REWRITE: ATTACH
v QUERY REWRITE: APPLY DEFINITION
v QUERY REWRITE: DETACH

These rule actions are installed as access policy rules. The access policy rules
specify both query rewrite definitions that indicate how queries should be
rewritten and a run time context that indicates when those definitions should be
applied.



Once query rewrite rules have been specified, sessions are handled as follows:
1. A SQL request triggers a QUERY REWRITE: ATTACH rule, and all subsequent
activity in the session is watched by query rewrite.
2. While sessions are being watched by query rewrite, traffic is held at the S-TAP
and the session information is checked against access policy rules.
3. If a query in the watched session matches a QUERY REWRITE: APPLY
DEFINITION rule, the query is rewritten according to the definition and sent to
the S-TAP.
4. The S-TAP releases the rewritten query to the database server.
5. When a QUERY REWRITE: DETACH rule is triggered, query rewrite stops
watching activity for the remainder of the session or until another QUERY
REWRITE: ATTACH rule is triggered.

Requirements and limitations

Query rewrite functionality is supported for the following database servers:


v Oracle (Linux and Unix only)
v DB2 (Linux and Unix only)
v Microsoft SQL

See the Guardium release notes to learn more about any version limitations or
other restrictions.

Important: When query rewrite is watching a session, the sniffer is required to
send engine verdicts to the S-TAP for each SQL request in the session. This process
is asynchronous and introduces latency between the sniffer and S-TAP. Create
query rewrite rule conditions that avoid attaching to sessions for
performance-sensitive or trusted applications.
Related tasks:
“Enabling query rewrite”
Learn how to configure an S-TAP for query rewrite functionality.

Using query rewrite


Learn how to enable and use query rewrite functionality.

About this task


Follow this task sequence to enable and begin using query rewrite functionality.

Enabling query rewrite


Learn how to configure an S-TAP for query rewrite functionality.

About this task

Query rewrite functionality is only enabled when both of the following conditions
are met:
v Query rewrite is enabled in the guard_tap.ini file
v Query rewrite policy rules exist and are triggered by session traffic

This task guides you through the changes you need to make in your
guard_tap.ini file.



Procedure
1. Open guard_tap.ini in a text editor.
2. Locate the parameter qrw_installed = 0 and change it to qrw_installed = 1.
The parameter qrw_installed must be set to a value of 1 to enable query
rewrite functionality. Set qrw_installed = 0 to disable query rewrite
functionality.
3. Save your changes to guard_tap.ini.
4. On the Guardium system, log in as the CLI user and restart the inspection
engine using the restart_inspection_engines CLI command.

Results

Upon completion of this task, query rewrite functionality is enabled and will
respond to policy rules that contain query rewrite actions.

Creating query rewrite definitions


Learn how to create query rewrite definitions for data masking and access control
scenarios.

Procedure
1. Open Protect > Security Policies > Query Rewrite Builder.
2. Provide a unique and meaningful name for the query rewrite definition in the
Name field.
3. Create and parse a model query.
a. Provide a model query in the Enter a model query field.
For example, to create a rewrite definition preventing the use of SELECT *
from statements, enter SELECT * from EMPLOYEE as a model.
b. Click the DB Type menu and select a SQL parser to use with the model
query.
c. Click Parse to process the model query.
Your model query will be broken down into individual components with
each actionable component highlighted with underlined text.
4. Define how to rewrite specific components of the model query.
a. Click on an underlined component of the parsed query that you would like
to rewrite. A dialog will open to help create your query rewrite definition.
Options:
v Select and modify an individual verb, field, or object from the parsed
query
v Add a component to the query (shown as gray underlined text next to
the parsed query)
v Rewrite the entire query by clicking the gray underlined [R] next to the
parsed query
In the example SELECT * from EMPLOYEE where we want to prevent the use
of SELECT * from statements, click the * to provide rewrite content.
a. The Change from field indicates what will be rewritten.
b. The To field defines the rewritten component.
For example, to prevent the use of SELECT * from statements, replace the *
component with a list of specific objects: EMPNO, FIRSTNME, MIDINIT,
LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX.

Important:
Rewrite definitions are based on syntax, so any statement with the form
SELECT * from [OBJECT] will match the example. For instance, both SELECT
* from DEPARTMENT and SELECT * from EMPLOYEE statements match our
example.

Query rewrite definitions can be restricted to specific objects using access
policy rules. See “Defining a security policy to activate query rewrite” on
page 191 for instructions.
c. Click Save to save the rewrite definition, then click Back to close the dialog.
5. Review the output of the query rewrite definition using the Real time preview
field and make any changes as needed.
Using our example, SELECT * from EMPLOYEE is rewritten as SELECT EMPNO,
FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL,
SEX from EMPLOYEE.
6. When you are satisfied with the results, click Save to save your query rewrite
definition.
Your query rewrite definition is saved and displayed in the list of available
query rewrite definitions in the Query Rewrite Builder.
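To recap the example used in this task, the finished rewrite definition transforms the model statement as shown here; the column list reflects the sample EMPLOYEE table used above:

SELECT * from EMPLOYEE

is rewritten as

SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX from EMPLOYEE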

What to do next

Continue working with query rewrite definitions:


v Create additional definitions by clicking New and repeating the steps in this
task.
v Edit an existing query rewrite definition by double-clicking an item in the list of
available query rewrite definitions.
v Copy and edit an existing query rewrite definition by selecting the item in the
list of available query rewrite definitions and clicking Clone.
v Delete an existing query rewrite definition by selecting the item in the list of
available query rewrite definitions and clicking Delete.

When you are finished working with query rewrite definitions, continue to the
next step in this sequence to test and implement your definitions.
Related tasks:
“Defining a security policy to activate query rewrite” on page 191
Learn how to create access policy rules using your query rewrite definitions with
live queries.

Testing query rewrite definitions


Learn how to test query rewrite definitions against sample input and verify that
the rewrite definitions behave as expected.

Before you begin

To complete this task, you need to have created one or more query rewrite
definitions.

Procedure
1. Open Protect > Security Policies > Query Rewrite Builder.
2. Click Set Up Test to open a dialog and select query rewrite definitions for
testing.
a. Drag and drop items from the Available query rewrite definitions field to
the Test query rewrite definitions field.
b. Drag and drop items within the Test query rewrite definitions field to order
multiple definitions as you would within an access policy.
c. Click Save to close the dialog when you are finished.
3. Type or paste test queries into the test field.
For example, to test a rewrite definition preventing the use of SELECT * from
statements (see “Creating query rewrite definitions” on page 188), enter sample
queries such as:
SELECT * from DEPARTMENT
SELECT * from EMPLOYEE
SELECT FIRSTNME, case
when SALARY > 150000 then 'high'
when SALARY > 100000 then 'medium'
when SALARY > 80000 then 'fair'
else 'poor'
end from EMPLOYEE
DELETE from EMPLOYEE where EMPNO=100
INSERT into TEMP_EMP SELECT * from EMPLOYEE
4. Click Run Test to process the sample queries and review the results.
For example, the sample queries provided in the previous step return the
following results:
Table 18. Query rewrite test results

Original SQL: SELECT * from DEPARTMENT
Rewritten SQL: SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX from DEPARTMENT
Changed: YES

Original SQL: SELECT * from EMPLOYEE
Rewritten SQL: SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX from EMPLOYEE
Changed: YES

Original SQL: SELECT FIRSTNME, case when SALARY > 150000 then 'high' when SALARY > 100000 then 'medium' when SALARY > 80000 then 'fair' else 'poor' end from EMPLOYEE
Rewritten SQL: SELECT FIRSTNME, case when SALARY > 150000 then 'high' when SALARY > 100000 then 'medium' when SALARY > 80000 then 'fair' else 'poor' end from EMPLOYEE
Changed: NO

Original SQL: DELETE from EMPLOYEE where EMPNO=100
Rewritten SQL: DELETE from EMPLOYEE where EMPNO=100
Changed: NO

Original SQL: INSERT into TEMP_EMP SELECT * from EMPLOYEE
Rewritten SQL: INSERT into TEMP_EMP SELECT * from EMPLOYEE
Changed: NO

Important:

Rewrite definitions are based on syntax, so any statement with the form SELECT
* from [OBJECT] will match the example. For instance, both SELECT * from
DEPARTMENT and SELECT * from EMPLOYEE statements match our example.

Query rewrite definitions can be restricted to specific objects using access policy
rules. See “Defining a security policy to activate query rewrite” on page 191 for
instructions.

5. Continue entering sample queries to test your rewrite definitions. Click Set Up
Test to change or reorder the rewrite definitions used for the test.

What to do next

When you are satisfied with the test results, create a security policy to begin using
your query rewrite definitions with live queries.
Related tasks:
“Defining a security policy to activate query rewrite”
Learn how to create access policy rules using your query rewrite definitions with
live queries.
“Creating query rewrite definitions” on page 188
Learn how to create query rewrite definitions for data masking and access control
scenarios.

Defining a security policy to activate query rewrite


Learn how to create access policy rules using your query rewrite definitions with
live queries.

Before you begin

To complete this task, you need to have created and tested one or more query
rewrite definitions, and you need to be familiar with creating security policies.

Procedure
1. Open Protect > Security Policies > Policy Builder
2. Create a new policy or modify an existing policy to use your query rewrite
definitions.

Tip: Consider creating a new policy for testing query rewrite definitions. Add
your rewrite rules to existing security policies once you are satisfied with the
behavior of the test policy.
3. Click Edit Rules to begin adding rewrite rules to the selected policy, then select
Add Rules > Add Access Rule.

Note: Query rewrite rules are always classified as access rules.


4. Add a rule with a QUERY REWRITE: ATTACH rule action. Be sure to check
the Continue to next rule checkbox. This rule identifies the specific session
parameters that must be matched in order to trigger a query rewrite session,
for example a specific database user name or client IP address.
5. Add a rule with one or more QUERY REWRITE: APPLY DEFINITION rule
actions and select the query rewrite definition(s) you would like to apply. Be
sure to check the Continue to next rule checkbox. This rule identifies the
specific objects or commands that must be matched in order to apply the
rewrite definitions and modify the source query.
For example, setting the Object field to EMPLOYEE restricts a SELECT * from
rewrite definition to EMPLOYEE objects.
6. Add a rule with a QUERY REWRITE: DETACH rule action. This closes the
query rewrite session and prevents further monitoring of session traffic.
7. To install the new policy, return to the Policy Finder, select your security
policy, and choose Select an installation action > Install and Override. Click
OK when asked to confirm installation of the policy.

8. Log in to your database server and run test queries to verify that your access
policy rewrite rules are functioning as intended.
a. Log in to your database server.
b. Issue queries that should trigger (or should not trigger) the installed access
policy rules and match the criteria of your query rewrite definitions.
For example, issue SELECT * from EMPLOYEE to verify that a SELECT * from
rewrite definition is applied to the EMPLOYEE object, and issue SELECT * from
DEPARTMENT to verify that the same definition is not applied to the
DEPARTMENT object.
c. Verify that the results reflect rewritten SQL.
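As an illustration only, a test policy that applies the SELECT * rewrite definition from the earlier example could contain three access rules in the following order. The criteria shown (the database user name APPUSER and the EMPLOYEE object) are hypothetical placeholders; substitute values from your own environment:

Rule 1 - QUERY REWRITE: ATTACH; criteria: DB User = APPUSER; Continue to next rule checked
Rule 2 - QUERY REWRITE: APPLY DEFINITION; definition: the SELECT * rewrite definition; criteria: Object = EMPLOYEE; Continue to next rule checked
Rule 3 - QUERY REWRITE: DETACH; closes the query rewrite session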
Related concepts:
“Policies” on page 57
A security policy contains an ordered set of rules to be applied to the observed
traffic between database clients and servers. Each rule can apply to a request from
a client, or to a response from a server. Multiple policies can be defined and
multiple policies can be installed on a Guardium appliance at the same time.

Creating a custom report to validate query rewrite results


Learn how to create a query rewrite tracking report for auditing query rewrite
activity.

Before you begin

To complete this task, you need to have created and installed access policy rules
that apply query rewrite definitions, and you need to be familiar with creating
reports.

About this task

A query rewrite tracking report helps validate query rewrite actions in both test
and production environments.

Procedure
1. Open Reports > Report Configuration Tools > Query Builder
2. Select Query Rewrite from the Domain menu.
3. Click the icon to define a new query.
4. Provide a meaningful and unique name for the query in the Query Name
field.
For example, My query rewrite report
5. Select one of the available options from the Main Entity menu.
The following options are available:
v Query Rewrite Log
v Client/Server
v Session
v Access Period
6. Click Next to open the report builder.
7. Expand sections within the Entity List and select items to build your report.
v Click an item and select Add Field to add the item as a column in the
report.
v Click an item and select Add Condition to add a conditional filter to the
report.

v Alternatively, drag and drop items from the Entity List into the Query
Fields and Query Conditions tables to apply them to your report.
Include the following items as a starting point for a query rewrite report:
v Client/Server: Timestamp
v Client/Server: DB User Name
v Client/Server: Server Type
v Query Rewrite Log: Applied QR Definition Names
v Query Rewrite Log: Input SQL
v Query Rewrite Log: Output SQL
8. Click Save when you are done building your report.
9. Click Create Report to create the report.
10. Click Add to My Custom Reports to add the report to your custom reports.
11. Open Reports > My Custom Reports and select the report you created to
view a report of query rewrite actions.
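Using the starting-point columns listed in this procedure, a single row in the finished report might look like the following. The timestamp, database user name, server type, and definition name are hypothetical sample values; the Input SQL and Output SQL reflect the EMPLOYEE rewrite example from earlier in this chapter:

Timestamp: 2015-11-03 14:25:01
DB User Name: APPUSER
Server Type: DB2
Applied QR Definition Names: <your rewrite definition name>
Input SQL: SELECT * from EMPLOYEE
Output SQL: SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX from EMPLOYEE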

Chapter 5. Monitor and Audit
After you identify your sensitive data and take steps to protect it, you must
monitor activities that access this data. In many cases you can use the data that is
generated by monitoring to comply with audit requirements, either regulatory or
internal.

Building audit processes


Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and escalations.

Automate and integrate the following audit activities into a compliance workflow:
v The ability to group multiple audit tasks (reports, vulnerability assessments, etc.)
into one process.
v Schedule these processes to run on a regular basis.
v Run these tasks in the background.
v Write the task results to a comma-separated value (CSV) file or ArcSight
Common Event Format (CEF) file and/or forward the results to other systems
using Syslog.
v Add comments and notations.
v Assign the process to its originator for viewing (he/she will get a new item in
their To-Do list once the result is ready).
v Assign the process to other users, to a group of users, or to a role.
v Create the requirement that these assignees sign off on the result.
v Allow escalation of the result (assign to someone outside of the original audit
trail).

Transform the management of database security from time-consuming manual
activities performed periodically to a continuous, automated process that supports
company privacy and governance requirements, such as PCI-DSS, SOX, Data
Privacy and HIPAA.

Export audit results to external repositories for additional forensic analysis –
Syslog, CSV/CEF files, external feed.

The Audit Process Log report shows a detailed activity log for all tasks, including
start and end times. This report is available for admin users via the Guardium
Monitor tab. Audit tasks show start and end times; however, the start and end times
of Security Assessments and Classifications (which go to a queue) are the same.

The results of each workflow process, including the review, sign-off trails, and
comments can be archived and later restored and reviewed through the
Investigation Center.

A compliance workflow automation process answers the following questions:


v What type of report, assessment, audit trail, or classification is needed?
v Who should receive this information and how are signoffs handled?

v What is the schedule for delivery?

Further elements of the compliance workflow automation process include:


v A process definition
v A distribution plan, which:
– Defines receivers, who can be individual users, user groups, or roles. (See
Process Receivers.)
– Defines the review/sign responsibility for each receiver.
– Defines the distribution sequence by setting the Continuous flag.
v A set of tasks (see Process Task Types)
v A schedule - The audit process can be run immediately, or a schedule can be
defined to run the process on a regular basis

Process Task Types

A workflow process may contain any number of audit tasks:


v Reports, custom or pre-defined. Guardium provides hundreds of predefined
reports, with more than 100 regulation-specific reports.
v Security assessment report. The security database assessment scans the database
infrastructure for vulnerabilities, and provides an evaluation of database and
data security health, with both real-time and historical measurements. It
compares the current environment against preconfigured vulnerability tests based
on known flaws and vulnerabilities, grouped using common database security best
practices (like STIG and CIS), as well as incorporating custom tests. The
application generates a Security Health Report Card, with weighted metrics
(based on best practices) and recommends action plans to help strengthen
database security.
v An entity audit trail. A detailed report of activity relating to a specific entity is
produced (for example, a client IP address or a group of addresses).
v A privacy set. A report detailing access to a group of object-field pairs (a Social
Security number and a date of birth, for example) is produced during a specified
time period.
v A classification process. The existing database metadata and data is scanned,
reporting on information that may be sensitive, such as Social Security numbers
or credit card numbers.
v An external feed. Data can be exported to an external specialized application for
further forensic analysis.

Note: The Optional External Data Feed is an optional component enabled by
product key. If this feature has not been enabled, this choice will not appear in
Audit Task selection and the Feed Type list will be empty.

Workflow Processes, Central Management and Aggregation


On a Central Manager, reports can reference data from remote datasources
(managed units). Audit processes that use these reports will be accessible from the
Central Manager only, and will not be visible from managed units.

Workflow Automation (audit processing) for the Aggregator server now includes
the capability to create ad-hoc databases for each Aggregator task and specify only
the relevant days for that task.

Note: The ad-hoc databases for the Aggregation server may be kept in the system
for up to 14 days (depending on the value of the CLI command,
drop_ad_hoc_audit_db) for post-run analysis by Guardium support services if
required.

When defining reports in Audit Process, the number of days of the report (defined
by the FROM-TO fields) should not exceed a certain threshold (one month by
default). If this threshold is exceeded, a run-time error will result when trying to
run the audit task on the Aggregator.

It is permissible to create an audit task with a FROM-TO range that is wider than
the max_audit_reporting value (set in CLI) because Audit processes defined on
the Aggregator may be run on managed collectors (when this aggregator is a
manager). Audit tasks run on a collector unit do not have a max_audit_reporting
limitation.

So, it is valid to save tasks beyond the allowed range, but you will get a Run Time
Exception when the task is executed on the Aggregator.

The Audit Report threshold can be configured using the CLI commands show
max_audit_reporting and store max_audit_reporting. There is no warning message
when a report is created with an invalid FROM-TO range. Instead, a fixed message
appears in the Task Parameters panel in the Audit Process setup menu screen
(Tools/Audit Process Builder; open Audit Tasks to display Task Parameters).
The fixed message is:
On aggregators, only reports not exceeding the allowed time range (CLI: max_audit_reporting) will
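As a reference only, the threshold can be displayed or changed from the CLI along the following lines; the value accepted by store max_audit_reporting is shown here as a placeholder, so confirm the exact syntax with the CLI help for your release:

show max_audit_reporting
store max_audit_reporting <number of days>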

Note: When running a patch install, all audit processes are stopped.

Stop an audit process

Stopping an audit process can be performed only if the audit tasks have not been
run or are running. Stopping an audit process will not execute any more tasks that
have not started. Stopping an audit process does not deliver partial results. The
audit process stops and a stopped error message is the result. However, if tasks are
complete, stopping an audit process will not stop the sending of results.

Stop an audit process by invoking GuardAPI (place the cursor on any line
and double-click for a drill-down) from the Audit Process Log Report (on the
Guardium Monitor tab).

For any user, stopping an audit process displays only the line belonging to that
user (just the tasks, not all the details). An admin user can see all the details and
can stop anyone's audit processes. A user can only stop their own audit processes.

Note:

Queries using a remote source can not be stopped. Online reports using a remote
source can not be stopped.

Stopping an audit process does not apply to Privacy Set Audit Tasks or External
Feed Audit Tasks. If the Privacy Set or External Feed tasks have started, they will
finish even if the process is stopped.

Results Distribution
Audit process receivers will be notified via email and/or their To-Do list of
pending audit process results. You can designate any receiver as a signer for a
process, in which case the results can optionally be held at that point on the
distribution list, until that receiver electronically signs the results or releases them.
Receivers can be individual users, user groups, or roles.

Audit Process Summary

In the Audit Process Finder screen is the Audit Process Status Summary. This
section contains information on scheduled audit processes, as well as results,
receivers outstanding and errors. This summary is a consolidation of data from
multiple audit process reports.

There is also a button to delete any audit process results. See the Audit Process
Finder screen. Look for the Results button, next to the Run Once Now button
(choices of View or Delete).

Delete audit process results, but track or log who deleted the report. The
audit-delete role is used to track or log when an audit process result has been
deleted. Users with the audit-delete role can delete reports. Admin users can also
delete reports. Tracking is done through the User Activity Audit Trail report.

Note: Audit process results from remote sources are limited to 100,000 results. To go
beyond that limit, use the CLI command, store save_result_fetch_size (show
save_result_fetch_size).

Process Receivers

You can define any number of receivers for a workflow automation process, and
you control the order in which they receive results. In addition, receivers can notify
additional receivers, using the Escalate function. It is also possible to run an audit
process with no defined receivers. For example, an audit process with no receivers
that writes to syslog and has no need to review (or sign) the results.

Who can be a receiver?

On the Process Definition panel, the drop-down list of receivers includes all
Guardium users, user groups, and roles (groups and roles are labeled as such).
When a group or role is selected, all users belonging to the group or having that
role will receive the results.

If a group receiver is selected, and any workflow automation task uses the special
run-time parameter ./LoggedUser in a query condition, the query will be executed
separately for each user in the group, and each user will receive only their results.

For example, assume that your company has three DBAs, and each DBA is in
charge of a different set of servers. Using the Custom Data Upload facility, upload
the areas of responsibilities of each DBA (with server IPs) to the Guardium system,
and correlate that to the database activity domain, and then use a report in this
custom domain as an audit task. If a user group that contains the three DBAs is
designated as the receiver, each DBA will receive the report relevant for his or her
collection of servers only.

If a group receiver is selected, and sign-off is required, each group member must
sign the results separately (as explained earlier, each member of the group may be
looking at a different set of results).

A receiver can be solely an email address, and results will be sent to that email
address. When entering an email address, the user will be required to enter a user
name that will be used to filter the data. The user must be the same user that is
logged in, or a user below the logged-in user in the data hierarchy.

If a role receiver is selected, only one user with that role will need to sign the
results, and other users with that role will be notified when the results have been
signed.

Note:

When a workflow event is created, every status used by that event can be assigned
a role (meaning that events can only be seen by this role when in this status).
When an event is assigned to an audit process, it is important that every role that
is assigned to a status of this event have a receiver on this audit process.
Otherwise, it is possible that an audit result row can be put into a status where
none of its receivers are able to see this row or change its status.

If this is to occur, the admin user (who is able to see all events, regardless of their
roles) would be able to see the row and change its status. However, if data level
security is on, the admin user may not be able to see this row. The admin user
would need to either turn data level security off (from Global Profile) or have the
dataset_exempt role. It is important to configure the audit process so that all roles
who must act on an event associated with this audit process are receivers of this
audit process.

Email Notification

Optionally, receivers can be notified of new process results via email, and there are
two options for distributing results via email:
v Link Only - The email notification will contain a hypertext link to the results
stored on the Guardium system. For the link to work, you must access your mail
from a system that has access to the Guardium system. See the following section
for more information about email links.
v Full Results - A PDF file or generated CSV file containing the results will be
attached to the email, except for an Escalation that specifies a receiver not
included in the original distribution list, in which case no PDF or CSV file will
be attached. When the Full Results option is selected, care must be taken, since
sensitive and private data may be included in the PDF or CSV file. When
running an audit process, if there is a receiver with Full Results with CSV
checked, it does not generate CSV files for tasks of type Assessment, Classifier
or External Feed. These task types also can not generate CSV/CEF/PDF files for
export. Only for tasks of type Report, Privacy Set or Entity Audit Trails, and if
there is a receiver with Full Results via CSV checked, will CSV files be
generated.

Note: When viewing audit results, if a generated PDF already exists, a Recreate
PDF button will appear for the user to recreate and download the regenerated
PDF.

Hypertext Links to Process Results
In email messages, there are conditions where links to process results on the
Guardium system will not work. For example:
v If you are accessing email from a location where you cannot normally access the
Guardium system, the links will not work. For example, when out of the office,
you may have access to your email over the Internet, but not to your company's
private network or LAN, where the system is installed.
v If you have not accessed your email for a longer period of time than the report
results are kept, those results will not be available when you click the link. For
example, if the results are kept for seven days but you have been on vacation for
two weeks, your email may contain links to results older than seven days, and
those links will not work.

About Frozen Receiver Lists

Once a process has been run, the existing receiver list is frozen, which means:
v You cannot delete receivers from the list.
v You cannot move existing receivers up or down in the list.
v You can add receivers to the end of the list at any time, and reposition the new
receivers at that time.
v If the Guardium user account for a receiver on the list is deleted, the admin user
account (which is never deleted) is substituted for that receiver. Thus the admin
user receives any email notifications that would have been sent to a deleted
receiver, and the admin user must act upon any results released to that receiver.
v If you need to create a totally different set of receivers for an existing process,
deactivate the original process, make a clone of it, and then make the
modifications to the receivers list in the cloned version before saving it.

How Results are Released to Receivers

Results are released to the Guardium users listed on the receivers list, subject to
the Continuous check box, as follows:
v If the Continuous check box is marked, distribution continues to the next
receiver on the list without interruption.
v If the Continuous check box is cleared, distribution to the next receiver is held
until the current receiver performs the required action (review or sign).

For example, assume you want to define a workflow process as follows:


v DBAs - All DBAs should receive their results at the same time, with each DBA
receiving a different result set based on the server IPs associated with him/her
v Only when ALL DBAs have signed, the DBA Manager should see the results
v Only when DBA Manager releases the report, the Auditors should see the results
v All Auditors should receive the reports at the same time, but only one of them
(any of them) needs to sign each result. The others will be updated when a
result was signed.
v An auditor can escalate a result to the Audit Manager.

To define this flow:


v The DBAs group would be named as the first receiver
v The DBA Manager would be next on the list.

v The Auditors role (not group) would be next on the list. Any Auditor could sign
and others will be notified. Also, any auditor can escalate a results set to the
Audit Manager.

Note: The results will only distribute to the next receiver when the current
receiver has marked the Continuous button. This is completely separate from the
review/sign functionality and does not depend on the review/sign functionality at
all.

Note: Process results that are exported to CSV or CEF files are sent to another
network location by the Guardium archiving and exporting mechanism. These
results are not subject to the receivers list or to any signing actions. They are
subject to the Guardium CSV/CEF export schedule (if any is defined), and they
are subject to the access permissions that have been granted for the directory in
which they are ultimately stored.

Exporting Audit Task Output to CSV, CEF or PDF Files

Reports containing information that can be used by other applications, or reports
containing large amounts of data, can be exported to other file formats. Report,
Entity Audit Trail, and Privacy Set task output can be exported to CSV (Comma
Separated Value) files, and output for database activity reports can be exported to
an ArcSight Common Event Format (CEF) file.

In addition, CEF and CSV file output can be written to syslog. If the remote syslog
capability is used, this will result in the immediate forwarding of the output
CEF/CSV file to the remote syslog locations. The remote syslog function provides
the ability to direct messages from each facility and severity combination to a
specific remote system. See the remotelog (syslog) CLI command description for
more information.

Each record in the CSV or CEF file represents a row on the report.

The exported file is created in addition to the standard task output; it does not
replace it. These files are useful when you need to:
v Integrate with an existing SIEM (Security Incident and Event Manager) in your
infrastructure (Qradar, ArcSight, Network Intelligence, LogLogic, TSIEM, etc.).
v Review and analyze very large compliance task results sets. (Task results sets
that are intended for Web presentation are limited to 5,000 rows of output,
whereas there is no limit to the number of rows that will be written to an
exported CSV or CEF file.)

Exported CSV and CEF files are stored on the Guardium system, and are named in
the format:
process_task_YYYY_MMM_DD-HHMMSS.<csv | cef>

Where process is a label you define on the audit process definition, task is a
second-level label that you can define for each task within the process, and
YYYY_MMM_DD-HHMMSS is a date-time stamp created when the task runs.
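For example, assuming a process label of PCI and a task label of FailedLogins (both labels are hypothetical), a CSV task that ran on March 5, 2016 at 09:30:15 would produce a file named along these lines:

PCI_FailedLogins_2016_Mar_05-093015.csv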

You cannot access the exported CSV or CEF files directly on the Guardium system.
Your Guardium administrator must use the CSV/CEF Export function to move
these files from the Guardium system to another location on the network. To access
those files, check with your Guardium administrator to determine the location to
which they have been copied.

The fact that exported files are sent outside of the Guardium system has two
important implications:
v The release of these files is not connected to the results distribution plan defined
for the audit process. These files are exported on a schedule defined by the
Guardium administrator.
v Once the CSV/CEF Export function runs, all exported files will be available to
anybody (Guardium user or not) who can access the destination directory
defined for the CSV/CEF Export operation. For this reason, your Guardium
administrator may want to schedule additional jobs (outside of the Guardium
system) to copy sets of exported files from the Guardium CSV/CEF Export
destination directory, to directories with appropriate access permissions.

CSV/CEF Export activity is available in the Aggregation/Archive Activity report.

Note: If observed data level security has been enabled, then audit process output
(including files) will be filtered so users will see only the information of their
assigned databases. Files sent to an email receiver as an attachment will be filtered.
However, files downloaded locally on the machine and then moved elsewhere
using the Results Export function from Administration Console are not subject to
data level security filtering. See CSV/CEF Export later in this topic for further
information on CSV/CEF Export.

The following table summarizes what happens when exporting an Audit Process
file to CSV/CEF/PDF.
Table 19. Exporting Audit Task Output to CSV, CEF or PDF Files

Attach to email (level: Receiver)
CSV: Full Details radio --> PDF check box
CEF: N/A
PDF: Full Details radio --> PDF check box
Note: The radio buttons are only for receiver PDF.

Export file (level: Task)
CSV: Export CSV file check box
CEF: Export CSV file check box
PDF: Export CSV file check box

Report empty and Approve if Empty = yes (level: Receiver)
CSV: Export not affected (empty files will be exported); attachment, no email attachment
CEF: Export not affected (empty files will be exported); attachment, no email attachment
PDF: Export not affected (empty files will be exported); attachment, no email attachment

Zip attachment (level: Audit Process)
CSV: If no file generated, nothing to zip; merge all CSVs into one ZIP file
CEF: N/A
PDF: If no file generated, nothing to zip; PDF is not zipped

Compress (export) (level: Task)
CSV: Compressed, separate file for each CSV file
CEF: Compressed, separate file for each CSV file
PDF: PDF is not compressed

How Zip for Email and Compress work for Audit Task Output

Zip for Email is the highest level of control for Audit Task Export. Zip for email
produces a set of CSV or CEF files. PDF is never zipped and never
compressed.

Compress works on individual files.

Note: For CSV attachments, when Zip for Email is cleared, Compress can still be
applied. And Compress can be per task. Thus one Audit Task may send a .csv file
while another may send a .csv.gz file, in the same email.

The interaction of Zip for Email and Compress is as follows:


v With Zip for email checked (regardless of whether Compress is also checked),
the attachment is one zip file of CSV files.
v With Zip for email not checked, and Compress checked, the attachment is a set
of csv.gz files.
v With Zip for email not checked, and Compress not checked, the attachment is a
set of csv files.
v With Compress checked, Download All will be csv.gz.
v With Compress cleared, Download All will be csv.
v With Compress checked or cleared, Download displayed will still be csv.
v With Compress checked, export of CSV/CEF files will be gzipped.
v With Compress cleared, export of CSV/CEF files will not be gzipped.

Export to SCAP or AXIS

In the Audit Process Definition, in the section on Add New Task, when choosing a
Task Type of Security Assessment, a number of choices will appear: Export AXIS
xml and Export SCAP xml. Choose one of these selections in order to save the
Audit Process results and to transfer the XML file to the destination set up for
Results Export (Manage > Data Management > Results Export (Files)). Further
choices are for configuring the PDF format: Report, Difference, Report and
Difference.

SCAP is Security Content Automation Protocol. AXIS is Apache eXtensible
Interaction System and is used by QRadar.

Creating or Changing Reports

Use the Report Builder to create or customize reports, including customization
such as applying highlight colors to rows. To open the Report Builder, navigate to
Reports > Report Configuration Tools > Report Builder.

Create an Audit Workflow Process


1. Open the Audit Process Builder by navigating to Comply > Tools and Views
> Audit Process Builder.
2. Click the New button to open the Audit Process Definition panel. The Audit
Process Definition panel is divided into three sections: General, Receivers and
Tasks.
3. Go to the Tasks section first. You must define at least one audit task before
you can save the process. Work your way through the setting choices for each task.
Perform the appropriate procedure for each audit task you want to include in
the audit process. The task choices detailed in this section are:
v Define a Report Task
v Define a Security Assessment Task
v Define an Entity Audit Trail Task
v Define a Privacy Set Task
v Define a Classification Process Task
v Define an External Feed Task

4. Go to the Receivers section. Open the drop-down box and add the receivers
for the process. See Add Receivers. Check boxes determine the action
required, additions to the To-do list, email notifications, and
continuous distribution. Again, see Add Receivers for complete information on
setting these choices.
5. Go to the General section. Enter a name in the Description box. Do not
include apostrophe characters.
6. Check the Active box to associate a schedule with this process.
7. Mark the Archive Results box if you want to store the results offline after the
retention period has expired. When results have been archived, you can
restore them to the system for viewing again, later.
8. Use the Archive Result purge before Reviewed box to delete the results of an
ad-hoc process without waiting until all reviewers have reviewed, all sign-offs
have taken place, and all workflow activities have been completed. This feature
gives the user an option of deleting results after a specified period of time (such
as 1 day) whether the results have been reviewed or not.
9. In the Keep for a minimum of (n) days or (n) runs boxes, specify how long to
keep the results, as either a number of days (0 by default) or a number of
runs (5 by default). After that, the results will be archived (if the Archive
Results box is marked) and purged from the system.

Note: Results will only be shown if there are receivers for the results. Add
receivers and re-run the process, and the run will then show up in the drop-down
list.
10. If one or more tasks create CSV or CEF files, you can optionally enter a label
to be included in all file names, in the CSV/CEF File Label box. These files
can also be compressed, or Zipped, by clicking on the Zip for mail box to add
a checkmark.

Note: CSV/CEF files larger than 10240 MB (10.240 GB) cannot be exported.
It is a recommended best practice to check the box Zip for mail.
11. The Email Subject field in the Audit Process definition is used in the emails
for all receivers for that audit process. The subject may contain one (or more)
of the following variables that will be replaced at run time for the subject:
v %%ProcessName will be replaced with the audit process description
v %%ExecutionStart will be replaced with the start date and time of the first
task.
v %%ExecutionEnd will be replaced with the end date and time of the last
task.
Upon entering a subject, the system checks whether any variable (starting with %%)
is present and ensures that all are valid variables. See the example subject line
following this procedure.
12. Optionally assign security roles.
13. Optionally add comments.
14. Click the appropriate buttons to Schedule or Run an Audit Workflow Process.
15. Click Save. Do not leave this menu screen to perform another configuration
before saving your work. Work in progress is not saved if you leave this
section to go create something else needed for the audit task.
For example, to define an assessment task in Audit Process Builder, it is first
necessary to go to Security Assessment Builder to create assessment tests and
then to Datasource Definitions to identify the database(s) to be assessed. Save

your work when creating the Audit Workflow, then go to those other tasks; or
perform those other tasks first and then create the Audit Workflow Process.

Add Receivers
1. In the Receiver column, select a receiver from the drop-down list of Guardium
individual users, groups, or roles. If you select a group or a role, all members
of the group or users with that role will receive the results; and if signing is
required, only one member or user will need to sign the results.
2. In the Action Required column, select one option:
v Review (the default) - Indicates that this receiver does not need to sign the
results.
v Review and Sign - Indicates that this receiver must sign the results
(electronically, by clicking the Sign Results button when viewing the results
online).
3. In the To-Do List column, either mark or clear the Add check box to indicate
whether this receiver should be notified of pending results in their Audit
Process To-Do List.

Note: To send files to an external server without sending email and without
adding results to the To-do list, define an audit process without receivers. Also
clear the To-do list check box in the Add Receiver section and do not add any
receivers in the receiver section, so that results are not added to the To-do list.
4. In the Email Notification column, select one option:
v No - email will not be sent to the receiver.
v Link Only - email will contain a hypertext link to the results (on the
Guardium system).
v Results - email will contain a copy of the results in PDF or CSV format. Be
aware that the results from Classification or Assessment tasks may return
sensitive information.
5. The check box in the Continuous column controls whether or not distribution
of results continues to the next receiver (the default), or stops until this receiver
has taken the appropriate action. If the Continuous box is cleared, and this
receiver is a group or a role, when any user who is a member of that group or
role performs the selected action, the results will be released to the next
receiver on the list.

Note: The results will only distribute to the next receiver when the current
receiver has marked the Continuous button. This is completely separate from
the review/sign functionality and does not depend on the review/sign
functionality at all.
6. Click Add to add the receiver to the end of the list, and repeat these steps for
each receiver. One receiver is required.
7. Receivers who are not users are permitted. Choose: Email and then enter an
email address, and the results will be sent to that email address. When entering
a non-user email address, you are required to enter a user name that will be
used to filter the data. The user must be the same user that is logged in, or a
user below the logged-in user in the hierarchy. This user will be saved in a
new column in the Receivers section of the screen.
8. Approve if Empty - When this check box is checked, if all the reports of the
task are empty, it will do the following: automatically sign the result (and/or
mark it as viewed); automatically click Continue (if relevant); will NOT send
the notification email; will NOT add the task to the To-Do list of that user;
will NOT generate any PDF/CSV/CEF files. With this check box, empty audit

results will be signed automatically and the results will still look like any other
complete (viewed/signed) audit results when looking at the audit result logs.
This action will apply to empty reports and the empty security assessment
results. See table summarizing what happens when Approve If Empty = YES in
the section Exporting Audit Task Output to CSV, CEF or PDF Files.

Export a CSV or CEF File


Report, Entity Audit Trail, and Privacy Set audit task output can be exported to
CSV files, and Report audit task output can be exported to a CEF file. From the
Report, Entity Audit Trail or Privacy Set section under Audit Tasks, work through
the following:
1. Select title.
2. Enter an optional label for the file in the CSV/CEF File Label box. The default
is from the Description for the task. This label will be one component of the
generated file name (another will be the label defined for the workflow
automation process).
3. Mark either Export CSV file or Export CEF file.

Note: CEF file output is appropriate for data access domain reports only
(Access, Exceptions, or Policy Violations, for example). Other domains like the
Guardium self-monitoring domains (Aggregation/Archive, Audit Process,
Guardium Logins, etc.) do not map to CEF extensions.
4. If Export CEF file was selected, optionally mark the Write CEF to Syslog box to
write the CEF records to syslog. If the remote syslog facility is enabled, the CEF
file records will thus be written to the remote syslog.
5. If the Compress box is checked, then the CSV/CEF files to be exported will be
compressed.
6. If the Export PDF file box is checked, then a PDF file (with similar name as
CSV Export file) for this Audit Task is created and exported together with the
CSV/CEF files.

Note: The Export PDF file will not be compressed, even if the Compress box in
the previous step is checked.

Define a Report Task


If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure. If the report to be
used has not yet been defined, do that first.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Report radio button.
3. There are a number of choices for CSV/CEF File Label, Export CSV/CEF, Export
PDF, Write to Syslog, and Compress. See Export a CSV or CEF File.
4. The selection of PDF Options are: Report (the current results), Diff (difference
between one earlier report and a new report) and Reports and Diff (both).

Note: The selection of PDF Options applies to both PDF attachments and PDF
export files. The Diff result applies only AFTER the first time this task is
run. There is no Diff with a previous result if there is no previous result. The
maximum number of rows that can be compared at one time is 5000. If the
number of result rows exceeds the maximum, the message
(compare first 5000 rows only)

will show up in the diff result.
5. Enter all parameter values in the Task Parameters pane. The parameters will
vary depending on the report selected.
6. Click Apply.

API for automatic execution

By default, the Guardium application comes with setup data that links many of the
API functions to reports, providing users, through the GUI, with prepared calls to
APIs from reporting data. Use API Assignment in Reports to link additional API
functions to predefined Guardium reports or custom reports. The menu choice API
for automatic execution will appear in the Add Audit Task: Report when selecting
an appropriate predefined Guardium report or custom report that has fields
linked to API parameters. Examples of predefined reports where
the API for automatic execution menu choice will appear are Access Policy
Violations, Databases Discovered, and Guardium Group Details.

Workflow Builder

The formal sequence of event types created in Workflow Builder is managed by
clicking on the Event and Additional Column button in the Audit Tasks window.
This button will appear after an audit task has been created and saved. This
additional button will not appear until the audit task is saved. Configure these
workflow activities when Adding An Audit Task:
1. Create and save an Audit Task. After saving, an additional button will appear,
Events and Additional Columns.
2. Click this additional button.
3. At the next screen, place a checkmark in the box for Event & Sign-off. The
workflow created in Workflow Builder will appear as a choice in Event &
Sign-off.
4. Highlight this choice. Apply (save) your selection.
5. If additional information (such as company codes, business unit labels, etc.) is
needed as part of the workflow report, add this information in the Additional
Column section of the screen and then click Apply (save). In order to select the
predefined or created groups column, change the Type column to Group. When
done, close this window.
6. Apply (save) your Audit Task. Apply (save) the entire Audit Process Definition.

This Event and Additional Column button appears in all audit tasks. By placing
the cursor over this button, an information balloon will appear telling the user if
the audit task has an Event or a Sign-off column linked to the specific audit task.

Note:

If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.

Under the Report choices within Add an Audit Task are two procedural reports,
Outstanding Events and Event Status Transition. Add these two reports to two
new audit tasks to show details of all workflow events and transitions. These two
reports will not be filtered (observed data level security filtering will not be
applied). These two reports are available by default in the list of reports only to
admin user and users with the admin role.

The Additional Columns button is disabled for Classification tasks.

Clone an Audit Task - If you are cloning a process, and you made changes to a
cloned task before the cloned process is saved, the workflow associated with the
original task will not be cloned.

Deletion of an event status is permitted only if the status is not the first status of
any event, and if it is not used by any action. The validation will provide a list of
events/actions that prevent the status from being deleted.

The owner/creator of a workflow event can always see all statuses of this event,
regardless of what roles have been assigned to these statuses.

Define a Security Assessment Task

If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure. If the assessment to
be used has not yet been defined, do that first.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Security Assessment button.
3. Select a security assessment from the Security Assessment list.
4. The selection of PDF Content are Report (the current results), Diff (difference
between one earlier report and a new report) and Reports and Diff (both).
5. Click Apply.

Note:

If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.

If a security assessment task is empty (for example, a security assessment with a
set of no roles), this empty security assessment will not show up in the drop-down
list in Audit Builder.

Define an Entity Audit Trail Task


If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Entity Audit Trail button.
3. Select the type of entity to be audited. Depending on the type selected, you will
be required to supply the following information:
v Object: Enter an object name.
v Object Group: Select an object group from the list.
v Client IP: Enter a client IP address.
v Client Group IP: Select a client IP group.
v Server IP: Enter a server IP address.
v Application User Name: Enter an application user name.
4. There are a number of choices for CSV/CEF File Labels, Write CEF to Syslog,
Compress and Export PDF. See Export a CSV or CEF File.

5. In the Task Parameters pane, supply run-time parameter values (only the From
and To periods are required).
6. Click Apply.

Note: If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.

Define a Privacy Set Task


If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure. If the privacy set to
be used has not yet been defined, do that first.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Privacy Set button.
3. Select a privacy set from the Privacy Set list.
4. Select either Report by Access Details or Report by Application User to indicate
how you want the results sorted and displayed.
5. There are a number of choices for CSV/CEF File Labels, Write CEF to Syslog,
Compress and Export PDF. See Export a CSV or CEF File.
6. Enter starting and ending dates for the report in the Period Start and Period
End boxes.
7. Click Apply.

Note: If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.

Define a Classification Process Task

If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure. If the classification
process to be used has not yet been defined, do that first.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Classification Process button.

Note: You will be alerted that classification processes may return sensitive data,
and those results will be appended to PDF or CSV files.
3. Select a classification process from the Classification Process list. Click Apply.

Note: If data level security at the observed data level has been enabled, then audit
process output will be filtered so users will see only the information of their
databases.

Define an External Feed Task

This type of workflow automation task feeds data collected by Guardium to an
external application, mapping the data to a format recognized by that application.
This task type is an extra-cost feature, enabled by a patch.

Note: If this feature is used in a Central Manager environment, the External Feed
Patch must be installed on the Central Manager, and on all managed units on
which the task will run.

For more information about how the data is mapped from Guardium to the
external application, refer to the documentation for the option that was purchased.

If you have not yet started to define a compliance workflow automation process,
create a workflow process before performing this procedure.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click External Feed.
3. Select a feed type from the Feed Type list.
4. The controls that appear next depend on the feed type selected. See Optional
External Feed for additional information on specific External Feed Types.
5. Select an event type from the Event Type list.
6. Select a report from the Report list. Depending on the report selected, a
variable number of parameters will appear in the Task Parameters pane.
7. In the Extract Lag box, enter the number of hours by which the feed is to lag,
or mark the Continuous box to include data up to the time that the audit task
runs.
8. In the Datasources pane, identify one or more datasources for the external
feed.
9. Enter all parameter values in the Task Parameters pane. The parameters will
vary depending on the report selected.
10. Click Apply.

View or Sign Results


1. Open the Compliance Workflow Automation results.
2. If signing is required, click the Sign Results button.
3. Optional. To forward these results to another user, click Escalate, and see
Forward Results to Additional Receivers (in Escalation section).
4. Click Close this window link.

Note: If there are outstanding events, then the results can not be signed either
from the audit viewer or from the To-do list. If there are outstanding events and an
attempt is made to sign the results, the following message appears:
Audit process cannot be signed - has pending events.

Please update all outstanding events prior to signing this result.

Note: When viewing audit process results, if a result has events associated with it,
the Sign Results button is not available on this result until all events are in a Final
state or cannot be seen by this user (due to data-level security).

Note: This report also contains a date, or Last Action Time, located in a column
between Receiver and Status. For example, the report shows not only that a result
was signed by user AAA, but also when user AAA signed it.

Release Results without Signing or Viewing


1. Open your To-Do List panel.
2. Click the Continue button for the results you want to release to the next
receiver on the distribution list.
3. Click Close this window link.

View Results Distribution
1. Open the compliance workflow automation results.
2. Expand the Distribution Status panel by clicking the (Show Details) button.
3. Click Close this window link.

View Receiver Comments Added to Results


1. Open the compliance workflow automation results.
2. Expand the Comments panel by clicking the Show Details button.

Note: These are the comments that were attached to the results when the
report page was retrieved from the Guardium system. If you add comments of
your own, or if other receivers are adding comments simultaneously, you will
not see those comments until you refresh your page (using your browser
Refresh function).
3. Click Close this window link.

Escalate Process Results

A receiver of process results can forward the results notification for review and/or
sign-off to additional receivers. If you escalate the results to a receiver outside of
the original audit and sign-off trail, and the results include a CSV file, that file will
not be included with the notification.

Regardless of who is a receiver of an audit result, an escalation can involve any
user on the system, provided the Escalate result to all users box is checked in the
Setup > Tools and Views > Global Profile menu. A check mark in this box
escalates audit process results to all users, even if data level security at the
observed data level is enabled. The default setting is enabled. If the check box is
disabled (no check mark in the check box), then audit process escalation will only
be allowed to users at a higher level in the user hierarchy. If the check box is
disabled, and there is no user hierarchy, then no escalation is permitted.

Also, depending on event permissions, if for example, the infosec user can only see
events in status1 and dba user can only see events in status2, the dba user will
receive a different result than the result the infosec user saw when the infosec user
clicked Escalate. It is possible that infosec will escalate to dba, and dba will
receive an audit result with 0 rows in it.
1. If the compliance workflow automation results you want to forward are not
open, open them now.
2. Click Escalate.
3. Select the receiver from the Receiver list.
4. In the Action Required column, select Review (the default) or Review and Sign.
5. Click the Escalation button to complete the operation.

Note:

Audit process results cannot be escalated to a group of users, only to users or
roles.

When escalating to an user who already has the result in the user's to-do list, a
popup message will appear, asking if an additional email should be sent. If yes, an
additional email will be sent to the user, but the to-do list will not be incremented.



Schedule or Run a Compliance Workflow Automation Process
1. Open the Audit Process Builder by navigating to Comply > Tools and Views >
Audit Process Builder.
2. Select the process from the Process Selection List.
3. Click Modify to open the Audit Process Definition panel.
4. To run the process once, click Run Once Now, or to define a schedule for the
process, click Modify Schedule.

Note: After a schedule has been defined for a process, the process runs
according to that schedule only when it is marked active. To activate or
deactivate an audit process, see the next section.

Activate or Deactivate a Compliance Workflow Automation Process

After a schedule has been defined for an audit process, it runs according to that
schedule only when it is marked active.

To activate or deactivate an audit process:


1. Open the Audit Process Builder by navigating to Comply > Tools and Views >
Audit Process Builder.
2. Select the audit process from the Process Selection List.
3. Click Modify.
4. In the Audit Process Definition panel, mark the Active box to start running the
process according to the schedule; or clear the Active box to stop running the
process (ignoring any schedule defined).

Note: If you are activating the process but there is no schedule, click Modify
Schedule to define a schedule for running the process.
5. Click Save.

How to create an Audit Workflow


Create an audit process workflow that generates a pre-defined report on a pre-set
schedule, assigns the report to the database administrator for review and sign-off,
and facilitates the reviewed report being sent to a supervisor for an additional
review and signoff.

About this task


Automate the workflow steps of the customer's audit process.

See the Compliance Workflow Automation topic for additional information on this
subject.

Procedure
1. Open the Audit Process Finder by navigating to Comply > Tools and Views >
Audit Process Builder.
2. Click the New button to open the Audit Process Definition panel.
The Audit Process Definition panel is divided into three sections: General,
Receiver Table, and Audit Tasks.



Audit Process Builder menu screen
3. Go to the General section. Enter a name in the Description box. Do not
include apostrophe characters.
4. Check the Active box to associate a schedule with the process. At least one
audit task must be defined before you can save the process.
5. Mark the Archive Results box if you want to store the results offline after the
retention period has expired. When results have been archived, you can
restore them to the appliance for viewing again, later.
6. In the Keep for a minimum of (n) days or (n) runs boxes, specify how long to
keep the results, as either a number of days (0 by default) or a number of
runs (5 by default). After that, the results will be archived (if the archive box
is marked) and purged from the appliance.



7. If one or more tasks create CSV or CEF files, you can optionally enter a label
to be included in all file names, in the CSV/CEF File Label box. These files
can also be compressed, or Zipped, by clicking on the Zip for mail box to add
a checkmark.
8. The Email Subject field in the Audit Process definition is used in the emails
for all receivers for that audit process. The subject may contain one (or more)
of the following variables that will be replaced at run time for the subject:
v %%ProcessName will be replaced with the audit process description
v %%ExecutionStart will be replaced with the start date and time of the first
task.
v %%ExecutionEnd will be replaced with the end date and time of the last
task.
When you enter a subject, the system checks whether any variables (starting with
%%) are present and ensures that all of them are valid. (A hypothetical example
subject line is shown after this procedure.)
9. Go to the Receivers section. Open the drop-down box and add the receivers
for the process. See Add Receivers in the Compliance Workflow Automation
topic for further information. Check boxes determine the action required,
additions to the To-do list, notification by email, and continuous distribution.
Again, see Add Receivers for complete information on setting these choices. In
this example, do not check the continuous boxes for
the receivers. If the Continuous checkbox is marked, distribution continues to
the next receiver on the list without interruption. If the Continuous checkbox
is cleared, distribution to the next receiver is held until the current receiver
performs the required action (review or sign). In this example, the DBA needs
to view and sign the report before it goes to the Supervisor.
10. Go to the Tasks section. You must define at least one audit task before you can
save the process.
11. Define a Report Task.
a. If the Add New Task pane is not open, click Add Audit Task (see
illustration).
b. Click the Report button.
c. Optionally create CSV or CEF file output and write to Syslog.
d. Enter all parameter values in the Task Parameters pane. The parameters
will vary depending on the report selected.
e. Click Apply.



Audit Task – Report
12. Optionally assign security roles.
a. Open or select the item to which you want to assign one or more security
roles (a report definition, for example).
b. Click the Roles button.
c. In the Assign Security Roles panel, mark all of the roles you want to assign
(you will only see the roles that have been assigned to your account).
d. Click Apply.
13. Optionally add comments
14. Click the appropriate buttons to Schedule or Run an Audit Workflow Process
(see step 16).
15. Click Apply.
16. Schedule or Run a Compliance Workflow Automation Process
Open the Audit Process Finder by navigating to Comply > Tools and Views >
Audit Process Builder.
a. Select the process from the Process Selection List.
b. Click Modify to open the Audit Process Definition panel.
c. To run the process once, click Run Once Now, or to define a schedule for
the process, click Modify Schedule.

Note: After a schedule has been defined for a process, the process runs
according to that schedule only when it is marked active.



17. Sign-off and Review of Report
After the report has run, distribution status can be observed from the report.
In the example, the DBA has viewed and signed the report and the supervisor
has not.

Distribution Status
The Audit Process Log report shows a detailed activity log for all tasks
including start and end times. This report is available by navigating to
Reports > Guardium Operational Reports > Audit Process Log. Audit tasks
show start and end times.

Example of Audit Process Log
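
As referenced in step 8, the Email Subject can combine literal text with the
substitution variables. A hypothetical subject line (the process name and times
are invented for illustration only) might be entered as:

Audit results for %%ProcessName (started %%ExecutionStart, ended %%ExecutionEnd)

and might be delivered at run time as:

Audit results for Weekly DBA Review (started 2015-06-01 02:00:00, ended 2015-06-01 02:17:30)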

Open Workflow Process Results


Use View to see the Workflow Process Results

Do one of the following:


v Open your Workflow Automation To-Do List panel (see Audit Process To-Do
List) and click View for the results set you want to view or sign.
v If you have received an e-mail notification containing hypertext links to your
To-Do List or the results, click one of those links to open your To-Do List or the
results directly from the e-mail. You must have access to the Guardium system



at the location from which you are accessing your e-mail (or these links will not
work). If you are not logged in, you will be prompted to log in to the Guardium
system.

Note: When you register a new managed unit to a central manager, you might be
unable to view audit results. The application does not show results that have a
timestamp before the managed unit was registered to the central manager. The
timestamp of the registration uses the central manager time, and the timestamp of
the audit result uses the managed node time. So, if the central manager time is
ahead of the managed unit time, results generated on the managed unit are not
visible until the managed unit time passes the time of registration. This should
happen in no more than 24 hours, possibly less depending on the locations of the
two machines. You should be able to view the results of audit processes on the
managed unit within 24 hours of registration.

How to distribute workflow through Guardium groups


Using the group receiver option, define a single Compliance Workflow audit
process that will send different results to different Guardium users based on a
pre-defined, custom mapping.

Value-added: Set up a single audit process and distribute the appropriate results to
the appropriate manager. This saves having to create separate audit processes for
separate receivers.

IBM Security Guardium’s Compliance Workflow Automation automatically delivers
reports, classification results, and security assessment results to Guardium users
on a scheduled basis. Result receivers can be defined as Guardium users, Guardium
roles, or user groups.

For example, consider a large organization that has fifteen DBA managers that
need to review the activities for the DBAs they manage without viewing the
activities of the other managers' DBAs. One solution would be to set up fifteen
separate audit processes, one for each manager. This would take a lot of time to
configure and it is difficult to manage: Each audit process needs to be scheduled
separately and any global change would need to be made individually for all
fifteen audit processes.

The user group distribution method, on the other hand, permits the setup of a
single audit process and distributes the appropriate results to each manager based
on a manager/DBA mapping. This process requires more upfront configuration but
reduces maintenance time. Only one audit process needs to be scheduled and
changes only need to be applied in one location.

User mapping
The first step in the process is to map the users to the data elements within
Guardium that will be the basis for report distribution. The example that will be
used in this document will be based on objects, but you can apply these concepts
with any data element within Guardium.

Example: Three users have responsibility over three different sets of tables, based
on audit requirements (PCI, HIPAA, and CCI) within a database server, as follows:



Table 20. User with Table/Object.
User Table/Object

User01 db2inst1.cc_numbers

User01 db2inst1.ccn

User02 db2inst1.ADDRESSES

User02 db2inst1.SSN_NUMBERS

User02 db2inst1.G_CUSTOMERS

User02 db2inst1.G_EMPLOYEES

User02 db2inst1.G_FUNDS

User03 db2inst1.doctor

User03 db2inst1.medicare

User03 db2inst1.med_history

This table must be added as a custom table within Guardium, either manually or
through a data upload (an example upload statement is shown after the manual
steps). The following steps demonstrate how to create a custom
table manually. The screenshots are from the “admin” user interface, but they can
also be accessed from within the “user” user interface.
1. Navigate to Reports > Report Configuration Tools > Custom Table Builder
and press the Manually Define button.
2. At the Custom Table Builder screen, define the table layout. Make sure that
Group Type matches the correct data element in Guardium. Press Apply and
Back when complete.
3. Press Edit Data to manually add the records. Note, if you have a large amount
of data, choose Upload Data to import from an external data source.
4. Press Insert.
5. Enter each combination of values and press Insert until you have added all of
the required records.
6. When complete, press the Query button to review the data.
7. Press return when complete.
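
If you use the data upload alternative instead of the manual steps above, the
mapping rows can be retrieved from a source table with a single-line SQL
statement. The following is only a sketch; the schema, table, and column names
are hypothetical, and the column aliases must match the columns of the custom
table that you define:

select MANAGER_NAME as GUARD_USER, OBJECT_NAME as FULL_OBJECT_NAME from SECURITY.MANAGER_OBJECT_MAP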



Custom Domains
Next, join this custom table to the Guardium table structure using Custom
Domains.
1. Navigate to Reports > Report Configuration Tools > Custom Domain Builder.
Highlight [Custom] Access and press Clone.
2. In the Custom Domain Builder:
a. Highlight the new table created under Available entities.
b. Highlight the table under Domain entities to which you would like to join
the custom table.
c. Under Join condition choose the fields on each table on which to create the
join and press Add Pair.
3. Press the arrows (>>) button to move the custom table from Available entities
to Domain entities.
4. Press the Detail button to review the joins.
5. Confirm that the joins are correct and press Close.
6. Press Apply to save the new custom domain.



Custom Report
Next, create a report to distribute to the users.
1. Navigate to Reports > Report Configuration Tools > Report Builder and select
the new domain from the Domain drop-down menu.
2. Press New.
3. Enter a Query Name and Main Entity and press Next.
4. Create a new report with a run-time parameter for the user field created in the
custom table.

User Group

Create a new group of “Guardium Users” based on the custom table.


1. Navigate to Setup > Tools and Views > Group Builder and create a new
group with Guardium Users as the Group Type.
2. Add all of the users from the custom table.

Audit Process
1. Create a new Audit Process.
2. Choose the group created in User Group as the Receiver.
3. Choose the custom report created in step 4 of the Custom Report section as the
task.
4. In the run-time parameter, enter the special tag “./LoggedUser”. This will
cause the results to be distributed based on the custom mapping.
5. Press Run Once Now to run the Audit Process.

When the audit process completes, each receiver should receive a different result
set based on the mapping:



Users: User01, User02, and User03 (each receives a separate result set).



Audit Process To-Do List
This topic describes the Audit Process To-Do List and the steps required to open
and use it.

There are several ways to open the Audit Process To-Do List, including:
v Click the icon in the page banner.
v Navigate to Comply > Tools and Views > Audit Process To-Do List.
v If you received an email notification, click the To-Do List link to open your
To-Do List. Alternatively, click the report link to open the results. In either case,
you must be accessing your email from a location where the Guardium system
can be accessed.

The following steps describe how to use the Audit Process To-Do List:
1. Select the user whose To-Do list you want to open, either by opening up the
drop-down menu or clicking Search Users. You will be informed if the list is
empty.
2. As an administrator, you can perform any actions on any to-do list entry. Any
actions you perform are logged, indicating that the action was performed on
behalf of the user by the administrator.
3. The choices available per to-do list entry are View, Download as PDF and Sign
viewed results.
The selections for PDF Content are: Report (the current results), Diff (the difference
between an earlier report and a new report), and Reports and Diff (both).

Note: The selection of PDF Content applies to both PDF attachments and PDF
export files. The Diff result applies only AFTER the first time this task is run;
there is no Diff with a previous result if there is no previous result. The
maximum number of rows that can be compared at one time is 5000. If the
number of result rows exceeds the maximum, the message compare first 5000
rows only will show up in the diff result.
4. Click the circling arrows icon to refresh the list.

Note: To send files to an external server without sending email and without
adding results to the to-do list, define an audit process without receivers: clear
the to-do list checkbox in the Add Receiver section, and do not add any receivers
in the receiver section, so that no results are added to the To-do list.



To-Do Lists and Data Level Security
The To-Do list has a pull-down menu to see the to-do lists of other users. Unlike
the pull-down menu of users with role admin, the pull-down menu for the rest of
the users will include ONLY users under the current user in the Data Level
Security (DLS) hierarchy. If the user has the exempt role, then all the users are
shown in the pull-down menu. Users with role admin can see all users in the
pull-down menu.

When a user accesses another user's results, the data presented in the report is
filtered according to the Data Level Security and the role of the user selected (for
example, in the case of a custom workflow, the data is filtered according to the role
of the user selected and the status defined for that role).

If a user with role admin accesses a result of a user that is UNDER the admin in the
hierarchy, then it behaves as explained in the previous paragraph. If the
administrator accesses a result of a user that is NOT under the admin in the
hierarchy, then the result is shown using the Data Level Security of the
administrator and is shown for all roles.

When a result is added to a user's to-do list because of a change in the status of
an event, and the result was not in the to-do list previously, an email is sent to the
user. The email will not contain a PDF, just a notification and link.

If a user goes to some other user's to-do list, a message will indicate which user is
determining the DLS filtering.

Audit and Report


Guardium organizes the data it collects into a set of domains. Each domain
contains a different type of information relating to a specific area of concern: data
access, exceptions, policy violations, and so forth.

All domains and their contents are described in the Domains, Entities, and
Attributes appendix.

There is a separate query builder for each domain, and access to each query
builder is controlled by security roles. Regardless of the domain, the same
general-purpose query-builder tool is used to create all queries. For detailed
instructions on how to build queries, see Queries.

In addition to the standard set of domains, users can define custom domains to
contain information that can be uploaded to the Guardium appliance. For example,
your company might have a table relating generic database user names (hr23455 or
qa4872, for example) to real persons (Paula Smith, John Doe). Once that table has
been uploaded, the real names can be displayed on Guardium reports, from the
custom domain. For more detailed information on how to define and use custom
domains, see External Data Correlation.

External Data Correlation


This topic describes the creation of custom tables for enterprise information that is
needed in addition to existing Guardium internal data.

Many customers have valuable information in many different databases in their
environment. For an audit report, it is extremely useful to correlate this relevant
information in order to make the reports easy and useful to understand. The
External Data Correlation feature allows you to create custom tables on the
Guardium appliance for enterprise information that is needed in addition to the
existing Guardium internal data. You can do this either manually within the GUI
or based on an existing table on a database server. Queries and reports can then be
created for this information just as if it were predefined data.

There is a distinction between a custom table, a custom domain, and a custom query.

For example, perhaps a table exists on a database server containing all employees,
their database usernames, and the department to which they belong (for example,
Development, Financial, Marketing, HR, etc.). If you upload this table and all its
data, you could cross-reference this table with Guardium's internal tables to see,
for example, which employees from Marketing are accessing the financial database
(which may constitute a suspicious activity).

To access Data Mart help, Click “Data Mart” on page 310.

Custom Tables

A custom table contains one or more attributes that you want to have available on
the Guardium appliance. For example, you may have an existing database table
relating encoded user names to real names. In the network traffic, only the
encoded names will be seen. By defining a custom table on the Guardium
appliance, and uploading data for that table from the existing table, you will be
able to relate the encoded and real names.

Before defining a custom table, first verify that the data you need from the existing
database is a supported data type. A data type is supported if it is taken as one of
the following SQL type by the underlying JDBC driver: INTEGER, BIGINT,
SMALLINT, TINYINT, BIT, BOOLEAN, DECIMAL, DOUBLE, FLOAT, NUMERIC,
REAL, CHAR, VARCHAR, DATE, TIME, TIMESTAMP. The following table
summarizes some of the supported and unsupported data types for uploading to a
custom table.

Supported and Unsupported Data Types for Custom Tables

Use this table to see what supported and unsupported data types exist for certain
databases.
Table 21. Supported and Unsupported Data Types for Custom Tables

Oracle
Supported: float, number, char, varchar2, date, nchar, nvarchar2
Unsupported: long, clob, raw, nclob, longraw, bfile, rowid, urowid, blob

DB2
Supported: char, varchar, bigint, integer, smallint, real, double, decimal, date, time, timestamp
Unsupported: blob, clob, longvarchar, datalink

Sybase
Supported: char, nchar, varchar, nvarchar, int, smallint, tinyint, datetime, smalldatetime
Unsupported: text, binary, varbinary, image, timestamp

MS SQL
Supported: bigint, bit, char, datetime, decimal, float, int, money, nchar, numeric, nvarchar, real, smalldatetime, smallint, tinyint, smallmoney, varchar, uniqueidentifier
Unsupported: text

Informix
Supported: char, nchar, integer, smallint, decimal, smallfloat, float, serial, date, money, varchar, nvarchar, datetime
Unsupported: text

MySQL
Supported: bigint, decimal, int, mediumint, smallint, tinyint, double, float, date, datetime, timestamp, time, year, char, binary, enum, set
Unsupported: longtext, tinyblob, tinytext, blob, text, mediumblob, mediumtext, longblob

Note: A blob value (even 1 KB) in dynamic SQL can be captured, but a blob value
of the same size in static SQL cannot be captured.

Custom Table Archive and Restore

The Custom Table Builder screen has a button called Purge/Archive.

The Custom Table Data Purge screen has a checkbox for Archive. Checking this
box results in the data of the custom table being included in the normal data
archive.

This custom table data is archived according to the date in the
SQLGUARD_TIMESTAMP column of the custom table.

The data of the custom table can be archived from a collector or an aggregator.

The data of the custom table archived from a collector can be restored to any
collector or aggregator managed by the same Central Manager as the source
collector (the metadata must be present).

The data of the custom table archived from an aggregator can be restored to any
aggregator managed by the same Central Manager as the source aggregator.

If the archive file to be restored to a Guardium system does not have the metadata,
then the data of the custom table is not restored.

If the structure of the custom table has changed between the time of archive and
the time of restore in a way that results in an SQL error (for example, columns
removed or type changed), then a warning message appears on the
aggregation/archive activity report and the data is not restored.

If a custom table is set to be purged by the default purge, then the restored data
will be kept for the number of days specified on the restore screen.

If the custom table is set to overwrite data when it uploads, then restored data will
be deleted at the time an upload is performed.

Custom Domains
A custom domain contains one or more custom tables. If it contains multiple
tables, you define the relationships between tables when defining the custom
domain.

Custom Queries
A custom query accesses data from a custom domain. You use the Custom Query
Builder to create queries against custom domains. Custom queries can then be



used like any other query to generate reports or audit tasks, populate groups, or to
define aliases.

Database Entitlement Reports


DB Entitlement Reports use the Custom Domain feature to create links between the
external data on the selected database with the internal data of the predefined
entitlement reports. See topic, Link External Data to Internal Data, on this subject.
See “Database Entitlement Reports” on page 247 for further information on how to
use predefined database entitlement reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.

Create a Custom Table

Open the Custom Table Builder by navigating to either of the following:


v Comply > Custom Reporting > Custom Table Builder
v Reports > Report Configuration Tools > Custom Table Builder

Upload a Table Definition


Creating a custom table can be accomplished by the uploading of a table definition
by accessing its metadata from the database server on which it is defined.

Note: Custom Tables uploaded to Guardium are optional components enabled by
product key. If these components have not been enabled, the Custom Tables
choices listed will not appear in the Custom Table Builder selection.
1. Open the Custom Table Builder.
2. Click Upload Definition to open the Import Table Structure panel. It is not
necessary to select an item
3. Enter a description for the table in the Entity Desc field. This is the name you
will use to reference the table when creating a custom query.
4. Enter the database table name for the table in the Table Name field. This is the
name you will use to create the table in the local database.
5. Enter a valid SQL statement for the table in the SQL Statement field. The result
set returned by the SQL statement must have the same structure as the custom
table defined. For example, if the custom table contains all columns from the
table named my_table, enter select * from my_table.

Note:

Do not include any newline characters in the SQL statement. All columns must
be explicitly named, using a column alias if necessary. (A brief example follows
this procedure.)
6. Click Add Datasource to open the Datasource Finder in a separate window.
This will allow us to define where the external database is located, and the
credentials needed to retrieve the table definition and content later in the
process.
7. Use the Datasource Finder to identify the database from which the table
definition will be uploaded.
8. Click Retrieve to upload the table definition. This will execute the SQL
Statement and retrieve the table structure. The SQL request will come from the
Guardium Appliance to the external database. Remember that only the
definition is being uploaded and you can upload data later.
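
For example, rather than select *, you might name the columns explicitly and
alias any column whose name should differ in the custom table. The following
single-line statement is only a sketch; the schema, table, and column names are
hypothetical:

select EMP_ID, DB_USER_NAME, DEPT_NAME as DEPARTMENT from HR.EMPLOYEE_MAP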



Manually Define a Table Definition
1. Open the Custom Table Builder.
2. Click Manually Define to open the Define Entity panel.
3. Enter a description for the table in the Entity Desc field. This is the name you
will use to reference the table when creating a custom query. Use of the
special characters \$|&;'`" are not allowed in the entity description.
4. Enter the database table name for the table in the Table Name field. This is the
name you will use to create the table in the local database.
5. For each column in the table to be defined:
v Enter a name in the Column Name box. This will be the name of the column
in the database table.
v Enter a name in the Display Name box. This is the name you will use to
reference the attribute in the Custom Domain Builder and the Custom Query
Builder.
v Select a data type (Text, Date, Integer, Float, or TimeStamp).
v For a Text attribute, enter the maximum number of characters in the Size
box. (The Size box is not available for other data types.)
v If uniqueness is to be enforced on the column, check the Unique box.
v If the attribute being defined corresponds to a group type, select that group
type from the Group Type list.
v Click Add to add the column.
6. Use the Entity Key drop-down list to identify which column will be used as the
entity key. The Entity Key is used by the query builder when a count is selected;
the count field will be the Entity Key.
7. If additional changes are made after clicking the Add button, such as deleting a
column or changing an attribute, click Apply to save those changes.
8. Click Done when you have added all columns for the table.

Modify a Table Definition

If you modify the definition of a custom table, you may invalidate existing reports
based on queries using that table. For example, an existing query might reference
an attribute that has been deleted, or whose data type has been changed. When
applying changes to a custom table, if any queries have been built using attributes
from that table, those queries are displayed in the Query List panel. Note: You can
also use Modify to view and validate the table structures that were imported.
1. Open the Custom Table Builder.
2. Choose a custom table by clicking on the entity label and highlighting it.
3. Click Modify to open the Modify Entity panel.
4. See Defining a Table Manually for assistance.
5. When applying changes to a custom table, if any queries could be invalidated
due to modifications to attributes from that table, the queries are displayed in the
Query List panel. Use the Query List panel to choose and change queries. You
do not have to make all changes immediately as you can always come back
and use the Check for Invalid Queries option.

Check for Invalid Queries


If you modify the definition of a custom table, you may invalidate existing reports
based on queries using that table. For example, an existing query might reference



an attribute that has been deleted, or whose data type has been changed. It is a
good idea to check for invalid queries after the table modification process.
1. Open the Custom Table Builder.
2. Click Invalid Queries.
3. The queries are displayed in the Query List panel. Use the Query List panel to
choose and change queries.

Purge Data from Custom Table


Data can be purged from custom tables on the Guardium server on demand, or on
a scheduled basis.
1. Open the Custom Table Builder.
2. Choose a custom table by clicking on the table name and highlighting it
3. Click Purge to open the Custom Table Data Purge panel.
4. Click Purge All to purge now.

Note: A Run Once Now purge will look at the RESTORED_DATA table for
retention. Purge All will purge all records without checking the retention.
5. In the Configuration panel, enter the age of the data to be purged, as a number
of days, weeks or months prior to the purge operation date.
6. Click Run Once Now to run a schedule purge operation once.
7. Click Modify Schedule to open the standard Schedule Definition panel and
schedule a purge operation.
8. Click Done to close the panel.

Upload Data to a Custom Table


1. Open the Custom Table Builder.
2. Choose a custom table by clicking on the name of the table and highlighting it
3. Click Upload Data to open the Import Data panel.
4. In the SQL Statement box, enter a valid SQL statement for the table. The result
set returned by the SQL statement must have the same structure as the custom
table defined. For example, if the custom table contains all columns from the
table named my_table, enter select * from my_table. The following fields,
which are internal to Guardium, are available for use within SQL statements:
v ^FromDate?^ and ^ToDate?^ where the value is equal to the previous
upload date and the current upload date respectively.
v ^fromID^ and ^toID^ where, when used with Id Column Name consist of
the maximum value of the Id Column from the previous upload and the
maximum value of the current upload respectively.

Note: Do not include any newline characters in the SQL statement.


5. If needed, specify a column name in Id Column Name (a column from the table
defined within the datasource). This column allows tracking by ID, in
conjunction with the internal Guardium fields ^fromID^ and ^toID^. (A
hypothetical upload statement that uses these internal fields is shown after
this procedure.)
6. In the DML command after upload box, enter a DML command (an update or
delete SQL statement) with no semicolon, to be executed after uploading the
data. Note: Do not include any newline characters in the SQL statement.



7. Check the Overwrite per upload box if you wish to have data purged in the
custom table before the upload. Check the Overwrite per datasource if you
wish to have data for that datasource purged before the upload from it
8. Check the default purge button (in the Upload Custom Data screen) to be part
of the Default Custom Table Purge Job purge object which has an initial
default age of 60 days. To add a purge schedule for this table, go to initial
Custom Table Builder page, select a Custom Table and click Purge to open a
Custom Table Data Purge configuration screen.
9. Check the Use default schedule box only if uploading tables from previous
versions of Guardium. This check box only appears in a Central Manager
view and only for predefined custom tables CM Buffer Usage Monitor,
Enterprise View No Traffic, Enterprise View S-TAP Changes and S-TAP Info.
10. Click Add Datasource to open the Datasource Finder in a separate window.
Use this window to identify one or more databases from which the table data
will be uploaded. You may add multiple datasources to upload from multiple
sources. Note: For a Central Manager, in the Import Data page there is a read
only check box called Include default source. If this check box is checked,
upload data will iterate through all online registered managed units. Note:
When adding a datasource, the application can not be scheduled to run
without specifying the user name and password of the selected datasource.
11. You can click Check/Repair to compare the schema of the custom table to the
schema of the meta-data. For central management environments: In a central
management environment, the custom table definition resides on the central
manager, and the custom table may not exist on the local (managed unit)
database. Click the Check/Repair button to check if the custom table exists
locally, and create one if it does not.
12. Click Verify Datasources to test the external database connection. An
acknowledgement screen will appear.
13. Click Apply.
14. To upload data to this custom table, do one of the following:
v Click Run Once Now to upload data manually.
v Check Modify schedule to configure the schedule.
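
As referenced in step 5, the internal fields can be used to upload only new rows on
each run. The following single-line statement is only a sketch; the schema, table,
and column names are hypothetical, and it assumes that AUDIT_ID is the column
specified in Id Column Name:

select AUDIT_ID, DB_USER, ACTION_TIME from APPSCHEMA.APP_AUDIT where AUDIT_ID > ^fromID^ and AUDIT_ID <= ^toID^

The ^FromDate?^ and ^ToDate?^ fields can be used in the same way against a
timestamp column; verify the substituted formats against your source database
before scheduling the upload.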

Maintain Custom Table


When following the procedure for creating a Custom Table (detailed previously)
and selecting a predefined custom table, click Maintenance to manage the table
engine type and table index. The table engine types for custom tables/entitlements
(InnoDB and MyISAM) will appear for all predefined custom databases as the data
stored on the Guardium internal database is MYSQL-based. The two major types
of table storage engines for MySQL databases are InnoDB and MyISAM. Major
differences between these MYSQL table engine types:
v InnoDB is more complex while MyISAM is simpler.
v InnoDB is more strict in data integrity while MyISAM is looser.
v InnoDB implements row-level lock for inserting and updating while MyISAM
implements table-level lock.
v InnoDB has transactions while MyISAM does not.
v InnoDB has foreign keys and relationship constraints while MyISAM does not.

Note: Changing the engine type is disallowed (and the selection greyed out) if the
row number in the table is greater than 1M.



The other selection in the Maintain Custom Table menu is Manage Table Index.
Click Insert to open Table Index Definition. The pop-up screen suggests columns
in the table to add to indexes based on columns used on custom domains as Join
conditions. Select the columns and save. Indexes will be created (or re-created).

Schedule Custom Data Uploads

Once a custom table definition is in place, data can be uploaded to custom tables
on the Guardium appliance on a scheduled basis.

Note: New installations do not automatically start Enterprise reports. There is one
upload schedule for each custom table. The total amount of disk space reserved on
the Guardium appliance for custom tables is 4GB.
1. Open the Custom Table Builder.
2. Choose a custom table by clicking on the entity label and highlighting it.
3. Click Upload Data to open the Import Data panel.
4. Mark the Use Default Schedule check box to upload this table using the
default schedule. Otherwise, this custom table uses its own upload data
schedule.
5. Click Modify Schedule to open the standard Schedule Definition panel and
modify the schedule.
6. Click Done when you are finished.

The Enterprise reports custom uploads are like other jobs. There are two ways to
enable them:
v In the Custom Table Upload GUI. (requires license for custom upload)
v Use GuardAPI from the CLI:
grdapi add_schedule jobName=CustomTablePurgeJob_CM_SNIFFER_BUFFER_USAGE obGroup=customTableJobGr

Create a Custom Domain

After defining one or more custom tables, define a custom domain so that you can
perform query and reporting tasks using the custom data. The information
collected is organized into domains, each of which contains a different type of
information relating to a specific area of concern: data access, exceptions, policy
violations, etc. There is a separate query builder tool for each domain. Custom
domains allow for user defined domains and can define any tables of data
uploaded to the Guardium appliance. The usage for these custom entitlement
(privileges) domains are for entitlement reports which are found if logged in as a
user. To see these reports, go to the user tab, DB Entitlements.

Note: DB Entitlements Domains are optional components enabled by product key.


If these components have not been enabled, the choices listed in the Custom
Domains help topic will not appear in the Custom Domain Builder selection.
1. Open the Custom Domain Builder by navigating to one of the following:
v Comply > Custom Reporting > Custom Domain Builder
v Reports > Report Configuration Tools > Custom Domain Builder
v Setup > Tools and Views > Custom Domain Builder
2. Click Domains to open the Domain Finder panel.
3. Click New to open the Custom Tables Domain panel.
4. Enter a Domain Name. Typically, you will be including a single custom table in
the domain, so you may want to use the same name for the domain.



5. The Available Entities box lists all custom tables that have been defined (and
to which you have access). Select an entity. Optionally, click the (Filter) tool to
open the Entity Filter and enter a Like value to select only the entities you
want listed, and click Accept. This closes the filter window and returns you to
the Custom Tables Domain panel, with only those entities matching the Like
value listed in the Available Entities box. Select the entity you want to include.
6. Click the >> arrow button to move the entity selected in the Available Entities
list to the Domain Entities list.
7. To add an entity to a domain that already has one or more tables, follow the
procedure outlined. You will need to use the Join Condition to define the
relationship between the entities.
For each additional entity:
v From the Domain Entities box, select an entity. All of the attributes of that
entity will become available in the field drop-down list of the Domain
Entities box. Select the attribute from that list that will be used in the join
operation.
v From the Available Entities list, select the entity you want to add. All of the
attributes of that entity will become available in the field dropdown list of
the Available Entities box. Select the attribute from that list that will be used
in the join operation.
v Select = (the equality operator) if you want the join condition to be equal
(e.g., domainA.attributeB = domainC.attributeD). Select outer join if you
want the join condition to be an outer join using the selected attributes.
v Click Add Field Pair. Add Field Pair can be used to add more attributes
pairs of these two entities to the join condition.
v Repeat the steps for any additional join operations.

Note: When data level security is on, internal entities added to the custom
domain cannot belong to different domains with filtering policies.
8. Select the Timestamp attribute for the custom domain entity.

Note: At least one entity with a timestamp must be used, since a timestamp is
required to save a custom domain.
9. Click Apply.

Modify a Custom Domain


The goal is to create a linkage between external data and the internal data.
1. Open the Custom Domain Builder.
2. Choose the Custom Domain that you wish to modify.
3. Click Modify to open the Custom Tables Domain panel.
4. See Open Custom Domain Builder and Linking External Data to Internal
Data for assistance.
5. Click Apply to save the changes.

Remove a Custom Domain


1. Open the Custom Domain Builder.
2. Choose the Custom Domain that you wish to remove.
3. Click Domains to open the Domain Finder panel.
4. Click Delete to remove the custom domain.



Clone a Custom Domain
1. Open the Custom Domain Builder.
2. Choose the Custom Table that is in the domain you wish to clone.
3. Click Domains to open the Domain Finder panel.
4. Click Clone to open the Custom Tables Domain panel.
5. Change the Domain Name to reflect the new domain.
6. See Open Custom Domain Builder and Linking External Data to Internal
Data for assistance.
7. Click Apply to save the changes.

Link External Data to Internal Data

The goal is to create a linkage between external data and the internal data.
1. Open the Custom Domain Builder.
2. Choose the Custom Table that has your external data.
3. Click Domains to open the Domain Finder panel.
4. Click Modify to open the Custom Tables Domain panel.
5. Click the Filter icon next to the Available Entities.
6. Un-check the Custom box for the filter and optionally fill in a Like condition
to filter entity names and click Accept.
7. Select an entity from the Available Entities that you would like to link with
your external data.
8. Select the field that will be used to join data with your external data.
9. Highlight the table from the Domain Entities that contains your external data
10. Select the field that will be used to join data with the internal data.
11. Click the Add Field Pair to add the relationship.
12. Click the double arrow >> to add the internal table to the Domain Entities
list.
13. Click Apply to save the changes.

Working with Custom Queries

This section describes how to open the Custom Query Builder. See Building
Queries and Building Reports for assistance in defining a query and building a
report. Use the Custom Query Builder to build queries against data from custom
domains, which contain one or more custom tables.
1. Open the Custom Query Builder by navigating to Comply > Custom
Reporting > Custom Query Builder.
2. Select a custom domain from the list.
3. Click Search to open the Query Finder
4. To view, modify or clone an existing query, select it from the Query Name list,
or select a report using that query from the Report Title list.
5. To view all of the queries defined for a specific custom table, select that custom
table from the Main Entity list and click the Search button (only the custom
tables included in the selected custom domain will be listed).

Bidirectional Interface to and from InfoSphere® Discovery


Both IBM Guardium and InfoSphere Discovery have the capability to identify and
classify sensitive data, such as Social Security Numbers or credit card numbers.



A customer of the IBM Guardium product can use a bidirectional interface to
transfer identified sensitive data information from one product to another. Those
customers who have already invested the time in one InfoSphere product can
transfer the information to the other InfoSphere product.

Note: In IBM Guardium, the Classification process is an ongoing process that
runs periodically. In InfoSphere Discovery, Classification is part of the Discovery
process that usually runs once.

The data will be transferred via CSV files.

The summary of Export/Import procedures is as follows:


v Export from Guardium - Run the predefined report (Export Sensitive Data to
Discovery) and export as CSV file.
v Import to Guardium - Load to a custom table against CSV datasource; define
default report against this datasource.

Follow these steps:


v Export from Guardium
v Export Classification Data from IBM Guardium to InfoSphere Discovery
1. As an admin user in the Guardium application, go to Tools > Report Building >
Classifier Results Tracking > Select a Report > Export Sensitive Data to
Discovery.

Note: Add this report to the UI pane (it is not there by default).


2. Click the Customize icon on the Report Result screen and specify the search
criteria to filter the classification results data to transfer to Discovery.
3. Run the report and click Download All Records.
4. Save as CSV and import this file to Discovery according to the InfoSphere
Discovery instructions.

Import to Guardium

Import Classification Data from InfoSphere Discovery to IBM Guardium


1. Export the classification data as CSV from InfoSphere Discovery based on
InfoSphere Discovery instructions.
2.
Open the Custom Table Builder by navigating to either of the following:
v Comply > Custom Reporting > Custom Table Builder
v Reports > Report Configuration Tools > Custom Table Builder
Select ClassificationDataImport and click Upload Data.
3. In the Upload Data screen, click Add Datasource, click New, and define the CSV
file imported from Discovery as a new datasource (Database Type = Text).

Note: Alternatively, you can load the data directly from the Discovery database if
you know how to access the Discovery database and Classification results data.
4. After defining the CSV as a datasource, click Add in the Datasource list screen.
5. In the Upload data screen, click Verify Datasource and then Apply.
6. Click Run Once Now to load the data from the CSV.
7. Go to Report Builder, select the Classification Data Import report, click Add to
Pane to add it to your portal, and then navigate to the report.



8. Access the Report, click Customize to set the From/To dates and execute the
report.

The report result has the classification data imported from InfoSphere Discovery.
Double click to invoke APIs assigned to this report. The data imported from
Discovery can be used for the following:
v Add new Datasource based on the result set.
v Add/Update Sensitive Data Group.
v Add policy rules based on datasource and sensitive data details.
v Add Privacy Set.

CSV Interface signature

Use the table for examples of CSV interface signatures used in the bidirectional
transfer between IBM Guardium and InfoSphere Discovery.
Table 22. CSV Interface signature.
Interface signature Example

Type DB2

Host 9.148.99.99

Port 50001

dbName (Schema name for DB2 or Oracle, db name for others) cis_schema

Datasource URL

TableName MK_SCHED

ColumnName ID_PIN

ClassificationName SSN

RuleDescription Out-of-box algorithm of InfoSphere Discovery

HitRate 70% - not available for export in Guardium Vers. 8.2

ThresholdUsed 60% - not available for export in Guardium Vers. 8.2

Privacy Sets
A privacy set is a collection of elements that can be used to do special monitoring.

It consists of one or more object-field pairs - for example, the salary field of the
employee table, or all fields of the salary history table. All access to these elements
within a given timeframe can be reported.

Select any of the topics to work with privacy sets.



Open the Privacy Set Builder
To access a privacy set definition, your Guardium user account must be assigned a
security role that is also assigned to that privacy set definition. Privacy sets that
you cannot access will not display in a list of privacy sets.
1. Open the Identify Privacy Set panel by navigating to one of the following:
v Comply > Tools and Views > Privacy Set Builder
v Discover > Database Discovery > Privacy Set Builder
2. Do one of the following:
v Click the New button to define a new privacy set (see Create a Privacy Set).
v Select a privacy set from the list, and click one of the following buttons:
– Clone - See Clone a Privacy Set.
– Modify - Use this button to modify the definition or to run a report based
on that definition. See Modify a Privacy Set, or Run a Privacy Set Report.
– Remove - See Remove a Privacy Set.

Create a Privacy Set


1. Open the Identify Privacy Set panel by navigating to one of the following:
v Comply > Tools and Views > Privacy Set Builder
v Discover > Database Discovery > Privacy Set Builder
2. Click New to open the Privacy Set Definition panel.
3. In the Privacy Set Description box, enter a unique name for the privacy set. Do
not include apostrophe characters in the name. This is the name that will
display in the Identify Privacy Set panel.
4. From the Security Classification drop-down list, optionally select a security
classification for this privacy set.
5. In the Elements in this Privacy Set pane, for each element pair to include:
v Enter an object name in the Object box.
v Enter a field name in the Field box, or mark the Any Field in this Object box
to include all fields contained in the specified object.
v Click Add this new Object – Field Pair.
6. When all elements have been added, click Save.
7. Optionally, click the Roles button to add Roles.
8. Optionally, click the Comments button to add comments.

Modify a Privacy Set


1. Open the privacy set to be modified, in the Privacy Set Builder. See Open the
Privacy Set Builder.
2. Make any changes to the privacy set definition, as necessary. For a description
of all fields, see Create a Privacy Set.
3. Click Save.
4. Click Done when you are finished.

Clone a Privacy Set


1. Open the privacy set to be cloned, in the Privacy Set Builder. See Open the
Privacy Set Builder.
2. The cloned privacy set will be named COPY OF selected privacy set. We
suggest that you change this to something more meaningful. Do not include
apostrophe characters in the name.



3. Make any additional changes to the privacy set definition, as necessary. For a
description of all fields, see Create a Privacy Set.
4. Click Save.
5. Click Done when you are finished.

Remove a Privacy Set

If an auditing process is running, you cannot remove a privacy set. Stop the
auditing process, then follow these steps to remove the privacy set.
1. Select the privacy set to be removed, in the Identify Privacy Set panel. See
Open the Privacy Set Builder.
2. Click Delete and confirm the action.
3. Click Done.

Run a Privacy Set

This procedure describes how to run a privacy set report on demand. To schedule
a privacy set report, include it in a compliance workflow (see Compliance
Workflow Automation).
1. Open the privacy set for the report, in the Privacy Set Builder. See Open the
Privacy Set Builder.
2. Click Run.
3. In the Task Parameters, enter the starting and ending times for the task.
4. Select Report by Access Details, or Report by Application User, to specify
how the results should be displayed. The first option is the default, in which
case a count of accesses is shown for each combination of client IP, server IP,
server (name), server type, database protocol, source program name, and
database user name. If Application User is selected, the report will contain a
separate column with that name (following DB User Name) and the output will
be additionally qualified by the application user.
5. Click Run Once Now. After the report has been executed, it will be displayed
in a separate window.
6. Click Done.

Custom Alerting
Alert messages can be distributed via e-mail, SNMP, syslog, or user-written Java
classes. The last option is referred to as custom alerting.

When an alert is triggered, a custom alerting class can take any action appropriate
for the situation; for example, it might update a Web page or send a text message
to a telephone number.

To create a custom alerting class, first contact Technical Support to obtain the
necessary interface file. The following topic describes how to implement the
interface. See Use the Custom Alerting Interface, and also the following topic
which contains an example: Sample Custom Alerting Class.

Once the class has been compiled, it must be uploaded to the Guardium appliance
from the Administration Console. See Manage Custom Classes.

For guidelines on testing a custom alerting class, see the Test a Custom Alerting
Class section later in this topic.



Note: Do not take or run custom code from untrusted data sources, in order to
reduce the risk of security vulnerabilities.

Note: Do not take or run custom code from untrusted sources.

Note: Do not write a custom class that gets data from an untrusted source.

Use the Custom Alerting Interface

The custom alerting class must be in the com.guardium.custom package and must
implement the com.guardium.custom.alerts.CustomerDefinedAlertingIfc interface:
package com.guardium.custom;
public class YourClassNameHere implements CustomerDefinedAlertingIfc {
}

The interface contains the five methods described in the following tables.


Table 23. processAlert Method
Method 1
Description Process a single alert message.
Syntax public void processAlert (String message, Date timeStamp)
Parameters A String containing the message generated by the alert.

A java.util.Date for the time the alert message was created.

Table 24. getMessage Method


Method 2
Description Return the alert message
Syntax public String getMessage ()
Returns A String containing the alert message.

Table 25. getTimeStamp Method


Method 3
Description Return the timestamp associated with the alert message.
Syntax public Date getTimeStamp ()
Returns A java.util.Date for the time the alert message was created.

Table 26. setMessage Method


Method 4
Description Set the alert message.
Syntax public void setMessage (String inMessage)
Parameters A String containing the alert message.

Table 27. setTimeStamp Method


Method 5
Description Set the timestamp associated with the alert message.
Syntax public void setTimeStamp (Date inDate)
Parameters A java.util.Date for the time the alert message was created.



Sample Custom Alerting Class

The following sample program implements the five methods described in the
previous section. For the processAlert method, this program simply writes the alert
message and timestamp to the system console.
/*
 * Sample Custom Alerting Class
 */
package com.guardium.custom;

import java.text.DateFormat;
import java.util.Date;

public class HandleAlerts implements CustomerDefinedAlertingIfc {
    private String message = "";
    private Date timeStamp = null;

    public void processAlert(String message, Date timeStamp) {
        setMessage(message);
        setTimeStamp(timeStamp);
        System.out.println(getMessage() + " on " +
                DateFormat.getDateInstance().format(getTimeStamp()));
    }

    public void setMessage(String inMessage) {
        message = inMessage;
    }

    public String getMessage() {
        return message;
    }

    public void setTimeStamp(Date inDate) {
        timeStamp = inDate;
    }

    public Date getTimeStamp() {
        return timeStamp;
    }
}

Test a Custom Alerting Class

After compiling a custom alerting class, follow the procedure to test it.
1. Upload the custom class to the appliance. This is an administration function
that is performed from the Administrator Console. See Manage Custom Classes.
2. Define a correlation or real-time alert to use the custom alerting class.
Regardless of which alert type generates the alert, testing is easier if you assign
a second notification type (email, for example) against which you can compare
the custom alerting results.
3. Check the environment by doing one of the following:
v For a correlation alert:
– Check that the Anomaly Detection polling interval is suitable for testing
purposes and that Anomaly Detection has been started. If the polling
interval is too long (it may be 30 minutes or more), you may have a long
wait before the query runs.
– Check that the Alerter polling interval is suitable for testing purposes and
that the Alerter has been started.
– Check that the alert to be tested has been marked Active.
v For a real-time alert:
– Check that policy containing the rule with the custom alert action is the
installed policy.



– Verify that the inspection engine was restarted after the updated policy
was installed.
– Check that the Alerter polling interval is suitable for testing purposes and
that it has been started.
4. Take whatever action is necessary to trigger the alert (generate a number of
login failures, for example).

Flat Log Process


The Flat Log option is a process that allows the Guardium appliance to log
information without parsing it in real time.

This saves processing resources, so that a heavier traffic volume can be handled.
The parsing and amalgamation of that data into Guardium's internal database can
be done later, on either a collector or an aggregator unit.

Note: Rules on flat do not work with policy rules involving a field, an object, an
SQL verb (command), an Object/Command Group, or an Object/Field Group. In the
Flat Log process, "flat" means that a syntax tree is not built. Without a syntax tree,
the fields, objects, and SQL verbs cannot be determined.

The following actions do not work with rules on flat policies: LOG FULL
DETAILS; LOG FULL DETAILS PER SESSION; LOG FULL DETAILS VALUES;
LOG FULL DETAILS VALUES PER SESSION; LOG MASKED DETAILS.

Selecting this feature involves the Policy Builder menu in Setup > Tools and
Views and the Flat Log Process menu in Manage > Activity Monitoring.

When the Log Flat (Flat Log) checkbox option listed in the Policy Definition screen
of the Policy Builder is checked:
v Data will not be parsed in real time.
v The flat logs can be seen on a designated Flat Log List report.
v The offline process to parse the data and merge it into the standard access
domains is configured through the Administration Console.
1. Navigate to Manage > Activity Monitoring > Flat Log Process.
2. Select the activity to perform:
v Process - Merge the flat log information to the internal database.
v Archive/Aggregation/Purge - Archive or aggregate, and optionally purge, the
flat log.
v Purge Only - Purge the flat log data.
3. Click Apply to save the configuration.
4. For a Process activity, optionally do one of the following:
v Click Run Once Now to merge the flat log information to the internal
database immediately.
v Click Modify Schedule to define a schedule for this activity. You can select
the start time, restart frequency, and repeat frequency. For the Schedule by
field, you must select either Day/Week or Month.

Build Expression on Query condition
Use the Add Expression icon, next to the Value, Parameter, Attribute selections, to
enter Query Conditions including user-defined string and mathematical
expressions.

Use this feature when you need to add a condition that is based not on the entire
content of the attribute as is, but on part of the attribute, a function of the
attribute, or a function that combines more than one attribute.

An example is INSTR(:attribute, ’150.1’) = 5, which returns all instances of
Client IP that match the five characters listed. Type the value 5 in the entry box
next to the Add Expression icon, and type the INSTR(:attribute, ’150.1’)
expression in the separate Build Expression window, where you can also test the
validity of the expression. Another example is LENGTH(:attribute) >= 40, which
returns any SQL statement whose length is 40 characters or more. The expression
may (or may not) contain references to the actual attribute and can also contain
references to other attributes.
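
As a sketch of how the first example splits between the two entry points (the
value 5 and the IP fragment are illustrative):

   -- Entered in the Build Expression window (the expression part):
   INSTR(:attribute, '150.1')

   -- Entered in the value box next to the Add Expression icon:
   5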

Database Entitlement Reports


Entitlement reviews are the process of validating and ensuring that users only
have the privileges required to perform their duties.

Along with authenticating users and restricting role-based access privileges to data,
even for the most privileged database users, there is a need to periodically perform
these entitlement reviews. This is also known as database user rights attestation
reporting.

Use Guardium’s predefined database entitlement (privilege) reports, for example,
to see who has system privileges and who has granted these privileges to other
users and roles. Database entitlement reports are important for auditors tracking
changes to database access and for ensuring that security holes do not exist from
lingering accounts or ill-granted privileges.
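
To give a sense of the kind of information these reports surface, a dictionary query
along the following lines lists system-privilege holders on Oracle (an illustrative
sketch only, not the definition of any predefined report):

   -- Illustrative only: who holds which system privileges on Oracle, and
   -- whether they can grant them onward.
   SELECT grantee, privilege, admin_option
   FROM dba_sys_privs
   ORDER BY grantee, privilege;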

Custom database entitlement reports have been created to save configuration time
and facilitate the uploading and reporting of data from the following databases:
Oracle, MySQL, DB2, Sybase, Sybase IQ, Informix, MS SQL 2000/2005/2008,
Netezza®, Teradata, and PostgreSQL.

Follow these steps to use Guardium’s predefined database entitlement (privilege)
reports with up-to-date snapshots of database users and access privileges:
1. Add datasources/databases to the appliance (navigate to Comply > Custom
Reporting > Custom Domain Builder).
2. Assign datasources to entitlements (navigate to Comply > Custom Reporting >
Custom Table Builder). Select the custom table listing of your entitlement. Click
Upload Data. Assign datasources to the entitlement report at the Import Data
menu screen. When done, click Run Once Now.
3. To see entitlement reports, log on to the user portal and go to the DB
Entitlements tab.

DB Entitlement Reports use the Custom Domain feature of Guardium to create
links between the external data on the selected database and the internal data of
the predefined entitlement reports. See External Data Correlation for further
information on Custom Domain Builder/Custom Query Builder/Custom Table
Builder.

Note: DB Entitlement Reports are optional components enabled by product key. If
these components have not been enabled, the reports will not appear in the
Custom Domain Builder/Custom Domain Query/Custom Table Builder selections.

The predefined entitlement reports are listed in the Predefined Content section of
the online help.

How to use Access Maps to show paths between clients and servers
You can use access maps to easily understand all access paths between database
clients and database servers.

About this task

In an enterprise environment, it is very important to manage database access. This
requirement can stem from the need to understand and secure access to the
database due to compliance initiatives, or from the need to tune and optimize your
database environment. Because there can be many databases and a very large
number of database clients in enterprise environments, understanding all the data
access paths can be difficult.

Access maps provide a convenient way to understand data access between clients
and servers. The access map shows all access paths derived from a set of criteria
that you define.

Criteria can be based on any combination of factors, including server type or
location on the network (IPs and subnets). In addition, you can group access
patterns together, because one of the main problems in reviewing access data is its
fine granularity. By grouping similar access paths, you get a visual map that can be
meaningful in understanding your access environment. Using this visual depiction,
you can then drill down and get further information on any one access path in the
map.

Note:

To use the Access Map Builder/Viewer, your Guardium user account must be
assigned a security role that is also assigned to the Access Map Builder/Viewer
application.

Procedure
1. Open the Access Map Builder/Viewer by clicking Reports > Report
Configuration Tools > Access Map Builder/Viewer.
2. Create a new access map or select an existing access map.
v Create a new access map by entering a unique name for the new map in the
Enter a map name field.
v Select an existing map by selecting an item from the select an existing map
name menu.
The appearance of the remaining sections changes depending on your selection.

3. Specify a From date and To Date using the calendar tool. For example, a time
range that captures accesses from yesterday would be From date is NOW -1
day, To date is NOW.
4. Use the Accesses involving the addresses section to identify from which clients
and servers to map access. Leave this pane blank to map all traffic between
clients and servers.
5. Use the Access involving the database types section to identify which databases
on the specified servers are to be mapped.

6. You can aggregate paths to all servers and clients by selecting an option for
Server IP aggregation granularity and Client IP aggregation granularity. The
prefixes are based on the segments or bits in IPV4 and IPV6 addresses, and
indicate how the addresses will be grouped together. You can choose whether
you want to aggregate based on segments or bits.
For example, if you choose a CIDR prefix of 16, all addresses starting with 1.1
are grouped into one node, all addresses starting with 1.2 are grouped into
another node, and so on.
In an IPV4 address, there are 4 segments, each segment comprising 8 bits.
In an IPV6 address, there are 8 segments, each segment comprising 16 bits.
X.*.*.*: For mapping purposes, treat each server or client IP address beginning
with the same first octet as a single endpoint.
X.Y.*.*: For mapping purposes, treat each server or client IP address beginning
with the same first and second octets as a single endpoint.
X.Y.Z.*: For mapping purposes, treat each server or client IP address beginning
with the same first, second, and third octets as a single endpoint.
Full IPs: For mapping purposes, treat each complete server or client IP address
as a single endpoint. Be aware that this option aggregates multiple databases at
the same IP address.
None: (Default) No path aggregation by server IP address.
7. To group together the aggregated addresses, essentially creating a group of
groups, choose an option for Grouping of aggregated addresses. The default is
set to None. For example, if you want to group together two groups of
addresses that begin with 1.2 and 1.1, choose the option one additional
segment.

8. Choose the output type for the access map from Generated output type for
access map. To view the map online and drill down further, select Generate
Interactive Map. For a printable version of the map in PDF format, select
Generate PDF. Check the Base Access Map on aliases box to use defined
aliases in the access map.

9. Click Save & View when finished. Following a short delay, the map displays in
the output type you selected. The legend that displays on your map will vary
depending on its contents.

User Identification
Guardium provides several methods to identify application users, when the actual
database user is not apparent from the database traffic.

Some database applications are designed to use or share a small number of
database user accounts. These applications manage their users independently of
the database management system, which means that when observing database
traffic from outside of the application, it can be difficult to determine the
application user who is controlling a database connection at any given point in
time. However, when questionable database activities occur, you need to relate
specific actions to specific individuals, rather than to an account shared by groups
of individuals. In other words, you must know the application user, not just the
database user.

Guardium provides several methods to identify application users, when the actual
database user is not apparent from the database traffic:
v Identify Users via Application User Translation - For some of the most popular
commercial applications (Oracle EBS, PeopleSoft, SAP, etc.), Guardium can
identify users automatically.
v Identify Users via API - The Application Events API allows you to signal
Guardium when an application user takes or relinquishes control of a
connection, or when any other event of interest occurs. (This can be used for
more than just identifying users.)
v Identify Users via Stored Procedures - Many applications use database stored
procedures to identify the application user. In these cases, user information can
usually be extracted from the stored procedure parameters.

Within the enterprise, it may be necessary to employ several methods to identify
users, depending on the applications used.

Identify Users via Application User Translation


Some applications manage a pool of database connections. In such three-tier
architectures, the pooled connections all log in to the database using a single
functional ID, and the application manages all application users internally. When a
user session needs access to the database, it acquires a connection from the pool,
uses it, and then releases it back to the pool. When this happens, Guardium can see
how the application interacts with the database, but it cannot attribute specific
database actions to specific application users.

For some widely used applications, Guardium has built-in support for identifying
the end-user information from the application, and thus can relate database activity
to the application end-users.

To use this facility, follow these procedures:


1. Define an Application User Translation configuration for the application. See
Configure Application User Detection.
2. Populate any pre-defined groups required for that application. See Populate
Pre-Defined Application Groups.
3. Regenerate any portlets for special reports for that application, and place the
portlets on a page. See Regenerate Special Application Report Portlets.

Selective Audit Trail and Application User Translation
If the installed data access policy uses the selective audit trail feature to limit the
amount of data logged, there are two important considerations that apply to
application user translation:
v The policy will ignore all of the traffic that does not fit the application user
translation rule (for example, not from the application server).
v Only the SQL that matches the pattern for that security policy will be available
for the special application user translation reports.

Configure Application User Detection


1. Navigate to Protect > Database Intrusion Detection > Application User
Translation.
2. Click the Add button to expand the Add App User Translation panel.
3. In the Application Code box, enter a unique code to identify the application.

Note: Under Central Management, you must use different application codes
on different managed machines. This prevents aliases generated for the users
from conflicting with each other. (Under Central Management, there is one set
of aliases that is shared by all managed units.)
4. From the Application Type list, select the application type:
v BO-WI - Business Objects / Web Intelligence
v EBS - Oracle E-Business Suite
v PeopleSoft
v SAP Observed
v SAP DB
v SIEBEL Observed
v SIEBEL DB
5. In the Application Version box, enter the application version number (11, for
example).
6. From the Database Type list, select the database type. Only the types that are
available for the selected Application Type and Version will be displayed.
7. In the Server IP box, enter the IP address the application uses to connect to
the database.
8. In the Port box, enter the port number the application uses to connect to the
database.
9. In the Instance Name box, enter the instance name the application uses to
connect to the database.
10. In the DB Name box, enter the database name for the application. (Required
for some applications, not used for others.)
11. Mark the Active box to enable user translation. Nothing is translated until
after the first import of user definitions.
12. Enter a User Name for Guardium to use when accessing the database. Enter a
password for Guardium to use when accessing the database.
13. Mark the Responsibility box if you want to associate responsibilities
(Administration, for example) with user names. Or clear the Responsibility
box to just record user names. When the box is cleared, all activities
performed by a user will be grouped together, regardless of the responsibility
at the time the activity occurred.

14. If Application Type is EBS (Database Type is Oracle), then two additional
choices appear - Connect to Server IP and Connect to User Name. If
populated, the system will connect using that IP and username in order to
retrieve the Responsibility and User names.
15. Click the Add button to save the Application User Translation definition.
16. Continue to the procedures: Populate Pre-defined Application Groups and
Regenerate Special Application Report Portlets.
17. After the previous step is done, navigate to Manage > Activity Monitoring >
Inspection Engines and click Restart Inspection Engines in the Inspection
Engine Configuration panel.
18. After performing the tasks specified in the two procedures in step 16, return
to Application User Translation and click Run Once Now to import the user
definitions for this application (and any others defined).
19. Later, after verifying that the data import operation worked successfully (see
step 20), return to this panel and click the Modify Schedule button to define
an import operation to run on a regular basis. You should schedule the
importing of user definition data at whatever interval is suitable for your
environment. The maximum time that a new application user name will not
be available is the time between executions of the import operation.
20. The data import of Application User Translation can be confirmed by looking
at predefined reports, for example, SAP Application Access. Navigate to Reports >
Report Configuration Tools > Report Builder and choose the report SAP
Application Access. Regenerate this report, add it to a pane, and then set a
large date range (for example, go back a year for data).

Note: The first time Run Once Now is clicked after installing the Application User
Translation setting(s), it retrieves the last update date for the tables it looks at.
After that, it imports only new data. Otherwise, decades' worth of data could be
needlessly imported, filling many tables and databases.

Populate Pre-defined Application Groups

When Application User Translation has been configured, you must populate at
least two pre-defined groups with information that will be specific to your
environment. This table identifies the groups that must be populated for each
application type.

Application     Pre-Defined Group       Group Type
EBS             EBS App Servers         Client IP
EBS             EBS DB Servers          Server IP
PeopleSoft      PSFT App Servers        Client IP
PeopleSoft      PSFT DB Servers         Server IP
PeopleSoft      PeopleSoft Objects      Objects
Siebel          SIEBEL App Servers      Client IP
Siebel          SIEBEL DB Servers       Server IP
SAP             SAP App Servers         Client IP
SAP             SAP DB Servers          Server IP
SAP             SAP - PCI               Objects

Regenerate Special Application Report Portlets

For some application types, one or more special report portlets must be
regenerated. For example, there are two pre-defined EBS reports, and two
pre-defined PeopleSoft reports. These reports cannot be modified. After populating
the pre-defined application groups, follow the procedure to regenerate the
predefined application portlets and place them on a page.

The examples in this section are for the EBS portlets, but the procedure is identical
for other application types.
1. Do one of the following to open the Report Finder: Users with the admin role:
Select Tools > Report Building > Report Builder. All others: Select
Monitor/Audit > Build Reports > Report Builder.
2. Click Search to open the Report Search Results panel.
3. Select a report portlet for the application type (EBS Application Access, for
example), and click Regenerate Portlet. You will be informed that the portlet
has been regenerated.
4. Repeat the previous step for each application report (EBS Processes Database
Access, or the PSFT Processes Database Access report, for example). Now add
a tab to your layout, and include the two regenerated portlets on that tab.
5. Click Customize to open the Customize pane.
6. Click Add Pane to define a new tab.
7. Enter a name for the tab - EBS Reports, for example - and click Apply. The
new tab appears as the last tab in the list.
8. Click on the new tab name to edit that pane.
9. Click Add Portlet, and click Next until you locate the reports you want (the
EBS reports, for example), and mark the checkbox next to each desired report.
10. Click Apply, and then click Save and Apply and then click Save to save the
new pane layout. The new tab will appear at the end of the first row of tabs.
11. Click on the new tab name to open the tab.
12. Click Customize to set the runtime parameters (date range and Show Aliases,
for example).

Unwilling to give DB_USER PASSWORD for EBS application

In some cases customers do not want to use the Oracle EBS DB_USER for
translating EBS traffic. Under this scenario, when setting up Oracle EBS and
wanting to translate traffic with Application User Translation, there are two choices
to make it work:
v Supply the username and password that EBS uses to talk to Oracle (often
APPS/$passwd).
v If the customer does not want to supply or enter the password for the DB_USER
that EBS uses to access Oracle, it is still possible to get Application User
Translation; however, the process is more complicated.

1. Make/choose a login for Oracle that will permit access to the database for
gathering aliases/users/responsibilities. That user needs access to the table
[APPLSYS.]FND_USER and the view FND_RESPONSIBILITY_VL which
combines two tables: APPLSYS.FND_RESPONSIBILITY and
APPLSYS.FND_RESPONSIBILITY_TL.
( CREATE VIEW FND_RESPONSIBILITY_VL AS SELECT /* $HEADER$ */ B.ROWID ROW_ID , B.WEB_HOST_NAME
2. Run the following SQL statements directly from the Guardium system:
select RESPONSIBILITY_ID, RESPONSIBILITY_NAME from FND_RESPONSIBILITY_VL order by RESPONSIBILITY_ID;
SELECT USER_ID, USER_NAME from FND_USER ORDER BY USER_ID;
Once the user is set up so that those two statements successfully run, two
different Application User Translation entries are needed. Both need to have
the same server IP, port, and instance name (and of course EBS and Oracle
chosen for APP type and APP server type).
It does not matter if the Application Code is identical or not. One entry needs
the username that EBS uses to connect to the database (usually APPS), but you
can put in an incorrect (dummy) password. The second entry needs the
username and password that has been created to access those tables.
3. Once both are entered with Active and Responsibility selected, click Run Once
Now, and start or restart EBS (assuming there is an Inspection Engine (S-TAP
or net) looking at the traffic). The collection of data and the assignment of
APPS user names to that data for the EBS traffic will now take place.

Oracle privileges needed for the Oracle EBS App User Translation:
1. Grant select on the following tables to Custom DB User:
APPLSYS.FND_USER
APPLSYS.FND_RESPONSIBILITY
APPLSYS.FND_RESPONSIBILITY_TL
2. Create a private synonym FND_USER on APPLSYS.FND_USER for Custom DB
User.
3. Create a view called FND_RESPONSIBILITY_VL for Custom DB User. You can
find this view under the APPS user to use as your template.
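
As a sketch, the setup for a hypothetical custom DB user (here called GUARD_READER;
substitute your own user name, and copy the view definition from the APPS schema
as noted in step 3) might look like this:

   -- Step 1: grant read access on the EBS tables to the custom user.
   GRANT SELECT ON APPLSYS.FND_USER TO GUARD_READER;
   GRANT SELECT ON APPLSYS.FND_RESPONSIBILITY TO GUARD_READER;
   GRANT SELECT ON APPLSYS.FND_RESPONSIBILITY_TL TO GUARD_READER;

   -- Step 2: connected as GUARD_READER, create a private synonym for FND_USER.
   CREATE SYNONYM FND_USER FOR APPLSYS.FND_USER;

   -- Step 3: create a FND_RESPONSIBILITY_VL view for GUARD_READER, using the
   -- view definition found under the APPS user as the template (definition
   -- omitted here).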

How to Validate SAP Stack for Application User Translation

When supporting IBM Guardium SAP Application User Translation, there is a
difference between the ABAP Stack and Java Stack.

Note:

ABAP Stack and Java Stack have different kernel specifications.

ABAP Stack and Java Stack systems will have different tables.

ABAP Stack

Traditional ECC (Enterprise Core Components) SAP systems are written in ABAP
code and are predominantly accessed via the SAP GUI, although web access is
possible.

SAP ABAP systems have direct (read/write/update) access to traditional SAP
databases. The databases are very large and contain all the sensitive data. This is
where IBM Guardium will be best utilized.

The following screen will appear when you enter the SAP GUI (ABAP Stack):

Figure 1. SAP GUI (ABAP Stack)

To validate the ABAP Stack SAP Kernel module for Application User Translation,
follow these steps:
1. Log in to SAP.
2. Go to System > Status

Figure 2. System Status (ABAP Stack)
3. Click Other Kernel Info on the System Status screen.

Figure 3. System Kernel Information (ABAP Stack)

In this example, the kernel is 700.

SAP with a DB2 backend is also available for SAP kernel 640, but the user needs to
set DB6_DBSL_ACCOUNTING=1 (in kernel 700 and up, this
DB6_DBSL_ACCOUNTING value is 1 by default). SAP for Oracle backend requires
a kernel of 710 or higher.

This data is placed in the application user field and the application event string.

Java Stack

SAP Portal systems are written in Java code and are the front-end web applications
that use pre-canned queries to display SAP-related web pages.

Portal systems can only be accessed via a web browser. Portal system databases are
much smaller, with only a few tablespaces.

The following screen will appear when you enter SAP Portal System (Java Stack).

Figure 4. SAP Portal System (Java Stack)

To validate the Java Stack SAP Kernel module for Application User Translation,
follow these steps:
1. Click System Information.

Figure 5. System TCJ (Java Stack)

In this example, the SAP Kernel version is 7.00.

SAP for either DB2 or Oracle requires a kernel of 7.02 or higher.

SAP sets similar client properties in the Java stack as it did for ABAP Stack.

Identify Users via API


For some applications that manage users internally, the application user cannot be
identified from the traffic. When this happens, you can use the Guardium
Application Events API.

The Application Events API provides simple no-op calls that can be issued from
within the application to signal Guardium when a user acquires or releases a
connection, or when any other event of interest occurs.

Note: If your Guardium security policy has Selective Audit Trail enabled, the
Application Events API commands that are used to set and clear the application
user and/or application events will be ignored by default, and the application user
names and/or application events will not be logged. To log these items so that
they will be available for reports or exceptions, include a policy rule to identify the
appropriate commands, specifying the Audit Only rule action.

Set the Application User via GuardAppUser

Use this call to indicate that a new application user has taken control of the
connection. The supplied application user name will be available in the
Application User attribute of the Access Period entity. For this session, from this
point on, Guardium will attribute all activity on the connection to this application
user, until Guardium receives either another GuardAppUser call or a
GuardAppUserReleased call, which clears the application user name.

To signal when other events occur (you can define event types as needed), use the
GuardAppEvent call, described in the following section.

Syntax: SELECT ‘GuardAppUser:user_name’ FROM location

user_name is a string containing the application user name. This string will be
available as the Application User attribute value in the Access Period entity.

FROM location is used only for Oracle, DB2, or Informix. (Omit for other database
types.) It must be entered exactly as follows:
v Oracle: FROM DUAL
v DB2: FROM SYSIBM.SYSDUMMY1
v Informix: FROM SYSTABLES
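
For example, a call that attributes subsequent activity on the connection to an
application user named jsmith (the user name is illustrative) might look like the
following:

   -- Oracle:
   SELECT 'GuardAppUser:jsmith' FROM DUAL;
   -- DB2:
   SELECT 'GuardAppUser:jsmith' FROM SYSIBM.SYSDUMMY1;
   -- Other database types (no FROM clause):
   SELECT 'GuardAppUser:jsmith';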

Clear the Application User via GuardAppUserReleased


Use the GuardAppUserReleased call to signal that the current user has
relinquished control of the connection. Guardium will clear the application user
name, which will remain empty for the connection until it receives another
GuardAppUser call.

Syntax: SELECT ‘GuardAppUserReleased’ FROM location

FROM location is used only for Oracle, DB2, or Informix. (Omit for other database
types.) It must be entered exactly as follows:
v Oracle: FROM DUAL
v DB2: FROM SYSIBM.SYSDUMMY1
v Informix: FROM SYSTABLES
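
For example, on Oracle the release call is simply:

   SELECT 'GuardAppUserReleased' FROM DUAL;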

Set an Application Event via GuardAppEvent

This call provides a more generic method of signaling the occurrence of application
events. You can define your own event types and provide text, numeric, or date
values to be stored with the event, both when the event starts and when it ends.
You can use this call together with the GuardAppUser call. Guardium will
attribute all activity on the connection to this application event, until it receives
either another GuardAppEvent:Start command or a GuardAppEvent:Released
command.

Syntax:

SELECT ‘GuardAppEvent:Start|Released’,
  ‘GuardAppEventType:type’,
  ‘GuardAppEventUserName:name’,
  ‘GuardAppEventStrValue:string’,
  ‘GuardAppEventNumValue:number’,
  ‘GuardAppEventDateValue:date’
FROM location

Start | Released - Use the keyword Start to indicate that the event is taking control
of the connection or Released to indicate that the event has relinquished control of
the connection.

type identifies the event type. It can be any string value, for example: Login,
Logout, Credit, Debit, etc. In the Application Events entity, this value is stored in
the Event Type attribute for a Start call, or the Event Release Type attribute for a
Released call.

name is a user name value to be set for this event. In the Application Events entity,
this value is stored in the Event User Name attribute for a Start call, or the Event
Release User Name attribute for a Released call.

string is any string value to be set for this event. For example, for a Login event
you might provide an account name. In the Application Events entity, this value is
stored in the Event Value Str attribute for a Start call, or the Event Release Value
Str attribute for a Released call.

number is any numeric value to be set for this event. For example, for a Credit
event you might supply the transaction amount. In the Application Events entity,
this value is stored in the Event Value Num attribute for a Start call, or the Event
Release Value Num attribute for a Released call.

date is a user-supplied date and optional time for this event. It must be in the
format: yyyy-mm-dd hh:mm:ss, where the time portion (hh:mm:ss) is optional. It

may be the current date and time or it may be taken from a transaction being
tracked. In the Application Events entity, this value is stored in the Event Date
attribute for a Start call, or the Event Release Date attribute for a Released call.

FROM location is used only for Oracle, DB2, or Informix. (Omit for other database
types.) See the following example. However, any dummy table name is acceptable
for the dummy SQL.
v Oracle: FROM DUAL
v DB2: FROM SYSIBM.SYSDUMMY1
v Informix: FROM SYSTABLES
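
As an illustrative sketch (Oracle shown; the event type and values are
hypothetical), a pair of calls marking the start and end of a Credit event might
look like this:

   -- Signal that a 'Credit' event has taken control of the connection:
   SELECT 'GuardAppEvent:Start',
          'GuardAppEventType:Credit',
          'GuardAppEventUserName:jsmith',
          'GuardAppEventStrValue:ACCT-1001',
          'GuardAppEventNumValue:250.00',
          'GuardAppEventDateValue:2015-06-01 14:30:00'
   FROM DUAL;

   -- Signal that the event has relinquished control of the connection:
   SELECT 'GuardAppEvent:Released', 'GuardAppEventType:Credit' FROM DUAL;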

The GuardAppEvent call populates an Application Events entity (see Application
Events Entity in the Entities and Attributes section of the Appendices). When
creating Guardium queries and reports, you can access the Application Events
entity from either the Access Tracking domain or the Policy Violations domain.

If any Application Events entity attributes have not been set using the
GuardAppEvent call, those values will be empty.

Regarding the two date attributes:


v Event Date is set using the GuardAppEvent call, or from a custom identification
procedure as described in the following section.
v Timestamp is the time that Guardium stores the instance of the Application
Event entity.

Identify Users via Stored Procedures


In many existing applications, all of the information needed to identify an
application user can be obtained from existing database traffic, from stored
procedure calls. Once Guardium knows what calls to watch for, and which
parameters contain the user name or other information of interest, users can be
identified automatically.

In the simplest case, an application might have a single stored procedure that sets
a number of property values, one of which is the user name. A call to set the user
name might look like this:
set_application_property(’user_name’, ’JohnDoe’);

In a custom procedure mapping (described later), you can tell Guardium to:
v Watch for a stored procedure named set_application_property, with a first
parameter value of user_name.
v Set the application user to the value of the second parameter in the call
(JohnDoe, in the example).

There may be multiple stored procedures for an application: one to start an
application user session, one to end a session, and others to signal key events
particular to that application. Guardium’s custom identification procedure
mechanism can be used to track any application events you want to monitor.

Since each of your applications may have a different way of identifying users, you
may have to define separate custom identification procedure mappings for each
application. To do that, follow the procedure outlined.

Define a Custom Identification Procedure Mapping
1. Navigate to Protect > Database Intrusion Detection > Custom ID Procedures.
2. To view an existing mapping, hold the mouse pointer over the More Info
column icon for the row containing the map you want to view.
3. To add a mapping, click on the Add Mapping pane title to expand that pane.
4. In the Custom Map Name box, enter the name to be used for this mapping.
5. In the Procedure Name box, enter the name of the database procedure that
will supply information.
6. Select Set or Clear from the Action list to indicate whether the procedure call
will set or clear application values. The Event Type Position field has a special
use when the Clear action is selected.
7. If application information can be obtained from an existing stored procedure
call, but only under one or two conditions:
v Use a Condition Location box to specify which stored procedure call
parameter is to be tested
v Use the corresponding Condition Value box to specify the value that must
be matched to set application information from one or more of the other
parameters.
v For example, assume that a stored procedure named set_context is used by
an application to set a number of values, one of which is the user name.
The procedure is passed three parameters: an application name, a property
name, and a value. Three typical calls are illustrated:
– set_context('publishing_application', 'role_name', 'manager');
– set_context('publishing_application', 'user_name', 'jsmith');
– set_context('publishing_application', 'company', 'guardium');
v In the examples, the second statement illustrates the format of the call we
are interested in. The second parameter (the property name) is the
parameter that needs to be tested, so 2 would be entered in the Condition1
Location box, and user_name in the Condition1 Value box.
v If a second format of the call also sets the user name, then the Condition2
Location and Value boxes can be used. For example, assume that the
following format of the procedure call is sometimes used to set a user
name:
– set_context('admin_application', 'admin_name', 'wjones');
v To use this procedure, to set the application user name, enter 2 in the
Condition2 Location box, and admin_name in the Condition2 Value box.

Note: If two conditions are used, the user name or any other information
being extracted must be in the same parameter position for both types of
calls.
8. For a Clear action:
v Use only the Event Type Position and Application Username Position fields.
v Do one of the following:
– To clear the application event: set the Event Type Position to 1, and set
the Application Username Position to 0.
– To clear the application user: set the Event Type Position to 0, and set the
Application Username Position to 1.
9. For a Set action, use the Parameter Position pane to indicate which stored
procedure parameters map to which Guardium application event attributes.
The first procedure parameter is numbered 1. Use 0 (zero – the default) for all
attributes that are not set by the call.
v Application Username Position – Enter the parameter position of the
application user name you want associated with database activity from this
point forward (until reset, as described previously).
v Event String Value Position – Enter the parameter position of a string value
for the event (for a login, this might be a user or account name).
v Event Number Value Position – Enter the parameter position of a numeric
value for the event (for a transaction, this might be a dollar amount).
v Event Type Position – Enter the parameter position of a name for the event
type (Login, Logout, Credit Request, etc.).
v Event Date Position – Enter the parameter position of a date/time value for
the event. The format must be yyyy-mm-dd hh:mm:ss. The time portion
(hh:mm:ss) is optional, and if omitted will be set to 00:00:00.
10. In the Server Information pane:
v Select the database server type from the Server Type list.
v Enter the database user name in the DB Username box.
v Optional: Enter a database name in the Database Name box. If omitted, all
databases will be monitored.
v Optional: Identify one or more servers. If no server is specified, all servers
will be monitored. To select a specific server only, enter the server IP
address and network mask in the Server IP and Server Net Mask boxes; or,
to select a group of servers, select a server group from the Server IP Group
list or click the Groups button to define a new group of servers.
11. When you are done, click the Add button to add the mapping to the list.

Value Change Auditing


The Value Change Auditing feature tracks changes to values in database tables.

For each table in which changes are to be tracked, you select which SQL
value-change commands to monitor (insert, update, delete). Each time a
value-change command is run against a monitored table, before and after values
are captured. On a scheduled basis, the change activity is uploaded to a Guardium
system, where all the reporting and alerting functions can be used. The basic steps
for using the Value Change Auditing feature are:
1. From the Administration Console, create an audit database on the database
server. This database is where value-change data is stored until it is uploaded
to the Guardium system. See “Create an Audit Database” on page 266.
2. Identify the tables to be monitored, and for each table select the value-change
commands (insert, delete, update) for which changes will be recorded. To
record the changes, a trigger is created for each table to be monitored, and that
trigger writes the value-change data to the audit database. To allow updates to
the audit database (by the trigger), all users with update privileges for the
monitored table are given appropriate privileges for the audit database. This
has implications for users who are given update privileges for that table later
(see step 4). For detailed instructions on how to define the monitoring
activities, see Define Monitoring Activities.
3. Schedule uploads to transfer value-change data from the database server to the
Guardium system. See Schedule Value-Change Uploads.
4. Maintain audit database access privileges. After a trigger has been created, a
new user may be given access to the table on which the trigger is based. If that
user issues a monitored value-change command, it will fail because that user
will not have appropriate privileges to update the audit database. See Maintain
Privileged Users Lists.

5. Monitor change activity from the administrator console, or use the Value
Change Tracking query domain to create custom reports on the Guardium
appliance. See Value-Change Reporting.

Oracle Streams Alternative for Before and After Values Tracking

In addition to the native facilities within the Guardium product used for showing
before and after values of DML, getting before/after values for Oracle can be
accomplished by using Oracle Streams and Guardium’s External Data Correlation
(upload) facility. Streams are used to create change records for any change that
affects a sensitive column, and the upload job is used to bring the data into the
Guardium repository, where you can issue reports, combine the data with other
details, and add these reports into the sign-off process.

Note: Oracle Streams requires that the Oracle database that is being monitored is
in ARCHIVELOG mode.
1. Define a datasource. Click Value Change Auditing Builder and complete the
blocks under Datasource Definition: Name; Database Type (Oracle); Share
Datasource (check mark); Save Password (check mark); Login Name (use sys);
Password; Connection Property field with value SysLoginRole=SYSDBA; Host
Name/IP; Port (1521 for Oracle); Service Name. Get the Host Name/IP, Port, and
Service Name from the Oracle database.
2. Test Connection. If successful, click Save and click Done.
3. Configure the audit database. Click Value Change Auditing Builder. Attach
the datasource that you built in step 1, by clicking Add Datasource.
4. Click Choose Tables to Monitor. A pop-up screen appears where a choice
between two monitoring methods is presented. Choose Stream. And then click
Apply. Go to the next section.

Define Monitoring Activities

After you define an audit database, use the Value Change Auditing Builder to
identify the tables to be monitored, and to select the types of changes (inserts,
updates, deletes) to be recorded.
1. Open the Value Change Auditing Builder by navigating to Harden >
Configure Change Control (CAS Application) > Value Change Auditing
Builder.
2. Click Add Datasource to open the Datasource Finder panel.
3. Select a datasource on which an audit database is defined. If an audit
database is not yet defined, see “Create an Audit Database” on page 266.
4. Click Add to close the Finder and add the selected datasource to the Value
Change Audit panel.
5. Optionally enter a Schema Owner and/or Object Name to limit the number of
tables that are displayed when choosing the tables to be monitored. You can
use the % (percent) wildcard character. For example, to limit the display to all
tables that begin with the letter a, enter a% in the Object Name box.
6. Click Choose Tables To Monitor to open the Define Data Audit panel.
7. Mark the Select box for each table to be monitored.

Note: You cannot define a trigger for a table that contains one or more
user-defined data types.

The Trigger Defined column indicates if a trigger is already defined for the
table. The Audit Insert, Audit Delete, and Audit Update check boxes indicate
if the trigger will record changes for that command.
If the Trigger Defined column is not marked, marking the Select checkbox for
a table automatically marks all three Audit checkboxes (Audit Insert,
Audit Delete, and Audit Update). If you do not want to monitor one or two of
those commands, clear the appropriate checkbox.
8. Click Add Selections to define triggers for the selected tables. You will be
informed of the action taken.
9. Click OK to close the message box and re-display the Define Data Audit
panel. The selected tables remain selected, and the Trigger Defined column is
now marked for those tables. Note: The instant a trigger is defined for a table,
it is active and recording changes for the selected commands in the audit
database. Unlike most other Guardium configurations, which are defined on the
Guardium database and then activated or deactivated as a separate task, the
configuration of triggers is done entirely on the database server.
10. To define additional actions, repeat these steps, or remove triggers by marking
the appropriate Select check boxes and clicking Remove Selections.
11. Click Done after you complete all changes.

Note: The Cancel button does not back out any changes that you have made
to triggers using the Add or Remove Selections buttons.

After Defining Monitoring Activities


If you have added value-change monitoring activities to a datasource for the first
time, you should schedule uploads for this datasource, because the audit database
will be emptied only after the data recorded there has been uploaded to the
Guardium system. See the next section.

Schedule Value-Change Uploads


1. Open the Value Change Auditing Builder by navigating to Harden > Configure
Change Control (CAS Application) > Value Change Auditing Builder.
2. Select the audit datasource for which you want to schedule uploads, and click
Schedule Upload to open the general-purpose task scheduler. If you need help
defining a schedule, see Scheduling in the Common Tools book.

Maintain Privileged Users Lists


When the value-change feature adds a trigger for a database table, all current users
with permission to update that table are granted permission to update the audit
database table. This is required because the trigger updates the audit database with
new and/or old values. If a new user is granted update permission for a
monitored table, when that user attempts an update, the update is not allowed
because that user does not also have permission to update the audit database.
When this happens, you must update the audit database privileged users list by
using the Value Change Auditing Builder.

To update the audit database privileged users list, the database user ID that is used
to log in to the monitored database must be the creator of any role to which new
users have been added. Otherwise, the members of that role will not be available.
1. Open the Value Change Auditing Builder by navigating to Harden > Configure
Change Control (CAS Application) > Value Change Auditing Builder.

2. Click Add Datasource to open the Datasource Finder panel, select the
appropriate Datasource from the list, and click Add.
3. Click Update Audit Tables Privileged Users. The permissions for all users who
can run triggers to update the audit database tables are updated, and you are
informed when the operation completes.
4. Click OK to close the message box.

Value-Change Reporting

You can view value-change data from the default Values Changed report, or you
can create custom reports using the Value Change Tracking domain. By default, the
Value Change Tracking domain is restricted to users having the admin role.

Values Changed Default Report

There is one default values-changed report available by navigating to Reports >
Real-Time Guardium Operational Reports > Values Changed.

The main entity for the Values Changed report is the Changed Columns entity. In
most cases, there is a separate row of the report for every column change that is
detected for every audit action (Insert, Update, Delete). However, for MS SQL
Server and Sybase, if the monitored table does not have a primary key, there are
two rows per change, with the old and new values displayed on separate rows.

Create an Audit Database


Create an audit database and perform value-change monitoring activities.

To create an audit database and perform value-change monitoring activities, you
must have a user account with appropriate permissions to:
v Create a database on the server
v Create a database user account on the server

v Log in to each database to be monitored
v Create tables and triggers on each database to be monitored

Before Defining an Audit Database under Informix or Sybase


For Informix and Sybase (except for Sybase IQ, which does not support triggers)
and depending on the operating system for the database server, you must perform
one of the following procedures before defining the audit database.

Informix Setup - Locate or Create a New Database Space


This topic applies for Informix (9.4 or later). Under Informix, we strongly
recommend that you avoid using the default root database space, root_dbs. You
cannot drop this space or reduce its size.

Use any other database space that has been defined, or create a new database
space by performing one of the following procedures (depending on the operating
system).

Informix - Create an Informix Database Space on a Windows
Server

This procedure is performed outside of the Guardium GUI, and applies for
Informix version 9.4 or later.
1. Verify that the database server is online and listening.
2. Create a zero-byte file named guardium_dbs_dat.000 in the
C:\IFMXDATA\server-name directory (server-name is the name of the Informix
server or the service name). You can do this by saving an empty text file and
then renaming the file, replacing the txt suffix with 000.
3. Make the following directory the working directory:
C:\Program Files\Informix\bin
4. Execute the following command:
C:\Program Files\Informix\bin>onspaces -c -d guardium_dbs -p C:\IFMXDATA\server-name\guardium_d
If the file is created successfully, you see the following messages:
Verifying physical disk space, please wait ...
Space successfully added.
** WARNING ** A level 0 archive of Root DBSpace will need to be done.
5. Restart the Informix server, and use a suitable tool (Aqua Data Studio remote
client, for example) to connect and verify that the space named guardium_dbs
has been created. Your first connection attempt may fail with a message about
the server running in Quiescent Mode. If this happens, attempt to re-connect
at least two more times, and it should work.
6. To verify that the guardium_dbs database space has been created, use Aqua
Data Studio, and look under Storage.

Informix - Create an Informix Database Space on a Unix Server

This procedure is performed outside of the Guardium GUI, and applies for
Informix version 9.4 or later.
1. From a command-line window, enter the following commands:
su - informix
cd demo/server
vi guardium_dbs
2. Without adding any text, save the empty guardium_dbs file.
3. Enter the following commands:
chmod 660 guardium_dbs
cd ../../bin
onspaces -c -d guardium_dbs -p /home/informix10/demo/server/guardium_dbs -o 0 -s 100000

Sybase Setup - Initialize Disks

This topic applies for Sybase servers only (except for Sybase IQ, which does not
support triggers). Depending on the operating system of the database server,
perform one of the following procedures to initialize disks.

Sybase - Initialize Disks on a Windows Sybase Server


1. Connect to the server on which you want to create the Guardium audit
database: guardium_audit.
2. Create a folder named guardium_audit, under the c: drive.
3. Connect to the database.
4. Execute the following commands:

use master
go
disk init name="guardium_auditdev",
physname="c:/guardium_audit/guardium_auditdev", size=8192
go
disk init name="guardium_auditlog",
physname="c:/guardium_audit/guardium_auditlog", size=8192
go

Sybase - Initialize Disks on a Unix Sybase Server


1. Connect to the database.
2. Execute the following statements:
use master
go
disk init name = ’guardium_auditdev’, physname
=’/home/sybase/data/guardium_auditdev’ , size = 8192
go
disk init name = ’guardium_auditlog’, physname
=’/home/sybase/data/guardium_auditlog’ , size = 8192
go

Create the Database

For an Informix or Sybase database, be sure to perform the preliminary tasks
before performing this procedure.
1. Open the Value Change Database Builder by navigating to Harden >
Configuration Change Control (CAS Application) > Value Change Audit
Database Creation.
2. Click Add Datasource to open the Datasource Finder panel. Datasources that
have been defined from the Value Change Auditing application are labeled
Monitor Values. Datasources that have been defined for other applications will
have different labels (Listener, or DBanalyzer, for example), and those
datasources may not have the appropriate set of database access permissions
for Value Change Auditing application, which requires a user account having
database administrator authority. If a suitable datasource is not available, click
the New button to define a new one for the database to be monitored (see
Datasources in the Common Tools book for detailed information on defining
datasources).

Note: If a GUARDIUM_AUDIT database is already created on this dbserver,
another one cannot be created. The GUARDIUM_AUDIT database/user must
be dropped before a new one can be created.
3. Select a datasource that uses an administrator account, and click Add, to add it
to the Datasources pane on the Create Value Change Audit Database panel.
4. Enter an Audit Datasource Name. This is the name that will be used to identify
the datasource later, to define monitoring tasks and to upload data. Do not
confuse this name with the name of the Datasource from the Datasources panel.
5. Optionally mark the Share Datasource box to share this datasource with other
applications (Classification, for example). The default is not to share the
datasource. This type of datasource requires administrator privileges, so you
may not want to share this datasource with other applications.

Note: To share a datasource with other users, assign security roles to that
datasource.
6. For any database type other than DB2, there will be additional fields in the
Audit Configuration pane. All fields are required. Referring to the following
table, enter the appropriate values.

Table 28. Additional Audit Configuration Fields

Informix
  Database Space: Enter the name of an existing database space to use, or enter
  the name of the database space you created for the audit database (guardium_dbs
  in the example shown previously). If you leave this blank, the default root_dbs
  space will be used, which we do not recommend.

MS SQL Server
  Audit User Name: Enter a new database user name to use when accessing the
  audit database. This user will be given the sysadmin role.
  Audit Password: Enter a password.
  Compatibility Mode: Choices are Default or MSSQL 2000. This tells the processor
  what compatibility mode to use when monitoring a table. This choice appears in
  the Value Change Audit Database Creation menu screen only when the datasource
  is MS SQL Server. Use the GuardAPI command grdapi list_compatibility_modes
  to show the compatibility modes for MS SQL Server.

Oracle
  Audit Password: Enter the password for the system user, which will be the
  database account used to access the audit database.
  Default Tablespace: Enter a name for the default tablespace.
  Temp Tablespace: Enter a name for the temporary tablespace.

Sybase
  Audit User Name: Enter a new database user name to use when accessing the
  audit database. This user will be granted the sa_role.
  Audit Password: Enter a password.
  Data Device Name: Enter the same data device name used when initializing the
  disk for the audit database (guardium_auditdev in the disk initialization
  procedure described earlier).
  Log Device Name: Enter the same log device name used when initializing the
  disk for the audit database (guardium_auditlog in the disk initialization
  procedure described earlier).

7. Click Create Audit Database to create the audit database.


8. Use the selection Value Change Audit Database Update and Upload on the
Config and Control tab to select the actions in this table.

Action            Description
Delete            Click to remove the datasource from the Datasources pane.
Modify            Click to edit this datasource definition in the Datasource
                  Definition panel.
Schedule Upload   Click to schedule the upload of this audit datasource.

After Defining the Audit Database

After an audit database has been created on a database server, it will be available
for use by the Value Change Auditing Builder, which is the tool that is used to
build triggers. See “Value Change Auditing” on page 263.

Monitored Table Access
This feature adds a “Last Accessed” field to relevant tables, for interaction with
Optim™ Designer data lifecycle products.

This feature is also called “Table Last Referenced”.

This feature uses Guardium’s External Feed that is preconfigured with the data (a
predefined External Feed map), and an audit process to run it.

Follow these Steps


1. Create the target (Optim) tables on any Informix database. Use the script at the
end of this topic.
2. Open the Audit Process Builder by navigating to Comply > Tools and Views >
Audit Process Builder, then edit the process named Table Last Referenced. Add
a datasource to the External Feed task (the Informix datasource that contains
the tables) and set up the run-time parameter for the servers group. Everything
else is predefined and does not need to be changed.
3. Run (or schedule to run periodically) the audit process.

Note: The resulting table shows only the last run. The receiver count is the
count of receivers, not the count of run results since the last run.

IBM Guardium can detect external references to database objects, specifically
tables. This capability, in conjunction with Optim Designer, can be used to manage
the retirement of inactive tables or archiving with certain retention policies.

Guardium collects and maintains a list of tables with the date of last reference. The
list is built using policies in Guardium that dictate the interval of last reference and
the frequency to be used for updating the list content. The information captured by
Guardium is referred to as the “last reference” list and supplies the following
information: What tables are no longer referenced? What table access trends exist
for retirement candidates?

Having the ability to accurately plan for the retirement of applications will help to:
v Plan for hardware retirement or redeployment
v Reduce cost of ownership by moving or retiring those resources supporting the
applications (for example, hardware, DBA(s), Application owners, IT operations
such as backups).
v Know what tables are rarely or never accessed

This functionality of IBM Guardium has been added directly to the Optim
Designer user interface.

The information supplied by Guardium to Optim consists of the following


attributes per table entry:
Table 29. Monitored Table Access List Entry

Field              Comment

DataSourceDesc     Description
Server IP
Host Name
DB Vendor          For example, Oracle, DB2
User Name          For example, for Oracle it mostly defines the schema
Database Name
Schema
Table
Date               Date of last access

Script to create Informix tables in the Optim product

Last_referenced_datasource

create table last_referenced_datasource (
  id serial(1) not null,
  datasource_desc varchar(100),
  server_ip char(39),
  host_name varchar(200),
  db_vendor char(40),
  primary key (id) constraint last_referenced_datasource_pk
);

Last_referenced_table

create table last_referenced_table (
  id serial(1) not null,
  datasource_id int not null,
  user_name char(32),
  db_name char(128) not null,
  schema_name char(128) not null,
  table_name char(128) not null,
  last_reference datetime year to second not null,
  primary key (id) constraint last_referenced_table_pk,
  foreign key (datasource_id) references last_referenced_datasource(id)
    constraint last_referenced_table_fk
);
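
As an illustration of how the populated tables can be queried after the audit
process has run, the following Informix SQL is a sketch (the 90-day threshold and
the selected columns are arbitrary examples) that lists tables that have not been
referenced recently and are therefore potential retirement candidates:

select d.datasource_desc, d.server_ip, t.db_name, t.schema_name,
       t.table_name, t.last_reference
  from last_referenced_table t, last_referenced_datasource d
 where t.datasource_id = d.id
   and t.last_reference < (current year to second - interval(90) day to day)
 order by t.last_reference;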

How to use PCI/DSS Accelerator to implement PCI compliance


Install and configure IBM Security Guardium’s PCI/DSS Accelerator and create a
series of policies and reports, in order to meet PCI/DSS requirements.

PCI/DSS (Payment Card Industry Data Security Standard) is a set of technical
and operational requirements designed to protect cardholder data.

Value added: The accelerator gives customers a complete view of PCI/DSS and
provides predefined policies and reports that save configuration time.

Follow these steps:


1. Install PCI/DSS accelerator.
2. Configure PCI role.
3. Configure reports and policies that follow the requirements.

Upgrade/new installation of the Accelerator patch that comes with v9.0 GPU patch 50

Requirement: You must use the Accelerator patch that comes with v9.0 GPU patch 50.

Example
1. User downloaded a v9.0 Accelerator and installed it on v9.0 Guardium system
A, and saved the Accelerator patch.
2. Guardium system A is upgraded to v9.0 GPU patch 50. (At this point, the
Accelerator on Guardium system A is fine).
3. The user installs a different Guardium system B with v9.0 GPU patch 50, and
uses the Accelerator patch saved in step 1 to install the Accelerator on
Guardium system B. This will not work. The correct action in this step is to
install the Accelerator that comes with v9.0 GPU patch 50.

Install PCI/DSS accelerator


1. Download the corresponding PCI Accelerator module and upload it to the
fileserver.
2. Log in to the system as the CLI user, run the following CLI command, and
follow the prompted steps: store system patch install sys
3. After the installation is complete, use the following CLI command to confirm
the installation state: show system patch installed



Configure PCI role
1. Log in to the Guardium GUI using the “accessmgr” user account. Select a
user (in this case, user1), and click Roles.

2. In the user role form, check PCI, and then save the assignment.



3. Next, click Change Layout to add the modules that correspond to the role
to the interface layout.

Implement PCI accelerator

Logging on as “user1” displays the following PCI accelerator information:

Overview
1. Click PCI Data Security Standard to open the Introduction page.
2. Click PCI Accelerator for Compliance to get a detailed introduction to
the PCI Accelerator.
Plan and Organize
Click Overview for an introduction to how the predefined reports in this
section meet the compliance requirements.
Each tab has predefined reports:
1. Cardholder Server IPs List: Cardholder information database server list.
According to the company's actual situation, set the “PCI Authorized
Server IPs” group information, which specifies the database servers that
store cardholder information.
2. Cardholders Databases: Cardholder information databases. Set the “PCI
Cardholder DB” group information, which designates the databases in
which cardholder information is stored.
3. Cardholder Objects: Cardholder information objects. This requires setting
the “PCI Cardholder Sensitive objects” group.
4. DB Clients to Servers Map: Client/server mapping. The “PCI Authorized
Server IPs” group specifies the database servers storing cardholder
information. The query can be used to find client access to the
cardholder databases.
5. Active DB Users: Categories of users, in addition to administrators, that
accessed the cardholder databases. Set the “PCI Authorized Server
IPs” and “PCI Admin Users” groups.
6. Cardholder DB Administration: Cardholder database management
operations. Set the “PCI Authorized Server IPs” and “PCI Admin
Users” groups.
7. Authorized Source Programs: Authorized program access. Set the “PCI
Authorized Server IPs” and “PCI Authorized Source Programs” groups.
Records cardholder database access by the authorized source programs.
8. Unauthorized Application Access: Unauthorized program access. Set the
“PCI Authorized Server IPs” and “PCI Authorized Source Programs”
groups. Records cardholder database access by programs that are not in
the authorized list.
9. 8.5.8 Shared Accounts: PCI requirement 8 requires that each person with
computer access be assigned a unique ID. Set the “PCI Authorized
Server IPs” group to count the same database user name attempting to
access the cardholder database IPs.

Note: Use the following method to set up the groups.

In each report, click to view the report form, and then determine what
specific group content needs to be filled in.

Using the actual name of the group, navigate to Setup > Tools and Views >
Group Builder, and in the Modify Existing Groups selection, select the
group name.



Click Modify to go to the Manage Members for Selected Group page and add
new members.

The group can also be populated through a customized query, or through the
GuardAPI, as shown in the example that follows.
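
For example, the GuardAPI create_member_to_group_by_desc command can be used
from the CLI to add members (the IP address and user name shown here are
placeholder values for illustration; replace them with your own):

grdapi create_member_to_group_by_desc desc="PCI Authorized Server IPs" member="10.10.9.57"
grdapi create_member_to_group_by_desc desc="PCI Admin Users" member="APPADMIN1"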



PCI Req. 10 Track & Monitor
Click Overview for an introduction to how Guardium monitoring and the
predefined reports meet the compliance requirements in this section.
1. 10.2 and 10.3 Automation - Use the online help Protect help book and
Comply help book to automate this section.
2. 10.2.1 Data Access - PCI Access to cardholder data. Set the PCI
Authorized Server IPs and PCI Admin Users groups.
3. 10.2.2 Admin Activity - PCI Activity by Admin user. Set the PCI
Authorized Server IPs and PCI Admin Users groups.
4. 10.2.3 Audit Trail Access - To follow this section completely, at least
four kinds of reports must be defined: Logins to SQLGuard; User
activity audit trails on SQLGuard server; Scheduled job exceptions; and
User to-do lists. Navigate to Setup > Reports > Report Builder to create
reports as you need.
5. 10.2.4 Invalid Access - PCI - Invalid Login Access Attempts: records
failed login attempts to the database. PCI - Unauthorized Application
Access: records database access by programs not defined in the PCI
Authorized Source Programs group.
6. These three sections can also use the Monitor and Audit help book in
the embedded online help - 10.2.6 Initialization Log, 10.5 Secure audit
trails, and 10.6 Access Auditing.
PCI Req. 11 Ongoing Validation
Click Overview for a brief introduction to the importance of vulnerability
assessment. Click Security Assessment to build an assessment process.
1. Select database security assessment.

2. Click New to create a new Assessment.

3. Enter a name and select to add a new datasource.

4. Enter a name, the database type, the authentication credentials (user
name and password), and the server IP, port, and service name, then click
Apply.

5. After the operation succeeds, click Test Connection to confirm
availability.

6. In the Datasource Finder, select the datasource to assess and click Add.

7. After the operation succeeds, choose the new assessment and click
Configure Tests.

8. Select and add tests according to your actual requirements.

9. Click Run Once Now to run the assessment immediately, then view the
results (depending on the tests selected, this may take a long time).

10. Use the Change Audit System (CAS) for database configuration auditing.
The CAS agent can monitor files, operating system settings, scripts,
environment variables, and registry keys.

PCI Policy Monitoring

Click Overview for an introduction to the policy.

1. To show your current policy installations, navigate to Setup > Tools and Views
> Policy Installation and choose a suitable policy for installation. The
predefined 'PCI' series of policies is recommended.

2. Policy Violations: Records of violation operations. Depending on the severity
level defined by the selected policy, violations are highlighted differently
(for example, a medium-level violation is displayed in orange-red).

Workflow Builder
The Workflow Builder is used to define customized workflows (steps, transitions
and actions) to be used in the Audit Process.

For additional information, see “Building audit processes” on page 195. Follow
these steps to:
v Define the workflow steps (Event Status)
v Define the flow of transitions from one step to another (Actions)



v Define which actions require sign-off
v Assign roles to each status, to define the users permitted to view each status

Relevant Terms for this feature

Event Type - Custom workflow

Event Status - State/status of the workflow.

Event Action - Action/Transition

Note: Workflow Builder is an optional component enabled by product key.

Create a Workflow Process


1. Open the Workflow Builder by navigating to Comply > Tools and Views >
Workflow Builder.
2. At the first screen (Event Type), click Event Status to go to the Event Status
configuration.
3. Click Add Event Status to define a new Event Status. Multiple Event Statuses
are expected. Fill in the status description and place a check mark in
the Is Final check box if the task is a final task in the workflow.
4. Click Event Type and then click Add on Add Event Type Definition to define
a new Event Type.
5. Fill in the description and designate the first task in the workflow.
6. Then choose all the Allowed Status for the workflow from the Available Status
list, by highlighting the Status item and clicking on the > button between the
Available Status List and Allowed Status List.
7. When done, click the Save button. Note: the Save button (or Cancel button)
applies only to changes made to the name, default event, or available events.
8. Go to the Defined Event Actions section of the Event Type menu screen.
Defined Event Actions involves designating the separate Event Actions of the
workflow.
9. Click the New button.
10. Fill in the Event Action Description and designate Prior status, Next status
and if Sign-off of this event action is required. Click the Apply button.
11. Repeat Steps 9 and 10 until all event actions are described and designated.
12. Go to the Roles section of the Event Type menu screen. Roles involve defining
who can see the event when it is in a particular Event Action. For example,
who can see events that are "Under Review" and who can see events that are
"Approved".
13. Select the Event Type Status and click the Roles button.
14. In the Assign Security Roles panel, mark all of the roles you want to assign
(you will only see the roles that have been assigned to your account). Click
Apply to save security role choices. Click the Back button.
15. Repeat steps 13 through 14 until all event type statuses have had roles defined.
16. The configuration effort from Workflow Builder is done.
17. Open the Audit Process Builder by navigating to Comply > Tools and Views
> Audit Process Builder to schedule the workflow and build and show
workflow reports. See the Audit Process Builder steps under Define a Report
Task.



There is a usage scenario, Workflow Builder Workflow Example in the Appendices.

Note: If the task type in Audit Process Builder is Classification Process, then
Workflow Builder cannot create customized workflows.

Warning Note: When a workflow event is created, every status used by that event
can be assigned a role (meaning that events can only be seen by this role when in
this status). When an event is assigned to an audit process, it is important that
every role that is assigned to a status of this event have a receiver on this audit
process. Otherwise, it is possible that an audit result row can be put into a status
where none of its receivers are able to see this row or change its status.

If an audit row becomes inaccessible, the admin user (who is able to see all events,
regardless of their roles) would be able to see the row and change its status.
However, if data level security is on, the admin user may not be able to see this
row. The admin user would need to either turn data level security off (from Global
Profile) or have the dataset_exempt role. It is important to configure the audit
process so that all roles who must act on an event associated with this audit
process are receivers of this audit process.

Note: Deletion of an event status is permitted only if the status is not the first or
final status of any event, and if it is not used by any action. The validation
provides a list of events/actions that prevent the status from being deleted.

Add Default Events only to a limited number of records

When running an Audit Process report task, the results of this process task are
saved in the table, REPORT_RESULT_DATA_ROW. This table will have a row for
every row of the report. If this report task also has a default event assigned to it, a
row is added to the table, TASK_RESULT_ADDITIONAL_INFO, for every row of
the report. This may lead to a disk space issue only if default events are used for
large results. Create events only on task results with a limited number of records,
otherwise users will never be able to manage the large number of records. If
default events are used in the intended limited manner, there will not be any disk
space issues nor any usability issues, since it is not easy to close thousands of
events.

How to create Customized Workflows


Define customized workflows made up of specific customer steps, transitions and
actions to be further used in an audit process.

About this task


Define and manage workflows based on the customer's specific practices.

See Workflow Builder for an overview of this component.

Prerequisites
v See How to create an Audit Workflow. For additional information, see
Compliance Workflow Automation.
v After creating this customized workflow, see How to combine Customized
Workflow with Audit Workflow.



Procedure
1. Open the Workflow Builder by navigating to Comply > Tools and Views >
Workflow Builder.
2. At the first screen (Event Type), click the Event Status button to go to the
Event Status configuration.
3. Click Add Event Status to define a new Event Status. Multiple Event Statuses
are expected. Fill in the status description and place a check mark in
the Is Final check box if the task is a final task in the workflow. When done,
go to the next step.
An example of a simple three-step workflow is: Open to Review state to
Approve or Not Approved. Each step of the workflow is a separate Defined
Task Event Status.
The workflow tasks of the example are: Open, Review state, Approve after
review, or Not approved. Also, if the task is the final task in a workflow, place
a check mark in the Is Final column. Examples of a final task in the example
are Approved or Not Approved.

4. Click on the Event Type button and then click on the Add button of Add
Event Type Definition to define a new Event Type.
5. Fill in the description and designate the first task in the workflow.



6. Then choose all the Allowed Status for the workflow from the Available Status
list, by highlighting the Status item and clicking on the > button between the
Available Status List and Allowed Status List.
7. When done, click the Save button.
8. Go to the Defined Event Actions section of the Event Type menu screen.
Defined Event Actions involves designating the separate Event Actions of the
workflow.
9. Click the New button.
From the simple three-step workflow example, an Event Action of Under
Review has a prior status of Open and a next Status of Review State. The
Event Action of Approved follows Under Review with a prior status of
Review State and next status of Approve after review. Or the Event Action of
Not approved has a prior status of Review State and a next status of Not
Approved. There is also a signoff capability for designated reviewers per
Event Action (continuous or sequential). See the previous screen shot.
10. Fill in the Event Action Description and designate Prior status, Next status
and if Sign-off of this event action is required. Click the Apply button.
11. Repeat Steps 9 and 10 until all event actions are described and designated.
12. Go to the Roles section of the Event Type menu screen. Roles involve defining
who can see the event when it is in a particular Event Action. For example,
who can see events that are "Under Review" and who can see events that are
"Approved".
13. Select the Event Type Status and click the Roles button.
14. In the Assign Security Roles panel, mark all of the roles you want to assign
(you will only see the roles that have been assigned to your account). Click
Apply to save security role choices. Click the Back button.
15. Repeat steps 13 through 14 until all event type statuses have had roles defined.
16. The configuration effort from Workflow Builder is done.
17. Open the Audit Process Builder by navigating to Comply > Tools and Views
> Audit Process Builder to schedule the workflow and build and show
workflow reports. See the Audit Process Builder steps under Define a Report
Task.

How to use Customized Workflows


Define an audit process that follows the customer's customized workflow practices.
Bring the customer's specific auditing processes and practices into the Guardium
solution.

About this task


Customized Workflows within the Guardium Audit Workflow process

The formal sequence of event types created in Workflow Builder is managed by
clicking the Event and Additional Column button in the Audit Tasks window.
This button appears only after the audit task has been created and saved.

Prerequisites
v See How to create Customized Workflows. For additional information, see
Workflow Builder.
v See How to create an Audit Workflow. For additional information, see
Compliance Workflow Automation.



v Define an audit process that follows the customer's customized workflow
practices by following the additional steps.

Procedure
1. Configure these workflow activities when Adding An Audit Task.
2. Create and save an Audit Task. After saving, an additional button, Events and
Additional Columns, will appear.
3. Click this additional button.

4. At the next screen, place a checkmark in the box for Event & Sign-off. The
workflow created in Workflow Builder will appear as a choice in Event &
Sign-off.
5. Highlight this choice. Save your selection.
6. If additional information (such as company codes, business unit labels, etc.) is
needed as part of the workflow report, add this information in the Additional
Column section of the screen and then click Apply (save). When done, close
this window.
7. Apply (save) your Audit Task. Apply (save) the entire Audit Process Definition.
8. Click Run Once Now to create the report. Click View to see the report.



This Event and Additional Column button appears in all audit tasks.

Note:
If data level security at the observed data level has been enabled (see Global
Profile settings), then audit process output will be filtered so users will see only
the information of their databases.
Under the Report choices within Add an Audit Task are two procedural
reports, Outstanding Events and Event Status Transition. Add these two reports
to two new audit tasks to show details of all workflow events and transitions.
These two reports will not be filtered (observed data level security filtering will
not be applied). These two reports are available by default in the list of reports
only to admin user and users with the admin role.

Quick Search for Enterprise


Quick Search for Enterprise provides immediate access to your data without
requiring detailed knowledge of Guardium topology, aggregation, or
load-balancing schemes.

Quick Search for Enterprise represents a powerful enhancement to Quick Search.


Existing Quick Search features are retained and augmented with the addition of
distributed search functionality, topology navigation, and an investigation
dashboard. These features are interrelated to provide a dynamic real-time
search experience. For example, narrowing the search scope through the topology
view automatically updates the search results, and additional filters applied to the
search results are automatically reflected on the investigation dashboard.

The distributed search functionality of Quick Search for Enterprise enables you to
query data across an entire Guardium environment, potentially from any
Guardium machine within that environment.



Quick Search for Enterprise supports three operating modes: Central Manager only,
local only, and all machines. See GuardAPI Quick Search for Enterprise Functions
for information about setting the search mode.
Central Manager only
In this mode, search queries submitted on managed units return local
results while queries submitted on a Central Manager will return
enterprise-wide results from all Guardium machines with search enabled.
Central Manager only is the default operating mode.
all machines
This mode supports enterprise-wide search queries submitted from any
machine in the Guardium environment with search enabled. This mode
may result in slower search results and requires connectivity between all
managed units in the environment.
local only
This mode limits search queries to the local machine where the search is
submitted: it is not possible to get search results from other machines in
the Guardium environment.
Related reference:
GuardAPI Quick Search for Enterprise Functions
Use these GuardAPI commands to enable, disable, or configure Quick Search for
Enterprise features and parameters.
Quick Search for Enterprise CLI Commands
Use these CLI commands to configure Quick Search for Enterprise.

Enabling and disabling Quick Search for Enterprise


This topic describes how to enable and disable Quick Search for Enterprise.

Before you begin

Quick Search for Enterprise has the following minimum hardware requirements:


v 64-bit architecture
v 24 GB RAM
v 4-core CPU

In addition, Quick Search for Enterprise is only available on systems configured as


a Central Manager or as a collector. When enabled on an aggregator, the message
"Quick Search is Enabled" will be displayed, but the aggregator will not index data
for search.

When Quick Search for Enterprise is enabled on systems that do not meet these
requirements, a limited version of data search is enabled that supports only
local data queries.

About this task


The steps described below allow you to enable or disable search.

Procedure
1. Log in to the machine as a user or administrator with the CLI role.
2. Use the following GuardAPI command to enable Quick Search for Enterprise
functionality:



grdapi enable_quick_search schedule_interval=2 schedule_units=MINUTE
By default, violations will not be included in search results. To include
violations, set the includeViolations parameter to true:
grdapi enable_quick_search schedule_interval=2 schedule_units=MINUTE includeViolations=true
Additional parameters may be specified, such as the search index update
interval. For a complete list of parameters and descriptions, see the GuardAPI
Quick Search for Enterprise Functions reference information.
3. Use the following GuardAPI command to disable the Quick Search for
Enterprise function at any time:
grdapi disable_quick_search

Results

Once enabled, see “Using Quick Search for Enterprise” to learn more about using
data search queries.

Attention:
v Distributed search functionality opens ports 8983 and 9983 on both Central
managers and collectors. The ports are opened when distributed search is
enabled and closed when it is disabled. To use distributed search, ensure that
bidirectional communication between Central managers and collectors on ports
8983 and 9983 is not blocked by any firewall.
v Indexed search data is retained for 3 days. Use the purge object Guardium CLI
command to change the retention period. For example, the following command
changes the retention period to 5 days: store purge object age 39 5. Note that
39 is the default object identification number associated with the search index.
For additional information, see Configuration and Control CLI Commands
reference information.
Related tasks:
“Using Quick Search for Enterprise”
This topic describes how to use essential features of Quick Search for Enterprise.
Related reference:
Quick Search for Enterprise CLI Commands
Use these CLI commands to configure Quick Search for Enterprise.
GuardAPI Quick Search for Enterprise Functions
Use these GuardAPI commands to enable, disable, or configure Quick Search for
Enterprise features and parameters.

Using Quick Search for Enterprise


This topic describes how to use essential features of Quick Search for Enterprise.

Before you begin

To use the features described in this topic, Quick Search for Enterprise must be
enabled.

Searching and syntax


About this task

Quick Search for Enterprise is intended to provide an immediate and intuitive


mechanism for conducting sophisticated data inquiries across a Guardium
environment. Follow the steps in this section to conduct a search on any Guardium
system where search is enabled.



Procedure
1. Type search text into the Search field or simply click the search icon to return
all available data.
When entering or refining search terms, the following rules and syntax apply
(see the combined examples after this procedure):
v To match an exact phrase, use double quotation marks around the search
terms. For example, “Profiling Alert List” returns entries for Connection
Profiling Alert List but not for Profiling List Alert.
v To match all specified search terms, separate the terms with a space. For
example, Hadoop getlisting returns any entries containing both Hadoop and
getlisting in any location or sequence.
v To match any specified search terms, separate the terms with OR or a vertical
bar (|). For example, Hadoop OR getlisting returns any entries containing
either Hadoop or getlisting in any location.
v To exclude a specified search term, use NOT or a period (.). For example,
NOT Hadoop will not return any entries containing Hadoop in any location.
v Wildcards are supported by using asterisks (*) at the beginning or ending of
a string. For example, 10.10.70.* returns any entries with the string 10.10.70.
followed by any additional characters.
Search rules can be used in combination. For example, 2013–5-08 (19.*|20.*)
returns results in the time range of May 8 between the hours of 19:00:00 –
20:59:59.
2. Refine search results using any of the following methods:
a. Enter additional search terms in the search field.

b. Select specific filters based on the available data.

c. Click an individual search result to apply it as a filter.

3. Explore individual results by right-clicking on specific search results and
exploring related outliers, errors, or violations, or viewing one of several
available drill-down reports.
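
As a combined illustration of the search rules from step 1 (the values are taken
from the earlier examples and are illustrative only):

“Profiling Alert List” 10.10.70.*   - matches the exact phrase plus any entry beginning with 10.10.70.
Hadoop OR getlisting                - matches entries that contain either term
2013-5-08 (19.*|20.*)               - matches May 8 activity between 19:00:00 and 20:59:59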

Local and distributed search


About this task

Quick Search for Enterprise may be used in either local or distributed modes. In
local search mode, searches are limited to the data available under the local
machine (the machine from which the search is being run). For example, a local
search run from an individual collector returns results from datasources under that
collector but not from any datasources under other collectors in the environment.
In distributed search mode, searches return data from across the entire Guardium
environment and results are not limited by the specific machine from which the
search is run. A topology tool is provided to conveniently narrow search results to
specific segments of the overall Guardium environment.

Quick Search for Enterprise defaults to local search mode.



Procedure
1. To toggle between local or distributed search, click the “Enable / Disable search
all appliances” icon in the search window toolbar.
2. Search results will automatically update to reflect the available data based on
the selection of local or distributed search. See the “Topology view” section for
information about filtering global search results by a specific segment of the
Guardium environment.

Topology view
About this task

A topology view is provided to help visualize and refine the data sources included
in search results. Using the topology view, it is possible to narrow search results to
specific segments of the overall Guardium environment.

Procedure
1. To invoke and use the topology view, click the “topology view” icon in the
search window toolbar to open the topology browser.
2. Hover the mouse over an object in the topology view to display detailed
information about that object.

3. Click an object in the topology view to select that object and narrow the search
results to only that object and its children if any exist. Use control-click to select
multiple objects in the topology view.
4. After exploring objects in the topology view and selecting a desired scope,
close the topology view by clicking the close icon or clicking outside the
topology browser. The search results update automatically to reflect the
available data based on the scope selected in the topology view.

Using the Investigation Dashboard


The investigation dashboard provides interrelated charts that help reveal patterns,
anomalies, and relationships across your data.

About this task

The default or best-practice view includes data source-to-user behavior, data
source-by-time behavior, data source-to-source program behavior, and other
essential relationships. From this default view, you can focus on any specific
context (such as a specific data source, user, or date) and all other views refocus
around that selected context.

Procedure
1. To invoke and use the investigation dashboard, click the investigation
dashboard icon in the search window toolbar.
2. Explore your data by interacting with the charts in the following ways:

a. Hover over an individual cell to display the specific values associated with
that cell.



b. Click a column heading, row heading, or an individual cell to create and
apply filters to the dashboard.
3. Filters are created and applied to the dashboard as you interact with individual
charts. Active filters are displayed in the Active Filters bar.

a. Click individual filters to remove them from the view.


b. The sample results table shows results that match the active filter criteria.
c. Filters defined on the dashboard are retained and applied when you return
to the search results table.
4. Create new charts by clicking the button and selecting the ColorMap or
Animation chart type (a new chart will be added at the bottom of the
dashboard), or edit charts by clicking the settings icon of an existing chart. For
example, the following image shows options for configuring a ColorMap chart:

After editing the settings, click OK to save your changes and view the updated
chart on the dashboard.

Outliers Detection
Outlier detection extends traditional database monitoring with increased
intelligence that helps security analysts understand risk based on relative change in
behavior.

For example, if a DBA is observed accessing a particular table more frequently


than in the past, it may indicate that the DBA is slowly downloading small
amounts of data over time, or if an application generates more SQL errors than it
has in the past, it may indicate that there is a SQL injection attack in progress. In
both cases, the relative change in activity may indicate that a security violation is
taking place even if the activities do not directly violate an existing security policy.

Guardium Data Activity Monitoring includes an advanced machine-learning


algorithm featuring an adaptive learning process: it models the normal patterns of
a user’s activities and then analyzes new activities as they accumulate. The process
not only checks whether current activities are consistent with a user’s previous
activities, it also models a user’s actions against the activity of similar users. This
two-pronged approach enables more accuracy and the ability to detect more cases
of suspicious behavior (including those that do not violate security policies) while
also preventing false positives. For example, what represents new behavior for an
individual user may be entirely consistent and normal for a user of that type.

Overview
The process of outlier detection works in two phases: a learning phase and an
analysis phase.

During the learning phase, outlier detection operates on data that is transparently
extracted from the collected audit data. That is, the outlier detection algorithm uses
data that is being collected normally for security and compliance reasons. If data is
not being audited already by a security policy, it is not available for Guardium to
analyze. The model is trained over a period of time and requires 3-4 weeks of data
to build a solid model and learn the normal behaviors of the environment. No
outlier indicators will be generated until sufficient training has taken place.



After the model is trained, new data that falls outside the established pattern is
assigned an anomaly score and a reason for that score. At this point, outlier data
will begin appearing in the Quick Search area of the Guardium user interface with
no user intervention required. Outliers are those activities by a particular user in a
particular time period that fall outside of the “normal” clusters of activity based on
the established model.

Example

Assuming an adequately trained model, consider a scenario where a malicious


DBA decides to extract the entire contact list into a CSV or other format that they
can take with them. Given this scenario, the algorithm will identify the following
exceptional behaviors as an outlier:
v Access to the objects (source + target) by this user is probably exceptional.
v The volume within the time-window is exceptional.
v The volume and type of errors is probably exceptional.

The Guardium user can investigate incidents identified as outliers by using the
Search tab in the user view, the Quick Search function in the admin user view,
or by reviewing the Outlier Analytic List report.

Enabling and disabling outliers detection


This topic describes how to enable and configure outliers detection.

Before you begin


v It is strongly recommended that you enable outliers only on 64-bit collectors
with a minimum of 24 gigabytes of memory.
v Outlier detection is not available on aggregators or Central Managers, but it is
possible to use distributed reports to aggregate outlier report data to a Central
Manager.
v The presentation of the outlier results as described in this article is included
with Quick Search; thus, you must ensure Quick Search is enabled. Alternatively,
review the Outlier Analytic List report, which does not require Quick Search.

About this task


Outliers detection is disabled by default. Follow the steps described below to
enable outlier detection.

Procedure
1. Log in to the collector as a user or administrator with the CLI role.
2. Use the following GuardAPI command to enable the outliers detection
function.
grdapi enable_outliers_detection schedule_interval=1 schedule_units=HOUR
v A new data mart is defined to extract data from GDM tables into CSV files
(default path: /var/dump/ANALYTIC/input).
v If you issue the command with no additional parameters, extraction to the
data mart begins immediately and runs hourly.
v To specify a delayed start time, alter the extraction interval, or set other
parameters, see the GuardAPI Input Generation reference topic.
3. Use the following GuardAPI command to disable the outliers detection
function at any time:
grdapi disable_outliers_detection

Results

Once enabled, the outliers detection module is available from the Search tab in the
user view and from the Quick Search function in the admin user view.

Allow one month of data collection for effective modeling of the normal patterns
of database activity.
Related concepts:
“Quick Search for Enterprise” on page 289
Quick Search for Enterprise provides immediate access to your data without
requiring detailed knowledge of Guardium topology, aggregation, or
load-balancing schemes.
Related information:
GuardAPI Input Generation
GuardAPI Input Generation allows the user to take the output of one Guardium
report and feed it as the input for another Guardium entity, allowing users to use
prepared calls to quickly invoke API functionality.

Interpreting outliers
Guardium provides a convenient graphical interface for identifying and responding
to outliers detected by the algorithm.

The summary chart includes red and yellow indicators that reflect the severity or
total outliers score for a time interval. Red indicators reflect highly anomalous
events requiring immediate attention. Yellow indicators represent less extreme
anomalies that warrant attention as part of other or related investigations. The
outlier score is a calculated aggregate value based on the volume of outliers, the
severity of individual outliers, the predicted volume of outliers for a given time of
day, and other factors.

For example, on a system that typically identifies 0 outliers at 1am and 5-10
outliers at 1pm during weekdays, the presence of two additional outliers (of 2
outliers at 1am or of 12 outliers at 1pm) is more significant—and weighted more
heavily—than the hourly total itself.

Placing the cursor over one of the outlier icons provides detailed information
about outliers detected during that time period. To view other activities or outliers
that occurred during the same time period, click “Show activities” or “Show
outliers.”

The outlier reason will be identified as one of the following:


rare          A seldom-seen condition
high volume   An unusually high incidence of a condition
new           A condition seen for the first time
error         An unusually high incidence of error conditions

Outlier reasons are assigned in combinations when needed. For example, an outlier
may be flagged as both rare and high volume if a seldom-seen condition suddenly
occurs many times.
Related information:
Anomaly Detection
The Anomaly Detection process runs every polling interval to create and save, but
not send, correlation alert notifications that are based on an alert's query.

Grouping users and objects for outlier detection


This task shows you how to use GuardAPI commands to add additional groups to
the outlier detection algorithm.

About this task

By default, there are two groups of users and objects that are weighted or scored
more heavily by Guardium's machine-learning algorithm: Admin Users and
Sensitive Objects. However, you may have already established additional groups
that would also be useful for outlier detection. For example, you may have a group
of Suspicious Users or you may have several different groups of sensitive objects
that are aligned with different applications.

Procedure
1. This task requires that you know the internal group ID to use with the grdapi
command. To get the group ID, you can use the following command: grdapi
list_group_by_desc desc=[group name]. For example, if you have a group
named BadGuys, you can enter the following command to get its internal
group ID:
grdapi list_group_by_desc desc=”BadGuys”
2. Once you know the desired ID, you can add the group or object to outlier
detection using one of the following commands.
v To add a group with the ID 1234:
grdapi set_outliers_detection_parameter privUsersGroupIds=1234
v To add sensitive objects with the IDs 333 and 156:
grdapi set_outliers_detection_parameter sensitiveObjectGroupIds=333,156

Results
The specified groups or sensitive objects have been added to the outlier detection
and will be given additional weight by the algorithm.

Excluding events from outlier detection


It is possible to exclude events from outlier detection, for example activity from
test data.



Exclude events using Outlier Response
To exclude events matching specific criteria, right-click an outlier indicator and
select “Ignore” to open the Define Outlier Response dialog. Enter specific values or
use wildcard entries (with the * character) to define what you want to ignore.

For example, to ignore all activity from server 10.70.144.159, database ON1PARTR,
and any database user beginning with GUARD:
1. Remove any unnecessary fields by clicking on the appropriate icons.
2. Enter the specific values for the server and database fields.
3. Use the wildcard character (*) to expand values for the DB user field.
4. Click OK to commit the changes.

To include previously ignored events, view the Analytic User Feedback report,
double-click the previously-ignored event, and select Invoke >
delete_analytic_user_feedback.

Exclude events using Group Builder

If you have many items for exclusion, use the Guardium Group Builder and
populate any or all of the following groups as needed:
v Analytic Exclude DB User
v Analytic Exclude OS User
v Analytic Exclude Server IP
v Analytic Exclude Service Name
v Analytic Exclude Source Program
The Group Builder has options for bulk uploading including the ability to populate
from a query on a custom table.

Alternatively, use GuardAPI commands to populate the Analytic Exclude groups.
For example, to add OMNISERVER to the Analytic Exclude Source Program group,
use the following command:
grdapi create_member_to_group_by_desc desc="Analytic Exclude Source Program" member="OMNISERVER"
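
Members can be added to the other Analytic Exclude groups in the same way; for
example (the server IP and user name shown here are placeholder values for
illustration):

grdapi create_member_to_group_by_desc desc="Analytic Exclude Server IP" member="10.70.144.159"
grdapi create_member_to_group_by_desc desc="Analytic Exclude DB User" member="TESTUSER1"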
Related information:
Groups
Grouping can simplify the process of creating policy and query definitions.



Chapter 6. Reports
A report defines how the data collected by a query is presented.

The default report is a tabular report that reflects the structure of the query, with
each attribute displayed in a separate column. All presentation components of a
tabular report (the column headings, for example) can be customized. All graphical
reports are defined using the Report Builder. In addition to the start and end date
(query from and query to) parameters, values can now be displayed between the
beginning of the page and the start of the table in all reports.

Before using the Report Builder, create a query using the Query Builder. See
“Using the Query Builder” on page 316.

The fastest way to create and view a report is by using the steps to Create a
Report, then select the report from My Dashboard.

Move back and forth between menu screens using the Back and Next buttons. The
back arrow in the web browser does not work for navigation between Guardium
screens.

Icons used in Reports

Use icons to select functions within Report Builder.


Table 30. Report Icons

Graphical icons are provided for the following functions:
v Ad-hoc process for Run Once Now
v Refresh
v Open or run in a new window
v Add a report
v Add to favorites
v Modify or Edit the query for this report or Customize chart
v Delete
v Data Mart Builder
v Clone
v Configure runtime parameters
v Configure report columns

Find a Report for Editing

To access a report definition, select the Reports lifecycle icon and then click Report
builder.

Search for a report by choosing Domain, Query or Report title. The results display
in the Report Search Results panel.
v To locate a specific report, select that report from the Report Title list. The
selected report displays immediately in the Report Search Results panel.
For the remaining types of search, click the Search button after making entries in
one or more fields, or just click the Search button to list all reports available for
your Guardium account.
v To list all reports that use a specific query, select that query from the Query list.
v To list all reports for a specific chart type, select it from the Chart Type list.


If the search locates any reports, they display in the Report Search Results panel.
Click any of the following buttons:
v New - See Create a Report.
v Clone - See Clone a Report.
v Modify - See Modify a Report.
v Roles - See Security Roles. Assign roles to reports in Report Builder. Assigning
roles to reports while in Query Builder (Tracking) assigns the role only to the
query, not the report.
v Delete - See Remove a Report.
v Comment - See Comments.



v API Assignment - See API Assignment
v Drilldown Control - See Modify the Drill-Down Reports menu for a Report.

Create a Report
1. To access a report definition, select the Reports lifecycle icon and then click
Report builder.
2. Click New to open the Create Report panel.
3. From the Query list, select a query value to be used by the report (for example,
Guardium Logins)
4. Enter a unique name for the report in the Report Title field.

Customize the Report Presentation

Follow the step procedures to customize the report presentation.


1. In the Report Column Descriptions panel,
v Optionally override the Report Title. The default is from the report
definition. You can modify the title on most subsequent panels.
v Optionally override any Column Description (the column headings).
2. Click Next to open the Report Attributes panel:
v Mark the Tabular or Chart button.
v Click Next to go to the Submit Report panel.
3. Click Save to submit the report for creation.

Create a Graphical Report


Follow the step procedures to create a graphical report.
1. Follow the previous steps in Customize the Report Presentation for Report
Column Descriptions, Report Parameter Descriptions, and Report Attributes.
2. In the Report Chart Type panel, select the chart type and click Next. The
choices are Area, Bar, Bar Area, Bar Line, Column, Date Area, Date Column,
Date Line, Distributed Label Line, Individual Bar, Individual Column, Line,
Pictogram, Pie, Polar, Speedo, and Stack Bar. Pie, Polar, Speedo, and Stack Bar
are recommended.
3. If the Report Chart Type panel is not displayed, skip this step (all necessary
data has been entered). Select the type of chart for the report from the Chart
Type list.
4. Click Next to open the Report Presentation Parameters panel.
v Review the parameters, which varies for each type of chart.
v Optionally override any of the default settings for the chart type selected.
5. Click Next to continue to the Submit Report panel, and continue with the
Submit Report Definition procedure.
6. To view your graphical report, go to My Dashboards, and add your graphical
report.

Note:

A refresh icon appears in all graphical reports next to the help icon.



Submit Report Definition
1. Optionally add comments (see Comments).
2. Optionally assign roles (see Security Roles).
3. Click Save.

Modify a Report
1. Find the report to be modified. Go to the Report Builder finder menu.
2. Click Modify to open the Report Columns panel.
3. Continue with Customize the Report Presentation.

Clone a Report
1. Find the report to be cloned. Go to the Report Builder finder menu.

2. Click Clone to open the Report Columns panel.


3. Enter a new name for the cloned report, in the Report Title box. You can enter
the new name on any of the subsequent screens - the only requirement is that
the new name must be entered before the cloned report can be saved.
4. Continue with Customize the Report Presentation.

Remove a Report

Be aware that you cannot remove predefined reports, and you cannot remove
reports that are used in Audit Processes.
1. Find the report to be removed.

2. Click Delete to remove the report.

Report Size Limitation

Tabular reports are limited to 5,000 rows of output, but when included in a
workflow process, any number of rows can be exported from the report task to a
CSV or CEF file.

Limits

The limit for the buttons when viewing a report (generate PDF, generate CSV, and
printable) is 30,000 rows. This is non-customizable.

The limit for the Populate From Query in Group and Alias Builder when run via
Run Once Now is 5,000 rows. This is non-customizable.

The limit for the Populate From Query in Group and Alias Builder when run via
Scheduling is 20,000 rows. This limit is customizable, via the CLI command,
show/store populate_from_query_maxrecs.
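
For example, a sketch of checking and raising this limit from the CLI (the exact
argument form and the 30,000 value here are assumptions; verify them against the
CLI reference):

show populate_from_query_maxrecs
store populate_from_query_maxrecs 30000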

Modify the Drill-Down Reports Menu for a Report


By default, the drill-down menu for a report includes all reports with run-time
parameters that can be supplied by attributes from the report, subject to the
usual security role restrictions. To disable or enable any reports on the drill-down
menu for a report:
1. Locate the report. Go to the Report Builder finder menu.
2. Click Drilldown Control to open the report’s Drilldown Control panel.



3. Mark the checkbox for any report to be disabled, or clear the checkbox for any
report to be enabled.
4. Click Apply. The system displays a message saying your changes were applied
successfully.
5. Click Done when you are finished.

API Assignment
By default, the Guardium application comes with setup data that links many of the
API functions to reports; providing users, through the GUI, with prepared calls to
APIs from reporting data. Use API Assignment to link additional API functions to
predefined Guardium reports or custom reports.

For more information on using linked API functions, see the documentation on
GuardAPI Input Generation.
1. Locate the report. Go to the Report Builder finder menu.
2. Click API Assignment to open the API Assignment panel; showing the current
API functions that are mapped to the selected report.
3. Click an API function to display a pop-up window of the current API to Report
Parameter Mappings; showing the API parameters, if the API parameters are
required, any default values, and if any of the report fields are currently
mapped to those parameters.
If there are no fields in the report that are linked to API parameters, it might be
irrelevant to link an API function to a report. The mapping of API parameters
to report fields can be accomplished through both the GUI and the Guardium
CLI. For additional information on mapping API parameters to report fields,
see Mapping GuardAPI Parameters to Domain Entities and Attributes in the
GuardAPI Input Generation section.
4. Click the greater-than sign '>' to add the selected API function to the current
list of functions that are assigned to this report.
5. Click Apply to save the changes.

Open Query for Editing from Report Portlet


1. Open a report portlet for any report that is based on the query to be edited.
2. Click Edit this Report's Query in the tool bar. You must be authorized to
modify the query that the report is based on.

Report parameters
You can use parameters to control the contents and presentation of a report.

There are two types of report parameters:


v A runtime parameter provides a value to be used in a query condition. There is a
default set of runtime parameters for all queries, and any number of runtime
parameters can be defined in the query that is used by the report.
v A presentation parameter describes a physical characteristic of the report; for
example, whether a graphical report includes a legend or labels, or what colors
to use for an element. All presentation parameters are provided with initial
settings when you define a report.

To set report parameters:



1. Click Configure Report Parameters from the choices within the report. See the
icon (do not confuse the lifecycle icon with this choice within the report).
2. In the panel, enter runtime and presentation parameters in the boxes that are
provided, as necessary for the task to be performed.
3. Click Save.
4. To view the report, go to My Dashboards.

Standard Runtime Parameters


The following runtime parameters are present for all reports.

Runtime Parameter   Default and Description

QUERY_FROM_DATE     None for a new report; varies for default reports. The
                    starting date for the report is always required.
QUERY_TO_DATE       None for a new report; varies for default reports, though
                    the default is almost always NOW. This date is the ending
                    date for the report, and is always required.
REMOTE_SOURCE       None. In a Central Manager environment, you can run a
                    report on a managed unit by selecting that Guardium system
                    from the Remote Data Source list.
SHOW_ALIASES        None (meaning the system-wide default is used). Select On
                    to always display aliases, or Off to never display aliases.
                    Select the default button to revert to the system-wide
                    default (controlled by the administrator) after either the
                    On or Off button has been used.

Creating dashboards
You can create one or more dashboards, add reports to them, and configure their
appearance.

Before you begin


Think about how you want to organize the reports that you view regularly. Do you
want to view them in one dashboard, or in several dashboards? Do you want to
group and order them according to their purpose, how critical they are, or some
other approach? You can always rearrange your dashboards or create new ones.

About this task


Procedure
1. Click My Dashboards > Create New Dashboard to open a new dashboard.
2. Enter a descriptive name in the Name field. This name is used in the list of
dashboards in the menu.

3. Click Add Report to display a list of available reports. If you have
designated certain reports as favorites, you can check the My Favorites box to
see only a list of those reports. If you want to see only graphical reports, check
the Chart Only box.



4. The Add a Report dialog shows a list of all reports that meet your criteria. You
can browse the list of reports, or type a string in the Filter field. The list of
reports is updated as you type.
5. Click the title of a report to add it to your dashboard. Continue adding as
many reports as you want. When you are finished adding reports, click Close.

Results

You have a dashboard that gives you easy access to some selected reports.

What to do next

Review the appearance of your dashboard. Is it easy to use, and to find the
information that you want? If not, you can configure it further.

Configuring your dashboard


You can configure several aspects of the appearance of your dashboard to make it
as useful as possible.

About this task

Think about how you use your reports. What arrangement makes it easy to
achieve your goals? Experiment with these changes.

Procedure
1. Rearrange the reports. To move a report, place your cursor on the report’s title
bar, and drag it to a new location.
2. Choose a new number of columns by clicking 1, 2, or 3 in the Number of
columns area. By default, your reports are shown in two columns. If you need
more space for each report, click 1 to see how your reports look when they are
the full width of the dashboard. If you prefer to see more reports at one time,
try three columns.
3. Resize your reports. Drag the resize icon to make a report longer or shorter,
narrower or wider. If you adjust the width of a report, all the reports in that
column use the new width. If you change the number of columns, all columns
return to their default widths.

Using your dashboard


Use these steps to add a report to a dashboard and then customize its appearance.

About this task

Dashboard replaces Add to Pane and Add to My Reports.

Procedure
1. Click the Dashboard icon in the navigation.
2. Then click Create New Dashboard.
3. Click Add Report to select a report from all of the reports that you have access to, including any new reports that you created.
4. Use filtering to quickly find the report that you are interested in.
5. Click the report name to add it to your dashboard. Add as many reports to your dashboard as you want, just by selecting each report.

6. Customize your dashboard by selecting a layout. The default is two columns. With one column, the reports assume the full width of the dashboard; with two columns, half of the width; with three columns, one third of the width.
7. Customize your dashboard by moving the reports within the screen. Use the Customize Chart icon to customize a chart.

8. Designate specific reports as favorites by selecting the favorites icon. When adding reports to a dashboard, you can filter based on favorites or based on charts.
9. Name your dashboard by clicking the edit icon.

10. Delete a dashboard by clicking the delete icon.

Viewing a report
There are several ways to view a report, including your dashboard and UI search.

You can view a report in several ways:


v If you have saved the report to a dashboard, open the dashboard to view the
report.
v You can add the report to a dashboard. Open the dashboard and click Add
Report, then choose the report from the list.
v Some reports are listed in categories in the Reports lifecycle.
v Some reports are listed under the lifecycle to which they are most relevant.
v You can use the user interface (UI) search function to find the report. On the
banner, choose User Interface from the drop-down list next to the Search box.
Enter the name of the report into the Search box. Results begin to appear after
you type a few characters. Choose the report from the list of results.

The following choices (with icons) permit editing and configuring of the report:
v Edit the query for this report
v Ad-hoc process for Run Once Now - use this to invoke a call to GuardAPI commands
v Open in new window
v Configure report columns
v Configure runtime parameters - a run-time parameter provides a value to be used in a query condition. There is a default set of run-time parameters for all queries, and any number of run-time parameters can be defined in the query that is used by the report.
v Add to favorites
v Refresh

You can hide columns from view. Click the columns icon and clear the check boxes
for the columns that you want to hide.

You can sort report data by the contents of any column. Click the title of the
column on which you want to sort. To reverse the order, click the title again.
Sorting is always performed on the actual data values, ignoring any aliases that are
defined.

You can print a report while you are viewing it. Click Export > Full printable
report to open a printable copy of the report in a new tab. Click the printer icon on
the new tab to print the report. You can also print a report by exporting it to a
PDF file and printing the PDF file.

Note: In some cases, the PDF text might be too small to read. The PDF report has a physical limit on how far it can expand horizontally, given the width of the page. Because each line of the PDF report has to fit on one line, the typeface size changes to fit the data, and this can force a very small typeface size in order to display all the data.

Graphical reports can be customized by clicking the Customize Chart icon. The
choices include converting the data to a line chart, changing the X-axis and Y-axis
orientation, converting the report to a pie chart or a stacked column chart.

When viewing reports that display Oracle information, a question mark (?) character occasionally indicates that the login information was not available. Similarly, the appearance of the number -1 signifies that an unknown number of records was affected. All Oracle sessions are recorded, even those with missed logins.

Refreshing reports
Some reports are configured to refresh their data automatically. On other reports,
you can refresh the data manually through the UI.

When you view a report that is configured to refresh automatically, the color of the
Circular Arrows Refresh icon for this report is green, indicating that the report is
refreshing itself automatically.

At a certain point, the report stops refreshing if no further changes are made to the
report and the color of the refresh icon turns from green to red. The point in time
where the color changes is equal to half of the GUI session timeout (which can be
found by running the CLI command, show session timeout).

For example, if the session timeout is the default 900 seconds, the Circular Arrows Refresh icon on the Request Rate report is green for 450 seconds, then turns red.

There are several ways to refresh report data manually:

v Click Refresh on the toolbar.


v Use any toolbar button to print a report, download report data, or write the
report to a PDF file. The report data is refreshed before performing any of these
actions.
v Set a time interval for periodic refreshing, by setting the refreshRate parameter
value. To perform this task:
– Click Customize on the report toolbar.
– In the Configuration dialog, set the refreshRate parameter to the number of
seconds after which the report data is to be updated. The default value of
zero indicates that the report data is not refreshed on a scheduled basis.
– Click OK.
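
For example, setting refreshRate to 60 (an illustrative value) causes the report data to be refreshed every 60 seconds; setting it back to the default of zero returns the report to manual refreshing only.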

Customize Reports
When the user edits a report or makes a modification to the report through Report Customization, the user must manually click Refresh. There is no automatic refresh.

UI Customization - In the "New Life Cycle" and "New Group" dialogs, groups are limited to a maximum of 5 levels deep, so even with longer group names, all levels of group names and node item text are visible on the navigation pane.

UI Customization - When a user enters "<" or ">" in the text box of the "New Life Cycle" dialog or the "New Group" dialog, a popup message is displayed to indicate that "The name cannot contain < or > special characters", and the OK button becomes disabled.

UI Customization - In the "New Life Cycle" and "New Group" dialogs, a user can enter a maximum of 50 characters in the text box.

Exporting a report
You can export a report to a PDF file or a file of comma-separated values.

You can export the contents of a report to a Portable Document Format (PDF) file,
and save the file or view it. In the report toolbar, click Export > Download as PDF
to create a PDF copy. Follow the prompt to save or view the file.

When you generate a large PDF file, the process can cause the UI to time out. If
you plan to generate large PDF files, consider doing so as part of an audit process,
or increasing the UI timeout value to avoid this problem.

You can also export the contents of a report to a comma-separated value (csv) file.
You can export either all the records (the entire report) in the report, or only the
display records (the data currently displayed).

In the report toolbar, click Export > Download all records or Export > Download
display records. You can save the results or select an application in which to view
them.

Note: If you edit a report and remove a column (for example, editing a report with seven columns and removing one column, leaving six columns), the report still shows the original seven columns when it is exported as a PDF file.

Viewing Drill-Down Reports


Many reports provide access to drill-down reports that provide more granular
data.

To find out whether any drill-down actions are available on a tabular report, right-click a row of the grid; a context menu appears with any available drill-down actions.

To be available as a drill-down report:


v All of the runtime parameters for the drill-down report must be available from
the report that is being viewed.
v If security roles have been assigned, you must have access to the drill-down
report.

Modify the Drill-Down Reports Menu for a Report
By default, the drill-down menu for a report includes all reports with run-time parameters that can be supplied by attributes from the report, subject to the usual security role restrictions. To disable or enable any reports on the drill-down menu for a report:
1. Locate the report. Go to the Report Builder finder menu.
2. Click Drilldown Control to open the report’s Drilldown Control panel.
3. Mark the checkbox for any report to be disabled, or clear the checkbox for any
report to be enabled.
4. Click Apply. The system displays a message saying your changes were applied
successfully.
5. Click Done when you are finished.

Creating a report
If the predefined reports do not meet your needs, you can create your own.

Before you begin

You choose a query on which this report is based, and the domain of the query. If
you must create a new query, do that before you create a report based on it.
Remember that there is a distinction between queries and reports. A query describes
a set of information to be obtained from the collected data. A report describes how
the data returned by the query is presented. Refer to “Using the Query Builder” on
page 316 for further information on creating a query. Refer to “Domains, Entities,
and Attributes” on page 323 for further information on working with domains.

About this task

You might find it easier to clone a report and modify it than to create a report
from scratch.

Procedure
1. Click Reports > Report Configuration Tools > Report Builder to open the
Report Builder finder or filter menu. If you select Search at this point without
choosing any domain or query, a menu will appear with all queries listed.
Select a query and use the icons (Add New Report, Modify, Clone, or Delete) to work with the queries.
2. From the Report Builder finder menu, click New.
3. The Create Report menu appears. Select a query and give the report a name.
Then click Next.
4. The next screen returns the table columns of the query selected. Customize or
use as is. Then click Next.
5. The Report Attributes menu appears. Choose a report type, either tabular or chart. Then click Next.
6. Submit the report for creation by clicking Save. An acknowledgement screen appears, saying that the data was successfully saved.

What to do next
If you want to include this report on a dashboard, open the dashboard, click Add
Reports, and select this report from the list.

Data Mart
A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and
organizes the data in a generic fashion that can be used later for analysis and
reports. A Data Mart begins with user-defined data analysis and emphasizes
meeting the specific demands of the user in terms of content, presentation, and
ease-of-use.

Use this feature to:


v Define and generate a Data Mart.
v Aggregate summarized and analyzed data from all units to enable a high-level/corporate view in a reasonable response time.
v Improve performance of online reports on Guardium Aggregators.
v Provide interactive analysis capabilities for finding patterns, trends, and outliers.
v Enable collapsing and expanding levels of data.

A Data Mart is practical and efficient for all the Guardium predefined reports. It prepares the data in advance to avoid overload, full scans, and poor performance.

The Data Mart Configuration icon is available from any Predefined Report.

Highlights of benefits:
v Provide Guardium Analytic capability that supports the full lifecycle of data analysis.
v The analytic process starts from the Query Builder and Pivot Table Builder, where users define their data analysis needs and then select Set As Data Mart.
v The Data Mart extraction program runs in a batch according to the specified schedule. It summarizes the data to hours, days, weeks, or months, according to the granularity requested, and then it saves the results in a new table in the Guardium Analytic database.
v The data is then accessible to users via the standard Reports and Audit Process utilities, like any other traditional Domain/Entity. The Data Mart extraction data is available under the DM domain, and the Entity name is set according to the new table name specified for the data mart data. Using the standard Query Builder and Report Builder, users can clone the default query, edit the query and report, generate a portlet, and add it to a pane.
v The summarization of data shrinks the data volume significantly. It eliminates joins of many tables by storing the data analysis in an un-normalized, pre-calculated table.
v The corporate view is supported by using the standard Aggregation utility for the new Guardium Analytic tables. If there is a huge amount of detailed row data at the higher levels of the Aggregation Hierarchy, the Selective Aggregation feature, which enables aggregation of specific modules, can be configured to aggregate analytic data only.

The Data Mart builder is accessible via Query builder, Report Results, and
Pivot-Table view.

Select the Set As Data Mart icon. The button is available only after saving.

Access to the screen is enabled for users with the Data Mart Building permission (User Role Permission). The Set As Data Mart button is displayed only for users with the appropriate permission.

Data Mart persistency - changes to the original Query, Report, or Pivot Table do
not affect the Data Mart; A snapshot of the originated analysis definition is saved
together with the Data Mart upon creation.

If the Data Mart is based on Pivot Table, then the extraction process does not
calculate the Total line (sum of columns) and Percent Of Column is not supported.

In addition to the Data Mart definition, the following are created by the Data Mart
Definition process:
v New Domain and Entity
v Default Query
v Default Report and portlet
v A new Data Mart table in the new “DATAMART” database to store the extracted data
Data Mart – Query and Report Builder
The Data Mart definition process creates a new Domain, Entity, default Query, and Report. The default Query and Report are accessible via the Report Building menu.
Clicking Data Mart opens the Query Finder GUI; the Query, Report, and Entity fields filter only Data Mart domains (the domain name starts with DatamartDefinition.DOMAIN_PREFIX).
Report Builder GUI: The default Data Marts' reports and all other reports that are related to Data Mart domains are available in the Report Builder GUI.
Follow these steps:
1. As an Admin user, select the Data Mart icon.
2. Select New to create a new Data Mart or select from the list of
previously created Data Marts.
3. Complete the fields asking for Data Mart name and Table name
(Default is DM). Specify a time granularity and select an initial start
time from the calendar icon. Description is optional.
4. Use the Scheduler to schedule when to run this feature (Run Once
Now).
5. Use the Roles section to restrict Data Mart only to users with the
appropriate permission.
6. Save the configuration.

Note: Changes to the originated query/report do not affect the existing Data Mart.

Note: When a data mart extraction runs (scheduled or Run Once Now) for the first time, it extracts data from the Initial Start date to the current time, based on the time granularity. It saves the "next period from" value in the DM_EXTRACTION_STATE table. On the next run, it extracts data starting from the "next period from" value. If a data mart extraction is requested for a time earlier than "next period from", the data mart extraction shows as empty, because the extraction has already processed that time period. To extract data earlier than "next period from", restore the old data and then run the data mart again.
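For example (with illustrative values): if the time granularity is DAY and the Initial Start is June 1, the first run extracts the data from June 1 up to the current time and saves the "next period from" value; each later scheduled run then continues from that saved point, so a given day is extracted only once.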
Central Management and Data Mart
In a Central Management environment, the configuration is distributed
automatically to the managed units.
The extraction schedule can be overridden on a Managed Unit.
In the case of multiple Central Managers, the Data Mart definition can be cloned by using the Export/Import capability.
Add the Data Mart Extraction schedule to the Central Manager Distribution screen.
Data Mart Extraction
The extraction program is executed by the scheduler. It runs the selected Query for every time unit, based on the Time Granularity specified (hourly/daily/weekly/monthly). The query's From Date and To Date are set based on the time granularity that is selected for the Data Mart. The time period for each run starts from the end of the previous run (or from the Initial Start, for the first extraction) and continues until the next run (if not found, it is set to the end of the next unit of measure).
The results are written into the new table in Guardium Analytic database,
which is created upon definition.
The extraction log consists of the following - Data Mart Name, Collector IP,
Server IP, from-time, to-time, ID, run started, run ended, number of
records, status, error code.
GuardAPIs for Data Mart
Use the following GuardAPIs for Data Mart:
grdapi datamart_define
Table 31. GuardAPIs for Data Mart
Parameters for grdapi datamart_define (parameter, whether mandatory, and default/comment):
Name (mandatory) - Unique name.
queryName (mandatory) - Originated query. Validate that the Query exists.
reportTitle, pivotTitle - Unique per report.
tableName - Default: if it is blank, it is generated automatically based on the name. Validation: no special characters; limit the length according to the IDS limitation. Must be a unique table name.
Comment - Free text.
initialStart
granularityValue (mandatory) - HOUR/DAY/WEEK/MONTH.
userName - Default: the logged-in user.
areFilterIncluded (mandatory)
runOnCollector - Default: 1 (Collector). Determines whether to run it on the Collector or on Aggregators.

GuardAPI commands
Use the following GuardAPI commands to make the Data Mart function
active and inactive.
grdapi datamart_set_active <Name>
grdapi datamart_set_inactive <Name>
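
As an illustration only, the following sketch shows how a data mart might be defined and then activated from the command line. The parameter values are hypothetical, and the exact name=value syntax and the full set of mandatory parameters (such as areFilterIncluded) should be confirmed against the GuardAPI reference for your release:

grdapi datamart_define Name="Hourly Failed Logins" queryName="Failed Logins" tableName="DM_FAILED_LOGINS" granularityValue="HOUR" initialStart="2015-01-01 00:00:00" runOnCollector=1
grdapi datamart_set_active Name="Hourly Failed Logins"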

Audit and Report


Guardium organizes the data it collects into a set of domains. Each domain
contains a different type of information relating to a specific area of concern: data
access, exceptions, policy violations, and so forth.

All domains and their contents are described in the Domains, Entities, and
Attributes appendix.

There is a separate query builder for each domain, and access to each query
builder is controlled by security roles. Regardless of the domain, the same
general-purpose query-builder tool is used to create all queries. For detailed
instructions on how to build queries, see Queries.

In addition to the standard set of domains, users can define custom domains to
contain information that can be uploaded to the Guardium appliance. For example,
your company might have a table relating generic database user names (hr23455 or
qa4872, for example) to real persons (Paula Smith, John Doe). Once that table has
been uploaded, the real names can be displayed on Guardium reports, from the
custom domain. For more detailed information on how to define and use custom
domains, see External Data Correlation.

Queries
Use one of the many predefined queries that come with Guardium to get
information about your data. Use the Query Builder to work with queries.

Use queries to ask questions of your data such as, what are all the clients updating
a specific database during weekend hours?

Queries are different from reports. A query describes a set of data, whereas a
report describes how the data returned by a query is presented.

Once a query is completed, present the results of the query using reports. Reports
usually are presented in tabular form, but you can customize the layout of a report
as you like.

To use queries, open the Query Builder by clicking Comply > Custom Reporting >
Custom Query Builder. Choose a domain to query, select a main entity, and then
use the query as needed.

You cannot modify the predefined queries, but you can create a clone of a query
and modify the clone.

The Main Entity

The main entity that you select for a query determines the following:
v The level of detail for the report. There is one row of data for each occurrence of
the main entity included in the report. The location of the main entity within the
hierarchy of entities is important in terms of what values can be displayed. The
attributes for any entities under the main entity can be counted, but not
displayed (since there might be many occurrences for each row). To choose this
level of detail, check the Sort by Count check box.
v The total count is a count of instances of the main entity included on that row of the report, added as the last column of the report. To add or drop the count column of the report, click the Add Count check box. This can result in a query/report performance boost in some cases.
v To add or drop the ability to display one row per value in the report (which can result in a query/report performance boost in some cases), click the Add Distinct check box. This selection yields condensed reports.
v The time fields against which the Period From and Period To runtime
parameters are compared to select the rows of the report. The Query Builder
uses the main entity (among other parameters) to determine which time fields
are used when defining the Period From and Period To values. This can be
important for long-running sessions, such as when pooled sessions are kept
open by an application server. When applicable, the Period Start/Period End
from the Access Period entity is used, in other cases it will choose period values
according to the main entity:
– Session - the time stamp used is for the last update that is made to the
session entity
– Session Start - the starting time of the session entity is used
– Session End - the ending time of the session entity is used
– Full SQL - time stamp from Full SQL domain; query includes rows from the
Full SQL domain even if not linked to values (for example - when Log Full
Details is set, there are no values)
– Full SQL Values - time stamp from the Full SQL domain; query includes
rows only if they have values from the Full SQL domain even if not linked to
the Field domain

– Field SQL Values - time stamp from the Full SQL domain; query includes
rows only if they have values from the Full SQL domain and they are linked
to the Field domain
v In the Main Entity screen is the selection Run in Two Stages.
Use this selection for two-stage execution for Audit tasks of type report.
This applies to reports on queries on specific tables only. This two-stage
mechanism applies to running queries as audit processes with columns and
conditions only on the following entities: Access (client/server), Session, Access
Period, Construct (SQL), Object, and Sentence (Command).
This two-stage mechanism is not used if the query contains a condition with the
Like Group operator or any alias-related operator (such as In Aliases Group) or
the condition uses Having.
In addition to using the query builder, each query can be set to run in two stages. By default, queries run using the old method. In order for a query to run in two stages, a flag must be set in the query builder. In addition, this method of running queries can be disabled (system-wide) to make all audit tasks use the old method by creating the file /var/log/guard/DontRunInTwoStages. The existence of this file indicates that the new two-stage method should NOT be used.

Note: Fields containing tuples (combined fields) are not supported in the two-stage execution in this release.

Note: The Main Entity drop-down list includes only primary entities. However, access to secondary entities (for example, Session Start and Session End) can be done through their corresponding primary entity (for example, Session for Session Start and Session End).

Sorting

By default, query data is sorted in ascending order by attribute value, with the sort
keys ordered as the attributes appear in the query. Aliases are ignored for sorting
purposes. The actual data values are always used for sorting. Attributes for which
values are computed by the query (Count, Min, Max, or Avg) cannot be sorted.

To change the default sort order:


1. Check the Order-by check box.
2. Enter a number for Sort Rank (1 is the most major sort key).
3. Optionally, check the Descend check box to sort the values of that attribute in
descending sequence.

The last column of a tabular report is a count of main entity occurrences. To sort on this count in descending sequence (in other words, listing the greatest number of occurrences first), mark the Sorted by occurrences check box.

Timestamps

A timestamp (lowercase t) is a data type containing a combined date-and-time value, which when printed displays in the format yyyy-mm-dd hh:mm:ss (for example, 2012-07-17 15:40:25). When creating or editing a query, most attributes with a timestamp data type display with a clock icon in the Entity List panel.

A Timestamp (uppercase T) is an attribute defined in many entity types, containing the time that the entity was last updated. For many timestamp attributes, you can print the date, time, weekday, or year components separately, by referencing additional Timestamp attributes (Date, Time, Weekday, or Year).

Using the Query Builder


Use the Query Builder to create or modify queries. Specify the domain you want to
query, choose a main entity, then use the Query Builder to define or modify a
query.
1. Open the Query Builder by clicking Comply > Custom Reporting > Custom
Query Builder.
2. Determine the domain you want to query. Select an item from the Domain
Finder menu and click Search, or click New to create a custom domain.
3. Choose an existing query using the filter menus in the Query Finder, or click
New to create a new query.
4. There are three main components to the Query Builder screen:
v The Entity List pane identifies all entities and attributes contained in the
domain. Entities are represented as folders, and attributes are the items
within the folders. Click on an entity folder to display its attributes, or click
again to hide them. For a description of all entities and attributes, see Entities
and Attributes in the Domains, Entities, and Attributes appendix.
v The Query Fields pane lists all fields to be accessed, what is to be displayed
for that field (its value, a count, minimum, maximum, or average), and the
sort order. For more information about using this pane, see Query Fields
Overview.
v The Query Conditions pane specifies any conditions for selecting these fields
(for example, where VERB = UPDATE). For more information about using this
pane, see Query Conditions Overview.

Creating a Query
1. Open the Query Builder for the appropriate domain.
2. Click New to open the New Query – Overall Details panel.
3. Type a unique query name in the Query Name box. Do not include apostrophe
characters in the query name.
4. Select the main entity for the query from the Main Entity list. Remember that
the main entity controls the level of detail that is available for the query, and
that it cannot be changed. Basically, each row of data returned by the query
will represent a unique instance of the main entity, and a count of occurrences
for that instance.
5. Click Next. The new query opens in the Query Builder panel. To complete the
definition, see one of the following topics:
v Query Builder Overview
v Modify a Query

Modifying a Query

You cannot modify the Guardium predefined queries, but you can clone a query
and modify the clone as needed.
1. Choose a domain and main entity to open the Query Builder for the query you
want to modify.
2. Click Clone, enter a new name for the query (apostrophes are not allowed),
and click Save.
3. Refer to the Query Builder Overview topic to modify any component of the
query definition.

Removing a Query
You cannot remove a query that is being used by some other component. To delete
such a query, you must first delete all components that use it (reports or
correlation alerts, for example). When attempting to delete a query, the reports and
correlation alerts dependent on the query will be listed.
1. Choose a domain and query to open the Query Builder for the query you want
to delete.
2. Click Delete.

Query Fields Overview

The Query Fields pane lists the columns of data to be returned by the query.

The Field Mode menus indicate what to print for the field: its Value, Count
(number of distinct values), Min, Max, Average (AVG) or Sum for the row. The
Value selection is not available for attributes from entities greater than the main
entity in the entity hierarchy for the domain.

There are two ways to add a field to the Query Fields pane:
v Pop-Up Menu Method:
1. From the Entity List, click on the field to be added.
2. Select Add Field from the pop-up menu.
v Drag-and-Drop Method:
1. From the Entity List, click on the icon of the field name (not on the field
name itself), drag the icon to the Query Fields pane and release it.

When a field is added, it will be added to the end of the list.

To move a field up or down in the Query Fields pane, check the field's check box
and click the Up or Down icons to move the field up or down one row.

A Caution about Full SQL Attributes in Queries

Beware of using the Full SQL attribute in a query. It may produce excessively large
reports, because each distinct value of the attribute (the complete SQL query string
in this case) will be returned in a separate row.

On the other hand, the report may contain no information at all, or many blank
columns where you are expecting Full SQL strings. Guardium captures Full SQL
only when directed to do so by policy rules - and the rules may not have been
triggered during the reporting period.

Do not confuse the Full SQL attribute with the ability to drill down to the SQL for
most queries in the Data Access domain having anything to do with SQL requests.

Groups of Types other than Types defined in Attribute

Validation on group type is often restrictive. Using Query Conditions in the Query Builder, a group of a type other than the type defined for the attribute in the group condition is permitted. These additional choices are available only for the operators IN GROUP and IN DYNAMIC GROUP. The selection of types other than the type defined for the condition is performed in the run-time parameter of the tabular report.

1. Create a group in the Group Builder by clicking Setup > Tools & Views >
Group Builder. Specify a Group Name and choose OBJECTS for Group Type.
2. Create an Access report in the Report Builder by clicking Setup > Reports >
Report Builder.
3. Specify a query name and click on the OBJECT folder from the Entity List in
order to see more choices.
4. Highlight Object Name and click once in order to get the ADD CONDITION
choice. Click Add Condition so that a line is added to the Query conditions
section in the main body of the menu screen.
5. Go to the drop-down selection next to the attribute Object name and choose,
in the Operator column, IN GROUP or IN DYNAMIC GROUP. In the second
drop-down selection (Run-time Parameter column), choose the group that you
created in step 1.
6. Save your work. Click Generate Tabular and then click Add to My New
Reports.
7. Go to the My New Reports tab and highlight the report you created.
8. Click Customize next to the report name. This opens a tab called Customize
Portlet (Run-time Parameters).
9. Open up the drop-down selection and the groups of the type corresponding to
the entity being tested will appear at the beginning of the list, then a double
dash line, and then the rest of the groups. This is where different groups can
be selected.
10. Save your work by clicking Update.
Table 32. Buttons
Delete - 1. Select the query to be deleted. 2. Click Delete.
Clone - 1. Select the query to be cloned. 2. Click Clone. 3. Enter a new name for the cloned query.
Roles - Assigning roles to reports while in the Query Builder only assigns the role to the Query, not the report. Assign roles to reports in the Report Builder. See Chapter 6, “Reports,” on page 299.
Save - Click Save when you have finished all the tasks required on the menu screen.
Back - Move back between menu screens of a multi-screen Guardium task or function by using the Back button. The back arrow in the web browser does not work for navigation between menu screens.
Set as Data Mart - A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and organizes the data in a generic fashion that can be used later for analysis and reports.

Query Conditions
Use the AND, OR and HAVING operators with parentheses to create query
conditions.

The AND, OR and HAVING operators are located in the Query Conditions title
bar in the Query Builder.

Select from the Entity List and use the operators to build query conditions as part
of your query.

Note:

AND operators have precedence over OR operators.

All conditions are independent. Group conditions together by adding left and right
parentheses around the conditions. Use brackets in complicated query conditions.

Add an AND operator or an OR operator to the end or middle of the condition list
using the add-condition menu or drag-drop the attribute's icon. Select and remove
conditions by clicking Delete. Save the query. If the generated SQL query is
invalid, the query will not save, and an error message results.

Note: Using parentheses

When a condition is selected, pressing the left parenthesis button adds one left
parenthesis condition before the first selected condition. Pressing the right
parenthesis button will add one right parenthesis condition after the first selected
condition. If there is no condition that is selected, pressing the parentheses buttons
has no effect.

When creating a query condition that uses parentheses, the parentheses appear in
the UI BEFORE the operator, but are applied AFTER the operator. For example, a
query condition is displayed as, this (AND that OR another). However, the actual
logic is, this AND (that OR another).

There are two parts in the condition display panel: one starts with a WHERE
condition and another one starts with a HAVING condition.

In the HAVING part, the aggregate field has the options Count, Min, Max, and AVG. The option SUM also applies to certain entities with ID in the name (Session ID, Global ID, Full SQL ID, Instance ID). If the HAVING button is not checked, the condition is inserted into the WHERE part with the aggregate field as an empty string. If the HAVING button is checked, the condition is inserted into the HAVING part and the aggregate field has options. After adding or removing a condition, the condition option is updated. Pressing SAVE generates the SQL. The SQL is validated before it is saved. If validation fails (for example, because of a syntax error), an alert error message is generated and a more detailed error description is put in the log. If a condition is added to the wrong part (for example, the HAVING button is set and the attribute icon is dropped on the WHERE part, or vice versa), a not-matched alert message is generated. If the selected condition is in the WHERE part but the HAVING button is set, adding the condition fails because the setting is not matched.

The attributes Total Access, Failed SQLs, and Successful SQLs can be added only
under a HAVING clause (not the WHERE clause).

Allowed queries must have one time stamp column and either at least one column with Mode=Count OR the count flag set (or both). The query column to be evaluated by the query must be one of the columns with Mode=Count OR the total access column (if the count flag is set).

Add or Remove a Query Condition


1. To remove a query condition, mark the check box in the row for that condition,
and click the X button (Delete marked item) in the Query Conditions title bar.
2. To add a condition, create a row in the Query Conditions list for the
appropriate field from the Entity List pane.
To add an AND condition, select the AND radio button in the Query
Conditions title bar and do one of the following:
v Select an entity from the Entity List pane and select Add Condition from the
pop-up menu.
v Drag the field icon from the Entity List pane, and drop it in the Query
Conditions pane.
To add an OR condition, select the OR radio button in the Query Conditions
title bar and do one of the following:
v Drag the field icon from the Entity List pane, and release it to the start of the
condition for which it is an OR condition.
v Mark the check box for the condition to which you want to add the OR
condition, click the field in the Entity List pane, and then select Add
Condition from the pop-up menu.
3. Optional: Use the Aggregate drop-down to select an aggregate of the attribute
to be used for the query condition: Count, Min (minimum value), Max
(maximum value), or AVG (average value). Restrictions apply, as follows:
v You cannot use an aggregate in an OR condition.
v You cannot add an OR condition to one that contains an aggregate.
4. Select the operator for the new condition from the list. Not every attribute type
has the same set of operators available. For example, attributes that cannot be
associated with groups will not have any of the group options (IN GROUP,
LIKE GROUP). However, when adding tuples (multiple attributes that are
combined together to form a single group) as a condition of a query, all
operators for new condition are available for selection.
Table 33. Operator for New Condition
< - Less than
<= - Less than or equal to
<> - Not equal to
= - Equal to
> - Greater than
>= - Greater than or equal to
CATEGORIZED AS - Member of a group belonging to the category selected from the drop-down list, which appears when a group operator is selected.
CLASSIFIED AS - Member of a group belonging to the classification selected from the drop-down list, which appears when a group operator is selected.
IN DYNAMIC GROUP - Member of a group that is selected from the drop-down list in the runtime parameter column, which appears when a group operator is selected.
IN GROUP - Member of the group that is selected from the drop-down list in the runtime parameter column, which appears when a group operator is selected. IN GROUP or IN ALIASES GROUP cannot both be used at the same time.
IN DYNAMIC ALIASES GROUP - Works on a group of the same type as IN DYNAMIC GROUP, but assumes that the members of that group are aliases.
IN ALIASES GROUP - Works on a group of the same type as IN GROUP, but assumes that the members of that group are aliases. Note that the IN GROUP/IN ALIASES GROUP operators expect the group to contain actual values or aliases, respectively. An alias provides a synonym that substitutes for a stored value of a specific attribute type. It is commonly used to display a meaningful or user-friendly name for a data value. For example, Financial Server might be defined as an alias for IP address 192.168.2.18.
IS NOT NULL - Attribute value exists, but might be blank or unprintable.
IS NULL - Empty attribute.
IN PERIOD - For a time stamp only, is within the selected time period.
LIKE, LIKE GROUP - Matches a like value that is specified in the boxes. A like value uses the percent sign as a wildcard character, and matches all or part of the value. Alphabetic characters are not case-sensitive. For example, %tea% would match tea, TeA, tEam, steam. If no percent signs are included, the comparison operation is an equality operation (=).
NOT IN DYNAMIC GROUP - Not equal to any member of a group, which is selected from the drop-down list in the runtime parameter column, which appears when a group operator is selected.
NOT IN DYNAMIC ALIASES GROUP - Works on a group of the same type as NOT IN DYNAMIC GROUP, but assumes that the members of that group are aliases.
NOT IN GROUP - Not equal to any member of the specified group, which is selected from the drop-down list in the runtime parameter column, which appears when a group operator is selected.
NOT IN ALIASES GROUP - Works on a group of the same type as NOT IN GROUP, but assumes that the members of that group are aliases.
NOT IN PERIOD - For a time stamp only, not within the selected time period.
NOT LIKE - Not like the specified value (see the description of LIKE).
NOT LIKE GROUP - Not like the value that is specified in LIKE GROUP.
NOT REGEXP - Not matched by the specified regular expression.
REGEXP - Matched by the specified regular expression. For detailed information about how to use regular expressions, see Regular Expressions.

Note: There are four special words that are not allowed as the name of a
parameter: user; group; role; page.
An error results if an attempt is made to save a query with any of these words
in the parameter. There are two types of conditions where this applies:
v When creating a query condition with an operator such as =, <, LIKE, etc,
and then selecting Parameter. This field does not allow the special words.
v When creating a query condition with a DYNAMIC GROUP type operator
(IN, NOT IN, IN ALIAS, etc), this field does not allow the special words.
5. For a group operator, select a group from the list.
For most other operators, you must supply a value for the condition, or
indicate that a runtime parameter value (not containing exclamation points) is
supplied later (when the query is run). In these cases, a drop-down with three
options appears. Do one of the following:
v Select Value and enter an exact value in the box.
v Select Parameter and enter a name for the runtime parameter (the name
must not contain spaces).

v Select Attribute and select another attribute to match the selected one (for
example, this can be used to test for local traffic by matching the client and
server IP addresses).
There is an Add Expression icon next to the Value, Parameter, Attribute
selections. Use this icon to enter query conditions, including user-defined string
and mathematical expressions.
Use this feature where the user needs to add a condition that is based not on
the entire content of the attribute as is, but on part of the attribute, a function
of the attribute, or a function that combines more than one attribute.
An example is: INSTR(:attribute, ’150.1’) = 5, which returns all instances of
Client IP matching the 5 characters listed. Type the character 5 in the entry box
next to the Add Expression icon. Type the INSTR(:attribute, ’150.1’)
expression in the separate Build Expression window. Test the validity of the
expression in the Build Expression window. Another example is:
LENGTH(:attribute) >= 40, which returns the length of any SQL statement
greater than 40 characters. The expression might or might not contain
references to the actual attribute and can also contain references to other
attributes.
6. When you are done adding all conditions, remember to save the definition.

Build Expression on Query condition

There is an Add Expression icon next to the Value, Parameter, Attribute selections.
Use this icon to enter query conditions, including user-defined string and
mathematical expressions.

Use this feature where the user needs to add a condition that is based not on the
entire content of the attribute as is, but on part of the attribute, a function of the
attribute, or a function that combines more than one attribute.

An example:

Return the location of the string 150.1, from the value 192.150.1.x., where the
string 150.1 is at the fifth character of the value. The string 150.1 represents all
instances of Client IP matching the 5 characters listed.

When the function is run in the Expression field, it returns a value, and that value
should be in the entry box.

Use the function, INSTR(:attribute, ’150.1’) with a "5" value in the entry box
next to the Add Expression icon to return the records with 150.1 in the fifth
location.

If the function is INSTR(:attribute, ’150.1’) = 5, then it becomes a Boolean phrase, and the only values in the entry box are 0 or 1.

Type the INSTR(:attribute, ’150.1’) expression in the separate Build Expression window.

Test the validity of the expression in the Build Expression window.

Another example: LENGTH(:attribute) >= 40, which returns the length of any SQL
statement greater than 40 characters. The expression might or might not contain
references to the actual attribute and can also contain references to other attributes.

Domains, Entities, and Attributes
A domain provides a view of the data that Guardium stores.

Each domain contains a set of data related to a specific purpose or function (data
access, exceptions, policy violations, and so forth). For a description of all domains,
see Domains.

Each domain contains one or more entities. An entity is a set of related attributes,
and an attribute is basically a field value. For a description of all entities and
attributes, see Entities and Attributes.

A Guardium query returns data from one domain only. When the query is defined,
one entity within that domain is designated as the main entity of the query. Each
row of data returned by a query will contain a count of occurrences of the main
entity matching the values returned for the selected attributes, for the requested
time period. This allows for the creation of two-dimensional reports from entities
that do not have a one-to-one relationship.

There is a separate query builder for each domain, and access to each query
builder is controlled by security roles. Thus each Guardium role typically has
access to a subset of domains, depending on the function of that role within the
company. Guardium admin role users typically have access to all reporting
domains.

Some domains are available only when optional components (CAS or Classification, for example) are installed. Other domains report information pertaining to the Guardium appliance (archiving activity, for example), and are available by default to Guardium admin role users only.

Some of the attributes described in this appendix are available to users with the
admin role only. These are labeled: Reserved for admin role use only.

For users who do not have the admin role, these attributes will not be available
from the query builder.

Similarly, not all attributes are available for all database protocols. When using a
query builder, if you notice that an entity or attribute described in the
documentation is not listed in the Entities pane, that entity or attribute is not
available for the selected database type.

See the following topics:


v Domains
v Entities and Attributes
v Building queries

Domains
The following table describes the query builders and associated domains that are
provided with your Guardium system. Your company may have defined additional
custom domains.


Access to the query builder for each domain is controlled by security roles, so each
user role typically has access to a separate set of domains. Some domains are
available only when optional components are installed (CAS, for example).

On the default admin portal, all query builders can be opened from the menu of
the Tools > Report Building tab. On the default user portal, many query builders
can be opened from the Custom Reporting application: Monitor/Audit > Build
Reports.

Following a short description of the domain, the Description column lists the
default security role assigned for each domain, and indicates how to access the
domain from the default user portal (if available).

Table 34. Domains
Each entry lists the query builder name, its domain name in parentheses, a description, the default security roles, and the user portal location (if available).

Access Policy (Access Policy) - Use this domain to track all available policies on the system. Similar to the Installed Policy domain, which is used to track all installed policies on the system. Roles: all. User portal: Not available.

Access (LOGGER INFO) - All of the client/server, session, SQL, and access-period related data. This is the data collected by the inspection engines every time a request is sent to a server being monitored. Roles: all. User portal: Monitor/Audit > Build Reports > Track data access.

Aggregation/Archive (AGGREGATION/EXPORT/IMPORT) - Aggregation and archiving activity, including the date, time, and status of each operation (archive, send, purge, etc.). Roles: admin. User portal: Not available.

Alert (ALERT) - All alerts generated and sent by Guardium. Roles: all. User portal: Monitor/Audit > Build Reports > Track sent alerts.

Application (Application Data) - Connection, session, and application data recorded for special non-Guardium applications (Siebel and SAP, for example). Roles: admin. User portal: Not available.

Audit Process (AUDIT TRAIL) - The execution of audit processes and the distribution of results. Roles: all. User portal: Monitor/Audit > Build Reports > Audit Process builder.

Auto-discovery (AUTODETECT DB DISCOVERY) - Database auto-discovery activity, including all processes that have been run, and the hosts and ports discovered. Roles: all. User portal: Discover > DB Discovery > Auto-discovery Query Builder.

CAS Changes (CAS Changes) - All changes detected by CAS, including any changed data recorded. Roles: cas. User portal: Not available.

CAS Config (CAS Config) - CAS instance configurations, describing the use of templates on specific hosts. Roles: cas. User portal: Not available.

CAS Host History (CAS Host History) - History of CAS changes applied to CAS agent hosts. Roles: cas. User portal: Not available.

CAS Templates (CAS Templates) - Reports on the contents of CAS templates (which define the items to monitor). Roles: cas. User portal: Not available.

Classifier Results (Classification Process) - Reports on classifier process runs and results. Roles: admin. User portal: Not available.

Comments (COMMENT) - User-defined comments for various Guardium components. Roles: all. User portal: Monitor/Audit > Build Reports > Comment builder.

Custom Domain Builder - Custom domains have been defined for uploading commonly used tables and products. See Custom Table Builder; a custom domain contains one or more custom tables. If it contains multiple tables, you define the relationships between the tables when defining the custom domain.

Custom Query Builder - User-defined domains can define any tables of data uploaded to the Guardium appliance. Roles: all. User portal: Monitor/Audit > Build Reports > Custom query builder.

Custom Table Builder - A custom table contains one or more attributes that you want to have available on the Guardium appliance. For example, you may have an existing database table relating encoded user names to real names. In the network traffic, only the encoded names will be seen. By defining a custom table on the Guardium appliance, and uploading data for that table from the existing table, you will be able to relate the encoded and real names.

DB Default Users Enabled - Non-credential scan: a process to scan a list of databases and check whether default users are enabled. The default users, as well as the list of servers to scan, are provided as parameters to the API. A default group is provided for each database type with the default users and passwords created by the database on every installation; customers can add to or remove from that list. The groups are of type DB User/DB Password, and the names of the default groups are: ORACLE Default Users; DB2 Default Users; SYBASE Default Users; MS SQL SERVER Default Users; INFORMIX Default Users; MYSQL Default Users; TERADATA Default Users; IBM ISERIES Default Users; POSTGRESQL Default Users; NETEZZA Default Users.

Discovered Instance (Discovered Instances) - Instances that have been discovered by GIM. Roles: all. User portal: Monitor/Audit > Build Reports > Discovered Instance.

Enterprise Buffer Usage - Shows the aggregate of Sniffer Buffer Usage from all managed units. Roles: none. User portal: Not available.

Exceptions (LOGGER EXCEPTIONS) (see the note at the end of the table) - All of the exceptions and exception-related data. These are SQL exceptions sent from a database server and collected by inspection engines, as well as exceptions generated by Guardium itself. Roles: all. User portal: Monitor/Audit > Build Reports > Track exceptions.

Flat Log (Flat Log) - Flat log processing activity. Roles: none. User portal: Monitor/Audit > Build Reports > Flat Log builder.

GIM Events (GIM Events) - Guardium Installation Manager events. Roles: all. User portal: Monitor/Audit > Build Reports > GIM Events.

Group (Group) - Membership in Guardium groups. Roles: all. User portal: Monitor/Audit > Build Reports > Group builder.

Guardium Activity (USER ACTIVITY AUDIT) - All modifications performed by Guardium users to any Guardium entity, such as a report or query definition or modification. Roles: admin. User portal: Not available.

Guardium Login (USER LOGIN) - All Guardium user login and logout information. Roles: admin. User portal: Not available.

Installed Policy (Installed Policy) - Provides a description of policy parameters and rules for the installed policy. The Installed Policy domain supports multiple policies and multiple actions per rule. Roles: all. User portal: Not available.

Policy Violations (ACCESS RULES VIOLATIONS) - All policy violation data, for all violations of the policy detected by the Guardium inspection engines or S-TAPs. Roles: all. User portal: Monitor/Audit > Build Reports > Policy violations builder.

Policy Violations Summary (Access Rules Violations) - All policy violation data, for a summary of all violations of the policy detected by the Guardium inspection engines or S-TAPs. Roles: all. User portal: Monitor/Audit > Build Reports > Policy violations summary builder.

Replay Results - Replays the data stream from one datasource by another, different datasource. Roles: none. User portal: Not available.

Rogue Connections (HUNTER) - Local database server processes that have circumvented S-TAP to connect to the database via shared memory, named pipes, or other non-standard means. Applies to Unix S-TAP only, when the TEE monitoring method is used. Roles: all. User portal: Monitor/Audit > Build Reports > Rogue connections builder.

Security Assessment Result (Assessment Test Result Monitor) - Records the results of vulnerability assessment processes. Roles: none. User portal: Not available.

Sniffer Buffer Usage (Sniffer Buffer Usage Monitor) - Inspection engine statistics. Roles: none. User portal: Not available.

User/Role/Application (Role User App) - Relates Guardium users, roles, and applications (to report on who has access to which Guardium applications). Roles: admin. User portal: Not available.

VA Tests (Assessment Tests) - Reports on tests that are available for security assessments. Roles: admin. User portal: Not available.

Value Change (Value Change) - All changes tracked by the trigger-based value change application. Roles: admin. User portal: Not available.

Custom Domains
Custom domains allow for user-defined domains and can define any tables of data uploaded to the appliance.

These custom entitlement (privilege) domains are used for entitlement reports, which are available when logged in as a user. To see these reports, go to the DB Entitlements user tab.

A number of custom domains have been predefined.

[Custom] Access

This domain contains all of the same entities as the standard Data Access domain. It is provided as a custom domain to allow additional user-defined domains to be built that include information from this domain and any custom tables that have been uploaded by the user. The [Custom] Access domain is meant to be cloned; because this domain is updated with each version, it is not advisable to create reports directly on it. For a description of the entities included in the Access domain, see the Access domain description in the Domains topic.

S-TAP Info (Central Manager)


Report: See S-TAP Reports. On a Central Manager, an additional report, S-TAP
Info, is available. This report monitors S-TAPs of the entire environment. Upload
this data using the Custom Table Builder.

S-TAP info is a predefined custom domain which contains the S-TAP Info entity
and is not modifiable.

When defining a custom query, go to the upload page and click Check/Repair to create the custom table in the CUSTOM database; otherwise, saving the query will not validate it. This table loads automatically from all remote sources. A user cannot select which remote sources are used - it pulls from all of them.

Based on this custom table and custom domain, there are two reports:



Enterprise S-TAP view shows, from the Central Manager, information on an active S-TAP on a collector and/or managed unit. (If there are duplicates for the same S-TAP engine, one being active and one being inactive, then the report uses only the active one.)

Detailed Enterprise S-TAP view shows, from the Central Manager, information on all active and passive S-TAPs on all collectors and/or managed units.

If the Enterprise S-TAP view and Detailed Enterprise S-TAP view look the same, it is because there is only one S-TAP on one managed unit being displayed. The Detailed Enterprise S-TAP view would look different if there were more S-TAPs and more managed units involved.

These two reports can be chosen from the TAP Monitor tab of a standalone system, but they will display no information.

DB Entitlement Domains

Along with authenticating users and restricting role-based access privileges to data,
even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have
the privileges required to perform their duties. This is also known as database user
rights attestation reporting.

Use Guardium’s predefined database entitlement (privilege) reports, for example, to see who has system privileges and who has granted these privileges to other users and roles. Database entitlement reports are important for auditors tracking changes to database access and for ensuring that security holes do not exist from lingering accounts or ill-granted privileges.

DB Entitlement Reports use the Custom Domain feature to create links between the external data on the selected database and the internal data of the predefined entitlement reports. See Database Entitlement Reports for further information on how to use predefined database entitlement reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.

Note: DB Entitlements Reports are optional components enabled by product key. If


these components have not been enabled, the choices will not appear in the
Custom Domain Builder/Custom Domain Query/Custom Table Builder selections.

The predefined entitlement reports are listed as follows. They appear as domain
names in the Custom Domain Builder/Custom Domain Query/ Custom Table
Builder selections.
v Oracle DB Entitlements
v MYSQL DB Entitlements
v DB2 DB Entitlements
v SYBASE DB Entitlements
v Informix DB Entitlements
v MSSQL 2000 DB Entitlements
v MSSQL 2005/2008 DB Entitlements
v Netezza DB Entitlements
v Teradata DB Entitlements
v PostgreSQL DB Entitlements



Oracle DB Entitlements
The following domains are provided to facilitate uploading and reporting on
Oracle DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.

Oracle
v ORA Accnts of ALTER SYSTEM - Accounts with ALTER SYSTEM and ALTER
SESSION privileges
v ORA Accnts with BECOME USER - Accounts with BECOME USER privileges
v ORA All Sys Priv and admin opt - Report showing all system privilege and
admin option for users and roles
v ORA Obj And Columns Priv - Object and columns privileges granted (with or
without grant option)
v ORA Object Access By PUBLIC - Object access by PUBLIC
v ORA Object privileges - Object privileges by database account not in the SYS
and not a DBA role
v ORA PUBLIC Exec Priv On SYS Proc - Execute privilege on SYS PL/SQL procedures assigned to PUBLIC
v ORA Roles Granted - Roles granted to users and roles
v ORA Sys Priv Granted - Hierarchical report showing system privilege granted to users including recursive definitions (i.e. privileges assigned to roles and then these roles assigned to users)
v ORA SYSDBA and SYSOPER Accnts - Accounts with SYSDBA and SYSOPER
privileges

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

grant select on sys.dba_tab_privs to sqlguard;

grant select on sys.dba_roles to sqlguard;

grant select on sys.dba_users to sqlguard;

grant select on sys.dba_role_privs to sqlguard;

grant select on sys.dba_sys_privs to sqlguard;

grant select on sys.obj$ to sqlguard;



grant select on sys.user$ to sqlguard;

grant select on sys.objauth$ to sqlguard;

grant select on sys.table_privilege_map to sqlguard;

grant select on sys.dba_objects to sqlguard;

grant select on sys.v_$pwfile_users to sqlguard;

grant select on sys.dba_col_privs to sqlguard;

MYSQL DB Entitlements

The following domains are provided to facilitate uploading and reporting on


MYSQL DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.

MYSQL: The queries ending in _40 use the most basic version of the mysql schema
(for MySQL 4.0 and beyond). The information_schema has not changed since it
was introduced in MySQL 5.0, so there is a set of _50 queries, but no _51 queries.
The _50 queries work for MySQL 5.0 and 5.1 and for 6.0 when it comes out, since
the information_schema is not expected to change in 6.0. The queries ending in
_502 (MYSQL502) use the new information_schema, which contains much more
information and is much more like a true data dictionary.
v MYSQL Database Privileges 40
v MYSQL User Privileges 40
v MYSQL Host Privileges 40
v MYSQL Table Privileges 40
v MYSQL Database Privileges 500
v MYSQL User Privileges 500
v MYSQL Host Privileges 500
v MYSQL Table Privileges 500
v MYSQL Database Privileges 502
v MYSQL User Privileges 502
v MYSQL Host Privileges 502
v MYSQL Table Privileges 502

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list details the minimal privileges required, in the database table (or
view of the database table), in order for the entitlement to work.

Note: In addition to the privileges required, the user should connect to the MYSQL
database to upload the data.



The entitlement queries for all MySQL versions through MySQL 5.0.1 use this set of tables: mysql.db, mysql.host, mysql.tables_priv, and mysql.user.

Beginning with MySQL 5.0.2, and for all later versions, the entitlement queries use this set of tables: information_schema.SCHEMA_PRIVILEGES, mysql.host, information_schema.TABLE_PRIVILEGES, and information_schema.USER_PRIVILEGES.
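
A minimal Python sketch follows, assuming nothing beyond the version split just described; it is not a Guardium API, and the function name and version handling are illustrative only.

# Illustrative helper (hypothetical, not part of Guardium): returns the catalog
# tables the entitlement queries read, based on the MySQL version split above.
def entitlement_catalog_tables(version: str) -> list:
    parts = (version.split(".") + ["0", "0"])[:3]
    major, minor, patch = (int(p) for p in parts)
    if (major, minor, patch) < (5, 0, 2):
        # All versions through MySQL 5.0.1
        return ["mysql.db", "mysql.host", "mysql.tables_priv", "mysql.user"]
    # MySQL 5.0.2 and all later versions
    return ["information_schema.SCHEMA_PRIVILEGES", "mysql.host",
            "information_schema.TABLE_PRIVILEGES",
            "information_schema.USER_PRIVILEGES"]

print(entitlement_catalog_tables("4.0"))    # pre-5.0.2 table set
print(entitlement_catalog_tables("5.0.2"))  # information_schema table set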

If a datasource has a MYSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all MYSQL databases the user has access to.

DB2 DB Entitlements

The following domains are provided to facilitate uploading and reporting on DB2
DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are
available from the Custom Domain Builder/Custom Domain Query/ Custom Table
Builder selections. As with other predefined entities and reports, these cannot be
modified, but you can clone and then customize your own versions of any of these
domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
v DB2 Column-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Database -level Privileges (CONNECT, CREATE, ETC.)
v DB2 Index-level Privilege (CONTROL)
v DB2 Package-level Privileges (on code packages – BIND, EXECUTE, ETC.)
v DB2 Table-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Privilege Summary

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

GRANT SELECT ON SYSCAT.COLAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.DBAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.INDEXAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.PACKAGEAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.TABAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.SCHEMAAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.PASSTHRUAUTH TO SQLGUARD;



SYBASE DB Entitlements
The following domains are provided to facilitate uploading and reporting on
SYBASE DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
v SYBASE System Privilege and Roles Granted to User including Grant option
v SYBASE Role Granted to User and System Privileges Granted to user and role
including Grant option
v SYBASE Object Access by Public
v SYBASE Execute Privilege on Procedure, function assigned To Public
v SYBASE Accounts with System or Security Admin Roles
v SYBASE Object and Columns Privilege Granted with Grant option
v SYBASE Role Granted To User

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/* These are required on MASTER database */

grant select on master.dbo.sysloginroles to sqlguard

grant select on master.dbo.syslogins to sqlguard

grant select on master.dbo.syssrvroles to sqlguard

/*These are required on every database, including MASTER */

grant select on sysprotects to sqlguard

grant select on sysusers to sqlguard

grant select on sysobjects to sqlguard

grant select on sysroles to sqlguard

If a datasource has a SYBASE database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all SYBASE databases the user has access to.



Informix DB Entitlements
The following domains are provided to facilitate uploading and reporting on
Informix DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
v Informix Object Privileges by database account not including system account
and roles
v Informix database level privileges, roles and language granted to user including
grant option
v Informix database level privileges, roles and language granted to user and role
including grant option
v Informix Object Grant to Public
v Informix Execute Privilege on Informix procedure and function granted to Public
v Informix Account with DBA Privilege
v Informix Object and columns privileges granted with Grant option
v Informix Role Granted To User and Role

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements). The following list (with
comment line heading) details the minimal privileges required, in the database
table (or view of the database table), in order for the entitlement to work.

/* Select privilege to these tables/views is required */

Because all users already have sufficient SELECT privileges on the system catalog, there is no need to grant this privilege to any user. Informix does not readily support granting system catalog privileges to individual users. The following grants would normally be used, but in this case they are not required.

grant select on systables to sqlguard;

grant select on systabauth to sqlguard;

grant select on sysusers to sqlguard;

grant select on sysroleauth to sqlguard;

grant select on syslangauth to sqlguard;

grant select on sysroutinelangs to sqlguard;

grant select on sysprocauth to sqlguard;

grant select on sysprocedures to sqlguard;

grant select on syscolauth to sqlguard;



If a datasource has an Informix database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all Informix databases the user has access to.

MSSQL 2000 DB Entitlements

The following domains are provided to facilitate uploading and reporting on


MSSQL 2000 DB Entitlements. Each of the following domains has a single entity
(with the same name), and there is a predefined report for each domain. All of
these domains are available from the Custom Domain Builder/Custom Domain
Query/ Custom Table Builder selections. As with other predefined entities and
reports, these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
v MSSQL2000 Object Privilege By database account not including default system
user
v MSSQL2000 Role/System Privileges Granted to User including grant option
v MSSQL2000 Role granted to user and role. System Privileges Granted to User
and Role including grant option
v MSSQL2000 Object Access by PUBLIC
v MSSQL2000 Execute Privilege on System Procedures and functions to PUBLIC
v MSSQL2000 Database accounts with db_owner and db_securityadmin role
v MSSQL2000 Server account with sysadmin, serveradmin and security admin /*
only run this entitlement against MASTER database */
v MSSQL2000 Object and columns privileges granted with grant option
v MSSQL2000 Role granted to user and role

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/* These are required on MASTER database */

grant select on dbo.syslogins to sqlguard

/*These are required on every database including MASTER */

grant select on dbo.sysprotects to sqlguard

grant select on dbo.sysusers to sqlguard

grant select on dbo.sysobjects to sqlguard

grant select on dbo.sysmembers to sqlguard



If a datasource has an MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all MSSQL databases the user has access to.

MSSQL 2005/2008 DB Entitlements

The following domains are provided to facilitate uploading and reporting on


MSSQL 2005 or MSSQL 2008 DB Entitlements. Each of the following domains has a
single entity (with the same name), and there is a predefined report for each
domain. All of these domains are available from the Custom Domain
Builder/Custom Domain Query/ Custom Table Builder selections. As with other
predefined entities and reports, these cannot be modified, but you can clone and
then customize your own versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.

Note: The entitlement domains for MSSQL2005 listed cover MSSQL2008 as well.
v MSSQL2005/8 Object privileges by database account not including default
system user.
v MSSQL2005/8 Role/System privileges granted To User
v MSSQL2005/8 Role/System Privilege granted to user and role including grant
option
v MSSQL2005/8 Object access by PUBLIC
v MSSQL2005/8 Execute Privilege on System Procedures and functions to PUBLIC
v MSSQL2005/8 Database accounts of db_owner and db_securityadmin Role
v MSSQL2005/8 Server account of sysadmin, serveradmin and security admin /*
only run against MASTER database */
v MSSQL2005/8 Object and columns privileges granted with grant option
v MSSQL2005/8 Role granted to user and role.

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/*These are required on MASTER database */

grant select on sys.server_principals to sqlguard

/*These are required on every database, including MASTER */

grant select on sys.database_permissions to sqlguard

grant select on sys.database_principals to sqlguard

grant select on sys.all_objects to sqlguard



grant select on sys.database_role_members to sqlguard

grant select on sys.columns to sqlguard

If a datasource has an MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all MSSQL databases the user has access to.

Netezza DB Entitlements

The following domains are provided to facilitate uploading and reporting on


Netezza DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.

Note: There is no DB error text translation for Netezza. The error appears in the
exception description. Users can clone/add a report with the exception description
for Netezza as needed.
v Netezza Obj Privs by DB Username - Object privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Admin Privs by DB Username - Admin privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Group /Role Granted To User - Group (Role) granted to user
v Netezza Obj Privs By Group - Object privileges with or without grant option by
GROUP excluding PUBLIC.
v Netezza Admin Privs By Group - Admin privileges with or without grant option
by GROUP excluding PUBLIC.
v Netezza Admin Privs By DB Username, Group - Admin privileges with or
without grant option by database username, group excluding ADMIN account
and PUBLIC group.
v Netezza Obj Privs Granted - Object privileges granted with or without grant
option to PUBLIC.
v Netezza Admin Privis Granted - Admin privileges granted with or without grant
option to PUBLIC.
v Netezza Global Admin Priv To Users and Groups - Global admin privilege
granted to users and groups excluding ADMIN account.
v Netezza Global Obj Priv To Users and Groups - Global object privilege granted
to users and groups excluding ADMIN account.

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.



/* Select privilege to these tables/views is required */

/* This script must be run from the system database */

GRANT SELECT ON SYSTEM VIEW TO sqlguard;

GRANT LIST ON DATABASE TO sqlguard;

GRANT LIST ON USER TO sqlguard;

GRANT LIST ON GROUP TO sqlguard;

GRANT SELECT ON _V_CONNECTION TO sqlguard;

For Netezza entitlement queries, it is recommended to connect to the SYSTEM database, especially when granting privileges to the user who is going to run these reports. The grants MUST be issued from the SYSTEM database; otherwise, the granted privileges apply only to one particular database. When the grants are issued from the SYSTEM database, a special feature allows the granted privileges to carry through to all the databases.

Teradata DB Entitlements

The following domains are provided to facilitate uploading and reporting on


Teradata DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
v Teradata Object privileges by database account not including default system
users.
v Teradata System privileges and roles granted to users including grant option.
v Teradata Roles granted to users and roles including grant option.
v Teradata Role granted to users and roles. System privileges granted to users
and roles including grant option.
v Teradata Objects and System privileges granted to public. Note role cannot be
granted to public in Teradata.
v Teradata Execute privileges on system database objects to public.
v Teradata System admin, Security admin privileges granted to user and role.

Note: There are no such roles as System admin or Security admin in Teradata. Users must create their own roles. These are some important system privileges that would normally not be granted to a normal user: ABORT SESSION, CREATE DATABASE, CREATE PROFILE, CREATE ROLE, CREATE USER, DROP DATABASE, DROP PROFILE, DROP ROLE, DROP USER, MONITOR RESOURCE, MONITOR SESSION, REPLICATION OVERRIDE, SET SESSION RATE, SET RESOURCE RATE.
v Teradata Object privileges granted with granted option to users. Not including
DBC and grantee = 'All'.



For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

GRANT SELECT ON DBC.AllRights TO sqlguard;

GRANT SELECT ON DBC.Tables TO sqlguard;

GRANT SELECT ON DBC.AllRoleRights TO sqlguard;

GRANT SELECT ON DBC.RoleMembers TO sqlguard;

PostgreSQL DB Entitlements

The following domains are provided to facilitate uploading and reporting on


PostgreSQL DB Entitlements. Each of the following domains has a single entity
(with the same name), and there is a predefined report for each domain. All of
these domains are available from the Custom Domain Builder/Custom Domain
Query/ Custom Table Builder selections. As with other predefined entities and
reports, these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.

The entitlement custom domains/queries/reports for PostgreSQL are as follows (each is listed with report name, description, and note):

v PostgreSQL Priv On Databases Granted To Public User Role With Or Without Granted Option. Privilege on databases granted to public, user and role with or without granted option. Run this on any database, ideally PostgreSQL.
v PostgreSQL Priv On Language Granted To Public User Role With Or Without
Granted Option. Privilege on Language granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Schema Granted To Public User Role With Or Without
Granted Option. Privilege on Schema granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without
Granted Option. Privilege on Tablespace granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.
v PostgreSQL Role Or User Granted To User Or Role. Role or User granted to user
or role including grant option. Run this once in any database. Ideally
PostgreSQL.
v PostgreSQL Super User Granted To User Or Role. Super user granted to user or
role. Run this once in any database. Ideally PostgreSQL.
v PostgreSQL Sys Privs Granted To User And Role. System privileges granted to
user and role. Run this once in any database. Ideally PostgreSQL.



v PostgreSQL Table View Sequence and Function privs Granted To Public. Tables, Views, Sequence and Functions privileges granted to public. Run this per database.
v PostgreSQL Table View Sequence and Function Privs Granted With Grant
Option. Tables, Views, Sequence and Functions privileges granted to user and
role with grant option only. Exclude PostgreSQL account.
v PostgreSQL Table View Sequence Function Privs Granted To Roles. Tables,
Views, Sequence and Functions privileges granted to roles. Not including
public. Run this per database.
v PostgreSQL Table Views Sequence and Functions Privs Granted To Login. Tables,
Views, Sequence and Functions privileges granted to logins. Not including
postgres system user. Run this per database.

Note: As of version 8.3.6, PostgreSQL does not support the grant admin option to public. There are only functions, no stored procedures. There is no support for column grants, only table grants. Public is a group, not a user. Public does not show up in pg_roles. The only privilege needed to run all these queries is: GRANT CONNECT ON DATABASE PostgreSQL TO username;

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/*This is required on POSTGRES database*/

grant connect on database postgres to sqlguard;

/*These are required on every database, including POSTGRES (By default these are
already granted to PUBLIC) */

grant select on pg_class to sqlguard;

grant select on pg_namespace to sqlguard;

grant select on pg_roles to sqlguard;

grant select on pg_proc to sqlguard;

grant select on pg_auth_members to sqlguard;

grant select on pg_language to sqlguard;

grant select on pg_tablespace to sqlguard;



grant select on pg_database to sqlguard;

If a datasource has a PostgreSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all PostgreSQL databases the user has access to.

Entities and Attributes


This topic contains a description of the attributes contained in each entity.

For an overview of domains, entities, and attributes, see “Domains, Entities, and
Attributes” on page 323. For a description of all domains, see “Domains” on page
323.

Access Policy Entity

Describes all available policies on the system. Similar to Installed Policies entity
used for all installed policies on system.

Entity list for Access Policy: Access Policy entity, Rule entity, Rule Action entity, and Alert Notification entity. See the Rule Entity, Rule Action Entity, and Alert Notification Entity descriptions for their lists of attributes.
Table 35. Access Policy Entity
Attribute Description

Policy ID Uniquely identifies an access policy.

Policy Description Describes the access policy.

Selective Audit Trail Indicates if this is a selective audit trail policy (T/F).

Audit Pattern Test pattern used for a selective audit trail policy.

Timestamp Timestamp for the creation of the record.

Access Period Entity


Access Periods are related to Sessions. By default, an access period is one hour
long, but this can be changed by the Guardium administrator in the Inspection
Engine Configuration (it corresponds to the Logging Granularity).

Timeout values depend on the number of sessions opened by each analyzer thread. For each analyzer thread, the following default values apply:
v If the number of open sessions is greater than 0 and less than 250, the timeout is 60 minutes.
v If the number of open sessions is at least 250 and less than 500, the timeout is 30 minutes.
v If the number of open sessions is at least 500 and less than 750, the timeout is 15 minutes.
v If the number of open sessions is at least 750 and less than 1200, the timeout is 5 minutes.
v If the number of open sessions is 1200 or more, the timeout is 2 minutes.
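
The following minimal Python sketch, which is not part of Guardium and uses a hypothetical function name, simply restates that default timeout schedule for quick reference.

# Illustrative only: default access-period timeout, in minutes, for one
# analyzer thread, given its number of open sessions (assumed greater than 0).
def access_period_timeout_minutes(open_sessions: int) -> int:
    if open_sessions < 250:
        return 60
    if open_sessions < 500:
        return 30
    if open_sessions < 750:
        return 15
    if open_sessions < 1200:
        return 5
    return 2

print(access_period_timeout_minutes(300))   # 30
print(access_period_timeout_minutes(1500))  # 2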



Table 36. Access Period Entity
Attribute Description

Session ID Uniquely identifies a session.

Instance ID Uniquely identifies an instance of a construct.

Construct ID Uniquely identifies a command construct (for example, select a from b).

Total Access Total count of construct instances for this access period.

Period Start Date Date only from the period start attribute.

Period Start Weekday Weekday only from the period start attribute.

Period Start Time Time only from the period start attribute.

Timestamp Initially, the Timestamp value is set the first time that a request is
observed on a client-server connection during an access period. By
default, an access period is one hour long, but this can be changed by
the Guardium administrator in the Inspection Engine Configuration -
see the Guardium Administrator Guide. Thereafter, for each subsequent
request, it is updated when the system updates the average execution
time and the command count for this period.

Period End Date and time for the end of the access period.

Period End Date Date only from the period end attribute.

Period End Weekday Weekday only from the period end attribute.

Period End Time Time only from the period end attribute.

Application User Application user name.

Average Execution Time The average command execution time during the period. This is for SQL statements only. It does not apply to FTP or Windows file share traffic.

Failed Sqls (2) The number of failed SQL requests. See note at the end of the table.

Successful Sqls (2) The number of successful SQL requests. See note at the end of the table.

Application Event ID The application event ID if set from the API.

Total Records Affected (2) The total number of records affected. See note at the end of the table.

Avg Records Affected (2) The average number of records affected. See note at the end of the table.


Total Records Affected (Desc) (2) If the Total Records Affected attribute is a character string instead of a number, that value appears here (for example, Large Results Set, or N/A).

Records affected - Result set of the number of records that are affected by each execution of SQL statements.
Note: The records affected option is a sniffer operation that requires the sniffer to process additional response packets and postpone logging of impacted data, which increases the buffer size and might potentially have an adverse effect on overall sniffer performance. Significant impact comes from really large responses. To prevent a large amount of overhead associated with this operation, Guardium uses a set of default thresholds that allow the sniffer to skip the processing operation when they are exceeded.

You can use the store max_results_set_size, store max_result_set_packet_size, and store max_tds_response_packets CLI commands to set levels of granularity.

Example of result set values:


v Case 1, record affected value is a positive number - this represents the correct size of the result set.
v Case 2, record affected value is -2 - the number of records exceeded a configurable limit (this can be tuned through the CLI interface).
v Case 3, record affected value is -1 - this indicates a packet configuration that is not supported by Guardium.
v Case 4, record affected value is -2 - the result set is sent in streaming mode.
v Case 5, record affected value is -2 - an intermediate result during the record count, used to update the user about the current value; it ends up as a positive number of total records.

Show Seconds If the number of accesses per second is being tracked, this contains
counts for each second in the access period (usually one hour).

Avg Execution Ack Time Average execution acknowledged time, in milliseconds.

Original Timezone The UTC offset.

This is to point out that a UTC offset should be set so that the times from two different collectors that are in two different time zones aggregate correctly. If the offset were not set, users would not be able to determine or see a true representation of when things happened in relation to time.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).



Session ID, Instance ID, Construct ID, and Total Access are only available to users
with the admin role.

Failed Sqls, Successful Sqls, Application Event ID, Total Records Affected, Avg
Records Affected, and Total Records Affected (Desc) are attributes that only appear
when the main entity for the query permits this level of detail. These are not
available if either Client/Server or Session is the main entity.
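
As a quick illustration of the Original Timezone example above, the following minimal Python sketch (not a Guardium API; the calendar date and helper name are illustrative assumptions) converts the two session-start times to UTC using their offsets and confirms that they occurred 3 hours apart.

from datetime import datetime, timedelta, timezone

# Illustrative only: interpret a local wall-clock time together with its
# Original Timezone (UTC offset, in hours) and return the equivalent UTC time.
def to_utc(local_time: str, utc_offset_hours: int) -> datetime:
    naive = datetime.strptime(local_time, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(hours=utc_offset_hours))
    return naive.replace(tzinfo=tz).astimezone(timezone.utc)

a = to_utc("2015-04-22 21:00:00", -2)  # session start, original timezone UTC-02:00
b = to_utc("2015-04-22 21:00:00", -5)  # session start, original timezone UTC-05:00
print(b - a)                           # 3:00:00 -> the events are 3 hours apart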

Access Rule Entity

The name assigned to an access rule when it was defined. This is available for
reporting only from the owning Policy Rule Violation entity (described later), when
an access rule violation is logged.
Table 37. Access Rule Entity
Attribute Description

Access Rule Description Description from the access policy rule definition.

Activity Types Entity

Available only from the Aggregation/Archive domain, which by default is


available to users assigned the admin role only. The Activity Types entity can be
accessed only from the owning Aggregation/Import/Export Log Entity. It
identifies a type of action (Prepare for Aggregation, Encrypt, Send, etc.).
Table 38. Activity Types Entity
Attribute Description

Activity Type Description of an aggregation/import/export activity.

Agg/Archive Log Entity

Available only from the Aggregation/Archive domain, which by default is


available to users assigned the admin role only. One or more Aggregation/Import/
Export Log entities are created for each activity. For example, when an aggregator
system imports data, you will typically see at least four activities:

Prepare for Aggregation

Check Duplicate Import (one per file exported to this aggregator)

Extract (one per file to be merged)

Merge (one per file merged)


Table 39. Agg/Archive Log Entity
Attribute Description

Timestamp Updated at the start and end of the activity being logged (prepare for
archiving, encrypt, send, etc.).

Status Status of the aggregation/import/export log activity.


User Name User name under which activity initiated.

Start Time Starting time of activity.

End Time Ending time of activity.

Period Start Starting time for the data being acted upon. Each archiving or
aggregation activity operates on one full day of activity.

Period End Ending time for the activity being acted upon.

File Name Name of file used for the activity. Files created by the archive and
export operations are named as follows:

<daysequence>-<scp_host>-w<run_datestamp>-d<data_date>.dbdump.enc

For example:

732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc

The date of the data contained on the file, in yyyy-mm-dd format, is data_date, near the end of the file name (just before .dbdump.enc). Take care that you do not confuse this date with the run date, which appears earlier in the file name and is the date that the data was archived or exported.

Comment Additional comment for the activity.

Guardium Host Name The name of the Guardium host.

Records Purged If the activity type is Purge, the number of records purged. Otherwise,
N/A.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
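
For reference, the following minimal Python sketch (not a Guardium utility; the function name is hypothetical) pulls the run date and data date out of an archive or export file name of the form shown for the File Name attribute above.

import re

# Illustrative only: split a file name of the form
# <daysequence>-<scp_host>-w<run_datestamp>-d<data_date>.dbdump.enc
def parse_archive_file_name(name: str) -> dict:
    m = re.match(r"^(\d+)-(.+)-w([\d.]+)-d(\d{4}-\d{2}-\d{2})\.dbdump\.enc$", name)
    if not m:
        raise ValueError("unexpected file name format")
    return {
        "daysequence": m.group(1),
        "scp_host": m.group(2),
        "run_datestamp": m.group(3),  # date the data was archived or exported
        "data_date": m.group(4),      # date of the data contained in the file
    }

print(parse_archive_file_name(
    "732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc"))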



Alert Notification Entity
Describes a policy alert notification.
Table 40. Alert Notification Entity
Attribute Description

ALERT_NOTIFICATION_ID
Identifies the alert notification.

ALERT_ID Identifies the alert definition.

Alert Notification Type Type of alert from the policy rule definition.

Alert User Receiver of the alert.

Alert Destination Type of alert (EMAIL, SNMP, SYSLOG, CUSTM).

Timestamp Timestamp alert record created.

ALERT_NOTIFICATION_ID and ALERT_ID are only available to users with the


admin role.

Application Data Entity

Used for the SAP and Siebel reports.


Table 41. Application Data Entity
Attribute Description

Application Data ID Unique identifier for this data.

Application Code The application type code.

Full SQL ID Identifies the full SQL data.

Application Type Application type.

User Application user name.

Operation Type The type of operation.

Change Date Date of the change.

Time Stamp Time stamp for this record.

Item Name Name of the item affected.

Transaction Code Transaction code.

System ID Unique identifier for the system.


Record Detail 1 Varies by item type.

Record Detail 2 Varies by item type.

Record Detail 3 Varies by item type.

Record Detail 4 Varies by item type.

VBKey The VBKey value.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Application Events Entity

This entity is created each time that the system observes an Application Events API
call (which sets these attribute values) or a stored procedure call that has been
identified as a Custom Identification Procedure (which maps stored procedure
parameters to these attributes).
Table 42. Application Events Entity
Attribute Description

Application Event ID Unique identifier for this application events entity.

Event User Name User name, set by GuardAppEvent:Start.

Event Type Type of event, set by GuardAppEvent:Start.

Event Value Str String value, set by GuardAppEvent:Start.

Event Value Num Numeric value, set by GuardAppEvent:Start.

Event Date Datetime value, set by GuardAppEvent:Start. It displays in the format yyyy-mm-dd hh:mm:ss.
Note: If an attempt is made to set the event date using a format other than yyyy-mm-dd, it will contain all zeroes. The time portion (hh:mm:ss) is optional, and if omitted will be 00:00:00.


Timestamp Created only once, when the event is logged. Do not confuse this
attribute with the Event Date attribute, which can be set using an API
call or from a stored procedure parameter. (See the Guardium
Administrator Guide for a description of the Application Events API
and Custom Identification Procedures.)

Event Release Type Type of event, set by GuardAppEvent:Released.

Event Release User Name User name, set by GuardAppEvent:Released.

Event Release Value Str String value, set by GuardAppEvent:Released.

Event Release Value Num Numeric value, set by GuardAppEvent:Released.

Event Release Date Datetime value, set by GuardAppEvent:Released. It displays in the format yyyy-mm-dd hh:mm:ss.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Application Event ID is only available to users with the admin role.
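
The following minimal Python sketch (not a Guardium API; the function name is hypothetical) restates the Event Date formatting rule above: a value not supplied as yyyy-mm-dd is stored as all zeroes, and an omitted time portion defaults to 00:00:00.

import re

# Illustrative only: normalize an event date string per the rule described above.
def normalize_event_date(value: str) -> str:
    if not re.match(r"^\d{4}-\d{2}-\d{2}( \d{2}:\d{2}:\d{2})?$", value):
        return "0000-00-00 00:00:00"   # wrong format -> all zeroes
    if len(value) == 10:               # date only, no time portion supplied
        return value + " 00:00:00"
    return value

print(normalize_event_date("2015-04-22"))           # 2015-04-22 00:00:00
print(normalize_event_date("04/22/2015 09:30:00"))  # 0000-00-00 00:00:00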

App User Name Entity

This entity will display the username from the App Event if the App Event exists.
Otherwise, the user name will display from the Construct Instance.
Table 43. App User Name Entity
Attribute Description

APP User Name Unique identifier for this App User Name entity.



Assessment Log Entity
This entity is created each time that an assessment is run.
Table 44. Assessment Log Entity
Attribute Description

Assessment Log ID Uniquely identifies the assessment.

Timestamp Timestamp for the assessment.

Timestamp Date Date portion of timestamp.

Timestamp Time Time portion of the timestamp.

Assessment Log Type Predefined, query or custom test.

Assessment Log Severity The assessment test severity: Critical, Major, Minor, Cautionary, Informational. This is an ordered list of severity classifications: the highest severity is the first classification in the list, and the lowest severity is the last.

Assessment Result Id1 Identifies the assessment results set.

Message Message returned by the assessment.

Details Details for this assessment.

Assessment Log ID is only available to users with the admin role.

Assessment Result Datasource Entity


This entity identifies a datasource accessed by the assessment test.
Table 45. Assessment Result Datasource Entity
Attribute Description

Assessment Result data source ID Identifies a results set for a datasource.

Assessment Result ID Identifies the result.

DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix, etc.

DB Name Database name.

Version Level Version level of the database.


Patch Level Patch level of the database.

Full Version Info Full version information for the datasource

Datasource name Name of the datasource.

Description Datasource description.

Host Host name for the datasource.

Port Port number on the host.

Service Name Service name for the datasource.

User Name User name used for datasource access.

Assessment Result data source ID and Assessment Result ID are only available to
users with the admin role.

Assessment Result Header Entity

This entity is created for each task in the assessment results set.
Table 46. Assessment Result Header Entity
Attribute Description

Assessment Result ID Identifies the assessment results set.

Assessment ID Identifies the assessment.

Task ID Identifies the task within the assessment.

Parameter Modified Flag Indicates if parameters modified since last run.

Execution Date Date that the assessment was run.

Received By All Indicates whether or not these results have been received by all
receivers on the distribution list.

Overall Score Overall score for the assessment.

From Date From date for the assessment.

To Date To date for the assessment.

Assessment Description Assessment name from the definition.


Filter Client IP Clients selected: exact IP address, address with wildcards (*), or empty
to select all.

Filter Server IP Servers selected: exact IP address, address with wildcards (*), or empty
to select all.

Recommendation Recommendation returned for the task.

Assessment Result ID, Assessment ID, and Task ID are only available to users with
the admin role.

Assessment Tests Entity

This entity contains entries for available tests.


Table 47. Assessment Tests Entity
Attribute Description

Test Description Text description of the test

Test Type Type of assessment test (Observed, Predefined, Custom, Query based,
CVE)

Datasource Type Type of Datasource (DB2, Informix, MYSQL, ORACLE, SYBASE, etc.)

Threshold User-defined threshold, to override the value defined upon the test’s creation

Threshold Default Value Default threshold that defines the success/fail criteria

Severity Severity of the assessment (Critical, Major, Minor, Caution, Info)

Category Category of the assessment (Privilege, Authentication, Configuration, Version, Other)

Timestamp Timestamp test was created

Audit Process Entity

This entity contains basic definition parameters for an audit process.


Table 48. Audit Process Entity
Attribute Description

Process Description Description from audit process definition.

Active Indicates if the process is active (able to be scheduled).


Keep Result Days The number of days the results will be kept by the system.

Keep Results Quantity The number of results sets that will be kept by the system.

Audit Process Comments Entity

This entity has comments attached to an audit process definition. Comments attached to audit process results are contained in the Audit Process Results Comments entity.
Table 49. Audit Process Comments Entity
Attribute Description

Audit Process Comment The text of the comment.

Audit Process Comment Creator The creator of the comment.

Audit Process Comment Timestamp Timestamp for the comment.

Audit Task Entity

This entity describes a single audit task (within an audit process).


Table 50. Audit Task Entity
Attribute Description

Task Type A numeric value that indicates whether the task is a report, security assessment, entity audit trail, privacy set, or classification process. Aliases are defined for these types, so running reports with Aliases on simplifies reading of the report output.

Task Description Name of the task from the task definition.

Audit Process Result Entity

This entity contains the execution date for a set of audit process results.
Table 51. Audit Process Result Entity
Attribute Description

Execution Date The date the audit process was executed.



Audit Process Results Comments Entity
This entity has comments attached to audit process results. Comments attached to an audit process definition are contained in the Audit Process Comments entity.
Table 52. Audit Process Results Comments Entity
Attribute Description

Audit Process Comment The text of the comment.

Audit Process Comment Creator The creator of the comment.

Audit Process Comment Timestamp Timestamp for the comment.

Auto-discovery Scan Entity

This entity identifies when a scan executed.


Table 53. Auto-discovery Scan Entity
Attribute Description

Scan Timestamp The time the scan executed.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Changed Columns Entity

This entity describes a changed column.


Table 54. Changed Columns Entity
Attribute Description

Changed Column Name Name of the changed column on the database.

Old Value Value before the change.

New Value Value after the change.


Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Changed Data Values Entity

This entity is used with the IBM InfoSphere Change Data Capture (InfoSphere CDC) replication solution, which allows replication to and from supported databases. Maintenance of replicated databases can be used to reduce processing overhead and network traffic.

IBM Guardium customers with Database Activities Monitoring have access to InfoSphere CDC.

This Guardium feature uses a Java CDC user exit to send value change information to the Guardium collector.

User exits for InfoSphere CDC let the user define a set of actions that InfoSphere CDC can run before or after a database event occurs on a specified table.
Table 55. Changed Data Values Entity
Attribute Description

Full SQL ID Unique identifier for the Full SQL.

Table Name Table Name from database

Column Name Column Name from database

Old Value Value before the change.

New Value Value after the change.

Timestamp Time the record was created.

Two files need to be installed on the database server for the Guardium agent that interfaces with IBM's InfoSphere Change Data Capture (InfoSphere CDC) application. They are in the sources/apps/GuardCDC/lib/ directory of the build. These files are protobuf-java-2.4.1.jar and GuardCdc.jar.
Instructions for installation
Prerequisites - the InfoSphere Change Data Capture (InfoSphere CDC) application must already be installed on the DB server.
Steps to install the Guardium agent on the database server:



1. Copy these two files to the RepEngine/lib/ directory of the cdchome directory. An example of the full path would be /cdchome/cdc6.5.2/RepEngine/lib/
2. Unzip each file.
3. Edit the guard_cdc_user_exit_config.mxl file to add the Guardium_Host name. An example of where this file would be located is /cdchome/cdc6.5.2/RepEngine/lib/com/guardium/cdc/userexit/
4. Configure InfoSphere CDC to write to the GuardiumAgent. There are multiple steps to set up and configure the CDC application. These steps can be obtained from the InfoSphere CDC development/support team at IBM.

Classification Process Results Entity

This entity is created for each classification process rule that is fired.
Table 56. Classification Process Results Entity
Attribute Description

Catalog Catalog location for results set.

Schema Schema name if applicable.

Table Name Table name from the rule definition.

Column Name Column name from the rule definition.

Rule Description The classifier policy rule description.

Comments Any comments added to this rule definition.

Classification Name Classification for the rule.

Category Category for the rule.

Data Source Description Data source for the rule.

Classification Process Run Entity


This entity describes a classification process job execution.
Table 57. Classification Process Run Entity
Attribute Description

Process Description From the process definition.

Status Job status.

Queue DateTime Timestamp when the job was submitted to the classifier/assessment
queue.


Start DateTime Timestamp at start of job.

End DateTime Timestamp at end of job.

Data Sources Identifies the datasource list for the job.

Client/Server Entity

This entity describes a specific client-server connection. An instance is created each time a unique set of attributes (excluding the Timestamp) is detected.
Table 58. Client/Server Entity
Attribute Description

Access ID A unique identifier for this client/server connection.

Timestamp Since all attributes in this entity contain static information, this
timestamp is created only once, when Guardium observes a request on
the defined client-server connection for the first time.

Timestamp Date Date only from the timestamp.

Timestamp Time Time only from the timestamp.

Timestamp Weekday Weekday only from the timestamp.

Timestamp Year Year only from the timestamp.

Server Type DB2, Oracle, Sybase, etc.

Client IP Client IP address.

Server IP Server IP address.

Network Protocol Network protocol used (e.g., TCP or UDP). Note that for K-TAP on Oracle, this may display as either IPC or BEQ.

DB Protocol Protocol specific to the database server.

DB Protocol Version Protocol version for the DB Protocol.

DB User Name Database user name. The DB user name is the person who connected to
the database, either local or remote.

Source Program Source program for the interaction.

Client MAC Client hardware address.


Client Host Name Client host name.

Service Name Service name for the interaction. In some cases (AIX® shared memory
connections, for example), the service name is an alias that is used until
the actual service is connected. In those cases, once the actual service is
connected, a new session is started - so what the user experiences as a
single session will be logged as two sessions.

For Teradata, Service name contains the session logical host id value.

Server OS Server operating system.

For Informix, the OS may appear as follows:

IEEEM indicating Unix or JDBC
IEEEI indicating Windows
DEC indicating DEC Alpha

For Teradata, because there is no direct information about the client/server OS, the data format type is used instead, indicating how integer data is stored during the database session. This is closely related to the platform being used and may appear as follows:

IBM MAINFRAME // IBM mainframe data format

HONEYWELL MAINFRAME // Honeywell mainframe data format

AT&T 3B2 // AT&T 3B2 data format.

INTEL 8086 // Intel 8086 data format (IBM PC or compatible)

VAX // VAX data format

AMDAHL // Amdahl data format

Client OS Client operating system.

For Teradata, because there is no direct information about the client/server OS, the data format type is used instead, indicating how integer data is stored during the database session. This is closely related to the platform being used and may appear as follows:

IBM MAINFRAME // IBM mainframe data format

HONEYWELL MAINFRAME // Honeywell mainframe data format

AT&T 3B2 // AT&T 3B2 data format.

INTEL 8086 // Intel 8086 data format (IBM PC or compatible)

VAX // VAX data format

AMDAHL // Amdahl data format

OS User OS user account for the interaction.

Server Host Name Server host name.


Server Description Server description (if any).

ClientIP/DBUser Paired attribute value consisting of the client IP address and database user name.

Analyzed Client IP Applies only to encrypted traffic; when set, the client IP is set to zeroes. Analyzed Client IP has a map for the CEF source: if the query used for CEF does not contain the Client IP but contains the Analyzed Client IP, the Analyzed Client IP is used for the source. If both are included in the query, Client IP takes precedence.

Server IP/DB user Paired attribute value consisting of the server IP address and database user name.

Client/Server by session Client/Server by session is also a main entity. Access this secondary entity by clicking on the Client/Server primary entity.

Access ID is only available to users with the admin role.

Note: For Access Tracking only, Client/Server Entity name will appear in the
pulldown menu as two possible entities - Client/Server and Client/Server By
Session.

Client/Server By Session will get count from Client/Server and date conditions
from Session.

Client/Server will get count from Client/Server and date conditions also from
Client/Server.

If the user chooses Client/Server, then the query will be populated with
ATTRIBUTE_ID = 1. If the user chooses Client/Server By Session, then the query
will be populated with MAIN_ATTRIBUTE_ID = 0.

CM Buffer Usage Monitor Entity

Within the Central Manager, this entity shows the aggregate of all Sniffer Buffer Usage entities that have been uploaded.
Table 59. CM Buffer Usage Monitor Entity
Attribute Description

Sniffer Buffer Usage ID

Timestamp Time the record was created.

Sniffer CPU PCT Percentage of CPU used by sniffer.

Sniffer Mem PCT Percentage of memory used by sniffer.


MySQL CPU PCT Percentage of CPU used by MySQL.

MySQL MEM PCT Percentage of memory used by MySQL.

PID Sniffer process identifier.

Memory Amount of memory used by sniffer.

Time Elapsed time used by sniffer.

Free Buffer Amount of free buffer space.

Analyzer Rate Rate at which messages being analyzed.

Analyzer Queue Size of the analyze queue.

Analyzer Total Total number of messages analyzed.

Logger Queue Size of logger queue.

Logger Total Total number of message logged.

Session Queue Size of session queue.

Session Total Total number of sessions.

Handler Data Internal sniffing engine data.

Extra STR Internal sniffing engine data.

Sniffer Connections Used Total number of connections currently being monitored since the inspection engine was restarted.

Sniffer Packets Dropped Packets dropped by sniffer.

Sniffer Packets Ignored Packets ignored by sniffer.

Sniffer Packets Throttled Total number of connections that have been ignored due to throttling since the inspection engine was restarted.

Sniffer Connections Ended Total number of connections that were monitored and have ended since the inspection engine was restarted.

Logger Session Count Count of sessions logged.


Logger Packets Ignored by Rule Packets ignored by policy rule action.

Analyzer Lost Packets Packets lost by analyzer.

Logger Dbs Monitored List of database types currently being monitored.

Mysql Is Up Boolean indicator for internal database restart (1=was restarted, 0=not
restarted).

System Cpu Load System CPU utilization.

System Uptime Time since last start-up.

Mysql Disk Usage MySQL disk usage.

System Memory Usage System memory utilization.

System Var Disk Usage System var disk utilization.

System Root Disk Usage System root disk utilization.

Eth0 Received Messages received on ETH 0.

Eth0 Sent Messages sent on ETH 0.

Promiscuous Received Rate of received packets through the sniffing network cards (non-interface ports).

Open FDs Open file descriptors.

Open FDs MySQL Database open file descriptors.

Sessions normal Count of normal sessions.

Sessions not opened Count of sessions not opened by sniffer.

Sessions timeout Count of sessions timed-out.

Sessions ignored Count of sessions ignored by sniffer.

Session Direct closed Count of sessions directly closed.


Session guessed Count of sessions guessed.

SqlGuard Timestamp The time the record is inserted into the custom table.

Datasource Name The name of the data source used to upload the record.

Command Entity

For each command, an entity is created for each parent node and position in which
the command appears in a command construct.
Table 60. Command Entity
Attribute Description

Command Id Uniquely identifies the command.

Construct Id Uniquely identifies the construct (e.g., select a from b).

SQL Verb Main verb in SQL command (e.g., select, insert, delete, etc.).

Depth Depth of the command in the SQL parse tree.

Parent Identifier of parent node in the parse tree.

Command ID and Construct ID are only available to users with the admin role.
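As a minimal sketch (hypothetical tables and values; the exact depth numbering is internal to Guardium), a nested request produces one Command entity per command node in the SQL parse tree:

-- The outer SELECT is logged as one Command entity near the top of the parse tree.
-- The inner SELECT is logged as a second Command entity, one level deeper,
-- with the outer SELECT as its parent node.
SELECT name
FROM employees
WHERE dept_id IN
  (SELECT id FROM departments WHERE region = 'EMEA')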

Comments Entity

This entity describes a user comment. It is available in the Comments domain only,
which is restricted to admin users. This domain includes only sharable comments,
which are all comments except for those that run locally (see the Local Comments
entity).
Table 61. Comments Entity
Attribute Description

Comment Creator The Guardium user who created the comment.

Comment Reference Indicates the element to which the comment is attached - a query, audit process result, or another comment, for example.

Content of Comment The complete comment text.

Timestamp Date and time the comment was created.

Timestamp Year Year only from the timestamp.


Timestamp WeekDay Weekday only from the timestamp.

Timestamp Time Time only from the timestamp.

Timestamp Date Date only from the timestamp.

Object Description The name of the object from which the comment was defined. For example, a comment defined on a policy has an object description of ACCESS_RULE_SET.

Record Associations A list of records that this comment is associated with.

Database Error Text Entity


The text of each common database error message is stored in a table in the
Guardium internal database. It is available for reporting only from the owning
Exception Entity for each exception that is a database error. Some types of
exceptions - S-TAP disconnects or reconnects, for example - will have no database
error text.
Table 62. Database Error Text Entity
Attribute Description

Database Error Text A database error code followed by a short text description of the error. The error code is taken from the Exception Description attribute of the Exception entity. Using the error code as a key, the error text is obtained from an internal table on the Guardium appliance, which contains the most common error messages (about 54,000 of them).

For example: ORA-00942: table or view does not exist

Error Code Displays the database error code.

Data Source Entity

This entity (under CAS Config Tracking/ Monitored Item Details Entity) identifies
a data source.
Table 63. Data Source Entity
Attribute Description

Data source ID Identifies a results set for a data source

Data source Type Data source type - Oracle, MS-SQL, DB2, Sybase, Informix, etc.

Data source Name Data source name


Data source Description Description of the data source

Host Host name for the data source

Port Port Number on host

Service Name Service name for the data source

User Name User name for datasource access

Database Name Database name

Last Comment Last comment

Shared Yes or No

Connection Properties The Connection Property box has information in it only if additional connection properties must be included on the JDBC URL to establish a JDBC connection with this datasource.

Discovered Host Entity

This entity identifies a discovered host.


Table 64. Discovered Host Entity
Attribute Description

Server IP IP address of the discovered host.

Server Host Name Host name of the discovered host.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).



Discovered Instances Entity
This entity identifies discovered instances.
Table 65. Discovered Instances Entity
Attribute Description

Timestamp A timestamp value created when Guardium records this instance of the
entity (every instance has a unique timestamp).

Host Host name for this instance

Protocol Protocol specific to this instance

Port Min Port range, minimum port number for inspection-engines

Port Max Port range, maximum port number for inspection-engines

Client IP IP address/mask of client

Exclude Client IP IP address/mask of clients to exclude

Proc Names Name of database executable

Named Pipe Pipe name used by database

KTAP DB Port Database port for KTAP

DB Install Dir Database Install Directory

Proc Name Process name

DB2 Shared Mem Adjustment Packet header size

DB2 Shared Mem Client Position Client I/O area offset

DB2 Shared Mem Size DB2 shared memory segment size

Instance Name Name of the discovered instance

Informix Version Informix Version



Discovered Port Entity
This entity identifies a discovered port.
Table 66. Discovered Port Entity
Attribute Description

Port Discovered port number.

Probe Attempted Indicates if a probe for a supported database service has been attempted
on this port. T=yes, F=no.

Port Type Indicates the port type (usually TCP).

DB Type If a probe of the port has found a supported database type, indicates the
type (DB2, Informix, MS SQL Server etc.)

Probe Timestamp The date and time that this specific port was probed.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Exception Entity

This entity is created for each exception encountered.


Table 67. Exception Entity
Attribute Description

Exception ID Uniquely identifies the exception.

Exception Type ID Uniquely identifies the exception type.

Exception Timestamp Date and time created when this Exception entity was logged.

Exception Date Date only from the timestamp.

Exception Time Time only from the timestamp.

Exception Weekday Weekday only from the timestamp.

Exception Year Year only from the timestamp.


Source Address Source IP address of the exception.

Source Port Source port number.

Destination Address Destination IP address.

Destination Port Destination port number.

Database Protocol Database protocol for the exception.

New TTL value Reserved for admin role use only.

Exception Description Description of the exception.

For an S-TAP reconnect or timeout exception, this will contain the IP address or DNS name of the database server.

For a database exception, this is an error code from the database management system. For most common messages (about 54,000 of them), a longer text description is available in the Database Error Text attribute. That text comes from the internal Guardium database table of error messages, not from the exception itself.

SQL string that caused the exception The SQL string that caused the exception.

User Name Database user name. On encrypted traffic, where correlation is required,
this value may not be available, but it is always available from the DB
User Name attribute in the Client/Server entity.

App User Name Application user name.

Link to more information about the exception Optional link that is sometimes available, depending on the exception source.

Global ID Global identifier for the exception.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).



Exception ID and Exception Type ID are only available to users with the admin
role.

Exception Type Entity


There is a fixed set of exception types, one of which will be associated with each
exception logged. These are available for reporting only from the owning
Exception Entity.



Table 68. Exception Type Entity
Attribute Description

Exception Description A text description of the exception type, from the following list. Most of these should never be seen. See the notes for the most common exceptions.

A new construct was used

Alert Process threw an exception

Custom Alerting Processing Exception

Database Server returned an error

For this message, a database error code will be stored in the Exception
Description attribute of the Exception entity, and a text version of the
database error message will be available in the Database Error Text
attribute of the Database Error Text entity.

DB Protocol Exception

Debug prints through the EXCEPTIONs mechanism

Dropped database requests

Session information was dropped due to excess traffic.

Error During Configuration Auditing System Process

Error During Classification Process

Invalid Query Invocation

Login Failed

Low-level DB protocol Exception

Scheduled job threw an exception

Security Assessment Exception

Security Exception

For this message, a custom class exception has been raised when
breaching code execution is blocked; such as when users use the Java
API to define their own alerts or assessments.

Session closed prematurely

SQL Parser Exception

S-TAP Connectivity reconnect

For this message, the IP address or DNS name of the database server
will be available in the Exception Description attribute of the Exception
entity

S-TAP Connectivity timeout

For this message, the IP address or DNS name of the database server
will be available in the Exception Description attribute of the Exception
entity

TCP ERROR

For this message, additional information about the error will be included in the Exception Description attribute of the Exception entity

Turbine class threw an exception


Field Entity

Each time Guardium encounters a new field, it creates a field entity.


Table 69. Field Entity
Attribute Description

Field ID Uniquely identifies the field.

Construct ID Uniquely identifies the construct in which it was referenced.

Command ID Uniquely identifies the main command from the construct in which it
was referenced.

Object ID Uniquely identifies the object from the construct in which it was
referenced.

Field Name Name of the field.

List Clause, Where Clause, Order by Clause, Having Clause, Group By Clause, On Clause Use these attributes to order complex SQL queries. Examples of SQL queries:

Order by

SELECT * FROM dept_costs
WHERE dept_total >
(SELECT avg FROM avg_cost)
ORDER BY department

Having

SELECT column_name1, SUM(column_name2)
FROM table_name
GROUP BY column_name1
HAVING (numerical function condition)

Group By

SELECT column_name1, SUM(column_name2)
FROM table_name
GROUP BY column_name1

Where

SELECT FirstName, LastName, City
FROM Users
WHERE City = 'Los Angeles'



Field ID, Construct ID, Command ID, and Object ID are only available to users
with the admin role.

Field SQL Value Entity


These entities are created only by policy rule actions that log with values; for
example: Log Full Details With Values, and Log Full Details Per Session With
Values. The field value logged may or may not be associated with a field name.
For example, field names will be available (in the Field entity) if the following
statement is logged:

insert into t1 (foo, bar) values (10, 20)

But not available when the following statement is logged:

insert into t2 values (10, 20)


Table 70. Field SQL Value Entity
Attribute Description

Value A field value from the logged construct.
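As a hedged sketch (hypothetical table and column names, following the example above), a log-with-values rule action would record these statements as follows:

-- Field names foo and bar are logged in the Field entity;
-- the values 10 and 20 are logged as Field SQL Value entities.
insert into t1 (foo, bar) values (10, 20)

-- Only the values 10 and 20 are logged as Field SQL Value entities;
-- no field names are available for them.
insert into t2 values (10, 20)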

Flat Log Entity

This entity describes flat log processing activity.


Table 71. Flat Log Entity
Attribute Description

Full SQL The full SQL logged.

Timestamp Date and time stamp when logged.

Timestamp Date Date portion of the timestamp.

Timestamp Time Time portion of the timestamp.

Response Time Response time for the request in milliseconds.

Records Affected The number of records affected by the request.

Succeeded Indicates if request was successful (True/False).


Statement Type The type of SQL statement:

SQL: a simple, direct SQL command, for example, typed directly into the CLI.

RAW: a PREPARE of a SQL statement for later execution, for example, conn.prepareStatement (select a from b where c=:value).

BIND: execution of a prepared statement, including bound parameter values.

See the sketch after this table for an illustration of the three types.

Statement type is part of the FULL SQL entity and is audited only if you have configured Log Full Details for this statement within the policy.

You cannot filter out specific statement types in the policy (for example, to audit only SQL and BIND statements). You can, however, filter these out in reports.

Returned Data Data returned (if any)

Bind Info Bind information for the request

Bind Variables Values For DB2/zOS, contains a comma-separated list of bind variable values.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
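The following sketch (hypothetical statement, based on the SQL, RAW, and BIND definitions above) shows how one logical request can surface as different statement types:

-- SQL: a direct statement, sent with its literal value.
select a from b where c = 5

-- RAW: the PREPARE of a statement for later execution; the value is a placeholder.
select a from b where c = :value

-- BIND: an execution of the prepared statement; the bound value (here, 5)
-- is carried in the bind information rather than in the statement text.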

FULL SQL Entity


Full SQL entities are created only by the following policy rule actions: Log Full Details, Log Full Details With Values, Log Full Details Per Session, or Log Full Details Per Session With Values.
Table 72. FULL SQL Entity
Attribute Description

Full Sql Full SQL statement including values.

Timestamp A timestamp value created when Guardium records this instance of the
entity (every instance has a unique timestamp).

Response Time The response time for the request in milliseconds. When requests are
monitored in network traffic, the response times are an accurate
reflection of the time taken to respond to the request (Guardium
timestamps both the client request and the server response).


Records Affected The number of records affected for each session. On reports using this
attribute, we suggest that you turn on aliases to properly display special
cases such as Large Result Set or N/A.

Returned Data Data returned for this request (if any, and if available).

Full SQL ID Unique identifier for the Full SQL.

Instance ID Unique identifier for the Full SQL instance.

Succeeded Indicates if the call succeeded.

Records Affected (Desc) When the Records Affected is a string value instead of a number, that string is stored here. For example: Large Result Set or N/A.

Access Rule Description Description of the policy rule used.

Returned Data Count Number of rows returned from the SQL statement used in the policy rule.

Auto-Commit Entries are automatically numbered.

Ack Response Time Acknowledged response time in milliseconds.

Ingress Kbyte count Records the number of bytes in requests.

Egress Kbyte count Records the number of bytes in responses.

Statement Type The type of SQL statement:

SQL: a simple, direct SQL command, for example, typed directly into the CLI.

RAW: a PREPARE of a SQL statement for later execution, for example, conn.prepareStatement (select a from b where c=:value).

BIND: execution of a prepared statement, including bound parameter values.

Statement type is part of the FULL SQL entity and is only audited if you have configured Log Full Details for this statement within the policy.

You cannot filter out specific statement types in the policy (for example, to audit only SQL and BIND statements). You can, however, filter these out in reports.

Bind Variables Values For DB2/zOS, contains a comma-separated list of bind variable values.


Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Full SQL ID, Instance ID, and Succeeded are only available to users with the
admin role.

FULL SQL Values Entity

These entities are created only by the following policy rule actions: Log Full Details
With Values, and Log Full Details Per Session With Values.
Table 73. FULL SQL Values Entity
Attribute Description

Values One or more values from the logged construct.

Timestamp Date and Time Full SQL Values Entity was created.

GIM Events Entity

This entity describes events that have occurred while using the Guardium
Installation Manager (GIM).
Table 74. GIM Events Entity
Attribute Description

Event Generator IP address of the client (i.e. DB-Server) which generated the event.

Event Description Event description.

Event Time The time when the event occurred.

Group Entity
This entity describes a group that has been defined to Guardium.
Table 75. Group Entity
Attribute Description

Group Description The name of the group.


Group Subtype Subtype, if any, defined for the group.

Timestamp Date and time the group entity was created.

Group Member Entity


This entity describes a member of a group that has been defined to Guardium.
Table 76. Group Member Entity
Attribute Description

Group Member The name of the group member.

Timestamp Date and time the group member was created or updated.

Timestamp Date Date only from the timestamp.

Timestamp Time Time only from the timestamp.

Timestamp Year Year only from the timestamp.

Timestamp Weekday Weekday only from the timestamp.

Group Type Entity

This entity describes a type of Guardium group (user, client IP address, command,
etc.).
Table 77. Group Type Entity
Attribute Description

Group Type Identifies the group type.

Timestamp Date and time the group type was created.

Guardium Activity Types


This entity describes the various user activities
Table 78. Guardium Activity Types
Attribute Description

Activity Type Description Description of the activity

Activity Type ID Uniquely identifies the activity type.



Guardium Role Entity
This entity (under User Entity) identifies a Guardium role.
Table 79. Guardium Role Entity
Attribute Description

Role Identifier ID of role identified.

Role Guardium role listed.

Guardium Applications Entity


This entity (under User Entity) identifies a Guardium application.
Table 80. Guardium Applications Entity
Attribute Description

Application Identifier ID of application identified.

Application Guardium application listed (for example, Query Builder, Policy Builder, etc.).

Guardium Activity Types Entity

An instance is defined in the internal Guardium database for each type of activity.
Table 81. Guardium Activity Types Entity
Attribute Description

Activity Types Description Description of an activity.

Guardium User Activity Audit Entity

This entity is created for each Guardium user activity.


Table 82. Guardium User Activity Audit Entity
Attribute Description

Login ID ID used for login.

User Name Guardium user name for the activity.

Timestamp Created when the activity was logged.

Modified Entity The Guardium entity modified (a group definition, for example).

Entity Key Used Key used to access the entity.

Key Value New value of the entity.


All Values All values altered.

Object Description The name of the specific object altered.

Global ID A unique global ID for the session.

Host Name Host name of the user.

Guardium Users Login Entity

This entity is created each time a user logs in to the Guardium appliance.
Table 83. Guardium Users Login Entity
Attribute Description

Login ID ID used for login.

User Name Created when the Guardium user logs in or out (there will be one entity
per Guardium session).

Login Date And Time Date and time the user logged in.

Logout Date And Time Date and time the user logged out.

Login Succeeded Indicates if login was successful.

Global Id A unique global ID for the session.

Host Name Host name of the user.

Remote Address Remote address of the user.

Host Entity
A CAS Host entity is created the first time that CAS is seen on a database server
host. It is updated each time that the online/offline status changes. The Host entity
is also available in the CAS Host History domain.
Table 84. Host Entity
Attribute Description

Host Name Database server host name (may display as IP address)

OS Type Operating system: UNIX or WIN

Is Online Online status (Yes/No) when record was written


Host Id Identifies the host record

Host Configuration Entity

A Host Configuration entity is created for each item in a CAS instance.


Table 85. Host Configuration Entity
Attribute Description

Audit State Label Id Unique numeric identifier for the configuration item

Timestamp Timestamp for creation of the entity

Host Name Database server host name or IP address

OS Type Operating system: Unix or Windows.

DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix, or N/A if the
change is to an operating system instance

Instance Name Name of the template set instance

Type Type of monitored item that changed.

OS Script or SQL Script: A change triggered by the OS script contained in the monitored item template definition.

Environment Variable: An environment variable (Unix only)

Registry Variable: A registry variable (Windows only)

File: A specific file. There is no host configuration entity for a file pattern defined in the template set used by the instance. Instead, there is a separate host configuration entity for each file that matches the pattern.

Monitored Item The name of the changed item, from the Description (if entered),
otherwise a default name depending on the Type (a file name, for
example).

Host Event Entity

A host event entity is created each time an event is detected or signaled (see the
event types) by CAS.



Table 86. Host Event Entity
Attribute Description

Audit Host Event Id Identifies the host event entity

Event Time Date and time that the event was recorded

Event Type Identifies the event being recorded:

Client Up - CAS started on database server host

Client Down - CAS stopped on database server host

Failover Off - A server is available (following a disruption), so CAS data is being written to the server

Failover On - The server is not available, so CAS data is being written to the failover file

Server Down - The database server stopped

Server Up - The database server started

Timestamp Timestamp for creation of the entity

Audit Host Id Identifies the host

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Incident Entity

Incident entities are created by incident generation processes, or manually by assigning a policy violation to an incident.
Table 87. Incident Entity
Attribute Description

Timestamp Time the incident was created.

Category Name Category assigned to the incident.

Incident Number Incident number (assigned sequentially).



Incident Severity Entity
The incident severity description for an incident.
Table 88. Incident Severity Entity
Attribute Description

Incident Severity Description The severity code will be one of the following: INFO, LOW, MED, HIGH

Incident Status Entity


Describes the status of an Incident entity.
Table 89. Incident Status Entity
Attribute Description

Status Description Will be one of the following values:

OPEN - The incident has not yet been assigned to a user.

ASSIGNED - The incident has been assigned.

CLOSED - The incident is closed.

Installed Policy Entity

Describes the installed policy.


Table 90. Installed Policy Entity
Attribute Description

ID Identifies the policy installation record.

Rule Set Id Identifies the set of rules.

Policy Description Description from the policy definition.

Selective Audit Trail Indicates if this is a selective audit trail policy (T/F).

Audit Pattern Test pattern used for a selective audit trail policy.

Timestamp Timestamp for the creation of the record.

Sequence Sets the order of sequence when there are multiple installed policies.

Instance Config Entity


An Instance Config entity is created each time that an instance configuration is defined. This entity defines how the CAS instance connects to the database (if necessary), and identifies the template set used by the instance. It provides the current status of the instance (in use, enabled, or disabled) and the date of the last revision.

Instance Config Entity Attributes


Table 91. Instance Config Entity
Attribute Description

Config Id Identifies this configuration record.

Timestamp Timestamp record created.

DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix; or N/A for an
operating system instance

Instance The name of the instance

User The user name that CAS uses to log onto the database; or N/A for an
operating system instance.

Port The port number CAS uses to connect to the database; or empty for an
operating system instance

DB Home Dir The home directory for the database; or empty for an operating system
instance

Template Set Id Identifies the template set used by this instance

OS Type Operating system of the host: UNIX or Windows

Join Entity
A join table is a way of implementing many-to-many relationships. Use join entity
to join tables in a SELECT SQL statement.
Table 92. Join Entity
Attribute Description

Join ID Unique identifier

Construct ID Identifies the construct in which the join is referenced.

Join SQL Join tables

Where SQL Where clause (join conditions)

Timestamp Date and Time that the Join Entity was created.
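As a hedged illustration (hypothetical tables and columns), the Join SQL and Where SQL attributes correspond to the joined tables and the join conditions of a statement such as:

-- Join SQL: the joined tables (orders and customers).
-- Where SQL: the join condition (o.customer_id = c.id).
SELECT c.name, o.order_date
FROM orders o, customers c
WHERE o.customer_id = c.id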



Local Comments Entity
This entity describes a local comment. It is available in the Comments domain
only, which is restricted to admin users. This entity includes only local comments,
for processes and results sets that run locally. Comments that are sharable are
defined in the Comments entity.
Table 93. Local Comments Entity
Attribute Description

Comment Creator The Guardium user who created the comment.

Comment Reference Indicates the element to which the comment is attached - a query, audit process result, or another comment, for example.

Content of Comment The complete comment text.

Timestamp Date and time the comment was created.

Timestamp Year Year only from the timestamp.

Timestamp WeekDay Weekday only from the timestamp.

Timestamp Time Time only from the timestamp.

Timestamp Date Date only from the timestamp.

Object Description The name of the object from which the comment was defined. For example, a comment defined on an incident has an object description of INCIDENT.

Record Associations A list of records that this local comment is associated with.

Location View
How to determine what days are not archived

Use a query (Tools tab > Report Building > Report Builder > query Location View)
that can be modified to create a report showing the files that are archived. This
report lists all the files with archive dates. Dates not on this report indicate that
those dates have not been archived. Run archive for the dates not on the list, if
required.
Table 94. Location View Entity
Attribute Description

From Date The start date

To Date The finish date


Aggregator The Guardium system where the file was generated. However, this can be a collector, not just an aggregator.

Host Host name

User Name Name of user

Path Path name to files

System Type The protocol used while archiving - SCP, FTP, Centera, or TSM.

Count of Destinations Archive destinations.

Login Correlation Entity

Obsolete beginning with version 4.0 of Guardium. This was the only entity of the
Access Trace Tracking domain, which was obsolete beginning with version 4.0 of
S-TAP. If you have old queries or reports using that domain, they will not work in
this release, and any database login information recorded in that domain would
pre-date the installation of version 4.0 of S-TAP.

Message Text Entity

For a threshold alert, the text of the message.


Table 95. Message Text Entity
Attribute Description

Message Text ID Uniquely identifies the message text

Message Subject Message subject (for an email message, for example).

Message Text Message text.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Messages Sent Entity

For each threshold alert message sent, the message type, recipients, status, and
date of that message.



Table 96. Messages Sent Entity
Attribute Description

Message ID Uniquely identifies the message

Message Type Type of message.

Sent To One or more recipients of message.

Message Status Status of message:

FAIL The send operation failed.

WAIT The message has not yet been sent.

SENT The message was sent.

Message Date Date message sent.

Message Context Message type:

INFO Informational message.

WARNING Possible error condition.

ALERT Real time or threshold alert.

ERROR Software or hardware error condition.

DEBUG Debugging message.

Message Originator The module creating the message; for example, monitor or GuardiumJetspeedUser.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Monitor Values Entity


A monitor values entity is created for each insert, update, or delete recorded, and contains the details of the change (table name, action, SQL text, etc.).
Table 97. Monitor Values Entity
Attribute Description

Timestamp Date and time the change was recorded on the Guardium appliance.
This timestamp is created during the data upload operation. It is not the
time that the change was recorded on the audit database. To obtain that
time, use the Audit Timestamp entity.


Timestamp Date Date only from the timestamp.

Timestamp Time Time only from the timestamp.

Timestamp Year Year only from the timestamp.

Timestamp Weekday only from the timestamp.


Weekday

Server IP IP address of the database server.

DB Type Database type.

Service Name Oracle only. Database service name.

Database Name DB2, Informix, Sybase, MS SQL Server only. Database name.

Audit PK For Sybase and MS SQL Server only. A primary key used to relate old
and new values (which must be logged separately for these database
types).

Audit Login Name Database user name defined in the datasource.

Audit Table Name Name of the table that changed.

Audit Owner Owner of the changed table.

Audit Action Insert, Update or Delete.

Audit Old Value A comma-separated list of old values, in the format: column-name=column_value,

Audit New Value A comma-separated list of new values, in the format: column-name=column_value,

SQL Text Available only with Oracle 9. The complete SQL statement causing the
value change.

Triggered ID Unique ID (on this audit database) generated for the change.

Audit Timestamp Date and time that the trigger was executed.

Audit Timestamp Date Date portion of Audit Timestamp.

Audit Timestamp Time Time portion of Audit Timestamp.


Audit Timestamp WeekDay Day of week of the Audit Timestamp.

Audit Timestamp Year Year of the Audit Timestamp.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
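As a minimal sketch (hypothetical table and column names), an update recorded by this entity would carry the change details in the column-name=column_value format described above:

-- Recorded with Audit Action = Update, Audit Table Name = EMPLOYEES,
-- Audit Old Value = SALARY=50000, and Audit New Value = SALARY=55000,
UPDATE employees SET salary = 55000 WHERE emp_id = 1001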

Monitored Changes Entity

This entity is created each time a monitored item changes. It identifies the
monitored item within the CAS instance, and points to the saved data for the
change.
Table 98. Monitored Changes Entity
Attribute Description

Change Identifier Unique identifier for the change

Sample Time Timestamp (date and time on host) that sample was taken

Audit Config Id Identifies the host configuration

Saved Data Id Identifies the Saved Data entity for this change

Audit State Label Id Identifies the Host Configuration entity for this change

Timestamp Date and time this change record was created on the server (Guardium
appliance server clock)

MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the value calculated the last time the item was checked. The default is to not use MD5. If MD5 is used but the size of the raw data is greater than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values saved the last time the item was checked.

Owner Unix only. If the item type is a file, the file owner


Permissions Unix only. If the item type is a file, the file permissions

Size File size, but there are special values as follows:

-1 = File exists, but has zero bytes

0 (zero) = File does not exist, but this file name is being monitored (it never existed or may have been deleted)

Last Modified Timestamp for the last modification, taken from the file system at the
sample time

Last Modified Date Date for the last modification

Last Modified Time Time for the last modification

Last Modified Weekday Day of week for the last modification

Last Modified Year Year for the last modification

Group Unix only. If the item type is a file, the group owner

Monitored Item Details Entity

A Monitored Item Details entity is created for each monitored item in a CAS
instance.
Table 99. Monitored Item Details Entity
Attribute Description

Audit Config Id Identifies the host configuration

Timestamp Timestamp for creation of the entity

Template ID Identifies the item template for this monitored item

Monitored Item Depending on the Audit Type, this is the OS or SQL script,
environment, or registry variable, or file name. Regarding a file pattern
defined in an item template, there will be a separate monitored item
detail entity for each file that matches the pattern, but there is no
monitored item details entity for the file pattern itself. If a file pattern is
used, it is always available in the Template Content attribute.

Audit Config Set Id Identifies the template set in the host configuration


Audit Type Type of monitored item:

OS Script or SQL Script: The actual text or the path to an operating system or SQL script, whose output will be compared with the output produced the next time it runs

Environment Variable or Registry Variable: An environment variable or a (Windows) registry variable

File: A specific file or a pattern to identify a set of files

Enabled Indicates whether or not the template is enabled

In Synch Indicates whether or not the template item definition on the server
matches the template item definition on the CAS host

Audit Frequency The maximum interval at which the item is to be tested

Use MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the value calculated the last time the item was checked. The default is to not use MD5. If MD5 is used but the size of the raw data is greater than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values saved the last time the item was checked.

Save Data When marked, previous version of the item can be compared with the
current version

Description Optional description of the instance

Template Content The template entry that is the basis for this monitored item, set from the Template entity Access Name attribute when the instance was created. Typically this will be the same as the monitored item, but in the case where a file pattern was used in the template, this will be the file pattern.

Object Entity

An instance of this entity is created for each object in a unique schema.


Table 100. Object Entity
Attribute Description

Object Id Uniquely identifies the object.

Construct Id Uniquely identifies the construct in which the object is referenced.

Schema Database schema for the object.

Object Name Name of the object.


App Object Module Uniquely identifies the application object module.

Object Id and Construct Id are available to users with the admin role only.

Object Command Entity

Describes an object-command entity.


Table 101. Object Command Entity
Attribute Description

Object/Command An object value combined with a command value.

Object Field Entity

Describes an object-field entity. Note fields with no objects will not show up in
reports that include the object.
Table 102. Object Field Entity
Attribute Description

Object/Field An object value combined with a field value.

Policy Rule Violation Entity

This entity is created each time that a policy rule violation is logged. Not all policy
rule violations are logged - see the description of the rule actions in Chapter 11:
Building Policies. The access rule causing the violation will be available in the
dependent Access Rule Entity (described earlier).
Table 103. Policy Rule Violation Entity
Attribute Description

Violation Log Id Uniquely identifies the violation entity.

Application User Name Name of the user creating the policy rule violation.

Full SQL String SQL string causing the policy rule violation.

Timestamp Created when the policy rule violation is logged. Not all policy rule
violations are logged - see the description of the rule actions in Chapter
11: Building Policies.

Timestamp Date Date only from the timestamp.

Timestamp Time Time only from the timestamp.


Timestamp Weekday Weekday only from the timestamp.

Timestamp Year Year only from the timestamp.

Message Sent The text of the policy rule violation message that was sent.

Total Occurrences Occurrence count that triggered the violation.

Application Event Id Application event ID (if any - these are set using the application events API).

Access Rule Description The description of the rule from its definition.

Category Name Category defined for the rule.

Severity Severity defined for the rule (the severity of an incident to which this is
assigned may be different).

Incident Number If assigned to an incident, this is the incident number.

Classification Name Name of classification process.

Construct ID Uniquely identifies the construct in which it was referenced.

CLS Process Run ID Classification process job execution ID.

Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Violation Log Id is available to users with the admin role only.

Qualified Object Entity

A tuple allows multiple attributes to be combined together to form a single group member. In this case, the fields Server IP, Service name, DB name, DB user, and Object are combined together.



Table 104. Qualified Object Entity
Attribute Description

Qualified Object Tuple - Server IP, Service name, DB name, DB user, Object

Rogue Connections Entity

An instance is created for each database connection seen by the S-TAP Hunter
process, but not by S-TAP itself, indicating that the connection has bypassed the
access paths monitored by S-TAP.
Table 105. Rogue Connections Entity
Attribute Description

Timestamp A timestamp value created when the Guardium appliance records the
rogue connection reported by the Hunter.

Server Host Name Database server host name.

Source Program Source program name for the connection.

Source Port Source port for the connection.

Source PID Source process ID.

Target Program Target program name for the connection.

Target Port Target port for the connection.

Target PID Target process ID.

OS User Operating system user account name.

IPC Type Type of inter-process communications used for the connection, which
may be from the following list:

SHM Shared memory

IPv4 Internet Protocol version 4

IPv6 Internet Protocol version 6

FIFO Named pipe

PIPE Simple pipe

INET Internet Protocol (HPUX)

DB Server Type Database server type: Oracle, DB2, Informix, or Sybase.


Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see the session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Rule Entity

This entity can be used as the installed policy rule entity or the access policy rule entity. There is one for each rule of the installed policy/policies or access policy/policies. Apart from the ID fields (which uniquely identify components on the internal database), all of these fields are described in the Policies help topic.
v GDM_INSTALLED_POLICY_RULES_ID - Identifies an installed policy rule.
v ACCESS_RULE_ID - Identifies an access rule.
v Rule Description - From the policy definition.
v Rule Position - Position within the policy.
v Rule Type - Access, Exception, or Extrusion.
v LAST_ACCESSED - Last
v Client IP - From the rule definition.
v Client Net Mask - From the rule definition.
v Client IP Group - From the rule definition.
v Server IP - From the rule definition.
v Server IP Mask - From the rule definition.
v Client MAC - From the rule definition.
v Net Protocol - From the rule definition.
v Net Protocol Group - From the rule definition.
v Field - From the rule definition.
v Field Group - From the rule definition.
v Object - From the rule definition.
v Object Group - From the rule definition.
v Command - From the rule definition.
v Command Group - From the rule definition.
v Object-Field Group - From the rule definition.
v DB Type - From the rule definition.
v Service Name - From the rule definition.
v Service Name Group - From the rule definition.
v DB Name - From the rule definition.
v DB Name Group - From the rule definition.
v DB User - From the rule definition.
v DB User Group - From the rule definition.



v App. User - From the rule definition.
v App User Group - From the rule definition.
v OS User - From the rule definition.
v OS User Group - From the rule definition.
v Src App. - From the rule definition.
v Source Program Group - From the rule definition.
v Pattern/ XML Pattern - From the rule definition.
v Period - From the rule definition.
v Min. Ct. - From the rule definition.
v Reset Interval - From the rule definition.
v Continue to next Rule/ Revoke - From the rule definition.
v Rec. Vals. - From the rule definition.
v App Event Exists - From the rule definition.
v Event Type - From the rule definition.
v App Event Text Value - From the rule definition.
v App Event Date Value - From the rule definition.
v Event User Name - From the rule definition.
v Error Code - From the rule definition.
v Exception Type - From the rule definition.
v Category Name- From the rule definition.
v Classification Name - From the rule definition.
v Severity - From the rule definition.
v Data Pattern - From the rule definition.
v SQL Pattern - From the rule definition.
v Masking Pattern - From the rule definition.
v Client IP/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Server IP/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Net Protocol/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Field Name/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Object Name/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Command/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Service Name/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v DB Name/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v App. User/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v OS User/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v Source Program/ Group - Provides the ability to display a single attribute and
its related (if any) in a single column of the report.

v Error Code/ Group - Provides the ability to display a single attribute and its
related (if any) in a single column of the report.
v App. Event Text/ Numeric/ Date - The application events text, numeric, and
date attributes.
v Category/ Classification - The combined category and classification for the rule.
v GDM_Installed_Policy_Header_ID - Identifies an installed policy header.

Note: GDM_INSTALLED_POLICY_RULES_ID and ACCESS_RULE_ID are available to users with the admin role only.

Rule Action Entity

This entity can be used as the installed policy rule action entity or the access policy rule action entity. There is one instance for each rule of the installed or access policies.
v Sequence - Sequence of the action within the rule.
v Action
– Block the request - See Blocking Actions in Policies.
– Log or ignore the violation or the traffic - See Log or Ignore Actions in Policies.
– Alert - See Alerting Actions in Policies.

Saved Data Entity

A Saved Data entity is created each time a change is detected for an item being
monitored, if the Keep data box is marked for that item in the item template
definition.
Table 106. Saved Data Entity
Attribute Description

Saved Data ID Uniquely identifies the saved data item

Saved Data The actual data saved

Timestamp Timestamp for when the saved data entity was recorded in the server
database

Change Identifier Identifies the monitored changes entity for this saved data entity

Saved Data ID is only available to users with the admin role.

Server IP-Server Port Entity


Describes a server IP-server port entity.
Table 107. Server IP-Server Port Entity
Attribute Description

Server IP/Server Port A server IP value combined with a server port value.

Session Entity
This entity is created for each Client/Server database session.
Table 108. Session Entity
Attribute Description

Global ID Uniquely identifies the session - access.

Session ID Uniquely identifies the session.

Access ID Uniquely identifies the access period.

Timestamp Initially, a timestamp created for the first request on a client-server connection where there is not an active session in progress. Later, it is updated when the session is closed, or when it is marked inactive following an extended period of time with no observed activity. When tracking Session information, you will probably be more interested in the Session Start and Session End attributes than the Timestamp attribute.

Timestamp Date Date only from the timestamp.

Timestamp Time Time only from the timestamp.

Timestamp Weekday Weekday only from the timestamp.

Timestamp Year Year only from the timestamp.

Session Start Date and time session started. Session Start is also a Main Entity. Access
this secondary entity by clicking on the Session primary entity.

Session Start Date Date only from the Session Start.

Session Start Time Time only from the Session Start.

Session Start Weekday Weekday only from the Session Start.

Session Start Year Year only from the Session Start.

Client Port Client port number.

Server Port Server port number.

Inactive Flag Default 0 - Open for sessions generated by SQL package.

1 - Closed (disconnect/ logout received).

2 - Probably closed; unclosed with no packets for a long time.

3 - For sessions generated from non-SQL packets.


TTL Reserved for admin role use only.

Session End Date and time the session ended. Session End is also a Main Entity.
Access this secondary entity by clicking on the Session primary entity.

Session End Date Date only from the Session End.

Session End Time Time only from the Session End.

Session End Weekday Weekday only from the Session End.

Session End Year Year only from the Session End.

Database Name Name of database for the session (MSSQL or Sybase only).

Note: For Oracle, Database Name may contain additional and application-specific information, such as the currently executing module for a session that has been set in the MODULE column of the V$SESSION view.

Session Ignored Indicates whether or not some part of the session was ignored
(beginning at some point in time).

Ignored Since Timestamp created when starting to ignore this session.

Uid Chain For a session reported by Unix S-TAP (K-Tap mode only), this shows the
chain of OS users, when users su with a different user name. The values
that appear here vary by OS platform - for example, under AIX the
string IBM IBM IBM may appear as a prefix.

Note: For Solaris Zones, user ids may be reported instead of user names
in the Uid Chain.

Old Session ID Points to the session from which this session was created. Zero if this is
the first session of the connection.

Terminal Id Terminal ID of the connection, used internally to resolve session information.

Process ID The process ID of the client that initiated the connection (not always
available).

Uid Chain Compressed Values compressed. See Uid Chain.

Duration (secs) Indicates the length of time between the Session Start and the Session
End (in seconds).


Original Timezone The UTC offset. This is done in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.

For instance, on an aggregator that aggregates data from different time zones, you can see a session start of one record that is 21:00 with original timezone UTC-02:00 and another record where the session start is 21:00 with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).

Global ID, Session ID, and Access ID are only available to users with the admin
role.

Severity Entity

The incident severity for an incident or policy violation


Table 109. Severity Entity
Attribute Description

Severity Description The severity code will be one of the following: INFO, LOW, MED, HIGH.

Sniffer Buffer Usage Entity

The system creates this entity at the interval set by the store system
netfilter-buffer-size CLI command (every 60 seconds by default).
Table 110. Sniffer Buffer Usage Entity
Attribute Description

Timestamp Time the record was created.

% CPU Sniffer Percentage of CPU used by sniffer.

% Mem Sniffer Percentage of memory used by sniffer.

% CPU Mysql Percentage of CPU used by MySQL.

% Mem Mysql Percentage of memory used by MySQL.

Sniffer Process ID Sniffer process identifier.

Mem Sniffer Amount of memory used by sniffer.

Time Sniffer Elapsed time used by sniffer.


Free Buffer Space Amount of free buffer space.

Analyzer Rate Rate at which messages are being analyzed.

Logger Rate Rate at which messages are being logged.

Analyzer Queue Length Size of the analyzer queue.

Analyzer Total Total number of messages analyzed.

Logger Queue Length Size of the logger queue.

Logger Total Total number of messages logged.

Session Queue Length Size of session queue.

Session Total Total number of sessions.

Handler Data Internal sniffing engine data.

Extra Info Internal sniffing engine data.

Analyzer Lost Packets Packets lost by analyzer.

Eth0 Received Messages received on ETH 0.

Eth0 Sent Messages sent on ETH 0.

Logger Dbs Monitored List of database types currently being monitored.

Logger Packets Ignored by Rule Packets ignored by policy rule action.

Logger Session Count Count of sessions logged.

Mysql Disk Usage MySQL disk usage.

Mysql Is Up Boolean indicator for internal database restart (1=was restarted, 0=not
restarted).

Promiscuous Received Rate of received packets through the sniffing network cards (non-interface ports).


Sniffer Connections Ended Total number of connections that were monitored and have ended since the inspection engine was restarted.

Sniffer Connections Used Total number of connections currently being monitored since the inspection engine was restarted.

Sniffer Packets Dropped Packets dropped by sniffer.

Sniffer Packets Ignored Packets ignored by sniffer.

Sniffer Packets Throttled Total number of connections that have been ignored due to throttling since the inspection engine was restarted.

System Cpu Load System CPU utilization.

System Memory Usage System memory utilization.

System Root Disk Usage System root disk utilization.

System Uptime Time since last start-up.

System Var Disk Usage System var disk utilization.

Sessions normal Count of normal sessions.

Sessions not opened Count of sessions not opened by sniffer.

Sessions timeout Count of sessions timed-out.

Sessions ignored Count of sessions ignored by sniffer.

Session Direct closed Count of sessions directly closed.

Session guessed Count of sessions guessed.

Open FDs Open File Descriptors.

DB Open FDs Database open File Descriptors

SQL Based Assessment Definition
This entity describes an SQL-based assessment definition.
Table 111. SQL Based Assessment Definition
Attribute Description

Bind Out Var Optional. Determines if the entered text in SQL statement is a
procedural block of code that will return a value that should be bound
to an internal Guardium variable that will be used in the comparison to
the Compare to value.

Compare To Value Compare value that will be used to compare against the return value from the SQL statement using the compare operator.

External Reference Reference to the Center for Internet Security (CIS) or Common Vulnerabilities and Exposures (CVE).

Operator Operator that will be used for the condition.

Recommendation Text Fail The recommended text for fail that will be displayed when the test fails.

Recommendation Text Pass The recommended text for pass that will be displayed when the test passes.

Result Text Fail The Result text for fail that will be displayed when the test fails.

Result Text Pass The Result text for pass that will be displayed when the test passes.

Return Type The Return type that will be returned from the SQL statement.

Short Description The short description for the assessment test.

SQL For Details An SQL statement that retrieves a list of strings, used to generate a detail string consisting of the Detail prefix plus the list of strings.

SQL The SQL statement that will be executed for the test.
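As a hypothetical illustration of how these attributes fit together (the table, column, and condition here are invented, not a predefined Guardium test), an SQL-based test that fails when any application account still has a default password might be defined roughly as follows:

SQL: SELECT COUNT(*) FROM app_accounts WHERE password = 'default'

Return Type: integer

Operator: =

Compare To Value: 0

Result Text Pass: No accounts use the default password.

Result Text Fail: One or more accounts still use the default password.

The test passes only when the value returned by the SQL statement satisfies the condition return value = 0.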

SQL Entity

This entity is created for each unique string of SQL. Values are replaced by
question marks - only the format of the string is stored.
Table 112. SQL Entity
Attribute Description

Sql SQL string.

Construct ID Uniquely identifies the construct in which the SQL appeared


Bind Info Bind information for this SQL string.

Truncated SQL Indicates if the SQL has been truncated or not where:

0 - false/no, not truncated

1 - true/yes, truncated
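For example (a hypothetical statement, with invented table and column names), an observed statement such as

SELECT name, salary FROM employees WHERE emp_id = 1234 AND dept = 'SALES'

would be stored in the Sql attribute as

SELECT name, salary FROM employees WHERE emp_id = ? AND dept = ?

so that all executions of the same statement format map to a single SQL entity, regardless of the literal values used.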

Task Receiver Entity


Indicates the action required by the results receiver.
Table 113. Task Receiver Entity
Attribute Description

Action Required Indicates if signing action is required.

Task Results To-Do List Entity

Indicates the current status of the results.


Table 114. Task Results To-Do List Entity
Attribute Description

Status Indicates the current status of the results.

(Esca) Action Required Indicates if to-do list action is required.

Action Required Indicates if signing action is required.

Template Entity
A CAS template entity is created for each item template within a template set. An
item is a specific file or file pattern, an environment or registry variable, the output
of an OS or SQL script, or the list of logged-in users.
Table 115. Template Entity
Attribute Description

Template ID A unique identifier for the item template within the set of all item
templates

Template Set ID Unique identifier for the template set

Access Name Depending on the Audit Type, this is the OS or SQL script, environment
or registry value, or a file name or a file name pattern

Audit Type The type of monitored item


Audit Frequency (Min) The maximum interval (in minutes) between tests

Use MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the value calculated the last time the item was checked. The default is to not use MD5. If MD5 is used but the size of the raw data is greater than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values saved the last time the item was checked.

Save Data Indicates if the Keep data checkbox has been marked. If so, previous
versions of the item can be compared with the current version

Editable Indicates whether or not this template can be modified. The default
Guardium templates cannot be modified. In addition once a template set
has been used in a CAS instance, it cannot be modified. In any case, a
template set can always be cloned and the cloned set can be modified

Description Optional description of the template

Timestamp Date and time this template was last updated

Template ID and Template Set ID are only available to users with the admin role.

Template Set Entity

A CAS Template Set entity is created for each template set, which is a set of
template items for a particular operating system or database.
Table 116. Template Set Entity
Attribute Description

Template Set Id A unique identifier for the template set, numbered sequentially

OS Type Operating system: Unix or Windows

DB Type Database Type: Oracle, MS-SQL, DB2, Sybase, Informix, or N/A for an
operating system template

Template Set Name The template name

IsDefault Indicates whether or not this template is the default for the specified OS
Type and DB Type combination

Editable Indicates whether or not this template can be modified. The default
Guardium templates cannot be modified. In addition once a template set
has been used in a CAS instance, it cannot be modified. In any case, a
template set can always be cloned and the cloned set can be modified


Timestamp Date and time the template was last updated

Template Set ID is only available to users with the admin role.

Test Result Entity

This entity is created for each set of test results.


Table 117. Test Result Entity
Attribute Description

Test Result Id Identifies the test result.

Assessment Result Id Identifies the assessment results set.

Test Id Identifies the test.

Assessment Test Id Identifies the assessment test (task).

Test Score Returned test score.

Report Result Id Identifies the report result.

Parameter Modified Flag Indicates if parameters were modified since the last test.

Result Text Text returned by the test.

Test Description Description from the test definition.

Recommendation Recommendation returned by the test.

Score Description Description of the score.

Threshold String The threshold prompt for the test (e.g. Maximum Number of Different
IP's Allowed per user)

Severity Severity assigned for the test result.

Category Category for the test result.

Assessment Result Data Source Id Identifies the test result data source.

Result Details Details of the test.


Exceptions Group Desc Exceptions Group Description. Populated when test is executed.

Test Result ID, Assessment Result ID, and Assessment Test ID are only available to
users with the admin role.

Threshold Alert Details Entity

This entity is created each time that a correlation alert is triggered.


Table 118. Threshold Alert Details Entity
Attribute Description

Alert Log ID Uniquely identifies the alert details entity.

Query Value Value returned by query.

Base Value Value assigned for the statistical alert.

Checked From Date The starting date and time checked for by the alert condition.

Checked To Date The ending date and time checked for by the alert condition.

Alert Threshold Alert threshold defined for the alert.

Notification Sent Text of notification sent.

Timestamp Created only once, when the statistical alert is logged.

Alert Description The description contained in the alert definition.

Alert Log ID is only available to users with the admin role.

Unit Utilization Level


Two default reports are provided on the Guardium Monitor tab, “Units Utilization”:

Unit Utilization – For each unit, the maximum utilization level in the given timeframe. There is a drill-down that displays the details for a unit for all periods within the timeframe of the report.

Unit Utilization Distribution – For each unit, the percentage of periods in the timeframe of the report with utilization level Low, Medium, or High.

In addition, under the “Tools”/“Report Building” menu there is a new option, Units Utilization Levels tracking, that enables users to create custom queries and reports.

For all reports that use this data (both custom reports and the predefined reports), it is recommended to enable aliases; otherwise, utilization levels will be displayed as the numbers 1, 2, 3 instead of Low, Medium, High.

The list of attributes are:

Host Name

Period Start

Number Of restarts

Number Of restarts Level

Sniffer Memory

Sniffer Memory Level

Percent Mysql Memory

Percent Mysql Memory Level

Free Buffer Space

Free Buffer Space Level

Analyzer Queue

Analyzer Queue Level

Logger Queue

Logger Queue Level

Mysql Disk Usage

Mysql Disk Usage Level

System CPU Load

System CPU Load Level

System Var Disk Usage

System Var Disk Usage Level

Overall Unit Utilization Level

Note: Each parameter has a value and a level which is calculated based on the
value and the thresholds.
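As a purely hypothetical illustration of how a level is derived (the threshold values here are invented, not Guardium defaults): if the thresholds for System CPU Load were 50 (Medium) and 80 (High), a reported value of 65 would be stored with level 2, which displays as Medium when aliases are enabled, alongside the raw value 65.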

User Entity
Identifies the Guardium user defined as an audit process results receiver.
Table 119. User Entity
Attribute Description

Login Name Guardium user name.

First Name First name for the Guardium user.

Last Name Last name for the Guardium user.

EMAIL Address Email address defined for the Guardium user.

Last Active Timestamp for last activity for this user.

Database Entitlement Reports


You can use database entitlement reports to verify that users have access only to
the appropriate data. Your Guardium system includes predefined database
entitlement reports for several database types.

Note: DB Entitlements Reports are optional components enabled by product key. If these components have not been enabled, the choices listed below will not appear in the Custom Domain Builder/Custom Domain Query/Custom Table Builder selections.

The predefined entitlement reports are listed as follows. They appear as domain names in the Custom Domain Builder/Custom Domain Query/Custom Table Builder selections:
v Oracle DB Entitlements Domains
v MYSQL DB Entitlements Domains
v DB2 DB Entitlements Domains
v DB2 for i 6.1 and 7.1 DB Entitlements Domains
v SYBASE DB Entitlements Domains
v Informix DB Entitlements Domains
v MSSQL 2000 DB Entitlements Domains
v MSSQL 2005 DB Entitlements Domains
v Netezza DB Entitlements Domains
v Teradata DB Entitlements Domains
v PostgreSQL DB Entitlements Domains

Oracle DB Entitlements

The following domains are provided to facilitate uploading and reporting on Oracle DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.

Oracle
v ORA Accnts of ALTER SYSTEM - Accounts with ALTER SYSTEM and ALTER
SESSION privileges
v ORA Accnts with BECOME USER - Accounts with BECOME USER privileges
v ORA All Sys Priv and admin opt - Report showing all system privilege and
admin option for users and roles
v ORA Obj And Columns Priv - Object and columns privileges granted (with or
without grant option)
v ORA Object Access By PUBLIC - Object access by PUBLIC
v ORA Object privileges - Object privileges by database account not in the SYS
and not a DBA role
v ORA PUBLIC Exec Priv On SYS Proc - Execute privilege on SYS PL/SQL procedures assigned to PUBLIC
v ORA Roles Granted - Roles granted to users and roles
v ORA Sys Priv Granted - Hierarchical report showing system privileges granted to users, including recursive definitions (i.e. privileges assigned to roles and then these roles assigned to users)
v ORA SYSDBA and SYSOPER Accnts - Accounts with SYSDBA and SYSOPER
privileges

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

grant select on sys.dba_tab_privs to sqlguard;

grant select on sys.dba_roles to sqlguard;

grant select on sys.dba_users to sqlguard;

grant select on sys.dba_role_privs to sqlguard;

grant select on sys.dba_sys_privs to sqlguard;

grant select on sys.obj$ to sqlguard;

grant select on sys.user$ to sqlguard;

grant select on sys.objauth$ to sqlguard;

grant select on sys.table_privilege_map to sqlguard;

grant select on sys.dba_objects to sqlguard;

grant select on sys.v_$pwfile_users to sqlguard;

grant select on sys.dba_col_privs to sqlguard;

MYSQL DB Entitlements

The following domains are provided to facilitate uploading and reporting on MYSQL DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.

MYSQL: The queries ending in "_40" use the most basic version of the mysql
schema (for MySQL 4.0 and beyond). The information_schema has not changed
since it was introduced in MySQL 5.0, so there is a set of _50 queries, but no _51
queries. The _50 queries work for MySQL 5.0 and 5.1 and for 6.0 when it comes
out, since the information_schema is not expected to change in 6.0. The queries
ending in "_502" (MYSQL502) use the new information_schema, which contains
much more information and is much more like a true data dictionary.
v MYSQL Database Privileges 40
v MYSQL User Privileges 40
v MYSQL Host Privileges 40
v MYSQL Table Privileges 40
v MYSQL Database Privileges 500
v MYSQL User Privileges 500
v MYSQL Host Privileges 500
v MYSQL Table Privileges 500
v MYSQL Database Privileges 502
v MYSQL User Privileges 502
v MYSQL Host Privileges 502
v MYSQL Table Privileges 502

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list details the minimal privileges required, in the database table (or
view of the database table), in order for the entitlement to work.

Note: In addition to the privileges required, the user should connect to the MYSQL
database to upload the data.

The entitlement queries for all MySQL versions through MySQL 5.0.1 use this set of tables: mysql.db, mysql.host, mysql.tables_priv, mysql.user.

Beginning with MySQL 5.0.2, and for all later versions, the entitlement queries use this set of tables: information_schema.SCHEMA_PRIVILEGES, mysql.host, information_schema.TABLE_PRIVILEGES, information_schema.USER_PRIVILEGES.

If a datasource has a MYSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all MYSQL databases the user has access to.
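For example, a MYSQL datasource of this kind might be created with GuardAPI by omitting the dbName parameter. The host, credentials, name, and type value below are placeholders for illustration only; check the datasource types available on your appliance:

grdapi create_datasource type=MYSQL user=entuser password=<password> host=192.0.2.10 port=3306 name="MySQL entitlement all DBs" shared=true owner=admin application=CustomDomain

Because no dbName is supplied, the entitlement upload iterates over every MySQL database that this login can access.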

DB2 DB Entitlements

The following domains are provided to facilitate uploading and reporting on DB2
DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are
available from the Custom Domain Builder/Custom Domain Query/ Custom Table
Builder selections. As with other predefined entities and reports, these cannot be
modified, but you can clone and then customize your own versions of any of these
domains or reports. To see entitlement reports, log on the user portal, and go to
the DB Entitlements tab.
v DB2 Column-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Database -level Privileges (CONNECT, CREATE, ETC.)
v DB2 Index-level Privilege (CONTROL)
v DB2 Package-level Privileges (on code packages – BIND, EXECUTE, ETC.)
v DB2 Table-level Privileges (SELECT, UPDATE, ETC.)
v DB2 Privilege Summary

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

GRANT SELECT ON SYSCAT.COLAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.DBAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.INDEXAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.PACKAGEAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.TABAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.SCHEMAAUTH TO SQLGUARD;

GRANT SELECT ON SYSCAT.PASSTHRUAUTH TO SQLGUARD;


DB2 z/OS entitlements
The following domains are provided to facilitate uploading and reporting
on DB2 for z/OS DB Entitlements.

DB2 zOS Executable Object Privs Granted To PUBLIC

DB2 zOS Object Privs Granted To PUBLIC

DB2 zOS System Privs Granted To GRANTEE -V8

DB2 zOS System Privs Granted To GRANTEE -V9

DB2 zOS System Privs Granted To GRANTEE -V10 Up

DB2 zOS Database Privs Granted To GRANTEE

DB2 zOS Schema Privs Granted To GRANTEE -V9 Up

DB2 zOS Schema Privs Granted To GRANTEE -V8 Only

DB2 zOS Database Resource Granted To GRANTEE

DB2 zOS Object Privs Granted To GRANTEE

DB2 zOS System Privs Granted With GRANT -V8

DB2 zOS System Privs Granted With GRANT -V9

DB2 zOS System Privs Granted With GRANT -V10 Up

DB2 zOS Database Resource Granted To PUBLIC

DB2 zOS Schema Privs Granted To PUBLIC

DB2 zOS Database Privs Granted To PUBLIC

DB2 zOS System Privs Granted To PUBLIC -V10 Up

DB2 zOS System Privs Granted To PUBLIC -V9

DB2 zOS System Privs Granted To PUBLIC -V8

DB2 zOS Object Privs Granted With GRANT

DB2 zOS Database Resource Granted With GRANT

DB2 zOS Schema Privs Granted With GRANT-V8 Only

DB2 zOS Schema Privs Granted With GRANT-V9 Up

DB2 zOS Database Privs Granted With GRANT

DB2 for i 6.1 and 7.1 DB Entitlements

The following domains are provided to facilitate uploading and reporting on DB2
for i DB Entitlements. Each of the following domains has a single entity (with the
same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.

Use the script, gdmmonitor-db2-IBMi.sql, to detail the minimal privileges required, in the database table (or view of the database table), in order for the entitlement to work.

Object privileges granted to grantee (Object type: Schema, Table, View, Package,
Routine, sequence, column, global variable, and XML schema)

Object privileges granted to PUBLIC (Object type: Schema, Table, View, Package,
Routine, sequence, column, global variable, and XML schema)

Executable Objects privileges granted to PUBLIC (Object type: package and Routine)

Object privileges granted to grantee with GRANT OPTION (Object type: Schema,
Table, View, Package, Routine, sequence, column, global variable, and XML
schema)

All of the object privileges exclude default system schemas from a predefined Guardium group called "DB2 for i exclude system schemas - entitlement report". Add to this group any schemas that should be excluded.
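One way to add a schema to this group is with GuardAPI, assuming the standard group-management command create_member_to_group_by_desc is available on your appliance; the schema name here is a placeholder:

grdapi create_member_to_group_by_desc desc="DB2 for i exclude system schemas - entitlement report" member=MYAPPSCHEMA

The same change can also be made from the group builder in the user interface.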

SYBASE DB Entitlements

The following domains are provided to facilitate uploading and reporting on SYBASE DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.
v SYBASE System Privilege and Roles Granted to User including Grant option
v SYBASE Role Granted to User and System Privileges Granted to user and role
including Grant option
v SYBASE Object Access by Public
v SYBASE Execute Privilege on Procedure, function assigned To Public
v SYBASE Accounts with System or Security Admin Roles
v SYBASE Object and Columns Privilege Granted with Grant option
v SYBASE Role Granted To User

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/* These are required on MASTER database */

grant select on master.dbo.sysloginroles to sqlguard

grant select on master.dbo.syslogins to sqlguard

grant select on master.dbo.syssrvroles to sqlguard

/*These are required on every database, including MASTER */

grant select on sysprotects to sqlguard

grant select on sysusers to sqlguard

grant select on sysobjects to sqlguard

grant select on sysroles to sqlguard

If a datasource has a SYBASE database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all SYBASE databases the user has access to.

SYBASE IQ Entitlements

Supported version: Sybase IQ 15 and above.

The following custom table definitions are created to upload data (you can ignore the IDs):

139 | SybaseIQ15 Object Privileges By DB User

140 | SybaseIQ15 Object Privileges By Group

141 | SybaseIQ15 System Authority And Group Granted To User

142 | SybaseIQ15 System Authority And Group Granted To User And Group

143 | SybaseIQ15 Object Access By Public

144 | SybaseIQ15 Exec priv on proc func to PUBLIC

145 | SybaseIQ15 User Group With DBA Perms Admin etc

146 | SybaseIQ15 Table View priv granted with grant

147 | SybaseIQ15 Group granted to user and group

148 | SybaseIQ15 Login policy for user group with login

Corresponding queries/reports are as follows (you can ignore the IDs):

597 | SybaseIQ15 Object Privileges By DB User

598 | SybaseIQ15 Object Privileges By Group

599 | SybaseIQ15 System Authority And Group Granted To User

600 | SybaseIQ15 System Authority And Group Granted To Users And Groups
Grantee

601 | SybaseIQ15 Object Access By Public

602 | SybaseIQ15 Execute Privilege On Procedure and Function To PUBLIC

603 | SybaseIQ15 User Group With DBA/Perms Admin/User Admin/Remote


DBA database authority

604 | SybaseIQ15 Table View Priv Granted With Grant

605 | SybaseIQ15 Group Granted To User And Group

606 | SybaseIQ15 Login Policy For User And Group With Login Option Setting

They can be found under DB Entitlements with the others.

Descriptions of each report follow. Some are self-explanatory; others need a few extra words:

1. Object privileges by database user. Objects include tables, views, procedures, and functions. These are privileges granted to users only, not including groups or membership in groups.

2. Object privileges by group. Objects include tables, views, procedures, and functions. These are privileges granted to groups only.

3. System authority and group granted to users.

4. System authority and group granted to users and groups (grantee).

5. Object access by PUBLIC, including tables, views, functions, and procedures.

6. Execute privilege on procedures and functions granted to PUBLIC.

7. Users and groups with DBA, Perms Admin, User Admin, or Remote DBA database authority.

8. Table and view privileges granted with grant option to users and groups. Note: this is the only grant option type allowed in Sybase IQ; routines cannot be granted with grant option.

9. Group granted to users and groups.

10. Login policy assigned to users and groups, with the login option setting.

How to use GuardAPI to add a datasource to Sybase IQ reports

This topic describes how to use GuardAPI to add a datasource to each of the Sybase IQ reports and how to execute them.

See the examples below on how to add a datasource to each of the new reports and then execute each report.

# Add a datasource for ALL SybaseIQ Entitlement Reports

grdapi create_datasource type="Sybase IQ" user=ent password=Guardium123 host=9.70.144.152 name="SybaseIQ15 entitlement 6" shared=true owner=admin application=CustomDomain port=2638 dbName=sn5qpuff

# Add the datasource to ALL SybaseIQ Entitlement Reports

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Exec priv on proc func to PUBLIC" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Group granted to user and group" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Login policy for user group with login" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Object Access By Public" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Object Privileges By DB User" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Object Privileges By Group" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 System Authority And Group Granted To User" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 System Authority And Group Granted To User And Group" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 Table View priv granted with grant" datasourceName="SybaseIQ15 entitlement 6"

grdapi create_datasourceRef_by_name application=CustomTables objName="SybaseIQ15 User Group With DBA Perms Admin etc" datasourceName="SybaseIQ15 entitlement 6"

# Execute ALL SybaseIQ Entitlement Reports

grdapi upload_custom_data tableName=SYBASEIQ15_EXEC_PRIV_ON_PROC_FUNC_TO_PUBLIC

grdapi upload_custom_data tableName=SYBASEIQ15_GROUP_GRANTED_TO_USER_AND_GROUP

grdapi upload_custom_data tableName=SYBASE_OBJ_COL_PRIVS_GRANTED_WITH_GRAN

grdapi upload_custom_data tableName=SYBASEIQ15_OBJECT_ACCESS_BY_PUBLIC

grdapi upload_custom_data tableName=SYBASEIQ15_OBJECT_PRIVS_BY_DB_USER

grdapi upload_custom_data tableName=SYBASEIQ15_OBJECT_PRIVILEGES_BY_GROUP

grdapi upload_custom_data tableName=SYBASEIQ15_SYSTEM_AUTHORITY_AND_GROUP_GRANTED_TO_USER

grdapi upload_custom_data tableName=SYBASEIQ15_SYSTEM_AUTHORITY_AND_GROUP_GRANTED_TO_USER_AND_GRO

grdapi upload_custom_data tableName=SYBASEIQ15_TABLE_VIEWS_PRIV_GRANTED_WITH_GRANT

grdapi upload_custom_data tableName=SYBASEIQ15_USER_GROUP_WITH_DBA_PERMS_ADMIN_ETC

Informix DB Entitlements
The following domains are provided to facilitate uploading and reporting on
Informix DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.
v Informix Object Privileges by database account not including system account
and roles
v Informix database level privileges, roles and language granted to user including
grant option
v Informix database level privileges, roles and language granted to user and role
including grant option
v Informix Object Grant to Public
v Informix Execute Privilege on Informix procedure and function granted to Public
v Informix Account with DBA Privilege
v Informix Object and columns privileges granted with Grant option
v Informix Role Granted To User and Role

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements). The following list (with
comment line heading) details the minimal privileges required, in the database
table (or view of the database table), in order for the entitlement to work.

/* Select privilege to these tables/views is required */

Since all users already have SELECT privileges on the system catalog, there is no need to grant privileges to any user. Informix does not readily support granting system catalog privileges to individual users. The grants below would normally be used, but in this case they are not required.

grant select on systables to sqlguard;

grant select on systabauth to sqlguard;

grant select on sysusers to sqlguard;

grant select on sysroleauth to sqlguard;

grant select on syslangauth to sqlguard;

grant select on sysroutinelangs to sqlguard;

grant select on sysprocauth to sqlguard;

grant select on sysprocedures to sqlguard;

grant select on syscolauth to sqlguard;

If a datasource has an Informix database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all Informix databases the user has access to.

MSSQL 2000 DB Entitlements

The following domains are provided to facilitate uploading and reporting on MSSQL 2000 DB Entitlements. Each of the following domains has a single entity
(with the same name), and there is a predefined report for each domain. All of
these domains are available from the Custom Domain Builder/Custom Domain
Query/ Custom Table Builder selections. As with other predefined entities and
reports, these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.
v MSSQL2000 Object Privilege By database account not including default system
user
v MSSQL2000 Role/System Privileges Granted to User including grant option
v MSSQL2000 Role granted to user and role. System Privileges Granted to User
and Role including grant option
v MSSQL2000 Object Access by PUBLIC
v MSSQL2000 Execute Privilege on System Procedures and functions to PUBLIC
v MSSQL2000 Database accounts with db_owner and db_securityadmin role
v MSSQL2000 Server account with sysadmin, serveradmin and security admin /*
only run this entitlement against MASTER database */
v MSSQL2000 Object and columns privileges granted with grant option
v MSSQL2000 Role granted to user and role

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/* These are required on MASTER database */

grant select on dbo.syslogins to sqlguard

/*These are required on every database including MASTER */

grant select on dbo.sysprotects to sqlguard

grant select on dbo.sysusers to sqlguard

grant select on dbo.sysobjects to sqlguard

grant select on dbo.sysmembers to sqlguard

If a datasource has a MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all MSSQL databases the user has access to.

MSSQL 2005/2008 DB Entitlements

The following domains are provided to facilitate uploading and reporting on MSSQL 2005 or MSSQL 2008 DB Entitlements. Each of the following domains has a
single entity (with the same name), and there is a predefined report for each
domain. All of these domains are available from the Custom Domain
Builder/Custom Domain Query/ Custom Table Builder selections. As with other
predefined entities and reports, these cannot be modified, but you can clone and
then customize your own versions of any of these domains or reports. To see
entitlement reports, log on the user portal, and go to the DB Entitlements tab.

Note: The entitlement domains for MSSQL2005 listed below cover MSSQL2008 as
well.

Note: Objects in dynamic query strings will NOT be shown in xxx_DEPENDENCIES. An object in an EXECUTE IMMEDIATE SQL string called by a stored program unit does not show a dependency. This query excludes schema owners defined in group ID 202 "Dependencies_exclude_schema-MSSQL". Users can add or remove schema names from this group for the dependencies query.
v MSSQL2005/8 Object privileges by database account not including default
system user.
v MSSQL2005/8 Role/System privileges granted To User
v MSSQL2005/8 Role/System Privilege granted to user and role including grant
option
v MSSQL2005/8 Object access by PUBLIC
v MSSQL2005/8 Execute Privilege on System Procedures and functions to PUBLIC
v MSSQL2005/8 Database accounts of db_owner and db_securityadmin Role
v MSSQL2005/8 Server account of sysadmin, serveradmin and security admin /*
only run against MASTER database */
v MSSQL2005/8 Object and columns privileges granted with grant option
v MSSQL2005/8 Role granted to user and role.

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/*These are required on MASTER database */

grant select on sys.server_principals to sqlguard

/*These are required on every database, including MASTER */

grant select on sys.database_permissions to sqlguard

grant select on sys.database_principals to sqlguard

grant select on sys.all_objects to sqlguard

grant select on sys.database_role_members to sqlguard

grant select on sys.columns to sqlguard

If a datasource has a MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload will loop through all MSSQL databases the user has access to.

Netezza DB Entitlements

The following domains are provided to facilitate uploading and reporting on Netezza DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.

Note: There is no DB error text translation for Netezza. The error appears in the
exception description. Users can clone/add a report with the exception description
for Netezza as needed.
v Netezza Obj Privs by DB Username - Object privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Admin Privs by DB Username - Admin privileges with or without grant
option by database username excluding ADMIN account.
v Netezza Group /Role Granted To User - Group (Role) granted to user
v Netezza Obj Privs By Group - Object privileges with or without grant option by
GROUP excluding PUBLIC.
v Netezza Admin Privs By Group - Admin privileges with or without grant option
by GROUP excluding PUBLIC.
v Netezza Admin Privs By DB Username, Group - Admin privileges with or
without grant option by database username, group excluding ADMIN account
and PUBLIC group.
v Netezza Obj Privs Granted - Object privileges granted with or without grant
option to PUBLIC.
v Netezza Admin Privis Granted - Admin privileges granted with or without grant
option to PUBLIC.
v Netezza Global Admin Priv To Users and Groups - Global admin privilege
granted to users and groups excluding ADMIN account.
v Netezza Global Obj Priv To Users and Groups - Global object privilege granted
to users and groups excluding ADMIN account.

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/* This script must be run from the system database */

GRANT SELECT ON SYSTEM VIEW TO sqlguard;

GRANT LIST ON DATABASE TO sqlguard;

GRANT LIST ON USER TO sqlguard;

GRANT LIST ON GROUP TO sqlguard;

GRANT SELECT ON _V_CONNECTION TO sqlguard;

For Netezza entitlement queries, it is recommended to connect to the SYSTEM database, especially when granting the privileges to the user who is going to run these reports. The granting of privileges MUST take place from the SYSTEM database, or else the granted privileges will only apply to one particular database. When privileges are granted from the SYSTEM database, a special feature allows the granted privileges to carry through to all the databases.

Teradata DB Entitlements

The following domains are provided to facilitate uploading and reporting on Teradata DB Entitlements. Each of the following domains has a single entity (with
the same name), and there is a predefined report for each domain. All of these
domains are available from the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections. As with other predefined entities and reports,
these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.
v Teradata Object privileges by database account not including default system
users.
v Teradata System privileges and roles granted to users including grant option.
v Teradata Roles granted to users and roles including grant option.
v Teradata Role granted to users and roles. System privileges granted to users
and roles including grant option.
v Teradata Objects and System privileges granted to public. Note role cannot be
granted to public in Teradata.
v Teradata Execute privileges on system database objects to public.
v Teradata System admin, Security admin privileges granted to user and role.

Note: There are no such roles as System admin or Security admin in Teradata. Users must create their own roles. These are some important system privileges that would normally not be granted to a normal user: ABORT SESSION, CREATE DATABASE, CREATE PROFILE, CREATE ROLE, CREATE USER, DROP DATABASE, DROP PROFILE, DROP ROLE, DROP USER, MONITOR RESOURCE, MONITOR SESSION, REPLICATION OVERRIDE, SET SESSION RATE, SET RESOURCE RATE.
v Teradata Object privileges granted with granted option to users. Not including
DBC and grantee = 'All'.

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

GRANT SELECT ON DBC.AllRights TO sqlguard;

GRANT SELECT ON DBC.Tables TO sqlguard;

GRANT SELECT ON DBC.AllRoleRights TO sqlguard;

GRANT SELECT ON DBC.RoleMembers TO sqlguard;

PostgreSQL DB Entitlements

The following domains are provided to facilitate uploading and reporting on PostgreSQL DB Entitlements. Each of the following domains has a single entity
(with the same name), and there is a predefined report for each domain. All of
these domains are available from the Custom Domain Builder/Custom Domain
Query/ Custom Table Builder selections. As with other predefined entities and
reports, these cannot be modified, but you can clone and then customize your own
versions of any of these domains or reports. To see entitlement reports, log on the
user portal, and go to the DB Entitlements tab.

The entitlement custom domains/queries/reports for PostgreSQL are as follows (each is listed with report name, description, and note):

v PostgreSQL Priv On Databases Granted To Public User Role With Or Without Granted Option. Privilege on databases granted to public, user and role with or without granted option. Run this on any database, ideally PostgreSQL.
v PostgreSQL Priv On Language Granted To Public User Role With Or Without
Granted Option. Privilege on Language granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Schema Granted To Public User Role With Or Without
Granted Option. Privilege on Schema granted to public, user and role with or
without granted option. Run this per database.
v PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without
Granted Option. Privilege on Tablespace granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.

v PostgreSQL Role Or User Granted To User Or Role. Role or User granted to user
or role including grant option. Run this once in any database. Ideally
PostgreSQL.
v PostgreSQL Super User Granted To User Or Role. Super user granted to user or
role. Run this once in any database. Ideally PostgreSQL.
v PostgreSQL Sys Privs Granted To User And Role. System privileges granted to
user and role. Run this once in any database. Ideally PostgreSQL.
v PostgreSQL Table View Sequence and Function privs Granted To Public. Tables, Views, Sequence and Functions privileges granted to public. Run this per database.
v PostgreSQL Table View Sequence and Function Privs Granted With Grant
Option. Tables, Views, Sequence and Functions privileges granted to user and
role with grant option only. Exclude PostgreSQL account.
v PostgreSQL Table View Sequence Function Privs Granted To Roles. Tables,
Views, Sequence and Functions privileges granted to roles. Not including
public. Run this per database.
v PostgreSQL Table Views Sequence and Functions Privs Granted To Login. Tables,
Views, Sequence and Functions privileges granted to logins. Not including
postgres system user. Run this per database.

Note: As of version 8.3.6, PostgreSQL does not support grant admin option to public. There are only functions, not stored procedures. There is no support for column grants, only table grants. Public is a group, not a user. Public does not show up in pg_roles. The only privilege needed to run all these queries is: GRANT CONNECT ON DATABASE PostgreSQL TO username;

For entitlements to be able to upload data from various datasources, the general
requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).

The following list (with comment line heading) details the minimal privileges
required, in the database table (or view of the database table), in order for the
entitlement to work.

/* Select privilege to these tables/views is required */

/*This is required on POSTGRES database*/

grant connect on database postgres to sqlguard;

/*These are required on every database, including POSTGRES (By default these are
already granted to PUBLIC) */

grant select on pg_class to sqlguard;

grant select on pg_namespace to sqlguard;

grant select on pg_roles to sqlguard;

grant select on pg_proc to sqlguard;

grant select on pg_auth_members to sqlguard;

grant select on pg_language to sqlguard;

grant select on pg_tablespace to sqlguard;

grant select on pg_database to sqlguard;
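
After granting these privileges, a quick sanity check (illustrative only, and
assuming the sqlguard login used in the grants) is to connect as that login and
confirm that each catalog can be read, for example:

/* Run as the entitlement login (sqlguard in this example) */

select count(*) from pg_class;
select count(*) from pg_roles;
select count(*) from pg_auth_members;
select count(*) from pg_database;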

If a datasource has a PostgreSQL database type but does not have a DB name (see
Datasource Definitions; the database name under Location is blank), then the data
upload will loop through all PostgreSQL databases that the user has access to.
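
As an illustration only (this is not the query Guardium runs), the set of
databases that such a loop could reach can be previewed with the standard
has_database_privilege function, again assuming the sqlguard login from the
grants above:

/* Illustrative example: databases the entitlement login can connect to */

select datname
from pg_database
where datistemplate = false
  and has_database_privilege('sqlguard', datname, 'CONNECT');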

How to take advantage of over 600 predefined reports


Instead of creating custom reports from scratch, take advantage of the predefined
content in the Guardium application.

Get the information that you seek faster, by accessing over 600 predefined reports
already available from the Guardium application. These predefined reports can be
cloned and customized to the needs of the user.

Using the Guardium predefined reports is a best practice recommendation,
enabling organizations to quickly and easily identify security risks, such as
inappropriately exposed objects, users with excessive rights, and unauthorized
administrative actions. Examples of the many predefined reports include: accounts
with system privileges; all system and administrator privileges, which are shown
by user and role; object privileges by user; and all objects with PUBLIC access.

At installation time, the Guardium appliance is configured with a number of
predefined reports.

All parameters and values are displayed on all reports. The parameters and values
can be edited from the Customize button in any report screen.

Use the search function of help to go to the specific report directly. Use quotation
marks around words or phrases to precisely define search terms.

Predefined reports are described in the online help in the following help
sub-topics:
v Predefined admin Reports - available to the admin user from the following tabs:
System View, Daily Monitor, Guardium Monitor, and Tap Monitor.
v Predefined Reports from Accessmgr (see Access Management overview): User
and Role Reports; Allowed Datasources; Allowed Servers; Databases Not
Associated; Datasources Not Associated.

Examples of predefined reports from the Guardium Monitor tab are shown.

Logins to Guardium

All values for this report are from the Guardium Logins entity. For the reporting
period, each row of the report lists the User Name, Login Succeeded (1=
Successful, 0=Failed), Login Date And Time, Logout Date And Time (which is
blank if the user has not yet logged out), Host Name, Remote Address (of the
user), and count of logins for the row.

Table 120. Logins to Guardium

Domain: Guardium Logins
Based on Query: Guardium Logins
Main Entity: Guardium Users Login

Run-Time Parameter    Operator    Default Value
Host Name             LIKE        %
Period From           >=          NOW -1 DAY
Period To             <=          NOW

Buffer Usage Monitor

Provides an extensive set of buffer usage statistics.

Table 121. Buffer Usage Monitor

Domain: Buffer Usage
Based on Query: Buff Usage Monitor
Main Entity: Sniffer Buffer Usage Monitor

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -1 DAY
Period To             <=          NOW

Group Usage Report


This report displays the list of all defined groups and all the entities that rely on
each group.

Note: There are 328 records available in this report.

Guardium Applications
For each Guardium application, each row lists a security role that is assigned, or
the word all, indicating that all roles are assigned.

Table 122. Guardium Applications

Domain: internal - not available
Based on Query: All Guardium Applications
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -100 MONTH
Period To             <=          NOW

Guardium Roles
This menu pane displays two reports: All Roles - Application Access, and All
Roles - User.

All Roles - Application Access

For each role, this report lists the number of applications to which it is
assigned.

To list the applications to which a role is assigned, click the role and drill down to
the Record Details report.

Table 123. All Roles - Application Access

Domain: internal - not available
Based on Query: All Roles - Application Access
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -100 MONTH
Period To             <=          NOW

All Roles - User

For each role, this report lists the number of users to which it is assigned. To list
the users to which a role is assigned, click the role and drill down to the Record
Details report.

Table 124. All Roles - User

Domain: internal - not available
Based on Query: Roles - User
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -100 MONTH
Period To             <=          NOW

Guardium Users

Lists each user, date of last activity, and number of roles assigned. For each user,
you can drill down to the Record Details report to see the roles that are assigned to
that user.
Table 125. Guardium Users

Domain: internal - not available
Based on Query: User role
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -100 MONTH
Period To             <=          NOW

Unit Utilization Levels

The following default reports are shown on the Guardium Monitor tab for “Units Utilization”:
v Unit Utilization – For each unit the maximum utilization level in the specified
time frame. There is a drill-down that displays the details for a unit for all
periods within the time frame of the report.
v Unit Utilization Distribution: Per unit the percent of periods in the time frame of
the report with utilization levels Low, Medium, and High.
v Utilization Thresholds: This predefined report displays all low and high
threshold values for all Utilization parameters. Parameters: Number of restarts;
Sniffer Memory; Percent Mysql Memory; Free Buffer Space; Analyzer Queue;
Logger Queue; Mysql Disk Usage; System CPU Load; System Var Disk Usage.
v Unit Utilization Daily Summary - Host Name; Period Start; Max Number Of
requests; Max Number Of requests Level; Number of Requests % Increase; Max
System Var Disk Usage; Max System Var Disk Usage Level; System Var Disk
Usage % Increase; Max Mysql Disk Usage; Max Mysql Disk Usage Level; Mysql
Disk Usage % Increase; Max Overall Utilization Level

Table 126. Unit Utilization Levels

Domain: Internal - not available
Based on Query: Unit Utilization Distribution
Main Entity: Unit Utilization Levels

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -24 HOUR
Period To             <=          NOW

List of over 600 predefined reports


Here is an alphabetical list of over 600 predefined reports available to the user
from within the Guardium application (list from Guardium v9.0).

Access Direct from Extranet/DMZ

Access Map

Access Map Details

Access policy violations

Access to Sensitive Objects

Accounts Created and Deleted within a Short Period

Active S-TAPs Changed

Active Users Last Login

Active Users with no Activity

Activity By Client IP

Activity Summary By Client IP

Admin Users Login

Admin Users Login Graphical

Admin Users Sessions

Administration Objects Usage

Administrative Commands Usage

Aggregation Errors

Aggregation/Archive Log

Aggregation/Archive Log - Distributed

Alerts Sent

All Guardium Applications - Role

All Roles - Application Access

All Roles - User

ALTER Commands Execution

Application Objects Summary

Application User Audit

Approved Tap Clients

Archive Candidates

Archive number

Archive results attempted

Archive results number

Archives attempted

Assessment 1

Assessment 10

Assessment 12

Assessment 13

Assessment 2

Assessment 3

Assessment 4

Assessment 5

Assessment 6

Assessment 8

Assessment Total Requests

Asset Event Mapping to API

Asset Indicator History

Asset Map Status

Asset Map Status Change History

Asset Status

Asset Status Change History

Audit Process Log

Audit processes - Active/Inactive

Available Patches

BACKUP Commands Execution

Backup number

Backups attempted

Buff Usage Monitor

Calls to procs with Buffer Overflow

Capture-Capture List

Capture-Replay List

CAS Change Details

CAS Deployment

CAS Host History

CAS Instance Config

CAS Instances

CAS Saved Data

CAS Templates

Catalog View

CIS vulnerability

Classification Data Import

Classifier policy violations

Classifier Results

Client IP Activity Summary

Client IP Audit

Client IP Group Audit

Client IPs Activity

CLS_RESULT

Command Details

Commands Execution Summary

Commands List

Compare Avg Execution Time (ms)

Compare Rows Retrieved

Compare SQL Execution

Compare SQL Failures

Compliant (Pass) Results

Configuration Change History

Connection Profiling List

Connections Quarantined

Count Of Data-Sources with No VA Results

Count Of Data-Sources with Non-approved Version/Patch level

Count Of Host That Ceased To BE Monitored

CPU Tracker

CPU Usage

CREATE Commands Execution

Critical failed tests

Custom Table Upload Log

CVE compliance

Databases Discovered

Datamart Extraction Log

DataSource Changes

DataSource Status

DataSource Version History

Data-Sources

Datasources Associated

Datasources Not Associated

Data-Sources with Non-approved Version/Patch level

DB Predefined Users Login

DB Predefined Users Sessions

DB Server List

DB Server Throughput

DB Server Throughput-Chart

DB Users Mapping List

DB2 Column Level Privs

DB2 DB Level Privs

DB2 for i S-TAP configuration

DB2 for i S-TAP Status

DB2 Index Level Privs

DB2 Package Level Privs

DB2 Priv Summary

DB2 Table Level Privs

DB2 z/OS Database Privileges Granted To GRANTEE

DB2 z/OS Database Privileges Granted To GRANTEE With GRANT Option

DB2 z/OS Database Privileges Granted To PUBLIC

DB2 z/OS Database Resource Granted To GRANTEE

DB2 z/OS Database Resource Granted To GRANTEE With GRANT Option

DB2 z/OS Database Resource Granted To PUBLIC

DB2 z/OS Executable Object Privileges Granted To PUBLIC

DB2 z/OS Object Privileges Granted To GRANTEE

DB2 z/OS Object Privileges Granted To GRANTEE With GRANT Option

DB2 z/OS Object Privileges Granted To PUBLIC

DB2 z/OS Schema Privileges Granted To GRANTEE V8 Only

DB2 z/OS Schema Privileges Granted To GRANTEE V9 And Higher

DB2 z/OS Schema Privileges Granted To GRANTEE With GRANT Option V8 Only

DB2 z/OS Schema Privileges Granted To GRANTEE With GRANT Option V9 And
Higher

DB2 z/OS Schema Privileges Granted To PUBLIC

DB2 z/OS System Privileges Granted To GRANTEE V10 And Higher

DB2 z/OS System Privileges Granted To GRANTEE V8

DB2 z/OS System Privileges Granted To GRANTEE V9

DB2 z/OS System Privileges Granted To GRANTEE With GRANT Option V10 And
Higher

DB2 z/OS System Privileges Granted To GRANTEE With GRANT Option V9

DB2 z/OS System Privileges Granted To GRANTEE With GRANT Option V8

DB2 z/OS System Privileges Granted To PUBLIC V10 And Higher

DB2 z/OS System Privileges Granted To PUBLIC V8

DB2 z/OS System Privileges Granted To PUBLIC V9

DBCC Commands Execution

DDL Commands

DDL Distribution

Default DB Users Enabled

Definitions Export/Import Log

Detailed Enterprise S-TAP View

Detailed Guardium User Activity

Detailed Sessions List

Discovered Instances

Distributed Datamart Status

Distribution Of DDL Commands

DML Execution on Administrative Objects

DML Execution on Sensitive Objects

DML Executions Per Day

DROP Commands Execution

Dropped Requests

DW Dormant Objects

DW Dormant Objects-Fields

DW EXECUTE Object Access

DW SELECT Object Access

DW SELECT Object/Field Access

EBS Application Access

EBS Processes Database Access

EF - Exception

EF - Logoff

EF - Logon

EF - SQL Detail

EF - SQL Summary

Enterprise aggregation/traffic information

Enterprise Buffer Usage Monitor

Enterprise S-TAP association history

Enterprise S-TAP View

Event Status Transition

Exception Count

Exceptions By Client

Exceptions By Server

Exceptions By Type

Exceptions By User

Exceptions Details

Exceptions Distribution

Exceptions Distribution List

Exceptions Monitor

Exceptions Type Distribution

Excessive Errors per Period

Excessive Failed Attempts to Grant

Executed DMLs On Sensitive Objects

Execution Of ALTER Commands

Execution Of BACKUP Commands

Execution Of CREATE Commands

Execution of DBCC Commands

Execution Of DDL Commands

Execution of DML Commands on Administrative Objects

Execution Of DROP Commands

Execution of GRANT Commands

Execution Of KILL Commands

Execution of RESTORE Commands

Execution of REVOKE Commands

Export Sensitive Data To Discovery

Failed Login Attempts

Failed User Login Attempts

Failed User Login Attempts - Distributed

Failed Vulnerability Results

Field Details

Fields List

Flat Log List

Full SQL By Client IP

Full SQL By DB User

Generated Alert Notifications

GIM Clients Status

GIM Events List

GIM Installed Modules

GRANT Commands Execution

Group Members

Groups Usage Report

Guardium API Exceptions

Guardium Group Details

Guardium Job Queue

Guardium Logins

Guardium Users - credentials

Hadoop - BigInsights MapReduce Report

Hadoop - Exception Report

Hadoop - Full Message Details report

Hadoop - HBase Report

Hadoop - HDFS Report

Hadoop - Hue/Beeswax Report

Hadoop - MapReduce Report

Hadoop - Unauthorized MapReduce Jobs

Hourly Access Details

ICM Application Access

IMS Access

IMS Data Access Details

IMS Event

IMS Object

Inactive S-TAPs Since

Indicators for noncompliant assets

Informix Account With Dba Privilege

Informix Execute Priv On Proc Func To Public

Informix Obj Col Privs Granted With Grant

Informix Object Grant To Public

Informix Object Privs By DB Accnt

Informix Role Granted To User/Role

Informix Sys Priv And Role Granted To User

Informix Sys Priv And Role Granted To User Role

Inspection Engine Changes

Installed Patches

Installed Policy Details

KILL Commands Execution

List Of Data-Sources with No VA Results

Location View

Locator

Logged R/T Alerts

Logged Threshold Alerts

Logging collectors

Logins to Guardium Appliance

Long Running Queries

Lucene (Access)

Lucene (Exception)

Lucene (Violations)

Managed Units

MS-SQL Replication Procedures Call

MS-SQL Security Procedures Call

MS-SQL System Procedures Call

MSSQL2000 accnt of db_owner db_securityadmin role

MSSQL2000 Exec Priv On Sys Proc Func To Public

MSSQL2000 Obj Col Privs Granted With Grant

MSSQL2000 Obj Privs By Non-Default Sys User

MSSQL2000 Object Access By PUBLIC

MSSQL2000 Role Granted To User And Role

MSSQL2000 Role/Sys Privs Granted To User

MSSQL2000 Role/Sys Privs Granted To User And Role

MSSQL2000 Srv Accnt of sys/server/security admin

MSSQL2005/8 Accnt Of db_owner db_securityadmin Role

MSSQL2005/8 Exec Priv On Sys Proc Func To Public

MSSQL2005/8 Obj Col Privs Granted With Grant

MSSQL2005/8 Obj Privs By Non-Default Sys User

MSSQL2005/8 Object Access By PUBLIC

MSSQL2005/8 Role Granted To User And Role

MSSQL2005/8 Role/Sys Privs Granted To User

MSSQL2005/8 Role/Sys Privs Granted To User And Role

MSSQL2005/8 Srv Accnt of sys/server/security admin

My Restore Log

MYSQL DB Privs 40

MYSQL DB Privs 500

MYSQL DB Privs 502/up

MYSQL Host Privs 40

MYSQL Host Privs 500

MYSQL Host Privs 502/up

MYSQL Table Privs 40

MYSQL Table Privs 500

MYSQL Table Privs 502/up

MYSQL User Privs 40

MYSQL User Privs 500

MYSQL User Privs 502/up

Netezza Admin Privs by DB Username

Netezza Admin Privs By DB Username Group

Netezza Admin Privs By Group

Netezza Admin Privs Granted

Netezza Global Admin Priv To Users and Groups

Netezza Global Obj Priv To Users and Groups

Netezza Group/Role Granted To User

Netezza Obj Privs by DB Username

Netezza Obj Privs By Group

Netezza Obj Privs Granted

New SQL Statements

No Traffic

No Traffic By Server And Protocol

Number of access policy violations

Number Of Active Privacy Set Processes

Number Of Active Processes

Number of classifier policy violations

Number of db per type

Number of failed critical tests

Number Of Inactive S-TAPs

Number of installed policies

Number of items in to-do lists

Number of open incidents

Number of outstanding audit process reviews

Object Access By Client

Object Activity Summary

Object Audit

Object Details

Object Group Audit

Object Last Referenced

Objects Access Summary

Objects List

One User One IP

Open Incidents

Open Incidents / Incident Management

Open Incidents / To do list

Open Sessions

Open Sessions By IP

Open Sessions Graphical

Open Sessions Graphical Monitor

Open Sessions Monitor

Optim - Failed Request Summary per Optim Server

Optim - Request Execution per Optim Server

Optim - Request Execution per User

Optim - Request Log

Optim - Request Summary

Optim - Table Usage Details

Optim - Table Usage Summary

ORA Accnts of ALTER SYSTEM

ORA Accnts with BECOME USER

ORA All Sys Priv and admin opt

ORA All Sys Priv and admin opt 8/9

ORA Obj And Columns Priv

ORA Object Access By PUBLIC

ORA Object privileges

ORA PUBLIC Exec Priv on SYS Proc

ORA Roles Granted

ORA Roles Granted 8/9

ORA Sys Priv Granted

ORA Sys Priv Granted 8

ORA SYSDBA and SYSOPER Accnts

Outstanding waiting reviews

Outstanding Audit Process Reviews

Outstanding Events

Parser Exceptions

Pending Audit Processes

Policy Changes

Policy Violation Count

Policy Violations

Policy Violations / Incident Management

Policy Violations Details

Policy Violations List with Severity Details

PostgreSQL Priv On Databases Granted To Public User Role With Or Without Granted Option

PostgreSQL Priv On Language Granted To Public User Role With Or Without Granted Option

PostgreSQL Priv On Schema Granted To Public User Role With Or Without Granted Option

PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without Granted Option

PostgreSQL Role Granted To User Or Role

PostgreSQL Super User Granted To User Or Role

PostgreSQL Sys Privs Granted To User And Role

PostgreSQL Table View Sequence and Function privs Granted To Public

PostgreSQL Table View Sequence and Function Privs Granted With Grant Option

PostgreSQL Table View Sequence Function Privs Granted To Roles

PostgreSQL Table Views Sequence and Functions Privs Granted To Login

Pre Defined Oracle Users access

Primary Guardium Host Change Log

Privacy Set Report 1

Privacy Set Report 2

Privileged Account Utilization

Privileged User Access of Business Objects

PSFT Application Access

PSFT Processes Database Access

Purge number

Purges attempted

Queries By Execution Time

Query Entities & Attributes

Replay Statistics

Replay Summary

Replay-Replay List

Request Rate

RESTORE Commands Execution

Restored Data

Retro_Request

Retrospective Report Requests

Returned SQL Errors

REVOKE Commands Execution

Rogue Connections

SAP Application Access

Scheduled Jobs

Scheduled Jobs Exceptions

Scheduled Jobs Exceptions - distributed

Security Assessment Export

Sensitive Objects List

Sensitive Objects Usage

Server IP Audit

Servers Accessed

Servers Associated

Servers Not Associated

Session Count

Session Details

Sessions By Client IP

Sessions By Server IP

Sessions By Server Type

Sessions By Source Program

Sessions By User

Sessions Details By Server

Sessions List

SIEBEL Application Access

SIEBEL OBSERVED Application Access

Slow queries

SQL Count

SQL Errors

SQL Errors - Distributed

SQL workload Match

SQL workload Match Drill Down

SQL workload Summary

SQL workload Summary Drill Down

Staging Data

S-TAP Events

S-TAP Last Response

S-TAP Status

S-TAP Status Monitor

S-TAP/Z Files

STIG compliance

SYBASE Accnts With Sys Or Sec Admin Roles

SYBASE Execute Priv On Proc Func To Public

SYBASE Obj Col Privs Granted With Grant

SYBASE Object Access By Public

SYBASE Object Privs By DB Accnt

SYBASE Role Granted To User

SYBASE Role Granted To User And Sys Privs Granted

SYBASE Sys Priv And Role Granted To User

SybaseIQ15 Execute Privilege On Procedure and Function To PUBLIC

SybaseIQ15 Group Granted To User And Group

SybaseIQ15 Login Policy For User And Group With Login Option Setting

SybaseIQ15 Object Access By Public

SybaseIQ15 Object Privileges By DB User

SybaseIQ15 Object Privileges By Group

SybaseIQ15 System Authority And Group Granted To User

SybaseIQ15 System Authority And Group Granted To Users And Groups Grantee

SybaseIQ15 Table View Priv Granted With Grant

SybaseIQ15 User Group With DBA/Perms Admin/User Admin/Remote DBA database authority

System/Security Activities

Tap Event Exceptions

TCP Exceptions

Teradata Exec Privs On System DB Objs To Public

Teradata Failed Logins (The Vulnerability Assessment advanced license must be
installed in order to see the Teradata failed login report.)

Teradata Obj Privs By Accnt

Teradata Obj Privs Granted With Granted Option

Teradata Objs and System Privs Granted to Public

Teradata Roles Granted To Users And Roles

Teradata Sys Privs And Role Granted

Teradata Sys Privs And Roles Granted To Users

Teradata System and Security Admin Privs Granted

Terminated Users Failed Login Attempts

Terminated Users Logins

Tests Exceptions

Throughput

Throughput-Chart

Top Massive Grants

Unit Utilization

Unit Utilization Daily Summary

Unit Utilization details

Unit Utilization Distribution

Use of Administrative Commands

Use Of Administrative Objects

Use of Application Accounts by Other than Application

Use of Privilege Accounts to Create a New Login

Used By View

User - Role

User Activity Audit Trail

User Activity Summary

User Comments

Users Inactive Since

Users To-Do List

Utilization Thresholds

VA Test Failing Since

Values Changed

Values Changed Details

Violations per Incident

VSAM Access

VSAM Detailed Access

VSAM RLM

Windows File Share Activity

Workload Exceptions From Drill Down

Workload Exceptions List

Workload Exceptions To Drill Down

Predefined Reports
At installation time, the Guardium appliance is configured with a number of
predefined reports.

All parameters and values are displayed on all reports. The parameters and values
can be edited from the Customize button in any report screen.

Use the search function of help to go to the specific report directly. Use quotation
marks around words or phrases to precisely define search terms.

Predefined reports are described on the following pages:


v Predefined admin Reports (see “Predefined admin Reports” on page 450). These are
the predefined reports available to the admin user.
v Predefined Reports from Accessmgr (see Access Management overview topic):
User and Role Reports; Allowed Datasources; Allowed Servers; Databases Not
Associated; Datasources Not Associated.

API to run an audit process from tabular and graphical reports

In the Guardium GUI, there is an icon (Ad-hoc process for run once now) to
invoke a call to the GuardAPI, create_ad_hoc_audit_and_run_once.

This opens a window with the following fields:


v Email Addresses - A comma separated list of email addresses.
v Content type for email receiver: PDF/CSV (a radio button 0 - PDF / 1 -CSV)
v Add user as Receiver (check box)

The behavior of this process is as follows:

1 - If this is a new process, an email receiver is created for each address in the
list (if any), with the content type indicated in the emailContentType parameter.
A user receiver is also created for the logged-in user (the one invoking the API)
if the includeUserReceiver parameter is true.

2 - If this is an existing process, all email receivers are removed and replaced
with the emails from the new list (if any), with the content type defined in the
emailContentType parameter. If the list is empty, all email address receivers are
removed. If there is already a receiver for the user, it will NOT be removed even
if includeUserReceiver is false; however, if the parameter is true and there is no
such receiver, then it is added.

Once the audit process is generated it will be automatically executed (similar to a
Run Once Now) and users should expect an item on their to-do list for that audit
process.

The GuardAPI command that creates the ad hoc audit process keeps results for 7
days (instead of 1 day). Results are deleted after 7 days.

For further information on parameters, see the GuardAPI command,
create_ad_hoc_audit_and_run_once, in the GuardAPI Input Generation help topic.

VA tests with default group members
These are groups that Guardium ships with default members, which customers can
use as exceptions.
Table 127. VA groups to test mapping

Group ID | Group Name | Test Name | Test ID | Database Type
82  | Sybase Allowed Grants to Public | No Non-Exempt Public Privileges | 61 | SYBASE ASE
83  | MS-SQL Allowed Grants to Public | No Non-Exempt Public Privileges | 270 | MSSQL
115 | DB2 Allowed Grants to Public | No Public Object Privileges | 105 | DB2 LUW
144 | DB2 Allowed Grants to Public non-restrictive | No Public Object Privileges | 105 | DB2 LUW
116 | Teradata Allowed Grants to Public | Object privileges granted to public | 2029 | TERADATA
117 | PostgreSQL Allowed Grants to Public | Objects privileges granted to PUBLIC | 315 | POSTGRESQL
118 | Netezza Allowed Grants to Public | Object privileges granted to public (Netezza) | 2053 | NETEZZA
65  | MS-SQL Database Administrators | Only DBAs In Fixed Server Roles | 159 | MSSQL
165 | Oracle Only DBA Access To SYS.USER$ | Only DBA Access To SYS.USER$ | 222 | ORACLE
166 | MS-SQL DDL granted to user | DDL granted to user | 321 | MSSQL
167 | MS-SQL Procedures granted to users | Procedures granted to users | 322 | MSSQL
168 | MS-SQL No Individual User Privileges | No Individual User Privileges | 154 | MSSQL
170 | Sybase IQ Procedures and functions granted to PUBLIC | Procedures and functions granted to PUBLIC | 2230 | SYBASE IQ
171 | Sybase IQ No individual procedures or functions privileges | No individual procedures or functions privileges | 2227 | SYBASE IQ
172 | MS-SQL No Access to Registry Access Extended procedures | No Access to Registry Access Extended procedures | 215 | MSSQL
173 | MS-SQL Role granted to role | Role granted to role | 323 | MSSQL
185 | MS-SQL Access to server level permissions granted to non-Database Administrators | Access to server level permissions granted to non-Database Administrators | 2289 | MSSQL
186 | MS-SQL MSDB database Role Members Privilege | MSDB database Role Members Privilege | 2296 | MSSQL
48  | DB2 Database Version+Patches | Version: DB2 | 16 | DB2 LUW
48  | DB2 Database Version+Patches | DB2 Patch Level | 54 | DB2 LUW
49  | Informix Database Version+Patches | Version: Informix | 17 | INFORMIX
49  | Informix Database Version+Patches | Informix Patch Level | 55 | INFORMIX
50  | MS Sql Server Database Version+Patches | Version: Microsoft SQL Server | 18 | MSSQL
50  | MS Sql Server Database Version+Patches | Microsoft SQL Server Patch Level | 56 | MSSQL
51  | MySql Database Version+Patches | Version: MySql | 19 | MYSQL
51  | MySql Database Version+Patches | MySql Patch Level | 57 | MYSQL
52  | Oracle Database Version+Patches | Oracle Patch Level | 58 | ORACLE
52  | Oracle Database Version+Patches | Version: Oracle | 20 | ORACLE
53  | Sybase Database Version+Patches | Version: Sybase | 21 | SYBASE ASE
53  | Sybase Database Version+Patches | Sybase Patch Level | 59 | SYBASE ASE
109 | Teradata PDE Version+Patches | Version: Teradata PDE | 284 | TERADATA
109 | Teradata PDE Version+Patches | Teradata PDE Patch level | 286 | TERADATA
110 | Teradata TDBMS Version+Patches | Teradata TDBMS Patch level | 287 | TERADATA
110 | Teradata TDBMS Version+Patches | Version: Teradata TDBMS | 285 | TERADATA
111 | Teradata TDGSS Version+Patches | Version: Teradata TDGSS | 290 | TERADATA
111 | Teradata TDGSS Version+Patches | Teradata TDGSS Patch Level | 288 | TERADATA
112 | Teradata TGTW Version+Patches | Version: Teradata TGTW | 291 | TERADATA
112 | Teradata TGTW Version+Patches | Teradata TGTW Patch Level | 289 | TERADATA
113 | Netezza Version+Patches | Netezza version level | 306 | NETEZZA
113 | Netezza Version+Patches | Netezza patch level | 307 | NETEZZA
114 | Postgress Version+Patches | PostGreSQL version level | 308 | POSTGRESQL
114 | Postgress Version+Patches | PostGreSQL patch level | 309 | POSTGRESQL
169 | SybaseIQ Database Version+Patches | Version: Sybase IQ | 377 | SYBASE IQ
169 | SybaseIQ Database Version+Patches | Sybase IQ Patch Level | 378 | SYBASE IQ

Use cases for predefined reports


Database administrator
v SQL Errors - An increase in SQL errors may indicate a SQL injection
attack.
v DDL (verify schema changes) - This report displays the client IP from
which the DDL was requested, the main SQL verb (a specific DDL
command), and the total objects accessed for that record.
v Failed logins - This report indicates attempts to access the database with
expired login credentials.
Information security officer
v Failed logins - People with proper credentials trying to access the
database.
v Terminated users - Terminated users trying to access the database.
v Policy violations - Users and issues that violate security policies.
Auditors
v Compliance reports - PCI, SOX, Data Privacy
v Compliance workflow - Shows evidence of signoffs and procedures.

Predefined admin Reports


This section provides a short description of all predefined reports on the default
administrator layout.

The Report selection of the Guardium GUI has five sections:


v Report Configuration Tools;
v Guardium Operational Reports;
v Real-time Guardium Operational Reports;
v Guardium Configuration Items; and,
v Monitoring of Guardium System.

Note: If data level security at the observed data level has been enabled (see Global
Profile settings), then audit process output will be filtered so users will see only
the information of their databases.

The predefined admin reports are listed in alphabetical order.

Active S-TAPs changed

This alert only runs on Central Manager systems. S-TAP Host, S-TAP version,
S-TAP changed, timestamp and count are shown.

Table 128. Active S-TAPs changed
Domain Based on Query Main Entity
internal - not Active S-TAPs not available
available changed
Run-Time Operator Default Value
Parameter
Period From none none

Admin User Logins

Summary of logins to the database using a database user name defined in the
Admin Users group. The report displays the client IP address from which the user
with administrative privileges logged into the database, database user name,
source program, session start date and time, and session total for that record.
Table 129. Admin User Logins
Domain Based on Query Main Entity
Access Admin Users Login Session
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Aggregation/Archive Log

This report lists Guardium aggregation activity by Activity Type. Each row of the
report contains the Activity Type, Start Time, File Name, Status, Comment,
Guardium Host Name, Records Purged, Period Start, Period End, and count of log
records for the row. You can limit the output by setting the Guardium Host Name
run-time parameter, which is set to % by default (to select all servers). The Records
Purged column contains a count of records purged only when the activity type is
Purge.
Table 130. Aggregation/Archive Log

Domain: Aggregation/Export/Import
Based on Query: Aggregation/Archive Log
Main Entity: Agg/Archive Log

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -1 WEEK
Period To             <=          NOW
Guardium Host Name    LIKE        %

All Guardium Applications - Roles

This menu pane displays two reports: All Roles - Application Access, and All
Roles - User.

All Roles - Application Access


For each role, this report lists the number of applications to which it is assigned.
To list the applications to which a role is assigned, click on the role and drill down
to the Record Details report.
Table 131. All Roles - Application Access
Domain Based on Query Main Entity
internal - not All Roles - not available
available Application Access
Run-Time Operator Default Value
Parameter
Period From >= NOW -100 MONTH
Period To <= NOW

All Roles - User

For each role, this report lists the number of users to which it is assigned. To list
the users to which a role is assigned, click on the role and drill down to the Record
Details report.
Table 132. All Roles - User
Domain Based on Query Main Entity
internal - not Role - User not available
available
Run-Time Operator Default Value
Parameter
Period From >= NOW -100 MONTH
Period To <= NOW

Application Objects Summary

This report is a summary of every definition in the Guardium application. For
instance, type Oracle in the ObjectNameLike space in the Run-Time Parameters
page of Application Objects and find all the Object Types and Object Descriptions
where Oracle is used.

Note: This report presents metadata and as such is not filtered through the Data
Level Security mechanism. This metadata could include database related
information such as Oracle SIDs.
Table 133. Application Objects Summary
Domain Based on Query Main Entity
Application Application Objects Application Objects
Objects Summary
Run-Time Operator Default Value
Parameter
ObjectNameLike % %
ObjectTypeNameLike
% %

Approved TAP clients
Only specific S-TAPs are permitted to connect to the Guardium application. This
report shows which S-TAPs are approved and their status.
Table 134. Approved TAP clients
Domain Based on Query Main Entity
internal - not Approved TAP not available
available Clients
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Audit Process Log

This report shows a detailed activity log for all tasks including start and end times.
This report is available for admin users via the Guardium Monitor tab. Audit tasks
show start and end times, however the start and end of Security Assessments and
Classifications (which go to a queue) is the same.

The Audit Process has been expanded to allow signoff of specific rows, beyond a
user signing off on the entire audit process. The report displays a list of what
has been signed off and the status of specific rows.

Use this Audit Process Log to stop audit processes. Tasks can be stopped only if
they have not yet run or are currently running. Any further tasks that have not
started will not execute. Partial results will not be delivered. If tasks are
complete, stopping the audit process will not stop the sending of the results.
Stopping the audit process is done through a GrdAPI command, invoked from the
Audit Process Log report. For any user, the report shows only the lines belonging
to that user (without all the details - just the tasks). Admin users see all the
details and can stop anyone's runs. Users can only stop their own runs.

Note:

Stopping the audit process will not cancel queries that are running against a
remote source, nor online reports that use a remote source.

Stopping is not supported for Privacy Sets and External Feeds. This means that if
the Privacy Set task or the External Feed has started, it will finish even if the
process is stopped (as opposed to a query, which will be killed).

Audit Process Log ID

Login Name

Run ID

Timestamp

Audit Process ID

Audit Process Description

Audit Task ID

Audit Task Description

Event Type

Detail

Count of Audit Process Log

Available Patches

Displays a list of available patches. There are no run-time parameters, and this
reporting domain is system-only.

Buffer Usage Monitor

Provides an extensive set of buffer usage statistics. See the description of the
Sniffer Buffer Usage entity for a description of the fields listed on this report.
Table 135. Buffer Usage Monitor
Domain Based on Query Main Entity
Buffer Usage Buff Usage Monitor Sniffer Buffer Usage Monitor
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

CAS Deployment

This CAS report details the Database type, OS name, Hostname and OS type.
Table 136. CAS Deployment
Domain Based on Query Main Entity
CAS CAS Deployment N/A
Run-Time Operator Default Value
Parameter
DB Type Like %
OS_Name Like %
Hostname Like %
OS_Type Like %

Changes (CAS)
CAS Change Details

For each monitored item, the changes are listed in order by owner.

Table 137. CAS Change Details
Domain Based on Query Main Entity
CAS Changes CAS Change Details Host Configuration
Run-Time Operator Default Value
Parameter
DB_Type Like %
Host_Name Like %
Instance_Name Like %
Monitored_Item Like %
OS_Type Like %
Type Like %

CAS Saved Data

This report lists the data saved for each change detected. This report is sorted by
host name, and then by the most recent modification time.
Table 138. CAS Saved Data
Domain Based on Query Main Entity
CAS Changes CAS Saved Data Saved Data
Run-Time Operator Default Value
Parameter
Host_Name Like %
Monitored_Item Like %
Saved_Data_Id Like %

Configuration (CAS)

CAS Instances

This report lists CAS instance definitions (a CAS instance applies a template set to
a specific CAS host). The default sort order for this report is non-standard. The sort
keys are, from major to minor: Host Name (ascending), Instance (ascending) and
Last Status Change (descending).
Table 139. CAS Instances
Domain Based on Query Main Entity
CAS Config CAS Instances Monitored Item Details
Run-Time Operator Default Value
Parameter
Host_Name Like %
OS_Type Like %
DB_Type Like %
Instance Like %

CAS Instance Config

This report lists CAS instance configuration changes. The default sort order for this
report is non-standard. The sort keys are, from major to minor: Host Name
(ascending), Instance (ascending) and Last Status Change (descending). You can
limit the output by using any of the following runtime parameters, which select all
values by default.
Table 140. CAS Instance Config
Domain Based on Query Main Entity
CAS Config CAS Instance Config Monitored Item Details
Run-Time Operator Default Value
Parameter
Host_Name Like %
OS_Type Like %
Template_Id Like %

Connection Profiling List

Connection Profiling List is a group of all allowed connections (the Connection
Profiling List shows all connection details).
Table 141. Connection Profiling List
Domain Based on Query Main Entity
internal - not Connection Profiling Client Server
available List
Run-time Operator Default Value
parameter
Query From Date >= NOW -1 DAY
Query To Date <= NOW

Connections Quarantined

Guardium policies can be used to terminate or quarantine connections in real time.
Use threshold alerts, based on queries. See Quarantine under the Policies topic for
configuration instructions.
Table 142. Connections Quarantined
Domain Based on Query Main Entity
Connection Connections Connection Quarantine
Quarantine Quarantined
Run-Time Operator Default Value
Parameter
Server IP LIKE %
DB User LIKE %
Server Name LIKE %
Period From >= NOW -1 DAY
Period To <= NOW

CPU Tracker
Lists the Software TAP Host and number of CPUs on machines running S-TAPs.
Table 143. CPU Tracker
Domain Based on Query Main Entity
internal - not not available not available
available
Run-Time Operator Default Value
Parameter
none n/a n/a

CPU Usage

By default, displays the CPU usage for the last two hours. This graphical report is
intended to display recent activity only. If you alter the From and To run-time
parameters to include a larger timeframe, you may receive a message indicating
that there is too much data. Use a tabular report to display a larger time period.
Table 144. CPU Usage
Domain Based on Query Main Entity
Sniffer Buffer CPU Usage Sniffer Buffer Usage Monitor
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW

Databases by Type/ Number of DB per type

Server type and client sources for each database type monitored.
Table 145. Databases by Type
Domain Based on Query Main Entity
Access Number of db per Client/Server
type
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Databases Discovered

For the reporting period, for each Discovered Port entity where the DB Type
attribute value is NOT LIKE Unknown, this report lists the Probe Timestamp,
Server IP, Sever Host Name, DB Type, Port, Port Type, and count of Discovered
Ports for the row.

Table 146. Databases Discovered

Domain: Auto-discovery
Based on Query: Databases Discovered
Main Entity: Discovered Port

Run-Time Parameter    Operator    Default Value
Period From           >=          NOW -1 DAY
Period To             <=          NOW
PortNotLike           NOT LIKE    No default value.

DB Users Mapping List

The mapping between database users (Invokers of SQL that caused a violation)
and email addresses for real time alerts.
Table 147. DB Users Mapping List
Domain Based on Query Main Entity
Auto-discovery DB Users Mapping Guardium Users Login
List
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Default DB Users Enabled


This report details the default users found enabled after a database scan through
the group of default users and list of servers supplied to the Non-credential Scan
API. When an enabled user is found within a database, that occurrence of
database/user is reported only once. Subsequent scans will update the timestamp
and database version of the database. If a subsequent scan does not find a
previously found user, the timestamp remains unaffected, so as to keep a history
of the last time the user was found enabled on a database. Scans are run under
the Classifier Listener and submitted jobs (with the non_credential_scan API) may
be tracked using the Guardium Job Queue report.
Table 148. Default DB Users Enabled
Domain Based on Query Main Entity
Default DB Users Default DB Users Default DB Users Enabled
Enabled Enabled
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Data Sources
Lists all datasources defined: Data-Source Type, Data-Source Name, Data-Source
Description, Host, Port, Service Name, User Name, Database Name, Last Connect,
Shared, and Connection Properties.

You can restrict the output of this report using the Data Source Name run time
parameter, which by default is set to “%” to select all datasources.
Table 149. Data Sources

Domain: internal - not available
Based on Query: Data-Sources
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Data Source Name      LIKE        %
Period From           >=          NOW -1 DAY
Period To             <=          NOW

Discovered Instances

This S-TAP report details the following information:

Timestamp, Host, Protocol, Port Min, Port Max, KTAP DB Port, Instance Name,
Client, Exclude Client, Proc name, Named Pipe, DB Instance Dir, DB2 Shared Mem
Adjust, DB2 Shared Mem Client Position, DB2 Shared Mem Size.
Table 150. Discovered Instances
Domain Based on Query Main Entity
Exception Discovered Instances Exception
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Datamart Extraction Log


A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and
organizes the data in a generic fashion that can be used later for analysis and
reports. A Data Mart begins with user-defined data analysis and emphasizes
meeting the specific demands of the user in terms of content, presentation and
ease-of-use.

The Data Mart extraction program runs in a batch according to the specified
schedule. It summarizes the data to hours, days, weeks or months according to the
granularity requested and then it saves the results in a new table in Guardium
Analytic database.

The data is then accessible to the users via the standard Reports and Audit Process
utilities, like any other traditional Domain/Entity. The Data Mart extraction
data are available under the DM domain, and the Entity name is set according to the
new table name specified for the data mart data. Using the standard Query Builder
and Report Builder, users can clone the default query and edit the Query and
report, generate Portlet and add to a Pane.

The extraction log consists of the following - Data Mart Name, Collector IP, Server
IP, from-time, to-time, ID, run started, run ended, number of records, status, error
code.

Definitions Export/Import Log

This report lists Guardium export/import activity by Activity Type. Each row of
the report contains the Activity Type, Start Time, File Name, Status, Comment, and
count of log records for the row.
Table 151. Definitions Export/Import Log
Domain Based on Query Main Entity
Aggregation/ Export-Import Agg/Archive Log
Archive Definitions Log
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Dropped Requests
Tracks requests dropped by an inspection engine (Exception Description =
Dropped database request). Under extremely rare, high-volume situations some
requests may be lost. When this happens, the sessions from which the requests
were lost are listed in the Dropped Requests report.
Table 152. Dropped Requests
Domain Based on Query Main Entity
Exceptions Dropped Requests Exception
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Exception Count
For the reporting period, the total number of exceptions logged.
Table 153. Exception Count
Domain Based on Query Main Entity
Exceptions Exception Count Exception
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Enterprise S-TAP (Detailed) View
See S-TAP Info (Central Manager) for information on this report.

Enterprise S-TAP Association History

Enterprise S-TAP Association History reports on how long the S-TAP reported to
the specific Guardium system in the Load balancer environment.

Enterprise S-TAP View

See S-TAP Info (Central Manager) for information on this report.

Export Sensitive Data to Discovery

Guardium and InfoSphere Discovery have mechanisms for the Classification of
Sensitive Data.

A bidirectional interface is provided to transfer the identified sensitive data from
Guardium to InfoSphere Discovery and from InfoSphere Discovery to Guardium.

This data will be transferred via CSV files. See External Data Correlation
(Bidirectional Interface) for further information.
Table 154. Export Sensitive Data to Discovery
Domain Based on Query Main Entity
Internal - not Export Sensitive Classification Process Results
available Data to Discovery
Run-Time Operator Default Value
Parameter
Period From >= NOW -3 HOURS
Period To <= NOW
Rule Description LIKE
Schema LIKE

Enterprise Buffer Usage Monitor


This report shows the aggregate of sniffer buffer usage from all managed units.
The schedule for the upload must be set. See the description of the Sniffer
Buffer Usage entity for a description of the fields listed on this report.
Table 155. Enterprise Buffer Usage Monitor
Domain Based on Query Main Entity
Enterprise Buffer Enterprise Buffer Sniffer Buffer Usage
Usage Usage
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Guardium Job Queue
Displays the Guardium Job Queue. Previously known as Classifier/Assessment Job
Queue. For each job, it lists the Process Run ID, Process Type, Status, Guardium
Job Process Id, Report Result Id, Guardium Job Description, Audit Task
Description, Queue Time, Start Time, End Time, and Data Sources.
Table 156. Guardium Job Queue

Domain: Internal - not available
Based on Query: Guardium Job Queue
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Job Description       LIKE        %
Period From           >=          NOW -1 DAY
Period To             <=          NOW

The job queue

Assessments and Classifications run in their own separate process called the job
queue. Jobs are queued and have their status maintained while a listener
periodically polls the queue looking for waiting jobs to run.

Stopping

When a running job is right-clicked for drill-down, there is an option to stop
the running job and cancel it. The job cannot be restarted at this point.

Halting

Running jobs are monitored to reduce the number of hung jobs that might cause
the job queue to become overloaded. If a job is inactive for 30 minutes, the listener
is terminated and restarted, effectively stopping the operation of a job. Before the
listener is restarted, a process called the cleaner runs, the status is set from
RUNNING to HALTED, and then the listener is restarted. A status of HALTED
means the job was not able to run to completion.

Resubmitting

Sometimes the listener gets restarted for reasons other than a job hanging, for
example when the machine is rebooted. When the cleaner halts the running jobs, it
checks whether each job has responded in the past 8 minutes. If it has, the job is
copied and that copy is resubmitted onto the job queue. The original halted job
will still display on the queue, and will still have the results it was able to
process available.

Monitoring

The mechanism by which jobs maintain their active status is by touching the
timestamp on the job queue record. It is important to note that the job queue
record is used for the entire job. Each individual classifier rule or assessment test
interacts with the timestamp for its parent process, and they do not have
individual timestamps that are monitored.

The classifier will update its timestamp before every rule is tested and after every
SQL operation. For example, if the classifier is scanning the data in a database that
supports paging, it will touch the timestamp after each batch of data is brought
back from the database. This is because, depending on the state of the target
database, the classifier has the potential to invoke some long-running queries that
will be limited to 30 minutes of execution.

Assessments touch the timestamp after each test in the assessment is evaluated.
Most assessment tests run in a few seconds or less.

Observed Tests

The exception to the relatively quick-running assessment tests is the category of


observed assessment tests. These tests are based on queries and reports that use
the internal sniffing data on the Guardium appliance and can run for longer
periods of time and are unable to update the timestamp while they are in process.
Therefore, observed assessment tests have their timestamps set two hours into the
future when they are started, essentially giving them two hours and thirty minutes
to run to conclusion. This can be confusing when looking at the job queue and
seeing the timestamp set to a time in the future. Just like any other assessment test,
when the observed test ends, the timestamp will be touched. If the next test is an
observed test, the timestamp will once again be set two hours into the future.
Otherwise, the timestamp will be set to the current time.

GIM Clients Status

Displays a list of GIM clients.


Table 157. GIM Clients Status
Domain Based on Query Main Entity
GIM Clients GIM Clients Status GIM Clients
Status
Run-Time Operator Default Value
Parameter
Client Name % N/A
Client OS % N/A

GIM Events List


Displays a list of GIM Events.
Table 158. GIM Events List
Domain Based on Query Main Entity
GIM Events GIM Events GIM Events
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

GIM Installed Modules

Displays a list of installed GIM Modules.

Chapter 6. Reports 463


Note: This report shows the modules that have been associated with the host. If a
module has been assigned to a host, the assigned version will appear in this
report, even if the module has not yet been scheduled or installed. To check the
currently installed module, review the GIM Client Status report.
Table 159. GIM Installed Modules
Domain Based on Query Main Entity
GIM Installed GIM Installed Base GIM Installed
Base
Run-Time Operator Default Value
Parameter
none not applicable not applicable

Group Usage Report


Displays the list of all defined groups and all the entities that rely on each group.

Guardium API Exceptions

Displays a time stamp and description of all GuardAPI exceptions. These are jobs
where the Exception Type ID is GUARD_API_EXCEPTION.
Table 160. Guardium API Exceptions
Domain Based on Query Main Entity
Exception Guardium API Exception
Exceptions
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Guardium Applications
For each Guardium application, each row lists a security role assigned, or the word
all, indicating that all roles are assigned.
Table 161. Guardium Applications
Domain Based on Query Main Entity
internal - not All Guardium not available
available Applications
Run-Time Operator Default Value
Parameter
Period From >= NOW -100 Month DAY
Period To <= NOW

Guardium Group Details

For the reporting period, each row of the report lists a group member. The
columns contain the following information: Group Description, Group Type, Group
Subtype, Timestamp (from the Group Member entity), Group Member, and count
of Group Member entities for the row. The value of the timestamp is set to the
current time whenever the record is updated.

You can restrict the output of this report using the run-time parameters, both of
which are used with the LIKE operator and a default value of %, which selects all
values.
Table 162. Guardium Group Details

Domain: Group
Based on Query: Guardium Group Details
Main Entity: Group Member

Run-Time Parameter    Operator    Default Value
Group Description     LIKE        %
Group Type            LIKE        %
Period From           >=          NOW -100 MONTH
Period To             <=          NOW

Guardium Users

Lists each user, date of last activity, and number of roles assigned. For each user,
you can drill down to the Record Details report to see the roles assigned to that
user.
Table 163. Guardium Users
Domain Based on Query Main Entity
internal - not User Role not available
available
Run-Time Operator Default Value
Parameter
Period From >= NOW -100 MONTH
Period To <= NOW

Host History (CAS)


CAS Host History

This report lists CAS host events. The default sort order for this report is
non-standard. The sort keys are, from major to minor: Host Name (ascending),
Instance and Event Time (descending).
Table 164. CAS Host History
Domain Based on Query Main Entity
CAS Host History CAS Host History Host Event
Run-Time Operator Default Value
Parameter
Host_Name Like %
OS_Type Like %
Event_Type Like %

Inactive Inspection Engines

Lists all inactive inspection engines.


Table 165. Inactive Inspection Engines
Domain Based on Query Main Entity
internal - not Inactive Inspection S-TAP Verification Header
available Engines
Run-Time Operator Default Value
Parameter
Query from date >= NOW -3 HOUR
Query to date >= NOW

Inactive S-TAPs Since

Lists all inactive S-TAPs defined on the system. It has a single run-time parameter:
Period From, which is set to now -1 hour by default. Use this parameter to control
how you want to define inactive. This report contains the same columns of data
as the S-TAP Status report, with the addition of a count for each row of the report.
Table 166. Inactive S-TAPs Since
Domain Based on Query Main Entity
internal - not Inactive S-TAPs not available
available Since
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 HOUR

Installed Patches

Displays a list of installed patches. There are no run-time parameters, and this
reporting domain is system-only.
Table 167. Installed Patches
Domain Based on Query Main Entity
internal - not Installed Patches not available
available
Run-Time Operator Default Value
Parameter
none not applicable not applicable

Logins to Guardium
All values for this report are from the Guardium Logins entity. For the reporting
period, each row of the report lists the User Name, Login Succeeded (1=
Successful, 0=Failed), Login Date And Time, Logout Date And Time (which will be
blank if the user has not yet logged out), Host Name, Remote Address (of the user)
and count of logins for the row.

Table 168. Logins to Guardium
Domain Based on Query Main Entity
Guardium Logins Guardium Logins Guardium Users Login
Run-Time Operator Default Value
Parameter
Host Name LIKE %
Period From >= NOW -1 DAY
Period To <= NOW

Logged R/T Alerts

For the reporting period, the total number of logged real time alerts, listed by rule
description.
Table 169. Logged R/T Alerts
Domain Based on Query Main Entity
Policy Violations Logged R/T Alerts Policy Rule Violation
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Logged Threshold Alerts

For the reporting period, the total number of threshold alerts logged.
Table 170. Logged Threshold Alerts
Domain Based on Query Main Entity
Alert Logged Alerts Threshold Alert Details
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Logging Collectors (valid only from aggregation unit)


The Logging Collectors report appears under the Daily Monitor Tab and it is valid
only on an aggregator unit. This report shows the number of sessions per Server
IP, per collector and per day. For example: on May 19, aggregator #1 collected 100
sessions for Server 192.168.x.x1, 50 sessions for Server 192.168.x.x2; aggregator #2
collected 30 sessions for Server 192.168.x.x3, 90 sessions for Server 192.168.x.x4; etc.
Table 171. Logging Collectors
Domain Based on Query Main Entity
Exceptions Logging Collectors Logging Collectors
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY

Period To <= NOW

Managed Units (Central Manager)

Enterprise report on a Central Manager that shows which managed units are up.
Use this report in a Statistical Alert to send an email to an ADMIN anytime a
managed unit is down.
Table 172. Managed Units (Central Manager)
Domain Based on Query Main Entity
internal - not Managed Units Managed Units
available
Run-Time Operator Default Value
Parameter
Host Name LIKE %
Remote Data drop-down
Source
Show Aliases Radio-buttons (On, Off, Default)

Number of Active Audit Processes

Number of active Guardium audit processes. When central management is used,
this report contains data only on the Central Manager, and is empty on all
managed units (the standard message, No data found for requested query,
displays). There are no run-time parameters for this report.
Table 173. Number of Active Audit Processes
Domain Based on Query Main Entity
Audit Process Number of Active Audit Process
Processes
Run-Time Operator Default Value
Parameter
none not applicable not applicable

Outstanding Audit Process Reviews


Number of outstanding Guardium audit processes, listed by Guardium users.
Table 174. Outstanding Audit Process Reviews
Domain Based on Query Main Entity
Audit Process Outstanding Audit Task Results To-Do List
Process Reviews
Run-Time Operator Default Value
Parameter
none not applicable not applicable



Primary Guardium Host Change Log
Log of primary host changes for S-TAPs. The primary host is the Guardium unit to
which the S-TAP sends data. Each line of the report lists the S-TAP Host,
Guardium Host Name, Period Start and Period End.
Table 175. Primary Guardium Host Change Log
Domain Based on Query Main Entity
internal - not Primary SGuard not available
available host change log
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Query Entities and Attributes

This report lists all the entities and attributes in Guardium reports and was created
to simplify the linkage between Guardium attributes and GuardAPI calls.

Use this report to also invoke create_constant_attribute,
create_api_parameter_mapping, delete_api_parameter_mapping, or
list_param_mapping_for_function.
Table 176. Query Entities and Attributes
Domain Based on Query Main Entity
Any of the Guardium reporting domains    Any of the entities for the reporting domain    Any of the attributes within the entity
Run-Time Operator Default Value
Parameter
Report Name Like not applicable

If the Report Name parameter is <> '%', the report shows only the domains,
entities, and attributes used by reports that match the parameter. If it is '%',
then all domains, queries, and attributes are displayed (including those not
used by any report).
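
Conceptually, a parameter mapping is just a lookup from a (GuardAPI function, parameter) pair to the report attribute or constant that feeds it. The sketch below only illustrates that idea; the dictionary, helper names, and sample values are hypothetical and are not the GuardAPI implementation.

    # Illustrative only: a toy mapping store, not GuardAPI itself.
    param_mappings = {}   # (api_function, parameter) -> report attribute or constant

    def add_mapping(api_function, parameter, attribute):
        """Record that 'parameter' of 'api_function' is fed from 'attribute'."""
        param_mappings[(api_function, parameter)] = attribute

    def remove_mapping(api_function, parameter):
        param_mappings.pop((api_function, parameter), None)

    def mappings_for(api_function):
        return {p: a for (f, p), a in param_mappings.items() if f == api_function}

    # Hypothetical example: feed a 'host' parameter from the 'Server IP' attribute.
    add_mapping("create_datasource", "host", "Server IP")
    print(mappings_for("create_datasource"))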

Replay Statistics

This report shows Replay Statistics for Execution Start/End Date; Configuration
Name; Schedule Setup Name; Job Status; Statistic Description; Session ID;
Successful Queries; Failed Queries; Total Queries; Type; Active/Waiting/Completed
Tasks.
Table 177. Replay Statistics
Domain Based on Query Main Entity
Replay Results Replay Statistics Replay Result Statistics
Tracking

Run-Time Operator Default Value
Parameter
Query from date >= NOW -1 DAY
Query to date <= NOW
Session >= N/A
Session <= N/A

Replay Summary

For the reporting period, a measure of which queries failed or succeeded. The
Query Failed or Query Succeeded check box must be selected in the Replay
Configuration.
Table 178. Replay Summary
Domain Based on Query Main Entity
Replay Results Replay Summary Replay Results
Run-Time Operator Default Value
Parameter
Query from date >= NOW -1 DAY
Query to date <= NOW
Results status % N/A
Schedule setup % N/A
name

Restored Data

This report has two columns: RESTORED_DAY and EXPIRATION_DATE. When
the user restores data from archive, this table is populated according to the data
restored and the duration specified for keeping this data. The purge process looks
at this table to determine what data can be purged and cleans up records that have
expired. RESTORED_DAY is the date of the data that was restored and is in the
past. EXPIRATION_DATE is the date when this data will be purged and is a date
in the future.
Table 179. Restored Data
Domain Based on Query Main Entity
Restored Data Restored Data Restored Data
Run-Time Operator Default Value
Parameter
Period From >= NOW -10 DAY
Period To <= NOW +10 DAY
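
A minimal sketch of the purge decision that this table drives (hypothetical rows, not the Guardium purge job): any row whose EXPIRATION_DATE has been reached is eligible for cleanup.

    from datetime import date

    # Hypothetical bookkeeping rows: (RESTORED_DAY, EXPIRATION_DATE)
    restored = [
        (date(2015, 1, 10), date(2015, 6, 1)),   # still within its retention window
        (date(2014, 11, 2), date(2015, 2, 1)),   # retention window has passed
    ]

    today = date(2015, 5, 19)
    purgeable = [row for row in restored if row[1] <= today]   # expiration date reached
    kept = [row for row in restored if row[1] > today]
    print("purge:", purgeable)
    print("keep :", kept)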



Request Rate
By default, displays the request rate for the last two hours. This graphical report is
intended to display recent activity only. If you alter the run-time parameters to
include a larger timeframe, you may receive a message indicating that there is too
much data. Use a tabular report to display a larger time period.
Table 180. Request Rate
Domain Based on Query Main Entity
Sniffer Buffer Request Rate Sniffer Buffer Usage Monitor
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW

Rogue Connections

This report is available only when the Hunter option is enabled on Unix servers.
The Hunter option is only used when the Tee monitoring method is used. This
report lists all local processes that have circumvented S-TAP to connect to the
database.
Table 181. Rogue Connections
Domain Based on Query Main Entity
Rogue Rogue Connections Rogue Connections
Connections
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Scheduled Job Exceptions


Displays a timestamp and the description for each scheduled job exception
(including assessment errors). These are jobs where the Exception Type ID is one
of the following: SCHED_JOB_EXCEPTION, ASSESSMENT_EXCEPTION, or
ASMT_ERROR.
Table 182. Scheduled Job Exceptions
Domain Based on Query Main Entity
Sniffer Buffer CPU Usage Sniffer Buffer Usage
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW



Scheduled Jobs
Displays the list of currently scheduled jobs.
Table 183. Scheduled Jobs
Domain Based on Query Main Entity
internal - not Scheduled Jobs not available
available
Run-Time Operator Default Value
Parameter
none not applicable not applicable

Session Count

For the reporting period, the total number of different sessions open.
Table 184. Session Count
Domain Based on Query Main Entity
Access Session Count Session
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

SQL Count

For the reporting period, the total number of different SQL commands issued.
Table 185. SQL Count
Domain Based on Query Main Entity
Access SQL Count SQL
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

S-TAP Configuration Change History

This report shows data only when an inspection engine has been added or changed.
It lists S-TAP configuration changes; each inspection engine change is displayed on
a separate row. Each row lists the S-TAP Host, DB Server Type, DB Port From, DB
Port To, DB Client IP, DB Client Mask, and Timestamp for the change.
Table 186. S-TAP Configuration Change History
Domain Based on Query Main Entity
internal - not Configuration not available
available Change History
Run-Time Operator Default Value
Parameter

Period From >= NOW -1 DAY
Period To <= NOW

S-TAP Status
Displays status information about each inspection engine defined on each S-TAP
Host. This report has no From and To date parameters, since it is reporting current
status. Each row of the report lists the S-TAP Host, DB Server Type, Status, Last
Response, Primary Host Name, Yes/No indicators for the following attributes:
KTAP Installed, TEE Installed, Shared Memory Driver Installed, DB2 Shared
Memory Driver Installed, LHMON Driver Installed, Named Pipes Driver Installed,
and App Server Installed. In addition, it lists the Hunter DBS.

Note: The DB2 shared memory driver has been superseded by the DB2 Tap
feature.
Table 187. S-TAP Status
Domain Based on Query Main Entity
internal - not S-TAP Status not available
available
Run-Time Operator Default Value
Parameter
none n/a n/a

S-TAP Verification
List all results of S-TAP verifications.
Table 188. S-TAP Verification
Domain Based on Query Main Entity
internal - not S-TAP Verification S-TAP Verification Header
available
Run-Time Operator Default Value
Parameter
Query from date >= NOW -3 HOUR
Query to date >= NOW

S-TAP Events
Use this report for information on the S-TAP (from SOFTWARE_TAP_EVENT table
in internal database).
Table 189. S-TAP Events
Domain Based on Query Main Entity
internal - not S-TAP Events not available
available
Run-Time Operator Default Value
Parameter

event type LIKE %
host type LIKE %
Period From >= NOW -3 DAY
Period To <= NOW

S-TAP Info (Central Manager)

Report: See S-TAP Reports. On a Central Manager, an additional report, S-TAP
Info, is available. This report monitors S-TAPs of the entire environment. Upload
this data using the Custom Table Builder.

S-TAP Info is a predefined custom domain which contains the S-TAP Info entity;
like the entitlement domain, it is not modifiable.

When defining a custom query, go to the upload page and click Check/Repair to
create the custom table in the CUSTOM database; otherwise, saving the query will
not validate it. This table loads automatically from all remote sources. A user cannot
select which remote sources are used - it pulls from all of them.

Based on this custom table and custom domain, there are two reports:

Enterprise S-TAP view shows, from the Central Manager, information on an active
S-TAP on a collector and/or managed unit. (If there are duplicates for the same
S-TAP engine, one active and one inactive, the report uses only the active one.)

Detailed Enterprise S-TAP view shows, from the Central Manager, information on
all active and inactive S-TAPs on all collectors and/or managed units.

If the Enterprise S-TAP view and Detailed Enterprise S-TAP view look the same,
it is because only one S-TAP on one managed unit is being displayed. The Detailed
Enterprise S-TAP view would look different if more S-TAPs and more managed
units were involved.

These two reports can be chosen from the TAP Monitor tab of a standalone system,
but they will display no information there.

Alert: See Viewing an Audit Process Definition for the Inspection Engines and
S-TAP alert, which alerts on any activity related to inspection engine and S-TAP
configuration.

S-TAP Last Response

A pre-defined query and report are available, but are not added to any panels.

The query/report displays all S-TAP Hosts and the last response (heartbeat) sent
by each host.

The purpose of this query is to allow you to define an alert that triggers when the
S-TAP on a host has not responded for a given period of time.

The input parameters are Last Response From and Last Response To.

For example, when executed with Last Response From = NOW -5 DAYS and Last
Response To = NOW -3 HOURS, it displays the host name and the last response
time for those hosts that sent their last response within the last 5 days but have had
no response in the last 3 hours.
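
That example reduces to a simple window test; the sketch below (hypothetical hosts and timestamps, not the built-in query or alert) flags hosts whose last heartbeat falls inside the [From, To] window, that is, seen within 5 days but silent for the last 3 hours.

    from datetime import datetime, timedelta

    now = datetime(2015, 5, 19, 12, 0)
    window_from = now - timedelta(days=5)    # Last Response From = NOW -5 DAYS
    window_to = now - timedelta(hours=3)     # Last Response To   = NOW -3 HOURS

    # Hypothetical S-TAP hosts and the last response (heartbeat) from each
    last_response = {
        "dbhost1": now - timedelta(minutes=10),  # responded recently: not reported
        "dbhost2": now - timedelta(hours=7),     # silent for 7 hours: reported
        "dbhost3": now - timedelta(days=9),      # last seen too long ago: not reported
    }

    stale = {h: t for h, t in last_response.items() if window_from <= t <= window_to}
    print(stale)   # hosts that responded within 5 days but not in the last 3 hours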

S-TAP Status Monitor

For each S-TAP reporting to this Guardium appliance, this report identifies the
S-TAP Host, S-TAP Version, DB Server Type, Status (active or inactive), Last
Response Received (date and time), Primary Host Name, and true/false indicators
for: KTAP, TEE, MS SQL Server Shared Memory, DB2 Shared Memory, Local TCP
monitoring, Named Pipes Usage, and Encryption.

This report has no run-time parameters, and is based on a system-only query that
cannot be modified.

STAP/Z Files

STAP/Z provides files with raw data collected from DB2 (on z/OS) containing
DB2 events, SQL statements, and so on. This report lists an Interface ID, UA file
name (Un-normalized Audit Event), UT file name (Un-normalized Audit Event
text), UH file name (Un-normalized Audit Event host variables), File Status, Total
Number of Events Processed, Number of Events Failed, and Timestamp.

This report has two run-time parameters, FileName Like % and FileStatus Like %.
It is based on a system-only query that cannot be modified.

TCP Exceptions

For the reporting period, for each exception where the Exception Description of the
Exception Type entity is TCP/IP Protocol Exception, a row of this report lists the
following attribute values from the Exception entity: Exception Timestamp,
Exception Description, Source Address, Destination Address, Source Port,
Destination Port, and count of Exceptions for that row.
Table 190. TCP Exceptions
Domain Based on Query Main Entity
Exceptions TCP Exceptions Exception
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Templates (CAS)

CAS Templates

This report lists CAS templates. By default, all template items are listed.
Table 191. CAS Templates
Domain Based on Query Main Entity
CAS Templates CAS Templates Template

Run-Time Operator Default Value
Parameter
Access_Name Like %
Template_Set_Name Like %
Audit_Type Like %

Tests Exceptions

Indicates pairs of test/datasource that are temporarily exempted. See
create_test_exception for more information on the use of Test Exceptions.
Table 192. Tests Exceptions
Domain Based on Query Main Entity
internal - not Tests Exceptions not available
available
Run-Time Operator Default Value
Parameter
Period From >= NOW -12 MONTH
Period To <= NOW

Throughput

For each Access Period in the reporting period, each row lists the Period Start time,
the count of Server IP addresses, and the total number of accesses (Access Period
entities).

You can restrict the output of this report using the Server IP run time parameter,
which by default is set to % to select all IP addresses.
Table 193. Throughput
Domain Based on Query Main Entity
internal - not DB Server not available
available Throughput
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW
Server IP LIKE %

Throughput (graphical)

This report is a Distributed Label Line chart version of the tabular Throughput
report. It plots the total number of accesses over the reporting period, one data
point per Period Start time.

You can restrict the output of this report using the Server IP run time parameter,
which by default is set to % to select all IP addresses.



Table 194. Throughput (graphical)
Domain Based on Query Main Entity
Access DB Server Access Period
Throughput - Chart
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW
Server IP LIKE %

User Activity Audit Trail Reports

The User Activity Audit Trail menu selection displays two reports. In addition,
from each of those reports, a third report can be produced. See:
v User Activity Audit Trail
v System/Security Activities
v Detailed Guardium User Activity (Drill-Down)

User Activity Audit Trail

For the reporting period, for each User Name seen on a Guardium User Activity
Audit entity, each row displays the Guardium User Name, an Activity Type
Description (from the Guardium Activity Types entity), a Count of Modified Entity
values, the Host Name, and the total number of Guardium Activity Audits entities
for that row.

From any row of this report, the Detailed Guardium User Activity report is
available as a drill-down report.
Table 195. User Activity Audit Trail
Domain Based on Query Main Entity
Guardium User Activity Audit Guardium User Activity Audit
Activity Trail
Run-Time Operator Default Value
Parameter
Host Name LIKE %
Period From >= NOW -1 DAY
Period To <= NOW

System/Security Activities

For the reporting period, for each User Name seen on a Guardium User Activity
Audit entity, each row displays the Guardium User Name, an Activity Type
Description (from the Guardium Activity Types entity), a Count of Modified Entity
values, the Host Name, and the total number of Guardium Activity Audits entities
for that row.

From any row of this report, the Detailed Guardium User Activity report is
available as a drill-down report.



Table 196. System/Security Activities
Domain Based on Query Main Entity
Guardium User Activity Audit Guardium User Activity Audit
Activity Trail
Run-Time Operator Default Value
Parameter
Host Name LIKE %
Period From >= NOW -1 DAY
Period To <= NOW

Detailed Guardium User Activity (Drill-Down)

This report is not available from the menu, but can be opened for any row of the
User Activity Audit Trail report, or the System/Security Activities report. For the
selected row of the report, based on the User Name and Activity Type Description,
this report lists the following attribute values, all of which are from the Guardium
User Activity Audit entity, except for the Activity Type Description, which is from
the Guardium Activity Types entity: User Name, Timestamp, Modified Entity,
Object Description, All Values, and a count of Guardium User Activity Audits
entities for the row.
Table 197. Detailed Guardium User Activity (Drill-Down)
Domain Based on Query Main Entity
Guardium Detailed Guardium Guardium User Activity Audit
Activity User Activity
Run-Time Operator Default Value
Parameter
Activity Type value from calling report
Description
Period From >= NOW -1 DAY
Period To <= NOW
User Name value from calling report

Warning: Users should be aware that activities of the root user, and other sensitive
system accounts, are logged. Drilling down into the activity of these users may
show sensitive commands and passwords that have been entered on the command
line. Therefore, whenever possible, users should not enter sensitive command-line
information that they would not want to appear on this drill-down report.

User To-Do Lists

Displays for each Guardium audit process: a description, login name, action
required (review or approve), status, user who has signed or reviewed, and
execution date of the specified task.
Table 198. User To-Do Lists
Domain Based on Query Main Entity
internal - not Users To-do List not available
available

Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

User Comments - Sharable

Sharable user comments are all comments except for inspection engine, installed
policy, and audit process results comments. For each sharable user comment, this
report lists the date created, the type of item to which it applies (an alert, for
example), the user who created the comment, and the contents of the comment.

Note: Comments defined for inspection engines, installed policies, or audit process
results can be viewed from the individual definitions, but they cannot be displayed
on a report.
Table 199. User Comments - Sharable
Domain Based on Query Main Entity
Comments Comments Defined Comments
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 MONTH
Period To <= NOW

Unit Utilization Levels

The following default reports are provided on the Guardium Monitor tab, “Units Utilization”:
v Unit Utilization – For each unit, the maximum utilization level in the given
timeframe. A drill-down displays the details for a unit for all periods within the
timeframe of the report.
v Unit Utilization Distribution – Per unit, the percent of periods in the timeframe of
the report with utilization levels Low, Medium, and High.
v Utilization Thresholds – This predefined report displays all low and high
threshold values for all Utilization parameters. Parameters: Number of restarts;
Sniffer Memory; Percent Mysql Memory; Free Buffer Space; Analyzer Queue;
Logger Queue; Mysql Disk Usage; System CPU Load; System Var Disk Usage.
v Unit Utilization Daily Summary – Host Name; Period Start; Max Number Of
Requests; Max Number Of Requests Level; Number Of Requests % Increase; Max
System Var Disk Usage; Max System Var Disk Usage Level; System Var Disk
Usage % Increase; Max Mysql Disk Usage; Max Mysql Disk Usage Level; Mysql
Disk Usage % Increase; Max Overall Utilization Level.
Table 200. Unit Utilization Levels
Domain Based on Query Main Entity
Internal - not Unit Utilization Unit Utilization Levels
available Distribution
Run-Time Operator Default Value
Parameter

Period From >= NOW -24 HOUR
Period To <= NOW

Values Changed
For the reporting period, this report provides detailed information about
monitored value changes. All attribute values displayed are from the Monitor
Values entity. The query this report is based upon has a non-standard sorting
sequence, as follows:
v Server IP
v DB Type
v Audit Timestamp
v Audit Table Name
v Audit Owner

The query this report is based upon has a number of run-time parameters, all of
which use the LIKE operator and default to the value %, meaning all values will
be selected.

For each monitored value selected, a row of the report lists the Timestamp, Server
IP, DB Type, Service Name, Database Name, Audit Login Name, Audit Timestamp,
Audit Table Name, Audit Owner, Audit Action, Audit Old Value, Audit New
Value, SQL Text, Triggered ID, and a count of Change Columns entities for that
row.
Table 201. Values Changed
Domain Based on Query Main Entity
Value Changed Values Changed Changed Columns
Run-Time Operator Default Value
Parameter
Audit Action LIKE %
Audit Login LIKE %
Name
Audit Owner LIKE %
Audit Table LIKE %
Name
DB Type LIKE %
Period From >= NOW -1 DAY
Period To <= NOW
Server IP LIKE %

Predefined user Reports


This section provides a short description of all predefined reports on the default
user layout.



For a description of the reports on the default administrator layout, see
“Predefined admin Reports” on page 450.

Note: If data level security at the observed data level has been enabled (see Global
Profile settings), then audit process output will be filtered so users will see only
the information of their databases.

View > Overview Tab

View Installed Policy

The Currently-Installed Policy report displays information about the installed
policy. Click the installed policy link to display the policy rules in a separate
window.

Number of db per type

Displays the number of servers and clients for each monitored database type
(default time period is the current day).

Request Rate

By default, displays the request rate for the last two hours. This graphical report is
intended to display recent activity only. If you alter the From and To run-time
parameters to include a larger timeframe, you may receive a message indicating
that there is too much data. (Use a tabular report to display a larger time period.)

View > DB Activities Tab


Sessions By Server Type

For each server type (DB2, Informix, etc.), a row of this report displays the total
number of sessions that were open during the reporting period (by default, the last
three hours).

DML Execution on Sensitive Objects

For each SQL Verb from the DML Commands group that references an Object
Name in the Sensitive Objects group, this report displays a row for each Access
Period, Client IP, and Source Program, with a total count of objects referenced in
that row. Although the report title contains the word Executions, there is no
guarantee that all commands reported were actually executed.

Sensitive Objects Usage

For each object in the Sensitive Objects group, displays a row for each Client IP
and Source Program that referenced the object during the reporting period, and a
count of object references.

The Sensitive Objects group is empty at installation time. Someone at your
company must populate the group with the appropriate set of members.

Activity By Client IP



For each Client IP address seen during the reporting period, a row counts the
number of SQL Verbs, Object Names, and the total number of sessions.

Database Servers

For each Server IP address accessed during the reporting period, a row of the
report displays the Server Type, Database Name, Service Name, a count of source
programs accessing that server, and the total number of sessions for that row.

IMS Access (z/OS)

Use this to report access to IMS (z/OS).


Table 202. IMS Access (z/OS)
Domain Based on Query Main Entity
Access IMS Access Client Server
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW

IMS Object (z/OS)

Use this to report on IMS (z/OS) object access.


Table 203. IMS Object (z/OS)
Domain Based on Query Main Entity
Access IMS Object Object
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW

IMS Event (z/OS)

Use this to report on IMS (z/OS) events.


Table 204. IMS Event (z/OS)
Domain Based on Query Main Entity
Access IMS Event SQL
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW

IMS Data Access Details (z/OS)

Use this to report IMS (z/OS) data access details.



Table 205. IMS Data Access Details (z/OS)
Domain Based on Query Main Entity
Access IMS Data Access Full SQL
Details
Run-Time Operator Default Value
Parameter
Period From >= NOW -2 HOUR
Period To <= NOW
Client IP LIKE
DBUserName LIKE
IMS Name LIKE
ServerIP LIKE

Two VSAM predefined reports: VSAM Detailed Access and VSAM RLM.

View > Exceptions Tab


Policy Violations

For every policy rule violation logged during the reporting period, this report
provides the Timestamp from the Policy Rule Violation entity, Access Rule
Description, Client IP, Server IP, DB User Name, Full SQL String from the Policy
Rule Violation entity, Severity Description, and a count of violations for that row.
You cannot access the query that this report is based upon (Policy Violations List
with Severity), but you can clone the report.

Exceptions Distribution

Each wedge of the pie chart represents the proportion of exceptions for each
Exception Description attribute value (from the Exception Type entity) that was
logged during the reporting period.

As with any chart, you can drill down on the pie chart to display the tabular
version of the query on which the chart is based. Several exceptions reports are
accessible from this tabular report (or from drill-downs of it), but are not included
on any menu.

Exceptions Monitor

A count of exceptions logged during the reporting period. One datapoint is created
each time that you refresh the report on your portal.

Failed User Login Attempts

For each failed login attempt during the reporting period, lists the User Name,
Source Address, Destination Address, and Database Protocol Type for the server
the user was attempting to log into.

SQL Errors



For each SQL error during the reporting period, displays the Client IP address,
Server IP address, Server Type, database user name, database error text, and error
occurrence total for that record.

Exception Count

The total number of exceptions (Exception entities) logged during the reporting
period.

Terminated Users Logins

Lists all logins by database users who are members of the Terminated DB User
group. Each row lists a DB User Name, Client IP, Server IP, Server Type, Source
Program, last login time (the maximum value of the Session Start attribute), and
the count of sessions for the row.

The Terminated DB Users group is empty at installation time. It must be populated
by someone at your location. The query that this report is based upon (Terminated
Users Logins) cannot be accessed from any query builder.

Active Users Last Login

Last login recorded during the reporting period for each member of the Active
Users group. All members of the group will be listed, even if there were no logins
during the reporting period. This is unlike most other reports based on members
of a group. In the “normal” case, if no activity is found for a member, that member
is not listed.

Each row lists a DB User Name, Client IP, Server IP, Server Type, Source Program,
last login time (the maximum value of the Session Start attribute), and the count of
sessions for the row.

The Active Users group is empty at installation time. It must be populated by
someone at your location. The query that this report is based upon (Active Users
Last Logins) cannot be accessed from any query builder.

Active Users with no Activity

Listing of members in the Active Users group who have had no activity during the
reporting period. This report will be empty if all users have had activity during the
reporting period.

The Active Users group is pre-defined, but empty at installation time. It must be
populated by someone at your location. The query that this report is based upon
(Active Users with no Activity) cannot be accessed from any query builder.

Terminated Users Failed Login Attempts

Lists failed login attempts by database users who are members of the Terminated
DB User group. This report will be empty if there were no failed login attempts by
anyone in this group during the reporting period.

The Terminated DB Users group is pre-defined, but empty at installation time. It
must be populated by someone at your location. The query that this report is based
upon (Terminated Users Failed Login Attempts) cannot be accessed from any
query builder.



Two more predefined reports on Exception Tab:

Excessive Errors per period - Displays the number of errors per period; for
example, more than N errors in 60 minutes for the same Client IP address, Server
IP address, Server Type, and database user name.
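
The kind of per-period threshold this report expresses can be sketched as a grouped count (hypothetical error records and threshold, not the built-in query): count errors per hour for each client IP / server IP / server type / DB user combination and keep the combinations that exceed N.

    from collections import Counter
    from datetime import datetime

    N = 3  # hypothetical threshold: more than N errors in the same 60-minute period

    # Hypothetical error records: (timestamp, client_ip, server_ip, server_type, db_user)
    errors = [
        (datetime(2015, 5, 19, 10, 5),  "10.0.0.7", "10.0.0.20", "ORACLE", "scott"),
        (datetime(2015, 5, 19, 10, 15), "10.0.0.7", "10.0.0.20", "ORACLE", "scott"),
        (datetime(2015, 5, 19, 10, 40), "10.0.0.7", "10.0.0.20", "ORACLE", "scott"),
        (datetime(2015, 5, 19, 10, 55), "10.0.0.7", "10.0.0.20", "ORACLE", "scott"),
        (datetime(2015, 5, 19, 11, 10), "10.0.0.9", "10.0.0.21", "DB2", "appuser"),
    ]

    counts = Counter(
        (ts.replace(minute=0, second=0, microsecond=0), client, server, stype, user)
        for ts, client, server, stype, user in errors
    )
    excessive = {key: n for key, n in counts.items() if n > N}
    print(excessive)   # combinations with more than N errors in one hourly period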

Users inactive since - Shows the User and Last Session Start for all users that have
Access records and whose most recent Session Start time is earlier than 90 days
ago. (An inactive user is missed if the user never logged in, or if all of the user's
old logins have been purged.)

View > DB Administration Tab


Admin Users Login

For each DB User Name included in the Admin Users group, who had one or
more sessions during the reporting period, each row lists the Client IP, DB User
Name, Source Program, Session Start time, and Count of Sessions for that row.

DB Predefined Users Login

For each DB User Name included in the DB Predefined Users group, who had one
or more sessions during the reporting period, each row lists the DB User Name,
Client IP, Server IP, Source Program, Database Name, Service Name, and Count of
Sessions for that row.

Administrative Commands Usage

For each SQL Verb included in the Administrative Commands group that was seen
during the reporting period, this report lists the SQL Verb, Depth, Object Name,
and Client IP, and a count of objects referenced.

Administrative Objects Usage

For each Object Name included in the Administration Objects group that was seen
during the reporting period, each row lists the Object Name, Client IP, Server IP,
Service Name, Database Name, Source Program, DB User Name, and Count of
Objects for that row.

DML Execution on Administrative Objects

For each SQL Verb from the DML Commands group that references an Object
Name in the Administration Objects group, this report displays a row for the DB
User Name, Client IP, Server IP, Server Type, Service Name, Database Name, SQL
Verb, Object Name, and Count of Objects referenced in the row.

BACKUP Commands Execution

For each SQL Verb from the BACKUP Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

RESTORE Commands Execution



For each SQL Verb from the RESTORE Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

REVOKE Commands Execution

For each SQL Verb from the REVOKE Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

KILL Commands Execution

For each SQL Verb from the KILL Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

DBCC Commands Execution

For each SQL Verb from the DBCC Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, SQL statement, and Count of Objects referenced
in the row.

GRANT Commands Execution

For each SQL Verb from the GRANT Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

Two more predefined reports:

Privileged Account Utilization - Shows the User, Verb, and the Count of Periods
within which the Verb was performed by a User in the group Admin Users.

Privileged User Access of Business Objects - Shows the User, Verb, and Object
where the User is in Admin Users and the Verb was performed by that user on an
Object that is in a selected group of Business Objects.

View > Schema Changes Tab


CREATE Commands Execution

For each SQL Verb from the CREATE Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

DDL Commands



All DDL commands sent to the database. The report displays the client IP from
which the DDL was requested, the main SQL verb (a specific DDL command), and
the total objects accessed for that record.

For each SQL Verb from the DDL Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Server Type, SQL Verb, and
Count of Commands referenced in the row.

ALTER Commands Execution

All ALTER commands issued. The report displays the client IP from which the
DDL was requested, server IP address, service name, database user name, source
program, database name, object name, and main SQL verb (a specific DDL
command) for each combination of client IP/DDL command listed on that specific
line.

For each SQL Verb from the ALTER Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

DDL Distribution

This bar graph displays the distribution of commands seen from the DDL
Commands group during the reporting period. For each command seen, a single
bar represents the total number of objects affected.

DROP Commands Execution

For each SQL Verb from the DROP Commands group seen during the reporting
period, this report displays the Client IP, Server IP, Service Name, DB User Name,
Source Program, Database Name, Object Name, SQL Verb, and Count of Objects
referenced in the row.

View > Detailed Activities Tab


One User One IP

For each DB User Name for which session data was collected during the reporting
period, each line of this report displays the count of Client IP addresses from
which the user logged in, and a total number of sessions.

Client IP Activity Summary

This report displays reporting period activity from a single Client IP address,
which is specified as a run time parameter. Each row of the report displays the
Client IP, Source Program, SQL Verb, Depth (of sentence within the SQL
command), an Object Name, and a count of times that object was referenced for
that row.

Sessions List

This report lists all database sessions for the reporting period. For each session, the
report displays the session (entity) Timestamp, the Session Start (timestamp),
Server Type, Client IP, Server IP, Client Port, Server Port, Network Protocol, DB
Protocol, DB Protocol Version, DB User Name, Source Program, and Count of
Sessions for that row (which should always be 1).

As with most reports, drill-down reports are available. There are a number of
session reports that are accessible from this report, but are not included on any
menu. This includes the following reports, with the run time parameters for those
reports set by using values from the selected row of the report:
Table 206. Sessions List
Report Run-time Parameters
Sessions by Client IP: Server IP, Server Type
Sessions by Server IP: Server Type
Sessions by Source Program: Server Type, Server IP
Sessions by User: Server Type, Server IP
Sessions Details by Server: Server Type, Server IP

Commands List

This report lists all SQL Verbs seen during the reporting period. At the outermost
level, commands are grouped by the Period Start time from the Access Period
entity, which is usually one hour, on the hour. Your Guardium administrator can
modify the access period length by changing the logging granularity, which is one
hour by default. For each Access Period in the reporting period, each row lists the
access Period Start time, a SQL Verb, Depth of the verb in the SQL statement,
Parent (a pointer to the owning verb), and a count of occurrences for the row.
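
Grouping by Period Start amounts to truncating each timestamp to the start of its access period; a minimal sketch with hypothetical timestamps and the default one-hour granularity:

    from collections import Counter
    from datetime import datetime

    def period_start(ts):
        """Truncate a timestamp to the top of the hour (one-hour access period)."""
        return ts.replace(minute=0, second=0, microsecond=0)

    # Hypothetical command timestamps
    seen = [datetime(2015, 5, 19, 10, 5), datetime(2015, 5, 19, 10, 50),
            datetime(2015, 5, 19, 11, 20)]
    print(Counter(period_start(t) for t in seen))   # occurrences per access period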

Objects List

This report lists all objects seen during the reporting period. At the outermost
level, objects are grouped by the Period Start time from the Access Period entity,
which is usually one hour, on the hour. Your Guardium administrator can modify
the access period length by changing the logging granularity, which is one hour by
default. For each Access Period in the reporting period, each row lists the access
Period Start time, an Object Name, and the count of occurrences for that row.

Object Activity Summary

This report displays reporting period activity for a single Object Name, which is
specified as a run time parameter. Each row of the report displays the Client IP,
Source Program, SQL Verb, Depth (of sentence within the SQL command), an
Object Name, and a count of times that object was referenced for that row.

Archive Candidates

This report lists objects (database tables or stored procedures, for example) that
have not been accessed for an extended period of time. You cannot access the
query this report is based upon.

Windows File Share Activity



This report lists all Windows File Share SQL activity seen during the reporting
period. At the outermost level, the SQL commands are grouped by the Period Start
time from the Access Period entity, which is usually one hour, on the hour. Your
Guardium administrator can modify the access period length by changing the
logging granularity, which is one hour by default. For each Access Period in the
reporting period, each row lists the access Period Start time, the Service Name,
Client IP, Server IP, Source Program, SQL (from the SQL entity), and a count of
occurrences for the row. You cannot access the query this report is based upon, but
you can clone the report.

Hourly Access Details

This report produces a highly detailed listing for each DB User Name seen in the
reporting period, which is one hour by default for this report. Each row of the
report lists a DB User Name, Client IP, Server IP, Period Start, Source Program,
SQL (from the SQL entity), and a count of occurrences during the access period.

Full SQL By DB User Name

This report displays reporting period Full SQL attribute values that have been
logged for a single DB User Name, which is specified as a run time parameter.
Each row of the report displays the Full SQL ID, Timestamp (of the Full SQL
entity), Client IP, DB User Name, Session Start, Source Program, Full SQL, and a
count of occurrences for the row.

Full SQL By Client IP

This report displays reporting period Full SQL attribute values that have been
logged for a single Client IP, which is specified as a run time parameter. Each row
of the report displays the Full SQL ID, Timestamp (of the Full SQL entity), Client
IP, DB User Name, Session Start, Source Program, Full SQL, and a count of
occurrences for the row.

Flat LOG List

Lists flat log processing tasks.

Classification Process Results

Lists classification process tasks.

View > Performance Tab



There are five predefined reports that use monitored data to show object names.
These reports all start with the prefix DW (Data Warehouse). See the topic, How to
report on dormant tables/columns, for further information on how to use these
predefined reports.

DW Dormant Objects

Shows all the members of one group that are not members in a second group, with
a focus on dormant tables. For example, this report shows objects that are in the all
objects group, but have not been used in a Select.



DW Dormant Objects/Fields

Shows all the members of one group that are not members in a second group, with
a focus on dormant tables and columns. In this instance, groups are a 2-tuple type
(members that are a composite of a pair of value attributes). For example, this
report shows objects that are in the all objects and fields group, but have not been
used in a Select.
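
Both DW Dormant reports reduce to a set difference between two groups; a small sketch with hypothetical group members (the second variant uses object/field pairs, as described above):

    # Hypothetical group members
    all_objects = {"EMPLOYEES", "SALARIES", "AUDIT_LOG"}
    selected_objects = {"EMPLOYEES"}                # objects already seen in a SELECT
    print(all_objects - selected_objects)           # dormant objects

    # 2-tuple variant: (object, field) pairs
    all_fields = {("EMPLOYEES", "SSN"), ("EMPLOYEES", "NAME"), ("SALARIES", "AMOUNT")}
    selected_fields = {("EMPLOYEES", "NAME")}
    print(all_fields - selected_fields)             # dormant object/field pairs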

DW EXECUTE Object Access

Use this report to populate the group called DW EXECUTE Objects with a set of
stored procedure names that are being executed. Then use indirect mapping in
Group Builder/Auto Generate Calling Prox to generate all the objects being used
within these procedures.

DW SELECT Object Access

This report shows all object names that have been accessed through a SELECT
statement.

DW SELECT Object-Field Access

This report shows all object and field names that have been accessed through a
SELECT statement.

Long Running Queries

For the reporting period, this report lists the longest running queries, with the
longest average execution time first. For each query, lists the Client IP, Server IP,
SQL, Period Start (from the Access Period entity), Average Execution Time, and the
count of occurrences for this row. You cannot access the query this report is based
upon.

Throughput

This report produces a count of all Server IPs seen, and total accesses, during the
reporting period. At the outermost level, accesses are grouped by the Period Start
time from the Access Period entity, which is usually one hour, on the hour. Your
Guardium administrator can modify the access period length by changing the
logging granularity, which is one hour by default. Each row lists the Period Start
time, the count of Server IPs seen, and a total count of accesses for the row.

You can restrict the output of this report using the Server IP run time parameter,
which by default is set to “%” to select all IP addresses.

Throughput (Graphical)

This report is a Distributed Label Line chart version of the tabular Throughput
report, plotting the total number of accesses over the reporting period, one data
point per Period Start time.

You can restrict the output of this report using the Server IP run time parameter,
which by default is set to “%” to select all IP addresses.



View > Access Map Tab

Number of db per type

For the reporting period, this report displays a double bar for each type of
database server for which traffic was seen. Each double bar is labeled with the
server type. For each server type, the first bar represents the number of Client IPs,
and the second bar represents the total number of Server IPs.

DB Server List

This report lists all database servers seen during the reporting period. It displays
the Server Type, Server IP, Server OS, Server Host Name, Server Description, and
the total count of Client/Server entities for that row (the total number of clients).

View > DB Entitlements Tab


See the Database Entitlement Reports topic for a description of Database
Entitlements reports.

Monitor/Audit > Privacy Sets Tab


Number of Active Privacy Set Tasks

Number of active Guardium audit processes that contain one or more privacy set
tasks. When central management is used, this report contains data on the Central
Manager only, and is empty on all managed units (the standard message, No data
found for requested query, displays). This report has non-standard run time
parameters: there are no from and to date parameters, so all audit processes
containing one or more privacy set tasks will be reported. You can clone the query
that this report is based upon (Number of Active Privacy Set Processes), but you
cannot clone or regenerate the default report. The cloned query will have all of the
standard run-time parameters (including the from and to dates).

Discover > Classification Tab


Guardium Job Queue

Displays the Guardium Job Queue. For each job, lists the Process Run ID, Process
Type, Status, Cls/Asmt Process Id, Report Result Id, Cls/Asmt Description, Audit
Task Description, Queue Time, Start Time, End Time, and Data Sources.
Table 207. Guardium Job Queue
Domain Based on Query Main Entity
internal - not Guardium Job not available
available Queue
Run-Time Operator Default Value
Parameter
Job Description LIKE %

Period From >= NOW -1 DAY
Period To <= NOW

Discover > DB Discovery Tab


Databases Discovered

For the reporting period, for each Discovered Port entity where the DB Type
attribute value is NOT LIKE Unknown, this report lists the Probe Timestamp,
Server IP, Server Host Name, DB Type, Port, Port Type, and count of Discovered
Ports for the row.
Table 208. Databases Discovered
Domain Based on Query Main Entity
Auto-discovery Databases Discovered Port
Discovered
Run-Time Operator Default Value
Parameter
Period From >= NOW -1 DAY
Period To <= NOW

Data Sources

This report appears on the default layout for both administrators and users. See
Data Sources on the Predefined Reports - Common page.

Data Source Version History

This report appears on the default layout for both administrators and users. See
Data Source Version History on the Predefined Reports - Common page.

Assess/Harden > Vulnerability Assessment Tab

Guardium Job Queue

Displays the Guardium Job Queue. For each job, lists the Process Run ID, Process
Type, Status, Cls/Asmt Process Id, Report Result Id, Cls/Asmt Description, Audit
Task Description, Queue Time, Start Time, End Time, and Data Sources.

Assess/Harden > Change Reports Tab


See CAS Reporting in the Assess/Harden help book.

Comply Tab
Outstanding Audit Process Reviews

For each Guardium user Login Name, this report lists the number and type of
outstanding Guardium audit processes. An outstanding audit process has a Status
attribute value (in the Task Results To-Do-List entity) other than Reviewed or
Signed. This report has non-standard run time parameters: there are no from and
to dates, which means that all outstanding task results will be reported. You can
clone the query that this report is based upon (it has the same name), but you
cannot clone or regenerate the default report. The cloned query will have all of the
standard run-time parameters (including the from and to dates).

Number of Active Audit Processes

The number of active Guardium audit processes. When central management is
used, this report contains data on the Central Manager only, and is empty on all
managed units (the standard message, No data found for requested query,
displays). This report has non-standard run time parameters: there are no from and
to date parameters, so all active audit processes will be reported. You can clone the
query that this report is based upon (Number of Active Processes), but you cannot
clone or regenerate the default report. The cloned query will have all of the
standard run-time parameters (including the from and to dates).

Protect > Security Policies Tab

View Installed Policy

In the Currently Installed Policy panel, this special report displays the installed
policy name, the number of rules it contains, and the number of baseline rules. You
cannot access the query this report is based upon.

Policy Violation Count

For the reporting period, this report displays the number of policy violations
logged.

Protect > Correlation Alerts Tab


Logged Threshold Alerts

This report displays a bar representing the total number of alerts logged during the
reporting period, for each type of threshold alert logged, based on the Alert
Description attribute of the Threshold Alert Details entity.

Logged R/T Alerts

This report displays a bar representing the total number of alerts logged during the
reporting period, for each type of real-time alert logged, based on the Access Rule
Description attribute of the Policy Rule Violation entity.

Protect > Incident Management Tab


Violations/Incidents

See the Incident Management topic.



Capture/Replay > Data Staging
Staged Data

Shows, for a selected replay configuration, the staged SQL. By default, the value of
the config ID is empty; modify the runtime parameter through the Customize
option and enter the config ID that you want to see.
Table 209. Staged Data
Domain Based on Query Main Entity
Replay Statistics Replay Results Statistics
Run-Time Operator Default Value
Parameter
Client IP LIKE %
Configuration ID
DB Name LIKE %
DB User LIKE %
Full SQL LIKE %
Remote Data LIKE none
Source
Server IP LIKE %
Aliases Radio none
Source Program LIKE %

Replay Statistics

This report shows Replay Statistics for Execution Start/End Date; Configuration
Name; Schedule Setup Name; Job Status; Statistic Description; Session ID;
Successful Queries; Failed Queries; Total Queries; Type; Active/Waiting/Completed
Tasks.
Table 210. Replay Statistics
Domain Based on Query Main Entity
Replay Statistics Replay Results Statistics
Run-Time Operator Default Value
Parameter
Period From <= NOW - 1MONTH
Period To >= NOW
Data Source LIKE none
Session ID >= 0
(greater than)
Session ID (less <= 99999999999
than)
Aliases Radio none
Type LIKE %



Capture/Replay > Capture-Capture List
A listing of all possible combinations of the captures that have been done (staged
replay configurations), for the purpose of examining two different captures side by
side.
Table 211. Capture/Replay > Capture-Capture List
Domain Based on Query Main Entity
Capture-Capture List
Run-Time Operator Default Value
Parameter
Data Set (from) LIKE %
Data Set (to) LIKE %
Period From >= NOW -1 MONTH
Period To <= NOW
Data Source none
Aliases Radio none

Capture/Replay > Capture-Replay List

A listing of all the Captures that have been configured and have a Replay
associated with them. This listing is used for the purpose of examining the
differences in captured SQL/replayed SQL on a target database system. If a
capture configuration has not been replayed, then it will not appear in the list.
Table 212. Capture/Replay > Capture-Replay List
Domain Based on Query Main Entity
Capture-Replay List
Run-Time Operator Default Value
Parameter
Data Set (from) LIKE %
Data Set (to) LIKE %
Period From >= NOW -1 MONTH
Period To <= NOW
Data Source none
Aliases Radio none

Capture/Replay > Replay-Replay list

A listing of all the Replays that have been performed against the same capture
configuration.
Table 213. Capture/Replay > Replay-Replay list
Domain Based on Query Main Entity
Replay-Replay List
Run-Time Operator Default Value
Parameter
Data Set LIKE %

Period From >= NOW -1 MONTH
Period To <= NOW
Remote Data none
Source
Aliases Radio none

Capture/Replay > Workload Comparison

Capture/Replay > Workload Comparison > Data Staging

Shows the Full SQL, the staging data that was used and that was executed during
the replay.

Capture/Replay > Workload Comparison > Summary Comparison

Summary Comparison provides a high-level look into the differences in the capture
and replay, consisting of:

Compare Avg Execution Time - how the execution time differed between capture
and replay

Compare SQL Exceptions - how the number of SQL exceptions differed between
capture and replay

Compare Rows Retrieved - how the number of rows returned differed between
capture and replay

Compare SQL Failures - how many SQL failures there were between capture and
replay

Capture/Replay > Workload Comparison > Workload Aggregate Match

Workload Aggregate Match - after invoking queue_replay_agg_match_by_id or
queue_replay_object_agg_match_by_id from the Capture-Capture List,
Capture-Replay List, or Replay-Replay List, aggregates by SQL the statistics that
allow the user to compare the differences between the selected workloads.
Whether the two selected workloads are of the same type or not determines which
API to use: for databases of the same type, use queue_replay_agg_match_by_id;
for databases of differing types, use queue_replay_object_agg_match_by_id.
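
The choice between the two APIs reduces to whether the two workloads share a database type. The helper below is only an illustration of that rule; the function names come from this section, but the helper itself and the sample database types are hypothetical.

    def aggregate_match_api(db_type_a, db_type_b):
        """Pick the aggregation API name, following the rule described above."""
        if db_type_a == db_type_b:
            return "queue_replay_agg_match_by_id"        # same database type
        return "queue_replay_object_agg_match_by_id"     # differing database types

    print(aggregate_match_api("ORACLE", "ORACLE"))
    print(aggregate_match_api("ORACLE", "DB2"))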

Capture/Replay > Workload Comparison > Workload Exceptions

Workload Exceptions - shows the SQL that generated exceptions during replay

Capture/Replay > Workload Comparison > Workload Match

Workload Match - after invoking queue_replay_match_by_id, provides a
side-by-side comparison of each SQL statement and a statistical comparison
between the two selected workloads. The queue_replay_match_by_id API also
provides the ability to use defined groups that can aid in the inclusion or exclusion
of database objects.



Two predefined groups, Replay-Exclude from Compare and Replay-Include in
Compare, are provided; go to the Group Builder to see which objects have been
defined or to modify these groups.

Capture/Replay > Capture-Capture, Capture-Replay, Replay-Replay List drill-down reports

From the Capture-Capture, Capture-Replay, and Replay-Replay Lists, the following
reports are available by double clicking on the listing detail:
v Compare Avg Execution Time - lists the average execution time between capture
and replay
v Compare Rows Retrieved - lists the number of rows retrieved between capture
and replay
v Compare SQL Execution - lists the execution counts between capture and replay
v Compare SQL Failures - lists the failure count between capture and replay
v Replay Exception From Drill Down - lists the exceptions encountered from the
capture
v Replay Exception To Drill Down - lists the exceptions encountered during replay
v SQL Workload Match Drill Down - after invoking queue_replay_match_by_id,
lists a side-by-side comparison of various SQL statistics between capture and
replay
v SQL Workload Summary Drill Down - after invoking one of the aggregation
APIs (from the invoke icon), allows the user to compare the differences between
the two captured workloads, providing insight into how SQL ran between the
two captures.

Optim-Audit Predefined Reports tab

A user must have the Optim-audit role assigned to see these reports.

See the help topic, Optim-Audit Interface, for further information.

These reports are:


v Optim - Failed Request Summary per Optim Server
v Optim - Request Execution per User
v Optim - Request Execution per Optim Server
v Optim - Table Usage Details
v Optim - Request Log
v Optim - Table Usage Summary
v Optim - Request Summary

Predefined Reports Common


This section provides a short description of all predefined reports on both the
default user and default administrator layouts.

The common reports are:


v Data Source Version History
v Data Sources



Current Status Monitor
The Current Status Monitor graphical report displays the current state of the
Guardium appliance: how many packets per second and requests per second it is
processing, how much disk space and memory is being used, and so forth. Each
field is described in the following table.

The box displays the output of the Linux VMSTAT command. If you are familiar
with that command, these statistics should be familiar to you.
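For reference, a typical vmstat report looks like the following sketch (the values shown
are invented for illustration, and the exact columns depend on the vmstat version):

$ vmstat 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 812340  40200 622180    0    0     4    11  150  310  5  2 92  1  0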
Table 214. Current Status Monitor
Field Description
procs The number of processes:

r: Waiting for run time.

b: In uninterruptible sleep (blocked, waiting for another event).


memory Memory use (kB):

swpd: Amount of virtual memory used.

free: Amount of idle memory.

buff: Amount used as buffers.

cache: Amount reserved for cache.


swap Amount of memory (kB):

si: Swapped in from disk.

so: Swapped out to disk.


io Input/Output blocks (kB/s):

bi: Blocks received from a block device

bo: Blocks sent to a block device


system System:

in: Interrupts per second, including the clock

cs: Context switches per second


cpu Percentage of total CPU time used by:

us: Time spent running non-kernel code

sy: Time spent running kernel code

id: Idle time (not including waiting for IO)

wa: Time spent waiting for IO

st: Time stolen from a virtual machine


(n)pps / (m)rps In the arrow next to the Analysis Engine, two averages are calculated
for the last five seconds: n is the average number of network packets
per second, and m is the average number of network database
requests per second.

Analysis Engine For the Analysis Engine, the first line lists the total number of
messages queued for processing (q), followed by the number of
(q-d) ------ (p) messages dropped (d) because the buffer was in danger of becoming
filled. The second line lists the total number of messages processed
(p). The number processed will be reset to zero whenever the
inspection engine is restarted.
Server Type For each server type, the number of messages awaiting processing (q)
and the number of messages processed (p) are listed.
(q) ---- (p)
Free Disk Space The number of bytes free.
DB n% Full The percentage of the database space allocation that is used.
Files/Other The Files/Other portion of Current Status Monitor represents the data
accumulated in nondb-sql logger.

The nondb-sql logger logs close-session events arriving at the Analyzer
from “ignored” sessions that have been internally closed by the
Analyzer (INACTIVE_FLAG=-1). The Analyzer can close
connections by timeout (if a session has been inactive for a long time). If
close-session data arrives at the Analyzer from an “ignored” session that
has been closed by timeout, it is recorded in the nondb-sql-logger
section.

The Analyzer never records data directly to the database. This section also
represents the number of DB requests (such as inserts into
GDM_SECURE_PARAMS) sent by the Analyzer to the Logger, as well as
other supported protocols such as FTP.

Data Source Version History

Default Layout Location


v admin: available as drill-down from the Data Sources report
v user: Discover > DB Discovery

Data Sources

Lists all datasources defined: Data-Source Type, Data-Source Name, Data-Source
Description, Host, Port, Service Name, User Name, Database Name, Last Connect,
Shared, and Connection Properties.

You can restrict the output of this report using the Data Source Name run time
parameter, which by default is set to “%” to select all datasources.
Table 215. Data Sources
Domain: internal - not available
Based on Query: Data-Sources
Main Entity: not available

Run-Time Parameter    Operator    Default Value
Data Source Name      LIKE        %
Period From           >=          NOW -1 DAY
Period To             <=          NOW
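For example, to list only the datasources whose names contain "oracle" (a hypothetical
naming convention), the run-time parameters could be set as follows:

Data Source Name:  %oracle%
Period From:       NOW -1 DAY
Period To:         NOW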

Predefined Audit Processes

There is one predefined audit process named Appliance Monitoring, which
contains the reports listed in this section. This audit process is inactive by default. The
administrator can activate and schedule it according to his or her needs.

Note: When scheduling this audit process, check that the FROM/TO dates for
each report make sense for the process interval being defined (for example, it
doesn’t make sense to have a reporting period of one day if the audit process runs
only once a week - you will miss six days of activity).

The Appliance Monitoring audit process contains the following reports:


v Failed Logins to Guardium
v Active Guardium Users
v Aggregation/Archive Errors
v Policy Related Changes
v Inspection Engines and S-TAP Changes
v Data Source Changes
v CAS Instance Configuration Changes
v CAS Instances
v CAS Templates
v Scheduled Jobs Exceptions

How to build a report and customize parameters


Create and generate a new report, place the report on a pane so it can be viewed
again and again, and customize runtime and presentation parameters.

About this task


A report defines how the data collected by a query is presented. The default report
is a tabular report that reflects the structure of the query, with each attribute
displayed in a separate column. All runtime parameters (query from/query to
times) and presentation components of a tabular report (the column headings, for
example) can be customized using the Report Builder.

For additional information, see Reports, Portal Customization, and Queries.

A summary of steps for this task:


1. Create a default tabular report.
2. Generate the tabular report.
3. Place the report on a pane.
4. Customize the run-time parameters.
5. Customize the report presentation (column descriptions, report attributes,
background color)
6. View the report.



Procedure
1. Create the report
a. Do one of the following to open the Report Finder:
v Users with the admin role: Select Tools > Report Building > Report
Builder.
v All Others: Select Monitor/Audit > Build Reports > Report Builder.
b. Click the New button to open the Create Report panel.
c. From the Query list, select a query value to be used by the report (for
example, Guardium Logins)
d. Enter a unique name for the report in the Report Title field.
e. To create a default tabular report (default column headings, runtime
parameter prompts, etc.), click the Generate Tabular button.

2. Add to the My New Reports tab


a. After generating a Tabular Report (see previous section on Creating a
Report), add this tabular report to the My New Reports tab, by clicking
Add to My New Reports. (If no tabular report portal has been generated
yet for the query, you will need to click the Generate Tabular button first.)

Note:
The default user portal contains a My New Reports pane, but the default
admin portal does not. If your portal does not contain a My New Reports
pane, you will receive an error message.
To add a tab to the outer-most row of tabs, click the Customize link.
When creating a My New Reports pane, be sure to:
v Use the exact spelling shown.
v Define the pane with a Menu pane layout.
The procedure for adding a My New Reports pane as an admin user
is: (1) Click the Add Pane button, type My New Reports, and then click
Apply. (2) Highlight My New Reports and click the Edit Layout
button. (3) Specify the menu pane layout. (4) Click Save.
To see meaningful data in the tabular report within the My New
Reports pane, click the customize button on the same line as the title of
the tabular report to access the run-time parameters (for example,
change the QUERY FROM DATE and QUERY TO DATE).

3. Place a report on a different pane


a. Once a report has been saved and a portlet generated, follow this
procedure to place the report on a pane other than My New
Reports. All run-time parameters are empty in a report definition. Therefore,
after placing a report on a pane, you need to set run-time parameters for
the report before it will be populated with data (see Set Report Parameters
in the next section). To place a report on a pane:
b. Do one of the following depending on admin user or non-admin user:
v As a user with the admin role, select the Report Title from the Report
Finder screen.
v As non-admin user, select Monitor/Audit > Build Reports, and click the
Place report on portal page button.
c. The Customize Pane panel displays. Click the Add Pane button. You are
prompted to supply a name. Enter a new name, and click the Apply button.
The new pane is added to the list of panes. To place a portlet on the new
pane, click its name to open the Customize Pane panel for this pane.
d. Click the Add Portlet button to open a list of all portlets available.
Optionally use the Filter portlets by category button to limit the number of
portlets displayed. If there are more portlets than can be displayed on a
single page, scroll through the list of portlets using Previous and Next
buttons.
e. Mark the checkbox next to the portlet to be included on the pane, and
click the Apply button. This action places the portlet in the default location
for the pane being customized.
f. Click the Save and Apply button, which returns to the list of panes in the
Customize Pane panel.
g. Click the Apply button to save the modified pane. The new pane name will
appear as a new tab.
h. Click on the new tab to open the new pane. This pane contains only the
portlet that was just added. As mentioned earlier, when placing a portlet on
a page, all runtime parameters are empty, including the date range for the
report. To run the report, you will need to set the date range and possibly
set other runtime parameters. The report looks like the following example:

4. Set Report Parameters


a. There are two types of report parameters:
v A run-time parameter provides a value to be used in a query condition.
There is a default set of run-time parameters for all queries (see the
proceeding table), and any number of run-time parameters can be defined
in the query used by the report (see Query Conditions Overview).
v A presentation parameter describes a physical characteristic of the report; for
example whether a graphical report includes a legend or labels, or what
colors to use for an element. All presentation parameters are provided
with initial settings when you define a report.
To set report parameters:
b. Click the Customize button on the report tab.
c. In the Customize Portlet panel (see previous Customize Portlet screen),
enter runtime and presentation parameters in the boxes provided, as
necessary for the task to be performed.
d. Click the Update button.
The following runtime parameters are present for all reports.
Table 216. Standard Run-time Parameters
Run-time Parameter Default and Description
QUERY_FROM_DATE None for a new report, varies for default reports. The starting
date for the report is always required.
QUERY_TO_DATE None for a new report, varies for default reports, though the
default is almost always NOW. This is the ending date for the
report, and is always required.
REMOTE_SOURCE In a Central Manager environment, you can run a report on a
managed unit by selecting that Guardium appliance from the
Remote Data Source list.
SHOW_ALIASES None (meaning the system-wide default will be used). Select the
On button to always display aliases, or the Off button to never
display aliases. Select the default button to revert to the
system-wide default (controlled by the administrator) after either
the On or Off button has been used.
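For example, a report covering the last seven days with aliases always displayed might
use the following settings (assuming the same relative-date syntax shown in the
predefined report tables earlier in this chapter):

QUERY_FROM_DATE:  NOW -7 DAY
QUERY_TO_DATE:    NOW
SHOW_ALIASES:     On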

5. Customize the report presentation
a. Select the report in the Report Finder and click Search. The application
takes you to Report Search results. Click Modify to open up additional
configuration menus.
b. In the Report Column Descriptions panel,
v Optionally override the Report Title. The default is from the report
definition. You can modify the title on most subsequent panels.



v Optionally override any Column Description (the column headings).
c. Click the Next button to open the Report Parameter Description panel, and
optionally override any parameter description.
d. Click the Next button to open the Report Attributes panel:
v Optionally enter a Refresh Rate (in seconds) for the report. The default
value for refresh rate is zero.
v Mark the Tabular button.
v Click Next.
e. In the Report Color Mapping pane for a tabular report, set the background
fill color for each column to have special coloring, as follows:
v Select a column from the Column list.
v Select an operator from the Operator list. The choices are =, IN GROUP,
or NOT IN GROUP for tabular reports and >= and <= for graphical (for
example, bar chart) reports.
v In the Value column, enter a specific value (a number or a user name), or
if IN GROUP or NOT IN GROUP has been selected as the operator, select
the group from the list. Without a value chosen, there will be no results in
the report table.
v Click on the Color box and select the color you want from the pop-up
Color Picker window. It is recommended to choose light colors so text
can be seen through the color chosen.
v Click on the Add button.
Note: Colors will be visible when viewed in My New Reports, when
viewed as a PDF, and when the report is used in the Audit process.
f. Click the Next button to open the Submit Report panel, and click on the
Save button.

6. View the report
To view a report, select the tab or menu entry for the report.

How to ask questions of the data


Use the Query Builder to define and modify questions about the collected data.

There is a distinction between queries and reports:


v A query describes a set of information to be obtained from the collected data; for
example, find all clients updating a specific database during weekend hours, or
find which unauthorized users have attempted to access sensitive data (Social
Security numbers or credit card numbers).
v A report describes how the data returned by a query is presented.

There is a separate Query Builder for each domain, and it is opened from the
Query Finder for that domain (see Open the Query Finder section). By default, the
Query Builder panel name is Custom Reporting for a user portal, but for admin
role users, the Query Builder panel takes its name from the menu selection that is
used to open the query builder (Access Tracking, Exceptions Tracking, Alert
Tracking, etc).

The query builder contains three panes:


v The Entity List pane identifies all entities and attributes that are contained in the
domain. Entities are represented as folders, and attributes are the items within.
Click an entity folder to display its attributes, or click again to hide them. For a
description of all entities and attributes, see Entities and Attributes in the
Domains, Entities, and Attributes help topic.
v The Query Fields pane lists all fields to be accessed, what is to be displayed for
that field (its value, a count, minimum, maximum, or average), and the sort
order. For more information about using this pane, see the Query Fields
Overview.
v The Query Conditions pane specifies any conditions for selecting the fields that
are listed (for example, “where VERB = UPDATE”). For more information about
using this pane, see the Query Conditions Overview in the Queries help topic.



For complete information, see the Queries help topic.

Open the Query Finder

There is a separate Query Builder for each reporting domain, so it is important to
open the correct Query Builder. Otherwise, the information you want will not be
displayed. All domains are described in the Domains topic of the Domains,
Entities, and Attributes Appendix.

After determining which domain to use, do one of the following to open the Query
Finder for that domain:
v Users with the admin role: Select Tools > Report Building, and then select one of
the Query Builders from the menu. The Query Builders all end with the word
Tracking (Access Tracking, for example).
v All Others: Select Monitor/Audit > Build Reports, and select one of the Query
Builder buttons from the panel.

Either one of these options opens the Query Finder for the selected domain.

Search for a Query

To locate and view a query definition in the Query Builder, there are several
options:
1. Use the Query Finder - see Use the Query Finder.
2. From a report portlet that is based on the query, click Edit this Report's Query
in the toolbar of the report.
3. If the query is used in a report on your portal, and you know some portion of
the report name, use the Portal Search tool, and then open the query.
4. From the Customize Portlet panel for a report that is based on the query, click
Edit this Query next to the query name on the panel.

Use the Query finder


1. Open the Query Finder for the appropriate domain (see Open the Query
Finder).
2. Optional. If you know the Main Entity for the query, select it from the list.
3. Click Search.
If there is only one query that is defined for the selected Main Entity, that
query opens immediately in the query definition panel.
If there are multiple queries that are defined for the selected Main Entity, or if
no Main Entity was selected, a list of queries displays in the Query List panel.
If a Main Entity was selected for which no queries have been defined, you will
be informed.
4. Do one of the following:
To open the Query Builder panel for one of the listed queries, click on it. To
define a new query, click New.

Create a Query
1. Open the Query Finder for the appropriate domain (see Open the Query
Finder).
2. Click New to open the New Query – Overall Details panel.
3. Type a unique query name in the Query Name box. Do not include apostrophe
characters in the query name.
4. Select the main entity for the query from the Main Entity list. The main entity
controls the level of detail that is available for the query, and it cannot be
changed. Basically, each row of data that is returned by the query represents a
unique instance of the main entity, and a count of occurrences for that instance.
5. Click Next. The new query opens in the Query Builder panel. To complete the
definition, see next section on Query Fields.

Query Fields Overview

The Query Fields pane basically lists the columns of data to be returned by the
query.

There are two ways to add a field to the Query Fields pane:
v Pop-Up Menu Method:
1. Click the field to be added.
2. Select Add Field from the pop-up menu.
v Drag-and-Drop Method:
1. Click the icon of the field (not on the field name).
2. Drag the icon to the Query Fields list and release it.

Regardless of the method that is used, the field is added to the end of the list.

Move or Remove Fields in the Query Fields Pane

To move a field in the Query Fields pane:


1. Mark the checkbox for the field.
2. Use the following buttons to move the field to the desired location:
Click Up to move the field up one row.
Click Down to move the field down one row.



Modify a Query
1. Open the Query Finder for the appropriate domain (see Open the Query
Finder).
2. Use the Query Finder to open the query to be modified.
3. Refer to the Query Builder Overview topic to modify any component of the
query definition.

Generate a Tabular Report Quickly


Once a query has been defined, there are several options for adding a tabular
report that is based on that query to an existing menu layout, quickly. These
options apply only for tabular reports, and those reports can be added only to
menu layouts.

1. Open the Query Finder for the appropriate domain (see Open the Query
Finder).
2. Use the Query Finder to open the query to use for the report.
3. Do one of the following:
To add a tabular report to the end of an existing menu layout, first click
Generate Tabular and then the Add to Pane buttons on the panel. Then,
navigate to the desired menu layout, and click it. To redo an existing tabular
report, click Regenerate.
To add a tabular report to the My New Reports tab, click Add to My New
Reports in the panel. (If no tabular report portal has been generated yet for the
query, you need to click the Generate Tabular button first.)
Note: The default user portal contains a My New Reports pane, but the default
admin portal does not. If your portal does not contain a My New Reports pane,
you will receive an error message. If it does not exist, you can create this pane
anywhere on your portal (see Customize the Portal). If you create a My New
Reports pane, be sure to:
v Use the exact spelling shown.
v Define the pane with a Menu pane layout.
In order to see meaningful data in the tabular report within My New Reports
pane, click Customize next to the title of the tabular report in order to access
the run-time parameters (for example, change the query from and to dates).

How to create custom reports from stored data


View stored data (for example, how many times the SQL command INSERT was
used in a database session).

About this task


In addition to prepackaged report templates, a graphical drag-and-drop interface is
provided for easily building new reports or modifying existing reports. This
interface allows users without database expertise to quickly define and view
custom reports to address custom compliance, analysis, or forensic requirements.



Steps
1. Begin a query definition.
2. Name the query and select the main entity.
3. Identify fields to be listed.
4. Add a query condition.
5. Generate the report.
6. View the results.

Domains, Entities, and Attributes

Data is collected and stored in domains. There are three types:


v Standard Domains, for example: Access (all monitored SQL requests); Exceptions
(from database servers or appliance components); Alerts; Policy Violations; etc.
v Administrator Domains, for example: Aggregation/Archive (archive, backup,
restore, etc.); Logins; Activity; etc.
v Optional Product Domains, for example: Classifier Results; CAS Changes
(database server configuration file changes, for example)

A domain provides a view of the stored data. Each domain contains a set of data
related to a specific purpose or function (data access, exceptions, policy violations,
and so forth)

Each domain contains one or more entities. An entity is a set of related attributes,
and an attribute is basically a field value. A query returns data from one domain
only. When the query is defined, one entity within that domain is designated as
the main entity of the query. Each row of data returned by a query will contain a
count of occurrences of the main entity matching the values returned for the
selected attributes, for the requested time period. This allows for the creation of
two-dimensional reports from entities that do not have a one-to-one relationship.

For further information, see Domains, Entities, and Attributes Overview.

Domains

See Domains for a description of all Guardium domains. On the default admin
portal, all query builders can be opened from the menu of the Tools > Report
Building tab. On the default user portal, many query builders can be opened from
the Custom Reporting application: Monitor/Audit > Build Reports.

Entities and Attributes

See Entities and Attributes for a description of all attributes contained in each
entity. The two illustrations show a list of entities for the Data Access Domain and
the attributes available under the Client/Server entity.



The Entities of the Data Access Domain
v Client/Server: Client and database server connection info (IPs, OS, etc.)
v Session: Database name, session start and end times
v Server IP/ Server port
v Application Events: Events from the Guardium API
v Changed Data Value
v App user name - Application user name
v Full SQL Values: Values logged separately for faster search
v Full SQL: The full SQL string (with values)
v SQL: The SQL request (no values)
v Access Period: When – Logging granularity
v Command: SQL command
v Object: SQL object



v Object-Command: Command detected in object
v Join
v Object-Field: Field detected in object
v Qualified Object
v Field: Field
v Field SQL Value: Field value logged separately for faster search

The Entity Hierarchy for the Data Access Domain

There are six levels in the entity hierarchy for this domain.
Table 217. Entities Hierarchy
Number  Entities                                                Description
1       Client/Server, Session                                  Each client/server connection has one or more sessions.
                                                                Each session has one or more requests.
2       Application Events                                      Each request has some combination of this entity.
3       Full SQL Values, Full SQL, SQL, Access Period           Each request has some combination of these entities.
4       Command                                                 Each request may contain commands.
5       Object                                                  Each command may contain objects.
6       Object-Command, Field, Field SQL Value, Object-Field    Each object may contain these entities.

Main Entity

The Main Entity determines:


v What a record in the report represents.
v What the query counts.
v What attribute values and aggregates (Count, Minimum, Maximum, Average)
will be available.
v What drill downs are available.

Build Queries

There is a distinction between queries and reports:


v A query describes a set of information to be obtained from the collected data; for
example: “find all clients updating a specific database during weekend hours.”
v A report describes how the data returned by a query is presented. Most often,
the report is in tabular form, but extensive graphical reporting capabilities are
provided as well.



Use the Query Builder to define and modify queries. There is a separate Query
Builder for each domain, and it is opened from the Query Finder for that domain
(see Open the Query Finder). By default, the Query Builder panel name is Custom
Reporting for a user portal, but for admin role users, the Query Builder panel takes
its name from the menu selection used to open the query builder (Access Tracking,
Exceptions Tracking, Alert Tracking, etc).

The query builder contains three panes:


v The Entity List pane identifies all entities and attributes contained in the
domain. Entities are represented as folders, and attributes are the items within.
Click on an entity folder to display its attributes, or click again to hide them.
v The Query Fields pane lists all fields to be accessed, what is to be displayed for
that field (its value, a count, minimum, maximum, or average), and the sort
order.
v The Query Conditions pane specifies any conditions for selecting the fields
previously listed (for example, “where VERB = INSERT”).

See Queries for further information.

Procedure
1. Begin a query definition
From the user portal, go to the Custom Reporting application.
a. Click the Monitor/Audit tab.
b. Click the Build Report tab.
c. Click Track data access to open the Query Finder. See the following
examples.

The following screen will appear.



d. Click New to define a new query.
2. Name the query and select the main entity
a. Enter a query name.
b. Select Command from the Main Entity list.
c. Click Next.

3. Identify fields to be listed


a. Open the Client/Server entity.
b. Click Client/Server, and a pop-up menu appears. Click on Query Fields to
add fields to the Query Definition Panel. The following example shows
Query Fields Server IP, Client IP and Timestamp Time added.
c. Then add the SQL verb field from the Command entity. See the following
example.

4. Add a query condition


a. Go back to the Client/Server entity. Click on Client/Server, and a pop-up
menu appears. Click on Query Conditions to add condition fields to the
Query Definition Panel.
b. Add SQL Verb to the Query Conditions list.
c. Continue along the Query Conditions table and select IN GROUP from the
list of Operators.
d. Select DML Commands from the list of groups.
See the example for the Query Conditions changes listed in previous steps 1-4.



5. Generate the report
a. Click the Save button to save the query definition.
b. Click the Generate Tabular button to define a default tabular report based
on this query.
c. Click the Add to My New Reports button to add the report to the default
layout page for new reports.

6. View the results


Click on the My New Reports tab to see the results. Drill down to display the
Command Details report for the INSERT commands.

How to report on dormant tables and columns


Guardium offers functionality that can help data architects and DBAs discover
which tables and which fields are not being used.



About this task
The basic concept is the following. You want to know which tables are not being
accessed. You upload all table names from your database or from your
Configuration Management Database (CMDB) using Guardium's custom domain
and custom query functions. Then you use the report (from the custom query) to
populate a group of objects.

Next, you use a report that uses monitored data to show all object names that have
participated in a SELECT statement. There are predefined reports for this in
Guardium 8, all starting with the prefix DW (Data Warehouse). Then, use the
output to populate one of the predefined groups.

Finally, use a predefined report that shows all members in the first group that are
not members in the second group.

There are two sets of such reports and groups – one which focuses on tables and
one which focuses on tables and columns. The only difference is that in the latter
case the groups are of a 2-tuple type (members that are a composite of a pair of value
attributes, referred to as a tuple).

Let's look at an example from start to finish involving an Oracle database and the
EMP user.

Follow these steps.


1. Upload all table names and/or all table/column combinations from the set of
system catalog tables (definitions of the database objects).
2. Use monitored data to determine which tables and/or table/columns have
been accessed over a period of time.
3. Create a report of all items of step 1 that are not in step 2.

The following Guardium functions are used for this task.


v External Data Correlation for uploading table names and columns names
v Populate groups from queries
v Reporting

Procedure
1. Upload all the tables from the system catalog. Do this by creating a custom
table.
Prerequisites
a. Define datasource/test database connection
b. Upload data (create custom table)
c. Create new domain (merge custom tables with existing reports)
See External Data Correlation for further information.
The following example is available from User > Monitor/Audit > Build Reports
> Custom Table Builder > Upload Definition > Import Table Structure. When
the configuration is complete, click the Retrieve button.



Configuration - Upload Definition, Import Table Structure
Upload the data so that it is in the Guardium system (as a custom table) and if
desired, schedule this upload. This data will be used to determine the superset
of all tables defined in the system.
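As a sketch of where this superset comes from, the table names for a given Oracle
schema can be previewed directly from the system catalog before the custom table is
defined; the connection details and schema here are examples only:

$ sqlplus scott/tiger@ORCL
SQL> SELECT table_name FROM user_tables ORDER BY table_name;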
Mapping all the objects (and/or objects-fields) in the system
In this example, dormant data based on table names is used. But the analysis
can include columns, provided the upload tasks are defined to bring back pairs
of <object,field> and use tuple groups to compare with an observed tuple of
object+field.
For instances of Object-Field, replace the DW Dormant Objects report with the
DW Dormant Objects-Fields report. For instances of Object-Field, replace the
DW Select Object Access report with the DW Object-Field Access report.
Once you complete the upload, define a custom domain based on this single
custom table and define a report that retrieves the table names.
This is the end result.



Report – Table Names
Next, populate the DW All Objects group from this report and schedule
this Import from Query action if desired. This creates a group that has all the
tables as defined by the system catalog.
The following example is available from User > Monitor/Audit > Build Reports
> Group Builder > Choose DW All Objects > Populate from Query> Query
scott_table_names.
When done, click the Save button.

Configuration - Populate Group from Query, Table Name


2. Mapping the object directly
Use monitored data to determine which tables and/or table/columns have
been accessed over a period of time.
Look at some additional predefined reports. The DW SELECT Object Access
report on the Performance pane in the View tab shows all object names that
have been accessed through a SELECT statement. For example, here are two
report outputs before and after a user executes the statement select * from
EMP. The difference between the two reports is the object name EMP.



Report - Before

Report - After
Now, populate the DW SELECT Accessed Objects group from the report,
filling in the filtering attributes that you require.
The following example is available from User > Monitor/Audit > Build Reports
> Group Builder > Choose DW Select Accessed Objects > Populate from
Query> DW Select Object Access.
When done, click the Save button.



Configuration - Populate Group from Query, Object Name
3. Create a report of all items of step 1 that are not in step 2.
Use the DW Dormant Objects report to view objects that are in the all objects
group, but have not been used in a Select.
The following example of the DW Dormant Objects report is available from the
Performance pane in the View tab.

Report – DW Dormant Objects


Contrast this report with the earlier Report – Table Names. Notice that EMP is
not in this report because it was used in a SELECT statement.

Note: Because group members are centrally managed and synchronized
between the Central Manager and managed units, the content of this report
may be delayed by up to 30 minutes. If you need access to the most
up-to-date information, run this report on the Central Manager or ask your
Guardium administrator to synchronize the managed unit from the Central
Manager's administration console.
Further ways to access tables
Mapping objects indirectly
In addition to direct SELECT access, tables may be accessed through stored
procedures and functions. In this case, you will need to do a bit more mapping
to allow Guardium to calculate such SELECTs.
First, use the report DW EXECUTE Object Access to fill in the group called DW
EXECUTE Objects with a set of stored procedure names that are being
executed. Then, use indirect mapping to generate all the objects being used
from within these procedures.
Assume that you have a procedure defined.
create or replace procedure num_depts(deptnums out NUMBER) is
begin
  select count(*) into deptnums from dept;
end;
In this case, every execution of num_depts also does a select on DEPT.
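For example, an execution of the procedure that Guardium would observe as EXECUTE
access might look like this from SQL*Plus (a sketch; the bind-variable syntax is
SQL*Plus-specific):

SQL> variable deptnums NUMBER
SQL> exec num_depts(:deptnums)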
Use the “populate group from query” feature to use the Object Name column
in the DW EXECUTE Object Access report to populate the “DW Execute
Objects” group. Then, use this group to populate the DW EXECUTE Accessed
Objects group.
In the Group Builder select the DW EXECUTE Objects from the list and click
on Auto Generate Calling Prox. Select either Using Reverse Dependencies,
which is supported only for Oracle in Guardium 8, or Generate Selected
Objects.
If you choose to use dependencies then you will need to choose a database that
has access to DBA_DEPENDENCIES and what type of dependencies to follow.
Choose to append members to the DW EXECUTE Accessed Objects group.
The following example is available from User > Monitor/Audit > Build Reports
> Group Builder > Choose DW EXECUTE Accessed Objects > Auto Generated
Calling Prox > Using Reverse Dependencies > Analyze Stored Procedures.



Configuration - Auto Generated Calling Prox, Using Reverse Dependencies
In our examples, this will add the following dependent objects to the group
DW EXECUTE Accessed Objects.



Configuration - Modify, Manage Members for Selected Groups
If you choose to use source analysis (available for all databases but only for
stored procedures), you can also specify the command to base dependency on,
in this case SELECT.
The following example is available from User > Monitor/Audit > Build
Reports > Group Builder > Choose DW EXECUTE Accessed Objects > Auto
Generated Calling Prox > Using Observed Procedures > Analyze Observed
Stored Procedures.



Configuration - Auto Generated Calling Prox, Using Observed Stored
Procedures
Using DW Dormant Object Reports
The DW Dormant Objects report (available from the Performance pane in the
View tab) will show all objects not being used by a direct SELECT or through
this indirect mapping.
See the previous Report - DW Dormant Objects.



How to Generate API Call from Reports
Generate GuardAPI calls from a report, either from a single row within a report or
based on the whole report.

Value-added: Through a GUI, by using existing data on the system that is
displayed in reports as parameters for API calls, you can quickly and easily generate and
populate API calls without having to perform system-level commands or type
lengthy API calls, in order to perform operations such as creating datasources, defining
inspection engines, maintaining user hierarchies, or maintaining Guardium features
such as S-TAP.

Single Row API call

For this scenario, we will generate API function calls to populate the Data Security
User Hierarchy.
1. To begin, let's show the current Data Security User Hierarchy for the user
scott

2. To invoke an API function we must find a report that currently has the
desired API functions linked to it. Since creating a user hierarchy is related to
users, selection of a user report should yield good results. For this scenario
we've selected the User - Role report.

3. Double-clicking on a row for drill-down will display an option to Invoke...



4. Click on the Invoke... option to display a list of API functions that are
mapped to this report

5. Click on the API you'd like to invoke; this brings up the API Call Form for the
Report and Invoked API Function.
6. Fill in the Required Parameters and any non-Required Parameters for the
selected API call. Many of the parameters are pre-filled from the report but
may be changed to build a unique API call. For specific help in filling out
required or non-required parameters please see the individual API function
calls within the GuardAPI Reference guide.

7. Use the drop-down list to select the Log level, where Log level represents the
following (0 - returns ID=identifier and ERR=error_code as defined in Return
Codes, 1 - displays additional information to screen, 2 - puts information into
the Guardium application debug logs, 3 - will do both)
8. Use the drop-down list to select a Parameter to encrypt.

Note: Parameter Encryption is enabled by setting the Shared Secret and is
relevant only for invoking the API function through script generation.
9. Choose to Invoke Now or Generate Script.
a. If Invoke Now is selected the API call will run immediately and display
an API Call Output screen showing the status of the API call.

b. If Generate Script is selected: Open the generated script with your favorite
editor or optionally save to disk to edit and execute at a later time --
replacing any of the empty parameter values (denoted by '< >') if
contained within the script.

Note: Empty parameters may remain in the script as the API call will
ignore them
Example Script
# A template script for invoking guardAPI function create_user_hierarchy:
# Usage: ssh cli@a1.corp.com<create_user_hierarchy_api_call.txt
# replace any < > with the required value
#
grdapi create_user_hierarchy userName=jkoopmann parentUserName=scott
c. Execute the CLI function call.
Example Call
$ ssh cli@a1.corp.com<create_user_hierarchy_api_call.txt
10. Validate. For this scenario it is a redisplay of the Data Security User Hierarchy.



Multi Row API call

This scenario uses a custom report with API parameters mapped to report fields. See
the additional scenarios later in this section for more information.
1. To begin, let's show the current Data Security User Hierarchy for the user scott

2. Click on the Invoke... icon to display a list of APIs that are mapped to this
report

3. Click on the API you'd like to invoke; this brings up the API Call Form for the
Report and Invoked API Function. Invoking an API call from a report for
multiple rows will produce an API Call Form that displays and enables the
editing of all records displayed on the screen (dependent on the fetch size) to a
maximum of 20 records.



4. Use the check boxes to select / de-select the rows that will be targeted for the
API call.
5. Fill in the Required Parameters and any non-Required Parameters for the
selected API call. Many of the parameters are pre-filled from the report but
may be changed to build a unique API call. For specific help in filling out
required or non-required parameters please see the individual API function
calls within the GuardAPI Reference guide. Additionally, you can enter a value
for a parameter and, by clicking the down arrow button, populate that
parameter for all records.
6. Use the drop-down list to select the Log level, where Log level represents the
following (0 - returns ID=identifier and ERR=error_code as defined in Return
Codes, 1 - displays additional information to screen, 2 - puts information into
the Guardium application debug logs, 3 - will do both)
7. Use the drop-down list to select a Parameter to encrypt.

Note: Parameter Encryption is enabled by setting the Shared Secret and is
relevant only for invoking the API function through script generation.
8. Choose to Invoke Now or Generate Script.



a. If Invoke Now is selected the API call will run immediately and display an
API Call Output screen showing the status of the API call. In this scenario
the last two API calls will fail since we cannot have a cyclical relationship
in the hierarchy.
b. If Generate Script is selected: Open the generated script with your favorite
editor or optionally save to disk to edit and execute at a later time --
replacing any of the empty parameter values (denoted by '< >') if contained
within the script. With this scenario, we could easily delete the last two
lines of the script -- knowing they would create cyclical errors.

Note: Empty parameters may remain in the script as the API call will
ignore them.
Example Script
# A template script for invoking guardAPI function create_user_hierarchy :
# Usage: ssh cli@a1.corp.com<create_user_hierarchy_api_call.txt
# replace any < > with the required value
#
grdapi create_user_hierarchy userName=ADAMS parentUserName=SCOTT
grdapi create_user_hierarchy userName=JOHNY parentUserName=SCOTT
grdapi create_user_hierarchy userName=MARY parentUserName=SCOTT
grdapi create_user_hierarchy userName=SCOTT parentUserName=SCOTT
grdapi create_user_hierarchy userName=SCOTT parentUserName=SCOTT
c. Execute the CLI function call.
Example Call
$ ssh cli@a1.corp.com<create_user_hierarchy_api_call.txt
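For this scenario, the same script after deleting the two cyclical entries would simply be:

grdapi create_user_hierarchy userName=ADAMS parentUserName=SCOTT
grdapi create_user_hierarchy userName=JOHNY parentUserName=SCOTT
grdapi create_user_hierarchy userName=MARY parentUserName=SCOTT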



9. Validate. For this scenario it is a re-display of the Data Security User Hierarchy.

How to use Constants within API Calls


Create a new entity attribute to be used during an API function call.

Value-added: Through a GUI, create a user-defined constant that can be used for
filling in a parameter in an API function call.
1. We can modify our report to have a field that we can use for
parameter mappings.



2. Go to the Query Entities & Attributes report for the Client/Server entity
within the ACCESS RULES VIOLATIONS domain. Double-click on a row and
select the Invoke... option.

3. Invoke the API function create_constant_attribute.



4. Fill in the constant value to use ('SCOTT'), fill in the attributeLabel you would
like to use ('OracleTopParent'), and then click on the Invoke now button to create
the constant.

5. Clicking on the Invoke now button will produce an API Call Output status
showing the constant was created.



6. A re-display of the Query Entities & Attributes report will show the new
attribute created.

7. The newly created constant can now be mapped for the report. Double-click on
the new row and select the Invoke... option.



8. Select the create_api_parameter_mapping option.

9. Fill in the functionName and the parameterName and click on the Invoke
now button.
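The two calls made in steps 3 through 9 could also be issued directly as GuardAPI
functions. The following is a sketch only: the function names come from this procedure,
but the parameter keywords shown are illustrative assumptions and additional
parameters may be required; check the GuardAPI Reference guide before use.

# Sketch only - verify the exact parameter names in the GuardAPI Reference guide
grdapi create_constant_attribute attributeLabel=OracleTopParent constantValue=SCOTT
grdapi create_api_parameter_mapping functionName=create_user_hierarchy parameterName=parentUserName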



10. The newly created attribute must be added to the report. Edit the query
through Query Builder and add the field.



11. Now when the report is displayed the new attribute is displayed.

12. To validate the new constant's usage, double-click on a row and select the
Invoke... option.



13. Select the API function

14. Now the parentUserName is populated from the newly added constant. Click
the Invoke now button.



15. Validate the new Data Security User Hierarchy.

How to use API Calls from Custom Reports


Link API functions to reports and map report fields to the API functional
parameters.

Value-added: Through a GUI, quickly and easily map API parameters to custom
report fields to be used in API function calls.
1. By default, a newly created custom report will not have any API functions
linked to it. This can be seen in the following custom report, where
double-clicking on a row only produces a list of additional drill-down
reports to run but lacks the Invoke option.



2. The linking of API functions to reports is done through Guardium's Report
Builder. Open Report Builder, find your custom report, and then click on the
API Assignment button.

3. The API Assignment panel shows all the API functions assigned to the
selected report. Notice for our scenario the report selected has no API
functions assigned to it.



4. To assign an API function to a report, find an API you'd like to link to the
report, click the greater-than arrow, and then click the Apply button. For our
scenario we selected create_user_hierarchy. When selected, a pop-up window
will appear that shows the report parameter mappings (which report fields
will be used when calling the API function). Notice that no report fields are
mapped to parameter names.

5. At this point, none of the report fields are mapped to the API parameters.
Users can go to the Query Entities & Attributes report to create these
mappings; otherwise, when invoking the API call, none of the parameters will
have values. Open the Query Entities &
Attributes report and create the mappings. Since our report for this scenario
uses the Client/Server entity within the ACCESS RULES VIOLATIONS
domain, filter the report by using the customize button, modifying the report
to display only the Client/Server entity.



6. Double-click on the attribute you'd like to assign to a parameter name and
click on the Invoke... option.

7. Select the create_api_parameter_mapping API function.



8. Fill in the functionName and parameterName in the API Call Form and click
on the Invoke now button.

9. Now, when we go back to the Report Builder for our report and look at the
API Assignment, clicking on the create_user_hierarchy API function displays
the API - Report Parameter Mapping with our mapping of userName to the
Report field Client/Server.DB User Name.



10. Click on the greater-than arrow '>' and click the Apply button

11. Now when we invoke the create_user_hierarchy API function through our
report the parameter userName will be populated from the report. To see this,
go back to the report and double-click on a row and then click on the Invoke...
option.



12. Click on the API function (in our case create_user_hierarchy).

13. Notice that the userName is now populated from the report field.



14. Fill in the parentUserName and click the Invoke now button.

15. Verify that the new Data Security User Hierarchy has been added.

Optional External Feed


External feeds allow you to send Guardium report data directly to an external
database.



Sending reporting data to an external database is useful in several scenarios, for
example when combining or correlating Guardium data with non-Guardium data,
when using Guardium data with external reporting tools, or when
machine-parsing records in especially large reports.

Before using external feeds, verify the following prerequisites:


v Map a feed between Guardium and an external database. External feeds
currently support relational databases and may not function with other database
types.
v Create a report defining the data to send via the external feed. Predefined
reports will not work with external feeds. If you want to use a predefined
report, make a copy of the report and use the copy for the external feed.
v Define an audit process that will use the external feed.

The first time that an optional external feed task runs, the necessary internal
representation of the audit sources will be created. One limitation is that data that
is time-stamped with a date earlier than the audit source creation date cannot be
stored. This means that the first time the task runs, it will only export data for the
current date. On subsequent executions of the task following that date, any data
from that date forward can be exported. (In other words, the next day, you will be
able to export that day's data plus the prior day's data.)

Create an Optional External Feed Task

If you have not yet started to define a compliance workflow automation process,
see Create a Workflow Process before performing this procedure.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click External Feed.
3. Select the feed type from the Feed Type list. (The controls that appear next
depend on the feed type selected.) One predefined feed type is Object Last
Referenced.

Note: You must map an external feed before attempting to use this feature.
4. Select an event type from the Event Type list.
5. Select a report from the Report list. Depending on the report selected, a
variable number of parameters appear in the Task Parameters pane.
6. In the Extract Lag box, enter the number of hours by which the feed is to lag,
and mark the Continuous box to include data up to the time that the audit task
runs. Extract Lag only works when the Continuous box is marked.
7. In the Datasources pane, identify one or more datasources for the external feed.
For instructions on how to define or select datasources, see Datasources.
8. Enter all parameter values in the Task Parameters pane. The parameters will
vary depending on the report selected. Count column is not supported in
External Feed.
9. Click Apply.
Related concepts:
“Building audit processes” on page 195
Streamline the compliance workflow process by consolidating, in one spot, the
following database activity monitoring tasks: asset discovery; vulnerability
assessment and hardening; database activity monitoring and audit reporting; report
distribution; sign-off by key stakeholders; and, escalations.
Related tasks:



“Mapping an External Feed”
Learn how to map an external feed to send Guardium report data directly to an
external database.

Mapping an External Feed


Learn how to map an external feed to send Guardium report data directly to an
external database.

Before you begin

Verify the following prerequisites before mapping an external feed:


v Identify the external database that will receive data from the feed, and gather
the connection information required for that database (ip address, port number,
username, password, etc.). External feeds currently support relational databases
and may not function with other database types.
v Identify the Guardium report that will provide data to the external feed.

About this task

External feeds allow you to send Guardium report information directly to an
external database. Anything that can be defined in a report can be sent via an
external feed. These feeds depend on mapping DOMAIN_ID and ATTRIBUTE_ID
from Guardium's reporting mechanism to table fields on the external database.
Each mapping consists of the records in four tables (EF_MAP_TYPE_HDR,
EF_MAP_TABLE, EF_MAP_COLUMN, and EF_MAP_GDM_TYPE). Use the
grdapi create_ef_mapping function to help create these tables and establish the
mapping.

Procedure
1. Generate a report with the data you would like to transfer using an external
feed. You can do this from a central manager, aggregator, or stand-alone
Guardium instance, provided that system can access the report data you
require.
2. From the CLI, run grdapi create_ef_mapping reportName="My report". In
addition to establishing the mapping, the create_ef_mapping function
also generates a sample create table statement to be used in subsequent steps
(see the example call after this procedure).
3. On the Guardium system where your report is defined, search /var/log/guard
for a filename like ef_sample_[my_report].sql. This file contains the example
create table statements. You must modify the statements in this file to match
the requirements of your external database. After modifying the file, run the
statements against your external database to create the target tables.
4. The external feed should now be available for use in workflow processes
defined through the audit process builder. See the “Optional External Feed” on
page 547 documentation for additional information.
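Following the same convention as the Example Call sections earlier in this chapter, the
mapping call from step 2 might be run against the appliance CLI like this (the file name,
host, and report name are examples):

# Contents of create_ef_mapping_api_call.txt:
grdapi create_ef_mapping reportName="My report"
# Run it against the appliance CLI:
$ ssh cli@a1.corp.com<create_ef_mapping_api_call.txt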
Related concepts:
“Optional External Feed” on page 547
External feeds allow you to send Guardium report data directly to an external
database.



Distributed Report Builder
This Central Manager feature provides a way to automatically gather data from all
or a subset of the Guardium managed units that are associated with this particular
Central Manager. Distributed reports are designed to provide a high-level view, to
correlate data from across data sources, and, to summarize views of the data. You
would continue to use aggregators for the row level data gathering across
collectors.

This capability alleviates an issue that can arise in complex enterprise
environments when users do not always know the exact managed unit that has the
data that is required for a particular report. This can happen because the link
between Guardium collectors and databases can change over time based on
configuration options such as load balancing. This is further complicated by
considerations such as the time period and data retention policy on the Aggregator
and Collectors.

It is easy to create a Distributed Report. Simply define it via the Distributed Report
screen, add it to a Pane, and it is ready for use.

Furthermore, this feature optionally makes use of data marts on the Central
Manager to enable scheduled collection of aggregated data over time. In essence,
the distributed report data is stored on the Central Manager as a flat table, so no
complex joins are required to create the report you want, which can significantly
improve response time for these enterprise reports.

Distributed report data can be gathered from Collectors, Aggregators, and even
Central Managers. The default distributed versions of the reports include the host
name of the unit responsible for that data.

The following are predefined distributed reports:


v Enterprise S-TAP Verification
v Aggregation/Archive Log
v Failed User Login Attempts
v Scheduled Jobs Exceptions
Running Distributed reports: Immediate or scheduled
When you define a distributed report, run it immediately or schedule it to
run in the background and gather the results to the Central Manager:
v Immediate: This mode gathers data on demand (upon execution via the
GUI) and displays results while gathering the results from the relevant
managed units. The distributed report includes a status indicator that
data is still in transit or that all data has been received from a particular
managed unit. In this mode, data is not saved on the Central Manager.
As soon as the report is closed, the data is gone.
v Scheduled: This mode gathers data in advance to enable instant
response. At the time interval you specify in the scheduler, all relevant
aggregated data from the specified managed units is sent to a designated
data mart table on the Central Manager machine, and a default report is
created against this table. This table also has its own domain and entity to
enable creation of additional queries and reports using the query builder.
Those reports can be added to an audit process in order to run the
process periodically and assign the results of the process to a Role, User,
and/or User Group for review or sign-off.



Planning considerations for distributed reports
v In a mixed environment where the Central Manager is 32-bit and
managed units are 64-bit, the Distributed Report will not show
information from the 64-bit systems. To see information in this situation,
the Central Manager needs to be upgraded to 64-bit.
v Because of the coordination of data to be sent to the Central Manager, it
is critical that the clock on each managed unit is set to the real time for
the time zone where that managed unit is located. Even a difference
of ten minutes between the Central Manager and the managed units
impacts the performance and reliability of the distributed reports.
v Distributed report definitions cannot be exported and imported. It is
recommended that you keep a record of the definitions and scheduling
in case you need to re-create them on another system, such as a backup
or test Central Manager. System backup does include distributed report
configurations.
v If you specify that report data is collected from both aggregators and
collectors, it is conceivable that the default distributed report includes
duplicate data (although the Guardium host name is different). In this
case, it is best to specify only collectors or only aggregators for the
distributed report configuration.
v Distributed reports are based on existing non-distributed reports. When
defining a distributed report in scheduled mode, if the original query
includes run-time parameters, then you will be asked to provide those
values (or wildcards, %).
v Plan for the fact that data will now reside in a database on your Central
Manager that did not before, so you will need to plan for operational
changes for purging, for upgrades, and for backup.
Creating a distributed report
Distributed report building is available only from an appliance that is
configured as a Central Manager. To access the distributed report builder
when logged in as an administrator, go to Tools > Report Configuration
Tools > Distributed Report Builder.
From the Distributed Report Builder, you can select from a list of existing
reports to modify the configuration or add to a pane, or click New to
create a new distributed report. In general, any existing report on the
Central Manager can be distributed immediately or run on a schedule (or
both).
Creating a new distributed report
From the Report Builder, select New, which clears any existing data in the
report builder. In the Based on Report pulldown, select one of the existing
reports that are available for distribution. Each report from the list can be
distributed once as immediate and once as scheduled. Those that are
defined to be distributed immediately have the term (Immediate)
appended to the distributed report name.
Select an existing report to create one distributed report.
In the Gather Data From section of the builder, choose All Managed Units
(that the Central Manager is managing) or specify certain Groups and/or
specific Managed Units.



Note: You can define managed unit groups from the Central Management
portal on the Administration Console. Examples of groups are: Group of
collectors versus aggregators; groups that are based on application,
responsibility, or geography.
In the Operation Mode section of the builder, choose the report operation
mode:
v Immediate: Run the report when the user requests it. When you select
this option there are no additional options to consider. You can click
Apply to save the changes and then optionally click Add to Pane to add
the report to the GUI.
v Schedule: Run in a batch that prepares and gathers the data in advance.
With the Schedule option, you specify the following additional values:
v Time Granularity: Specify the time period for which the report data is
aggregated. For example, if you specify a Time Granularity of 1 hour for
the Count Of Failed Logins report, the count is based on an hourly
aggregation of failed logins.
v Purge After: Specify how long to keep the report data in the data mart
before it is automatically purged.
v Runtime parameters: Depending on what report you are basing the
distributed report on, you must specify the runtime parameters. To see
valid values for these fields, examine the query for the original report, or
specify the wildcard, %.
Click Apply. When the system is done saving the distributed report
configuration, Modify Schedule and Roles are activated.
To create the schedule, click Modify Schedule, which takes you into the
general-purpose scheduler.
The schedule definition is pushed down to the managed units and tells
each managed unit when and how often to send the aggregated data to the
Central Manager.
To specify which roles can see this distributed report, click Roles.
Modifying an existing distributed report
For existing distributed reports, you can:
v Change the configuration, including managed units, schedule details, or
runtime parameters
v Add a report to a pane
v Delete a distributed report
v Create a scheduled report that is based on an existing immediate report.
This option replaces the immediate report. You cannot create an
immediate report from an existing scheduled report.
To select an existing report, use the text search box or scroll through the
list of existing reports and select the one you want to modify.
Viewing distributed reports
The following additional columns are included in distributed reports:
v Source: The Guardium system where the data was gathered from.
v TZ: Time Zone - because the Guardium system might be located in a
different time zone from the Central Manager.



v Date: This column shows the Start Period time for scheduled reports and
enables grouping results by hour/day. For Immediate mode,
this column shows the start time and is not meaningful.

Note: Only a maximum of three date fields are permitted.


More about time
When running a report, the report customizer lets you specify an absolute
time window for the query (from 3-31-2014 8:00am to 3-31-2014 11:00am)
or a relative time window (NOW -3 HOUR).
For absolute time, each Guardium system will run in its local time. For
example, if a distributed report gathers data from Guardium systems in
Eastern Standard Time (EST) and Pacific Standard Time (PST), then each
system will execute the query based on local time. In the example (useful
for checking morning peak hours, midnight or any specific absolute time),
a system in New York will gather the results from 08:00 - 11:00 EST and a
system in California will gather the results from 08:00 – 11:00 PST.
For a relative time specification, each system will run NOW –N according
to the current time on that system. This is important for real-time reports.
Absolute Time cannot be used for real-time or near real-time reports. Use
the Immediate mode for real-time monitoring.
Viewing Distributed Report Status
Every distributed report is accompanied by a status report that shows the
user which machines succeeded in returning results and which did not.
The link to access the status report is highlighted when you navigate to the
report in the GUI.
For scheduled reports, clicking a line on the Status Report enables
execution of an API to rerun the report on the specific unit(s).
If the specific run for Distributed Report in Scheduled mode comes back
with an error, you can rerun the report from the status report as follows:
1. Double click on one of the rows in the status report to bring up the
Invoke menu. Click on Invoke.
2. Click the selection, rerun_distributed_report.
3. This will open up a pop-up screen that lets you choose the specific run
to rerun. Any row of the report can be opened, but only rows with
ERROR status can be rerun.
GuardAPI for Rerun Distributed Report
The retry command described for the status report in the GUI can also be
invoked via a GuardAPI command.
Syntax
grdapi rerun_distributed_report



This diagram illustrates the process to run an Immediate Distributed Report.

This diagram illustrates the process to schedule a Distributed Report.


Distributed Report enhancement - set Target system to any Guardium system
The Distributed Report feature distributes the query request to the specified
Guardium systems, gathers the data into the Target system, consolidates
the results, and provides views on the consolidated results. The results are
available via the Query Builder for defining additional queries.
The Distributed Report feature can now set the Target system to any
Guardium system. Previous versions did not allow setting the Target
system; the target was always the Central Manager (CM).
Requirement justification
In many cases the CM is overloaded (regardless of the Distributed Report),
and the CM is sometimes used as an Aggregator, which adds load to the
CM.
In those cases it is much more efficient to enable the user to
determine the target system.
Solution
v A target System can be set for each Distributed Report. A CLI command
is available to set the optional Target systems. The list set via the CLI is
shown in the Distributed Report builder GUI.
v Important note: This change affects the Distributed Report Scheduled
mode only. The Immediate mode is not included in this change! This
means that the ad-hoc distributed report result viewer is accessible via
the CM only.
v The Distributed Report definition is still editable via the CM only.
GUI Change
v A new field "Send Data To" is added to the Distributed Report Builder
screen to enable the user to set the target System(s) (either Collector(s) or
Aggregator(s)) for the Distributed Report.
v This field is relevant only in case of Scheduled Mode (otherwise, it is
disabled).
v The default is set to the CM.
v The list of available Target Systems is limited to the Systems that were
set via the CLI (see CLI list below).
v The Distributed Report definition is editable via the CM and View-Only
via the target.
v The "Add To Pane" of the report (adding the report viewer to the menu)
is available from the definition screen on the Target System and CM.
v This option is available on the CM even if the CM is not the Target System
for that report. This makes it possible to view the Distributed Report
Status on the CM, but no data is displayed in the report itself.
The CLI commands (available via the CM only)
1. Set System as a Target System
grdapi set_distributed_report_target target_host_name=[unit host name]
2. Cancel System to be a Target system
grdapi cancel_distributed_report_target target_host_name=[unit host name]
If there are still distributed reports with this unit as target, the command
returns an error and the list of such reports.
3. Get list of Target system
grdapi get_distributed_report_target_info
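For example, to register a collector as an additional target system, list the
configured targets, and later remove that collector, the sequence might look
like the following (the host name is hypothetical):
grdapi set_distributed_report_target target_host_name=collector01.example.com
grdapi get_distributed_report_target_info
grdapi cancel_distributed_report_target target_host_name=collector01.example.com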
Notes:

This patch must be installed on the Central Manager and its managed units.

How to create a Distributed Report


Guardium offers a function that provides a way to automatically gather data from
all or a subset of the Guardium managed units that are associated with a particular
Guardium Central Manager.

About this task - In this example, we see how to get a broader view and
correlation insight for Exceptions (for example, SQL Errors) that are recorded on
specific collectors.

Summary of steps

Prerequisites – create group of Managed Units via the Central Management screen.
1. Create Distributed Report.
2. Review the data gathered.
3. Create additional summary reports on the data gathered.

Procedure
1. Distributed Reports builder is available from (admin) Tools > Report Building
> Distributed Report Builder.
2. Click New.
3. Select Based on Report from the list (the list shows the User-Defined Reports).
For this example, choose Exceptions Details.

4. Move down the screen to specify the Managed Units to be included in this
distributed report. For this example, choose two groups from the Group list
and in addition a few managed units from the Managed units list. In this
example, leave the 'Central Manager' unchecked (if the Central
Manager is also an Aggregator, it might need to be included).



5. The next screen capture shows the setting for the Operation Mode. The
Immediate mode is mainly for online / real-time monitoring, such as viewing
recent Failed Login Attempts, viewing recent Excessive Exceptions, or viewing
real-time alerts. The Scheduled mode is ongoing data-gathering that runs
periodically based on the defined Schedule. This example summarizes the
exceptions every hour.

6. Click Apply to create the Distributed Report. The next screen appears while
saving the new Distributed Report.



7. Once applied, the new Distributed Report is added and highlighted in the list
box.

8. The next step is to schedule it by clicking Modify Schedule (this is mandatory
to activate the process).



9. This report can be limited to specific roles by clicking Roles and selecting the
relevant Roles. Now that all are defined, the report view can be added to the
menu by clicking Add to Pane.



10. In this specific example, the report is performed hourly, so you need to
wait at least an hour to get the initial results.

Note: The line saying 'Distributed Report status – click here for details'
shows the status of data gathering. If data is missing from managed units,
the line is colored red; clicking the line navigates to a detailed report of
status per unit per hour.

11. The data is gathered from all the specified Managed Units and stored in a new
designated entity (table). This entity is now available via the Query Builder
and Report Builder to create additional Queries and Reports against this new
table. The option to build additional Queries and Reports is also available via
the Distributed Report result screen. Click Edit the query for this report.

This default Report cannot be changed; click Clone, name it, remove all
attributes, and leave the Date, User Name, Exception Type Description, and
Sum Of Count Of Exceptions.



The following screen capture shows an example of the Correlate Total Exceptions
By User (Distributed) report. This view sums the total exceptions per user from all
databases that are associated with the Guardium Managed Units selected for this
Distributed Report. Likewise, you can view the Total Failed Login Attempts
system-wide, or the Total Exceptions per Source Program.

Chapter 7. Assess and harden
The Guardium Vulnerability Assessment solution is the first step in the security
and compliance lifecycle management for any IT environment. You can use a set of
predefined or custom assessments and process workflow audits to identify and
address database vulnerabilities in an automated fashion—proactively improving
configurations and hardening infrastructures.

Introducing Guardium Vulnerability Assessment


Guardium Vulnerability Assessment enables you to identify and correct security
vulnerabilities in your database infrastructure.

Database Vulnerability Assessment is included in the Guardium Vulnerability and
Threat Management solution to scan the database infrastructure for vulnerabilities
and provide an evaluation of database and data security health, with real-time and
historical measurements.

Vulnerability Assessment uses three types of artifacts:


Test A test checks the database environment for vulnerabilities for a particular
threat or area of concern.
VA for V10 tests - over 2000 - Aster 34, DB2 246, DB2 for i 130, DB2 zOS
208, Informix 65, MongoDB 54, MS SQL Server 120, MySQL 316, Netezza
21, Oracle 509, PostgreSQL 96, SAP HANA 65, Sybase 73, Sybase IQ 38,
Teradata 39
Assessment
An assessment is a job that includes a set of tests that are run together.
Data source
The source of data itself, such as a database or XML file, and the
connection information necessary for accessing the data.

The Guardium Vulnerability Assessment application enables organizations to
identify and address database vulnerabilities in a consistent and automated
fashion. Guardium's assessment process evaluates the health of your database
environment and recommends improvement by:
v Assessing system configuration against best practices and finding vulnerabilities
or potential threats to database resources, including configuration and behavioral
risks. For example, identifying all default accounts that haven’t been disabled;
checking public privileges and authentication methods chosen, etc.
v Finding any inherent vulnerabilities present in the IT environment, such as
missing security patches.
v Recommending and prioritizing an action plan based on the discovered areas of
most critical risk and vulnerability. The generated reports and
recommendations provide guidelines on how to meet compliance changes and
elevate the security of the evaluated database environment.

Guardium's Database Vulnerability Assessment combines three essential testing
methods to guarantee full depth and breadth of coverage. It leverages multiple
sources of information to compile a full picture of the security health of the
database and data environment.

1. Agent-based: Using software installed on each endpoint (e.g., database server),
agents can determine aspects of the endpoint that cannot be determined
remotely, such as an administrator's access to sensitive data directly from the
database console.
2. Passive detection: Discovering vulnerabilities by observing network traffic.
3. Scanning: Interrogating an endpoint over the network through credentialed
access.

Included in the Guardium Vulnerability and Threat Management solution are:


v Database Auto-Discovery performs a network auto-discovery of the database
environment and creates graphical representation of interactions among database
clients and servers.
v Database Content Classifier automatically discovers and classifies sensitive data,
such as 16-digit credit card numbers and 9-digit Social Security
numbers—helping organizations quickly identify faulty business or IT processes
that store confidential data.
v Database Vulnerability Assessment scans the database infrastructure for
vulnerabilities and provides evaluation of database and data security health,
with real time and historical measurements.
v CAS (Configuration Auditing System) tracks all changes to items such as
database structures, security and access controls, critical data values, and
database configuration files.
v Compliance Workflow Automation automates the entire compliance process,
from assessment and hardening, through activity monitoring and audit
reporting, to report distribution and sign-off by key stakeholders.

CAS (Configuration Auditing System) plays an important role in the identification
of vulnerabilities and threats. Guardium pre-configured and user-defined CAS
templates can be used in assessment tests to bring a holistic view of the
customer's database environment. With CAS, Guardium can identify vulnerabilities
to the database at the OS level, such as file permissions, ownership, and
environment variables. These tests can be seen through the CAS Template Set
Definition panel and have the word Assessment in their name.

Note: Vulnerability Assessment (VA) and Configuration Auditing System (CAS)
are only supported in English.

Common Vulnerabilities and Exposures (CVE®) is a dictionary of common names
(i.e., CVE Identifiers) for publicly known information security vulnerabilities.
CVE's common identifiers make it easier to share data across separate network
security databases and tools, and provide a baseline for evaluating coverage such
that, if a report incorporates CVE Identifiers, users may quickly and accurately
access fix information in one or more separate CVE-compatible databases to
remediate the problem.

Numerous organizations have made their information security products and
services CVE compatible by incorporating CVE Identifiers. Guardium constantly
monitors the common vulnerabilities and exposures (CVE) from the MITRE
Corporation and adds tests for the relevant database-related vulnerabilities.

To aid in the finding of individual vulnerabilities while viewing the CVE names
for specific databases, the user, when configuring tests through the Security
Assessment Builder, can select the CVE radio button for the desired database and
then select and add the appropriate CVE identifier. Additional information can
always be found on the master copy of the CVE list maintained by the MITRE
Corporation.

To keep CVEs current within the Guardium solution, Guardium downloads
and uses the most current CVE database to populate a database table with all
current CVE entries and candidates. Guardium then programmatically compares the
downloaded CVE data with the CVE data already in the Guardium Vulnerability
Assessment repository, producing a list of new CVEs for review. The Guardium
Database Security Team then manually reviews these candidates for the Guardium
Vulnerability Knowledgebase, tests them, and adds the relevant ones to the GA
Guardium Vulnerability Assessment Knowledgebase. These tests are tagged with
the appropriate CVE number, and once in the GA repository, these tests can
automatically run using the Guardium Vulnerability Assessment application.

Note: When using an expiring product license key, or license with a limited
number of datasources, the following message may appear: Cannot add
datasource. The maximum number of datasources allowed by license has been
reached. The License valid until date and Number of datasources can be seen on
the System Configuration panel of the Administrator Console. A Vulnerability or
Classification process with N datasources is counted as N scans every time it
runs.

Note: Guardium Vulnerability Assessment requires access to the databases it
evaluates. To do this, Guardium provides a set of SQL scripts (one script for each
database type) that create the users and roles in the database to be used by
Guardium.

Note: The Guardium Vulnerability Assessment solution is not supported for
AS/400.

Guardium Vulnerability Assessment (VA) Test Exceptions

These are exception groups created by Guardium and pre-populated in order to
enhance the experience of users working with these tests. This is important
because users generally do not know what exceptions are available for which tests.

The list is not categorized by DBMS type or test name, but the exception group
names themselves clearly indicate the DBMS type and the test name.

MongoDB
Developed in 2007, MongoDB is a NoSQL, document-oriented database. MongoDB
uses JSON documents with dynamic schemas (this format is called BSON). In
MongoDB, a collection is the equivalent of an RDBMS table, while documents are
equivalent to records in an RDBMS table.

MongoDB is the largest and fastest growing NoSQL database system. It tends to be
used as an operational system and as a backend for web applications due to the
ease of programming for non-relationally formatted data, like the JSON documents
that are often found in web applications.
v First NoSQL database supported for Guardium Vulnerability Assessment (VA)
v First non-JDBC database connection. Connection uses a Java driver.
v MongoDB data sources support SSL server and client/server connections with
SSL client certificates.



v Guardium's VA solution for MongoDB Clusters can be run on mongos, a
primary node and all secondary nodes for replica sets.
v Entitlement reports and Query Based Builder are not supported for MongoDB.

MongoDB – GDMMONITOR Scripts

Gdmmonitor scripts are located under /var/log/guard/gdmmonitor_scripts.

MongoDB Datasource with SSL

For self-signed certificates, Guardium imports the server certificate behind the
scenes. Customers can also import their own certificates. Certificates also work on
the Central Manager and are pushed down to collectors.

CAS for MongoDB

The Mongo CAS Assessment template allows you to specify multiple paths in the
datasource to scan various components of the file system.

CLI commands

snif_mongo export

snif_mongo list

1. Compress all the .ready files in the auditlog directory and use the --remove_file
option to remove the files.

2. Export to the user's system.

3. Delete the compressed file.

4. If the user quits the operation or the export fails, put back the .ready files.
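A minimal usage sketch follows; the placement of the --remove_file option on the
export command is an assumption, so verify the exact syntax on your system:

snif_mongo list
snif_mongo export --remove_file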

Teradata Aster

Aster Data

Acquired by Teradata in 2011, Aster Data is typically used for data warehousing and
analytic applications (OLAP). Aster Data created a framework called SQL-MapReduce
that allows the Structured Query Language (SQL) to be used with MapReduce. It is
most often associated with clickstream kinds of applications.

A security assessment should be created to execute all tests on the queen node. All
database connections for Aster Data go through the queen node only.

Testing on worker and loader nodes is only required when performing CAS tests
(File permission and File ownership).

Privilege tests loop through all the databases in a given Aster instance.

Aster- GDMMONITOR script

Gdmmonitor-Aster.sql is located under /var/log/guard/gdmmonitor_scripts.

The same script is used for VA and Entitlement reporting.

The Connect privilege must be granted to the sqlguard user in every database.

SAP HANA

SAP HANA is an in-memory, column-oriented, relational database management
system developed and marketed by SAP SE. HANA's architecture is designed to
handle both high transaction rates and complex query processing on the same
platform.

Gdmmonitor-SAP-Hana.sql is located under /var/log/guard/gdmmonitor_scripts.

The same script is used for VA and Entitlement reporting.

Deploying VA for DB2 for i


Enable a group of users to run vulnerability assessments, and configure and run
the tests.

About this task

Deployment Steps
1. Vulnerability Assessment is deployed from the Guardium system.
2. User runs a Guardium-supplied script against the target database to create a
role with the appropriate privileges. User then creates a datasource connection
to the database.
3. Create a security assessment, then select your datasources and desired tests to
execute.
4. Once the execution is done, a report is created, showing what tests have passed
and/or failed along with detailed hardening recommendations.

IBM for i version support:

IBM for i 6.1, 7.1 and 7.2 partitions

VA test Coverage (115 tests in total):

Profiles with Special Authorities

Profiles with access to Database Function Usage

Password policies

Database Objects privilege granted to PUBLIC

Database Objects privilege granted to individual user

Database Objects privilege granted with grant option

Security APARs

Entitlement Reports:

Profiles with Special Authorities

Group granted to user



Database Objects privilege granted to PUBLIC

Database Executable Objects privileges granted to PUBLIC

Database Objects privilege granted to individual user

Database Objects privilege granted with grant option

Procedure
1. Use the Group Builder to create a group of the users that you want to allow to
run VA. Open the Group Builder by clicking Setup > Tools and Views > Group Builder.
The next step uses a script for a group named gdmmonitor.
2. Run the following script on your DB2 for i system to grant privileges needed
for executing VA to the group. This is done outside the Guardium system using
a database native client.
grant select on SYSIBMADM.FUNCTION_INFO to gdmmonitor;
grant select on SYSIBMADM.FUNCTION_USAGE to gdmmonitor;
grant select on SYSIBMADM.GROUP_PROFILE_ENTRIES to gdmmonitor;
grant select on SYSIBMADM.SYSTEM_VALUE_INFO to gdmmonitor;
grant select on SYSIBMADM.USER_STORAGE to gdmmonitor;
grant select on Qsys2.Authorizations to gdmmonitor;
grant select on SYSIBMADM.USER_INFO to gdmmonitor;
grant select on QSYS2.SYSSCHEMAAUTH to gdmmonitor;
grant select on QSYS2.SYSTABAUTH to gdmmonitor;
grant select on QSYS2.SYSPACKAGEAUTH to gdmmonitor;
grant select on QSYS2.SYSROUTINEAUTH to gdmmonitor;
grant select on QSYS2.SYSSEQUENCEAUTH to gdmmonitor;
grant select on QSYS2.SYSCOLAUTH to gdmmonitor;
For IBM DB2 for i v7.1 and higher, also include the scripts:
grant select on QSYS2.SYSVARIABLEAUTH to gdmmonitor;
grant select on QSYS2.SYSXSROBJECTAUTH to gdmmonitor;
3. Create a JDBC connection to your DB2 for i system. Open the Datasource Finder
by clicking Setup > Tools and Views > Datasource Definitions, and then
Security Assessment from the Application Selection menu.
a. Click New and enter the appropriate information. For Connection Property,
enter “property1=com.ibm.as400.access.AS400JDBCDriver;translate
binary=true”.
4. Create an assessment using the Assessment Builder. Open the Assessment
Builder by clicking Harden > Vulnerability Assessment > Assessment Builder.
a. Enter a description for the assessment.
b. Add the datasource created in the previous step by clicking Add
Datasource, selecting the datasource from the Datasource Finder, and
clicking Add.

Note: You must click Apply to save the assessment before you can
configure tests.
5. Add tests to the assessment by clicking Configure Tests. Click the IBM for i
tab, select the tests that you want to add, and click Add Selections.
6. Click Return to go back to the Security Assessment Finder. Run the test by
clicking Run Once Now, or schedule the test using Audit Process Builder.
Open the Audit Process Builder by clicking Discover > Classifications > Audit
Process Builder.
7. Click View Results to view the details of all the executed tests, including
recommendations for improving your score.



Results
What to do when a test fails?
v You can patch your database if the failure relates to patches.
v You can re-configure database parameters to match best-practice recommendations.
v You can revoke object or system privileges that are not required by your
applications.
v You can revoke objects granted directly to a grantee, grant the object privileges
to a role/group, and assign the grantee to that role/group.
v You can change password policy settings or change users' default passwords.
v If your application requires a specific grant, you can create an exception group,
link it to your failed test, and re-execute.

Vulnerability Assessment tests


Guardium provides several types of tests to enable you to assess your
vulnerability.

Vulnerability Assessment Tests

Guardium provides over two hundred predefined tests to check database
configuration parameters, privileges, and other vulnerabilities. You can also define
your own tests.

A Vulnerability Assessment may contain one or more of the following types of
tests.

Predefined Tests

Predefined tests are designed to illustrate common vulnerability issues that may be
encountered in database environments. Because of the highly variable nature of
database applications and the differences in what is deemed acceptable in various
companies or situations, some of these tests may be suitable for certain databases
but totally inappropriate for others (even within the same company). Most of the
predefined tests are customizable to meet the requirements of your organization.
Additionally, to keep your assessments current with industry best practices and
protect against newly discovered vulnerabilities, Guardium distributes new
assessment tests and updates on a quarterly basis as part of its Database Protection
Subscription Service. Refer to the Guardium Administration Guide for more
details.

Predefined Tests include:


v Behavioral Tests
v Configuration Tests

Behavioral Tests
This set of tests assesses the security health of the database environment by
observing database traffic in real time and discovering vulnerabilities in the way
information is being accessed and manipulated.

As an example, some of the behavioral vulnerability tests included are:


v Default users access



v Access rule violations
v Execution of Admin, DDL, and DBCC commands directly from the database
clients
v Excessive login failures
v Excessive SQL errors
v After hours logins
v Excessive administrator logins
v Checks for calls to extended stored procedures
v Checks that user ids are not accessed from multiple IP addresses

Configuration Tests

This set of assessments checks security-related configuration settings of target
databases, looking for common mistakes or flaws in configuration that create
vulnerabilities.

As an example, the current categories, with some high-level tests, for configuration
vulnerabilities include:
v Privilege
– Object creation / usage rights
– Privilege grants to DBA and individual users
– System level rights
v Authentication
– User account usage
– Remote login usage
– Password regulations
v Configuration
– Database specific parameter settings
– System level parameter settings
v Version
– Database versions
– Database patch levels
v Object
– Installed sample databases
– Recommended database layouts
– Database ownership

Query-based Tests

A query-based test is either a pre-defined or user-defined test that can be quickly
and easily created by defining or modifying an SQL query, which is run against a
database datasource, with the results compared to a predefined test value. See
Define a Query-based Test for additional information on building a user-defined
query-based test.
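As a simple sketch of the idea, a hypothetical query-based test for Oracle that
verifies the demo account SCOTT has been removed could use the query below with a
Return type of integer, the = operator, and a Compare to value of 0, so the test
passes only when no such account exists:

SELECT COUNT(*) FROM DBA_USERS WHERE USERNAME = 'SCOTT'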

CAS-based Tests

A CAS-based test is either a pre-defined or user-defined test that is based on a
CAS template item of type OS Script command and uses CAS collected data.



Users can specify which template item to use and test against the content of the CAS
results. See Create a New Template Set Item for assistance on creating an OS Script
type CAS template.

Guardium also comes pre-configured with some CAS template items of type OS
Script that can be used for creating a CAS-based test. These tests can be seen
through the CAS Template Set Definition panel and have a name that contains
the word Assessment. For instance, the Unix/Oracle set for assessments is named
Guardium Unix/Oracle Assessment. Additionally, any template that is added that
involves file permissions will also be used for permission and ownership checking.
See Modify a Template Set Item for viewing these template sets and seeing those
items with type OS Script.

Whether you use a Guardium pre-configured template or define your own, once
defined, these tests will appear for selection during the creation or modification of
CAS-based tests. See Define a CAS-based Test for additional information.

CVE Tests

Guardium constantly monitors the common vulnerabilities and exposures (CVE)
from the MITRE Corporation and adds tests for the relevant database-related
vulnerabilities.

Defining a query-based test


Create a test based on a query that runs an SQL statement.

About this task

You can create a new query-based test by using any of these approaches:
New Start from the beginning and define all the fields.
Clone Clone an existing query-based test.
Modify
Modify an existing query-based test.

Procedure
1. Open the Assessment Builder by clicking Harden > Vulnerability Assessment
> Assessment Builder.
2. From the User-defined tests, click Query-based Tests.
3. Click New, Clone or Modify to open the Query-based Test Builder.
4. Enter a unique Test Name.
5. Select a Database Type.
6. Select a Category.
7. Select a Severity.
8. Optional: Enter a Short Description for the test.
9. Optional: Enter an External Reference for the test.
10. Enter the Result text for pass that will be displayed when the test passes.
11. Enter the Result text for fail that will be displayed when the test fails.
12. Enter the SQL statement that will be run for the test.
Use the following convention to add and reference group members within a
SQL statement:



For example:
To reference a group of users defined for the group MyUsersGroup and
replace it with the actual members of the group use:
Select ... from DBA_GRANTS where ... AND USER in (~~G~MyUsersGroup~~) and ...
This will result in a SQL Statement such as the following where U1, U2, etc
are the members of the MyUsersGroup group:
Select ... from DBA_GRANTS where ... AND USER in ('U1','U2','U3',...) and ...

If the group has no members, the database returns an error. In this case the
reference is replaced with a single pair of quotation marks, like this:
Select ... from DBA_GRANTS where ... AND USER in ('') and ...
Use the following convention to replace a reference to a specific alias (of a
specific group type) with the actual alias:
For example:
Select ... from USER_OBJECTS where ... AND OBJECT_TYPE =
'~~A~GroupType~TYPE~~'
If there is an alias to TYPE of group type GrouptType it will replace the string
and the resulting SQL will look like:
Select ... from USER_OBJECTS where ... AND OBJECT_TYPE = 'TYPE'
where TYPE is the actual ALIAS
13. Optional: Enter a SQL Statement for Detail, a SQL statement that retrieves a
list of strings to generate a detail string of Detail prefix + list of strings. See
the example in Detail prefix.

Note: The detail generated is only displayed when the query-based test fails;
allowing the user to enter a SQL statement that can retrieve the information
that caused the test to fail and help identify the cause of failure.

Note: The Detail string can be seen within the Security Assessment Results by
clicking on the Assessment Test Name, and can also be queried through the Result
Details attribute of the Test Result Entity.
14. Optional: Enter a Pre-test check SQL statement. This statement is run before
running the test. If the statement returns 0, the test is not run. If the statement
returns 1 or an error, the test is run.
15. Optional: Enter a Pre-test fail message. This message is inserted into the
assessment results if the test is not run due to the SQL statement returning 0.
16. Optional: In Loop databases, enter a list of databases through which the test
should loop. The test returns the union or sum of the results returned from all
the specified databases. You can use this function only when the test returns
an integer value, and only with these database types: Informix, SQL Server,
Sybase SE, PostgreSQL and MySQL. The looping is performed if the DB loop
flag box is checked. One or more of the specified databases might be
unavailable when the test is run. In that case the test will either skip that
database and continue, or stop and issue a failure message, depending on
whether the Skip on error box is checked.
17. Optional: Enter a Detail prefix that will appear at the beginning of the detail
string.
Example for SQL Statement for Detail & Detail prefix:
Test that checks for objects with certain grants.
Detail prefix: "Objects found with certain GRANT:"
SQL Statement for Detail: SELECT object FROM....--returning 4 records:
Obj1
Obj2
Obj3
Obj4
==> Details: Objects found with certain GRANT: Obj1, Obj2, Obj3, Obj4
18. Optional: Check the Bind output variable check box if the entered text in SQL
statement is a procedural block of code that will return a value that should be
bound to an internal Guardium variable that will be used in the comparison
to the Compare to value.
Example (Oracle):
declare
retval integer := 0;
strval varchar2(255) := '';
nver number;
sver varchar2(255) := '';
begin
select VERSION
into sver
from V$INSTANCE;
nver := to_number(substr(sver,1,(instr(sver,'.',1,2) - 1)));
if nver >= 11.1 then
select VALUE
into strval
from V$PARAMETER
where NAME = 'sec_case_sensitive_logon';
end if;
if (nver < 11.1 or strval = 'TRUE') then
retval := 0;
else
retval := 1;
end if;
? := retval;
end;
19. Select the Return type that will be returned from the SQL statement.
20. Select the operator that will be used for the condition.
21. Enter a Compare to value that will be compared against the return
value from the SQL statement using the compare operator. It is this
comparison that determines whether the test has passed or failed. You may
also click the RE (regex) button to define a regular expression for the compare
value.
22. Do one of the following:
v Click Back to cancel changes and return to the previous screen.
v Click Apply to save the query-based test.

Results
You can add this newly created query-based test to an assessment.

What to do next

Defining a CAS-based test


Vulnerability Assessments use the CAS mechanism to run OS-level tests on the
database server and identify vulnerabilities.

Before you begin


About this task
You can create a new CAS-based test by modifying an existing CAS-based test or
by starting from the beginning and defining all the fields.



Procedure
1. Open the Assessment Builder by clicking Harden > Vulnerability Assessment
> Assessment Builder.
2. From the User-defined tests, click CAS-based Tests to open the CAS-based
Test Finder panel.
3. Click New or Modify to create a new test.
4. Enter a unique Test name.
5. Select a database from the Database Type menu.
6. Select a category from the Category menu.
7. Select a severity from the Severity menu.
8. Optional: Enter a Short Description for the test.
9. Optional: Enter an External reference for the test.
10. Enter a Result text for pass that will be displayed when the test passes.
11. Enter a Result text for fail that will be displayed when the test fails.
12. Enter a Recommendation text for pass that will be displayed when the test
passes.
13. Enter a Recommendation text for fail that will be displayed when the test
fails. To prevent cross-site hacking, any name from the following list, if used
in the Recommendation text for fail text box, will be rewritten: expression;
function; javascript; script; alert; eval; <img; ContentType
14. Select a template to use from the CAS Template menu.
15. Select an operator to use from the operator menu.
16. Enter a Search string that will be used with the operator to compare against
what is returned from the CAS template. This comparison determines whether
the test passes or fails. You may also click the RE icon to define a regular
expression for the search string.
17. Optional: Check the Fail if match check box if the test should fail when a
match is made with the search string.
18. Click Apply to save the CAS-based test.

Results

You can add this newly created CAS-based test to an assessment.

Assessments
Assessments are a group of tests that scan database infrastructures for
vulnerabilities and provide an evaluation of database and data security health with
real-time and historical measurements.

Creating an assessment
Create an assessment, or modify or clone an existing assessment.

Before you begin


Open the Assessment Builder by clicking Harden > Vulnerability Assessment >
Assessment Builder.



About this task
Procedure
1. In the Security Assessment Finder panel, click New to create an entirely new
assessment. Click Clone or Modify to work with an existing assessment.
Clicking any of these buttons opens the Security Assessment Builder panel. If
you are creating an entirely new assessment, complete all of the following
steps. If you are cloning or modifying an existing assessment, enter a new
Description and then modify only the fields that you want to change.
2. Enter a unique Description for the assessment
3. Add a datasource by clicking Add Datasource, entering the required
information, and clicking Add.
4. Add tests to the assessment by clicking Configure Tests.
a. From the Tests available for addition pane, select the appropriate tab for the
datasource you added previously.
b. Select the tests you want, and click Add Selections to add them to the
assessment. Once added, your selections will appear in the Assessment Test
Selections pane.
c. Use the Assessment Test Selections to manage tests for your assessment.
Delete any selected test, or click Adjust this test's tuning for any test to
customize the test's parameters.
5. Add Roles to the Assessment.

Note: You cannot assign roles to an assessment until you have assigned roles
to the datasources it is based on.
6. Click Apply to save the assessment.
Click CAS Support to supply appropriate data for an assessment.
You can also Add Comments to any assessment to document or log what
changes were made to assessments and why.

Results

Your new assessment is ready to be run.

Creating a VA Test Exception


Use a test exception to exclude specific members of a group from a security
assessment. Run the security assessment against the exception group to see if a
specific member of a group is affecting your assessment results. This is useful if
you do not want to or are not authorized to change group settings.

Procedure
1. Open the Group Builder by clicking Setup > Tools and Views > Group
Builder.
2. Select VA Tests Exception from the Group Type menu to view the list of
predefined exception groups.
3. Select a group from the Modify Existing Groups menu and click Modify.
4. Add the group members that you want to exclude from the VA test.
5. Open the Assessment Builder by clicking Harden > Vulnerability Assessment
> Assessment Builder. Select an assessment from the Security Assessment
Finder and click Configure Tests.



6. Find the test you want to add the exception to, and click the test's Adjust this
test's tuning button in the Tuning column.
7. Select your exception group from the menu, and click Save. Run your
assessment again to see if the exception group affects the outcome of the test.

Note: By default, Guardium includes an exception group called IBM iSeries
Profile User Exclusions. You can clone and modify this group to suit your
needs.

All the Database Objects privilege tests exclude default system schemas from
Guardium groups.

How to create a security assessment


Run security assessments against selected datasources to proactively identify and
address vulnerabilities, improve configurations, and harden infrastructures.

About this task


The basic steps for creating a security assessment are:
1. Create the assessment
2. Add datasources to the assessment
3. Add tests to the assessment

Procedure
1. Create or modify an assessment by opening the Assessment Builder. Open the
Assessment Builder by clicking Harden > Vulnerability Assessment >
Assessment Builder.



2. Create a new security assessment by clicking New.



3. Enter a unique name for the assessment in Description and click Apply to save
the assessment.

4. Add a datasource to the assessment by clicking Add Datasource. Select a
datasource from the Datasource Finder and click Add. You can also add a new
datasource by filling in the information in the Database Definition window and
clicking Apply. See Datasources for assistance.

After clicking the Add button, the datasource will appear in the Datasources
section of the Security Assessment Builder.



5. Click Apply to save the assessment.

6. Click Configure Tests to add tests to the assessment. In the Tests available for
addition panel, click the tab for the appropriate datasource you created, select
the tests you want to add to the assessment, and click Add Selections. Use the
radio buttons to filter the tests to be added. See Predefined Tests, Query Based
Tests, or CVE Tests for assistance.

7. Click Back to return to the Security Assessment Builder, and click Roles to add
roles to the assessment.

Note: You cannot assign roles to the assessment until you have assigned roles
to the datasources the assessment is based on.
8. Save your assessment by clicking Apply. The assessment can now be run
against the selected datasources.

Running an assessment
To get the results of an assessment, it must be run once it is created.

Assessments run in a serialized mode, one after the other. If more than one
assessment is scheduled to run, they are queued. This queue can be
viewed through the Guardium Job Queue report.

Clicking the Run Once Now button will enter the assessment into the queue and
immediately run it. A short period of time is required for the job to be executed
and become viewable. See Viewing assessment results for more information on the
results of an assessment.

You can optionally define and schedule an automated process for running an
assessment definition. The Audit Process Finder panel is the starting point for
creating or modifying an audit process schedule; go there to create a schedule that
automatically runs your assessments. See Compliance Workflow Automation for
assistance in defining an audit process.

Viewing assessment results


You can take various actions while you view the results of an assessment.



View Results of an Assessment
View the results of an assessment in the Report Builder. Open the Report Builder
by clicking Harden > Reports > Report Builder, and use the filters to find the
report you are looking for.

Interpreting the Results of an Assessment


An assessment evaluates multiple tests based on multiple reports. The overall
results are displayed in a separate browser window entitled Security Assessment
Results and have the following sections:

Assessment Identity

The assessment results identify:


v The assessment name
v The date and time the assessment was run
v The time period for the assessment
v The Client and Server IP addresses or subnets

Assessment Selection

Use the drop-down menu to select and display past results for an assessment. The
latest result is displayed by default.

Assessment Results History


The Assessment Results History shows the percentage of tests passing over a
period of time. Further recommendations to improve the percentage of passing
tests are given under the Assessment Test Results section.

View log

When clicked, the Execution Log is displayed in a new window that shows
the runtime execution of the assessment test. A timestamp, along with events and
messages, can aid in debugging issues that might have caused certain tests to
fail.

Results Summary

A tabular graph summarizes all the tests that were executed within this
assessment. The X-axis represents the test’s severity (CRITICAL, MAJOR, MINOR,
CAUTION, or INFO). The Y-axis represents the type of test (Privilege,
Authentication, Configuration, Version, or Other). Within the grid is the
representation of the number of tests that have either Passed, Failed, or had an
Error when trying to execute. These numbers are directly related to the detail for
the assessment tests that is given under the Assessment Test Results section.

Current filtering applied

If you would like to change the filtering from what is currently applied, use the
following two options to filter the results as you would like:

Reset Filtering - Removes all filtering options selected through the Filter / Sort
Controls options.



Filter / Sort Controls - Use this to open a filter/sort options for the report. Options
allow you to filter by Severities, Datasource Severity Classification (DS sev. class),
Scores (pass, fail, or error), and Test Types (Observed/Database type). The sort
option allows you to sort across combinations of severity, score, and datasource.
Click Apply when you would like the chosen filter/sort options to take effect.

Assessment Test Results


The Assessment Test Results section provides a detailed description of the test
taken, information about the target datasource and datasource severity
classification, and the test's Pass/Fail status, severity, the external reference, and
reason for the current status. Each test name is clickable and will filter all
information off the report except for relevant information about that particular test.
A hover-over feature on the Reason field will display the recommendation to help
remedy failed tests or tests in error.

The assessment results include a count of the number of tests and the number of
passed tests in each of these categories:
v CIS tests
v CVE tests
v STIG tests
These values are displayed in the assessment result viewer and available for
reporting as part of the VA results domain.

Datasource Details

When expanded, the Datasource Details section will show all of the datasources
that were referenced within this assessment including the datasource's specific
environmental information.

CVE and CVSS information


CVE Records and CVSS information will be displayed in the Assessment test result
viewer.

The reference links are clickable (opens new window). Either section will be absent
when there is no corresponding record for a result.

The CVSS fields of interest are:


v CVSS Score
v Access Complexity
v Availability Impact
v Confidentiality Impact
v Integrity Impact
v Authentication
v Access Vector
v Source
v Generated on Datetime



Working with failed tests
If some of the tests in your assessment show a failed status, you might want to
take one of these actions:
Add an exception for the test
This action causes the test to always pass for a period of time. For
example, you might have a group of servers that fail a test that checks that
the latest available service updates are applied. You cannot apply the
updates until your weekend maintenance window. You do not want the
test to keep failing until that time. Right-click on the word Fail in the
results panel and an Add Test exception popup menu appears. Specify an
end date and time for the exception, and optionally a comment. The test
will pass, on all datasources, whenever it is run before the exception
expires, whether it is run from this assessment or as part of another
assessment.
Add failing elements to an exception group
When a test fails, you can view more information by clicking the name of
the test. The new panel will include an area titled Details. Elements of the
test that failed are displayed after this heading. If any elements are
displayed, you can add them to an exception group for this test. To do this,
click the heading Details: to open a new dialog. This dialog displays the
failing elements, with a check box next to each one. Check the boxes for
the elements that you want to add to an exception group and clear the
other check boxes. Then select a group. If a default exception group is
defined for this test, it will appear pre-selected in the dialog. A drop-down
list displays all other groups of type VA test exception that have been
defined. To choose a group from the list, click the radio button next to the
list, then choose the group from the list. Click Save to implement your
choices. To add remaining elements to a different group, click Details
again.

Export to PDF or to SCAP or AXIS XML

You can generate a PDF version of Assessment result by clicking Download PDF.

Use the Download XML button to open two menu choices: Download as SCAP
xml and Download as AXIS xml. Choose one of these selections in order to
download to your workstation an XML file representing the displayed assessment
results. The file can be formatted for Security Content Automation Protocol (SCAP)
XML or Apache EXtensible Interaction System (AXIS) XML, which is used by
QRadar.

VA summary
The following table lists the information displayed per test and database key in the
VA summary table: test result by unique identifier; cumulative failed age; first failed
date / last failed date; last passed date; and last scanned date. This information is
tracked, and users can create a report on it.

VA Summary

The key may include, in addition to the three original elements, the datasource
Name. The default is Host, port and Instance Name.

Use VA Summary Tracking in Query Builder to define queries and reports.



This table can be exported/imported. Import Data will override existing data on
the Guardium system (per key).
Table 218. VA Summary
Table Column          Type         Description
VA_SUMMARY_ID         Int          Auto-increment – primary key
DATA_SOURCE_HASH      Varchar(40)  Hash for the Key
DB_TYPE               Varchar      Database Type
SERVICE_NAME          Varchar      Database instance Name (if part of the key, "N/A" otherwise)
DB_PORT               Varchar      Database Port (if part of the key, "N/A" otherwise)
DB_HOST               Varchar      Host / IP (if part of the key, "N/A" otherwise)
TEST_ID               Int          Id of the Test
FIRST_EXECUTION       DateTime     First time the test was executed
LAST_EXECUTION        DateTime     Last time the test was executed
FIRST_FAIL            DateTime     First time the test failed on this DB
LAST_FAIL             DateTime     Last time the test failed on this DB
FIRST_PASS            DateTime     First time the test passed on this DB
LAST_PASS             DateTime     Last time the test passed on this DB
CURRENT_SCORE         Varchar      Pass / Fail / Error
CURRENT_SCORE_SINCE   DateTime     Date since the test is in the current status
CUMULATIVE_FAIL_AGE   Int          Cumulative fail age (in days)
CUMULATIVE_PASS_AGE   Int          Cumulative pass age (in days)

The CLI commands are store va_test_show_query and show va_test_show_query.
Use export va_summary to export this information.

The GuardAPI commands to change or display the key are grdapi
modify_va_summary_key and grdapi reset_va_summary_by_key. The GuardAPI
command to reset cumulative ages, both pass and fail, is grdapi
reset_va_summary_by_id. Use grdapi export_va_summary to export this
information.

An additional parameter, datasourceName, has been added to grdapi
reset_va_summary_by_key and grdapi modify_va_summary_key.

The VA Summary entity has an additional attribute, Datasource Name, that is
populated ONLY if the datasource name is part of the key.

Note: The GuardAPI command modify_va_summary_key allows the key to be
empty: call it with all four parameters (useHost, usePort, useServiceName, and
useDatasourceName) set to false. When the key is empty, the VA Summary
calculation is disabled (no summary data is calculated, updated, or saved).

Required schema change
The schema used by vulnerability assessment tests on IBM DB2 for z/OS changed
in Guardium V9.1. If you upgrade from a release prior to 9.1, you must update
your database in order to continue using these tests.

About this task


When you upgrade your Guardium system to version 9.1, you must create new
database tables on your database server. These tables add support for a new set of
tests, but you must create them whether you want to use the new tests or not.

In prior releases you created and populated tables in the gdmmonitor schema:
v GDMMONITOR.OS_GROUP
v GDMMONITOR.OS_USER
These tables are replaced by tables in the CKADBVA schema:
v CKADBVA.CKA_OS_GROUP
v CKADBVA.CKA_OS_USER

Procedure
1. Install Guardium 9.1.
2. Copy create_CKADBVA-schema_tables_zOS.sql from the /var/log/guard/
gdmmonitor_scripts directory on your Guardium system to your database
server. Use the fileserver command to retrieve the file.
3. The script contains instructions that describe steps to be performed before and
after running the script. Read these instructions and run the script.
4. Populate the new tables with data similar to the data that was stored in the old
tables.

Results
Your system is now configured to use current vulnerability assessment tests.

What to do next

Assessing RACF vulnerabilities


If you use IBM DB2 for z/OS, you can use vulnerability assessment tests to assess
your RACF vulnerabilities. You must have at least version 9.1 of Guardium
installed to use RACF assessments.

About this task

Assess your Resource Access Control Facility (RACF) privileges whether they are
granted within the database or external to the database. The tests that comprise
the RACF vulnerability assessments identify the access control for object
privileges, database privileges, and system privileges.

In order to use these tests, you must obtain and install IBM Security zSecure Audit,
Version 2.1. This product enables the commands that are used in these tests to
interact with RACF.

Tests that examine entitlements do not return a pass/fail grade; they return a list
of entitled users. Examples of these reports include table and view privileges
granted to grantees and package privileges granted to grantees. In a large
environment that includes very large numbers of users and applications, these
reports generate an overwhelming amount of data. When you run these reports in
such a large environment, the process can run for a long time and consume large
amounts of resources, and it might eventually time out.

Procedure
1. Upgrade the database schema used to support vulnerability assessment on your
database server.
2. Install zSecure Audit on your database server. Use the instructions and tools
that are provided with zSecure Audit to learn how to populate approximately
24 tables in the CKADBVA schema to support the new zSecure tests.
3. The zSecure team will issue a PTF that enables zSecure Audit to work with
Guardium vulnerability assessment. Obtain this PTF and apply it according to
the accompanying instructions.

Results

Your system is now configured to take advantage of the new zSecure tests.

What to do next

Choose the new tests that you want to run to assess your RACF vulnerabilities.
Configure and run the tests.

Configuration Auditing System


The Configuration Auditing System (CAS) tracks changes to the database server
environment and reports on them. The data is available on the Guardium system
and can be used for reports and alerts.

Configuration Auditing System Overview

Databases can be affected by changes to the server environment; for example, by
changing configuration files, environment or registry variables, or other database
or operating system components, including executable files or scripts used by the
database management system or the operating system. CAS tracks such changes
and reports on them. The data is available on the Guardium system and can be
used for reports and alerts.

Note: Vulnerability Assessment (VA) and Configuration Auditing System (CAS)
are only supported in English.

CAS Agent
CAS is an agent installed on the database server and reports to the Guardium
system whenever a monitored entity has changed, either in content or in
ownership or permissions. You install a CAS client on the database server system,
using the same utility that is used to install S-TAP. CAS shares configuration
information with S-TAP, though each component runs independently of the other.
Once the CAS client has been installed on the host, you configure the actual
change auditing functions from the Guardium portal.

CAS Server
The CAS server is a component of Guardium and runs on the Guardium system. It
runs as a standalone process, independent of the Tomcat application server. It is
controlled through the inittab file.

The CAS server is configured to use only a few of the available processors on the
Guardium system. The number of processors that CAS uses is determined by using
the parameter divide_num_of_processors_by. This parameter is stored in the
cas.server.config.properties file and its default value is 2. The number of
available processors on the Guardium system is divided by this value. This ensures
that even when CAS uses 100% of the CPU on the allocated processors, the rest of
the processors are available for use by other applications.

CAS Server Authentication

In addition to the basic security SSL provides, Guardium provides CAS Server
authentication support on the CAS client that runs on the database server. This
guarantees that the CAS client communicates only with Guardium's CAS server.
Unauthenticated connections and Common Name (CN) mismatches are reported
in the CAS log file.

When configured, the CAS server loads a signed certificate and a private key at
startup and assigns them to the server socket on which it accepts connections. On
the database server side, the CAS client supports the following connection modes:
1. Non-secure connection (use_tls='0').
2. Secure connection without authentication (use_tls='1',
guardium_ca_path=NULL). This mode forces the use of SSL as the means of
communication with the CAS server (that is, SSL without server
authentication).
3. Secure connection with server authentication (use_tls='1',
guardium_ca_path=<public key location>). The public key is used by the CAS
client to authenticate the CAS server. The public key (ca.cert.pem) is located
under <install_dir>/etc/pki/certs/trusted.
ca.cert.pem is a file containing root Certificate Authority certificates (which
are self-signed). The browser equivalent would be trusted CA certificates,
such as VeriSign's.
All gmachine certificates are issued and signed by the root authority; that is
how they are validated and how the chain of trust is established.
guardium_ca_path can be set either to the full path including the actual public
key file name, or to just the directory name (<install_dir>/etc/pki/
certs/trusted), in which case all the public keys within that directory are used
to authenticate the server. If guardium_ca_path is set to a file or directory
that does not contain the public key, the connection attempt fails.
4. Secure connection with server authentication and common name verification.
This mode adds a check in which the certificate CN from the server is
compared with the one set in the sqlguard_cert_cn parameter. If
sqlguard_cert_cn is NULL or empty, this check is disabled. Otherwise it must
be set to the same CN as Guardium's self-signed certificate ('gmachine').

Note: All the parameters mentioned are from the guard_tap.ini file.
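For example, a guard_tap.ini sketch of mode 4 (secure connection with server
authentication and CN verification); the parameter names and the 'gmachine' CN
are from this section, while the exact certificate path shown is illustrative:

use_tls=1
guardium_ca_path=<install_dir>/etc/pki/certs/trusted/ca.cert.pem
sqlguard_cert_cn=gmachine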

Using SSL with CAS
You can configure the CAS agent to use a Secure Sockets Layer (SSL) connection to
send data to the CAS server. The CAS server that is installed with version 10.0
complies with the requirements of US Federal Information Processing Standard
140-2 (FIPS 140-2). Only a FIPS-compliant CAS agent can communicate with this
CAS server using SSL. If you want to use this approach, you must upgrade your
CAS agents to the version delivered with this release. You must also have IBM Java
installed on the server where the CAS agent runs, and the CAS agent must be
configured to use it. In order to use FIPS communication, certificate-based
authentication must be in use.

If you attempt to use an older CAS agent to communicate with the updated CAS
server using SSL, you will see this message in the log file on the CAS agent
system:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure

You might also see this message in the CAS log file on the Guardium system:
javax.net.ssl.SSLHandshakeException: Client requested protocol SSLv3 not
enabled or not supported

If you want to use a non-SSL connection between the CAS agents and the CAS
server, you can continue to use your existing CAS agents.

Template Set

A CAS template set contains a list of item templates that are bundled together and
share a common purpose, such as monitoring a particular type of database (Oracle
on Unix, for example). A template set is one of two types:
v Operating System Only (Unix or Windows)
v Database (Unix-Oracle, Windows-Oracle, Unix-DB2, Windows-DB2, etc.)

A database template set is always specific to both the database type and the
operating system type.

CAS Template Item


A CAS template item is the definition (the set of attributes) of a monitoring task for
a single monitored entity. Users can define new CAS tests by creating new CAS
templates, or they can use the predefined CAS templates, which can be modified.

A template item is a specific file or file pattern, an environment or registry
variable, the output of an OS or SQL script, or the list of logged-in users. The state
of any of these items is reflected by raw data, i.e. the contents of a file or the value
of a registry variable. CAS detects changes by checking the size of the raw data, or
computing a checksum of the raw data. For files, CAS can also check for system
level changes such as ownership, access permission, and path for a file.

In a federated environment where all units (collectors and aggregators) are
managed by one manager, all templates are shared by both collectors and
aggregators, and CAS data can be used in reporting or vulnerability assessments.
When the collector and aggregator (or the host where archived data is restored) are
not part of the same management cluster, the templates are not shared, and
therefore CAS data cannot be used by vulnerability assessments even when the
data is present. To remedy this, use export/import of definitions to copy the
templates from the collector to the aggregator (or restore target).

Note: CAS should not be asked to monitor more than 10,000 files per client.

Note: It is recommended to configure CAS to handle no more than 1,000
monitored files per hour.

Monitored Entity

The actual entity being monitored. This can be a file (its content and properties),
the value of an environment variable or Windows registry entry, or the output of
an OS command, script, or SQL statement.

CAS Instance

The application of a CAS template set to a specific host (creating an instance of
that template set and applying it to that host).

CAS Configuration

A CAS configuration defines one or more CAS instances, each of which identifies a
template set to be used to monitor a set of items on that host.

Default Template Sets

For each supported operating system and database type, Guardium provides
preconfigured default template sets for monitoring a variety of databases on either
Unix or Windows platforms. A default template set is one that will be used as a
starting point for any new template set defined for that template-set type. A
template-set type is either an operating system alone (Unix or Windows), or a
database management system (DB2, Informix, Oracle, etc.), which is always
qualified by an operating system type - for example, UNIX-Oracle, or
Windows-Oracle. Many of the preconfigured, default template sets are used within
Guardium's Vulnerability Assessments where, for example, known parameters, file
locations, and file permissions can be checked. See Vulnerability Assessment for
additional information.

You cannot modify a Guardium default template set, but you can clone it and
modify the cloned version. Each of the Guardium default template sets defines a
set of items to be monitored. Make sure that you understand the function and use
of each of the items monitored by that default template set and use the ones that
are relevant to your environment. After defining a template set of your own, you
can designate that template set as the default template set for that template-set
type. After that, any new template sets defined for that operating system and
database type will be defined using your new default template set as a starting
point. The Guardium default template set for that type will not be removed; it will
remain defined, but will not be marked as the default.

Rationale for creating template sets to meet specific database configurations

Although Guardium supplies predefined CAS template sets for each database type,
the wide variety of possible database configurations means that you might have to
adjust the predefined template sets or create new ones to meet all of your needs in
a production environment -- particularly as regards database software and data
file locations. Plan on creating additional templates if you want CAS to monitor
ownership of, permissions on, and changes to your database files.

For example, the predefined CAS template set for Oracle contains these templates,
among others:
v $ORACLE_HOME/oradata/../.*dbf
v $ORACLE_HOME/oradata/../.*ctl
v $ORACLE_HOME/oradata/../.*log
v $ORACLE_HOME/../init.*.ora

As you can see, these file-pattern templates all start with the same root,
$ORACLE_HOME (NOTE: This is not necessarily the $ORACLE_HOME
environment variable defined on your database server; by preference, CAS uses the
datasource field “Database Instance Directory” as the value for $ORACLE_HOME).

It is possible that in a production environment your Oracle data files will not be in
the same directory tree, or even on the same device, as your log files, and your
Oracle configuration files might be in still another location.

You might create additional CAS templates using absolute paths to allow CAS to
find and monitor all of your Oracle files, for example:
v /u01/oradata/mydb/*.dbf
v /u02/oradata/mydb/*.dbf
v /u03/oradata/mydb/*.dbf
v /u01/oradata/mydb/*.ctl
v /u02/oradata/mydb/*.ctl
v /u03/oradata/mydb/*.ctl
v /home/oracle11/admin/mydb/bdump/*.log
v /home/oracle11/product/11.1/db_1/dbs/init*.ora

You can even use additional environment variables that are defined in your Oracle
instance account. As an example, if you have variables defined as $ORA_DATA1,
$ORA_DATA2 and $ORA_SOFT you can use:
v $ORA_DATA1/mydb/*.dbf
v $ORA_DATA2/mydb/*.dbf
v $ORA_DATA1/mydb/*.ctl
v $ORA_DATA2/mydb/*.ctl
v $ORA_SOFT/admin/mydb/bdump/*.log
v $ORA_SOFT/product/11.1/db_1/dbs/init*.ora

Sourcing files from different locations


CAS templates assume that certain files, such as user profiles, are in specific
locations. You can configure CAS to look for these files in other locations that you
specify by using a regular expression. To use this feature, add the
user_profile_files parameter to the cas.client.config.properties file in the
config directory. The format for each entry is
identifying_string=comma-separated list of files

For example, suppose that you want to find .profile files in any DB2 user’s home
directory. For this example we assume that the names of all of these home
directories include the string "db2." Add this line to the properties file:
user_profile_files=.*db2.*=.profile

If you need to specify more than one pattern, use the bar symbol (|) to separate
patterns. If you want to add the profiles of your mysql users to the previous entry,
replace the previous example with this:
user_profile_files=.*db2.*=.profile|.*mysql.*=.profile

CAS Start-up and Failover


Various failover and connect parameters can be modified through S-TAP Control
Change Auditing.

When the CAS client starts on the host, it looks for a checkpoint file that it may
have written to the system. This file tells CAS what it was doing the last time it
was running. CAS then connects to its Guardium system. If it has found a
checkpoint file, CAS will ask the Guardium system to verify its version of its
monitoring assignment against what is stored in the Guardium database. While the
CAS client and the Guardium system have been disconnected, there may have
been changes to the assignment. When any differences are resolved, CAS will
resume monitoring. If CAS does not find a checkpoint file, it will ask the
Guardium system what it should do. If the Guardium system finds the CAS host
in its database, then the associated template sets will be sent to the CAS client,
expanded into monitored items, and monitoring will begin. If the Guardium
system cannot find the CAS host in its database, it will add it to the database and
send the default template set for the CAS host operating system.

When connectivity is lost between the CAS client and the Guardium system, it can
take up to five minutes (the maximum time a CAS client waits to receive a
message from the Guardium system) for the client to discover that it has lost
contact with the primary Guardium system, although this can happen sooner if a
communication error is detected.

If the CAS client loses its connection to the Guardium system or cannot make an
initial connection, it opens a failover file and begins writing to it the messages that
it would have sent to the Guardium system. The path to this failover file is stored
in guard_tap.ini as cas_fail_over_file. When communication is reestablished,
the CAS client shuts down and restarts, sends all
messages stored in the failover file to the Guardium system, and deletes the file. If
the CAS client was unable to make the initial connection, it will use the checkpoint
file to determine what to monitor, and continues doing what it was doing before
communication failed.

When communication is lost, the client also starts a thread which periodically tries
to reconnect with the primary Guardium system. The number of times CAS will
attempt to reconnect, and the average time interval between reconnect attempts,
are configurable parameters. It will try to reconnect for a period of time set in
guard_tap.ini with the name cas_server_failover_delay. After that time has
passed, the client will also try to connect to any secondary servers identified in
guard_tap.ini. The secondaries will be tried in the order of the value of the
primary attribute listed in the SQL_Guard sections of guard_tap.ini. When
primary is not 1, it is a secondary. While the client is connected to a secondary
server it will continue to try to reconnect to the primary server.

If the reconnect attempt limit is met, the CAS client stops trying to reconnect, but
continues to write data to a failover file. To cap disk space requirements on the
database server, there are actually two failover files. CAS writes to one file until it
reaches its maximum failover file size (which is configurable), and then switches to
the other, overwriting any previous data on that file. The default failover file size is
50MB (for each of the files).

You can specify one or more secondary Guardium systems when configuring the
CAS client. In failover mode, CAS only tries to reconnect to its primary server
until the time specified by cas_server_failover_delay in guard_tap.ini is
exceeded. At that time, CAS begins trying to connect to any of the secondary
servers, as well as its primary server (which is always the first server it tries to
connect with during any reconnect attempt). While it is connected to a secondary
server, CAS continues to try to reconnect to its primary server.

Changes to the CAS client configuration can only be made from the primary server
and only while the host is online. Whenever the configuration of the CAS client is
changed on the primary server and Guardium system is in standalone
configuration, an export file is saved on the host. If the CAS client connects to a
secondary server, the saved export file is imported from the host to the secondary
server.

There is no need to separately maintain configurations on both primary and
secondary servers. However, if on the primary server, the parameters for an
individual monitored item have been changed from those defined in the template,
then these changes will not be transferred to the secondary server. For example,
even if the test interval on a particular file was changed from the template default
of 1 hr to 10 min, the test interval on the secondary server will again be 1 hr.
Essentially, monitored items are regenerated from the templates of the imported
configuration. The delay before searching for secondary servers is based directly on
time rather than failover file size. The delay is set with the
cas_server_failover_delay parameter in guard_tap.ini and has a default of 60
minutes.
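A minimal guard_tap.ini sketch of the failover-related settings discussed here; the
parameter names cas_fail_over_file, cas_server_failover_delay, and primary are
from this section, while the section names, the sqlguard_ip lines, the file path, and
the host names are illustrative assumptions:

cas_fail_over_file=/var/tmp/cas_failover.dat
cas_server_failover_delay=60
[SQL_Guard_0]
sqlguard_ip=primary-guardium.example.com
primary=1
[SQL_Guard_1]
sqlguard_ip=secondary-guardium.example.com
primary=2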

Various failover and connect parameters can be modified through S-TAP Control
Change Auditing.

As with S-TAP, CAS connectivity outages create exceptions on the Guardium
system, so alerts can be issued within moments of detecting the outage.

Setting Up and Maintaining Secondary Servers


In the S-TAP/CAS configuration file on the database server system, one or more
secondary Guardium servers can be defined. If the primary Guardium server
becomes unavailable, CAS on that database server system will connect to a
secondary Guardium system (as described previously, see Start Up and Failover).

Rules of Failover
Rule #  Guardium system  Fails over to                 Valid
1       stand-alone      stand-alone                   Yes
2       managed          managed (same manager)        Yes
3       managed          managed (different manager)   No
4       managed          stand-alone                   No
5       stand-alone      managed                       No

CAS Failover Limitations


1. CAS instances will not be relocated to the failed-over Guardium system when
the source Guardium system is a managed unit and the target Guardium
system is either:
v a stand-alone Guardium system
v a managed unit which is being managed by a different manager
2. The CAS import/export option is limited to manager and stand-alone machines
only.

Exporting CAS Hosts


1. Click Manage > Aggregation & Archive > Export to open the Definitions
Export panel. Select CAS Hosts from the Type menu, select the definitions to
export from the Definitions to Export menu, and click Export.
2. A file named exp_<date>_<time>.sql is saved on your system. This file will
contain the definitions of all CAS hosts selected, and the definitions of any
template sets used by those CAS hosts.

Importing CAS Hosts


1. Click Manage > Aggregation & Archive > Import to open the Definitions
Import panel.
2. Use the Browse and Upload buttons to select files and upload them, then select
the definition from the Import Uploaded Definitions pane.
3. Click Import this set of definitions to import the definition.
4. Confirm the selected action (or not).

Note: An import operation does not overwrite an existing definition. If you
attempt to import a definition with the same name as an existing definition,
you are notified that the item was not replaced. If you want to overwrite an
existing definition with an imported one, you must delete the existing
definition before performing the import operation.

Maintaining Secondary Servers for a CAS Host


CAS configurations can also be maintained through the use of export and import
operations. Since the import operation will not replace an existing definition, on
each secondary server you must delete the old CAS host definition before
importing the new one.

Be sure to perform this procedure only while the selected CAS host is connected to
its primary server.
1. Export the definition of the CAS host (see the previous section).
2. On each secondary server:
v Delete the old CAS host definition that you want to replace.
v Import the definitions that were exported from the primary server (see
Importing CAS Hosts, previous).

CAS Client Installation
The CAS client agent is typically installed together with the S-TAP agent. It can be
installed later under Windows from the installation DVD, or under Unix by
running the installation script install_cas.sh, which is located in the S-TAP
installation directory (by default, /usr/local/guardium/guard_stap).
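On Unix, a later installation might look like the following sketch (the directory and
script name are from this section; run it with the privileges your environment
requires):

cd /usr/local/guardium/guard_stap
sh ./install_cas.sh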

CAS Client Ignore Change Alerts

The CAS client agent can avoid sending change notifications to the CAS server
based on predefined settings.

The CAS client agent looks for the parameter ignore_change_alerts in its
cas.client.config.properties configuration file.

If the parameter is not found or not set, the CAS client works without any changes
and the Ignore change alerts functionality is not enabled (that is, the CAS client
alerts on any file change).

If the parameter is set, the CAS client agent does not send change notifications for
the change types specified in the parameter value.

The possible change-types are:

PERMISSION, SIZE, OWNER, GROUP, TIMESTAMP

To ignore multiple change types, concatenate any of the specified change types
with the + delimiter.

For example:

In order to avoid sending change notification on OWNER and GROUP changes, set
up the parameter as follows:

ignore_change_alerts=OWNER+GROUP

Note: At initial installation, or when a new template is defined, the FIRST scan of
the files is performed and these files appear in the CAS changes report regardless
of the Ignore change alerts settings.

Correcting an invalid non-IP hostname


If the CAS agent is installed with an invalid tap_ip (guard_tap.ini parameter) or
CAS_TAP_IP (GIM parameter), Windows datasources defined for that host might
be unusable (if they are used for activity that requires access to the remote
database).

If this happens, delete the datasource and change the tap_ip parameter to the
correct database server hostname or IP address.
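For example, the corrected guard_tap.ini entry might look like this sketch (the
host name is hypothetical):

tap_ip=dbserver01.example.com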

CAS Templates
Guardium provides a set of CAS templates, one for each type of data repository.

CAS templates - DB2

OS Script

Designates an OS script to be executed. Must begin with the variable $SCRIPTS,
which refers to the scripts directory beneath the CAS home directory, and identify
the script to be executed, e.g., $SCRIPTS/db2_spm_log_path_group_test.sh. The
script itself must, of course, reside in the CAS $SCRIPTS directory. Output from
the script is stored in the Guardium database to be used by security assessments.
This can be either a shell/batch script to be run, or a set of commands that could
be entered on the command line. Because of the fickle nature of Java's parsing it is
suggested that any but the simplest commands be put into a script rather than run
directly. On Unix the script is run in the environment of the OS user entered. Three
environment variables will be defined for the run environment which the user
could use in writing scripts: $UCAS is the DB username, $PCAS is the DB
password, and $ICAS is the DB instance name. For Windows these three values
will be appended as the last three arguments to the batch file execution. For
example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1
my-arg2, then %3, %4 and %5 would be the DB username, password, and instance
name respectively.
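As an illustration only, an OS Script template entry such as
$SCRIPTS/my_db2_check.sh (a hypothetical script name, not a shipped template)
could use the variables that CAS defines in the run environment; the db2 commands
and database name shown are illustrative:

#!/bin/sh
# Hypothetical sketch: use the credentials that CAS supplies
# ($UCAS = DB username, $PCAS = DB password, $ICAS = DB instance name)
echo "Checking instance $ICAS"
db2 connect to sample user "$UCAS" using "$PCAS" > /dev/null
db2 -x "select count(*) from syscat.tables"
db2 connect reset > /dev/null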

File

Designates a file to be tracked and monitored by security assessments. The path to
the file can be absolute, or relative to the $INSTHOME variable. Set the value of
the $INSTHOME variable in Database Instance Directory on the Datasource
Definition panel. This is assumed to name a single file. Environment variables from
the OS user environment can be used in the file name and will be expanded. For
example, $HOME/START.sh will name the startup script in the DB2 user's home
directory.

File Pattern

Designates a group of files to be tracked and monitored by security assessments.


The path to the files can be absolute, or relative to the $INSTHOME variable. Set
the value of the $INSTHOME variable in Database Instance Directory on the
Datasource Definition panel. A .. in the path indicates one or more directories
between the portion of the path before it and the portion of the path after it. A .+
in the path indicates exactly one directory between the portion of the path before it
and the portion of the path after it. For example: $INSTHOME/sqllib/../db2.* is
just a short-hand for creating many single file identifications from a single
identification string, a file pattern which will match all files in the directory. A file
pattern can be viewed as a series of regular expressions separated by /'s. A file is
matched if each element of its full path can be matched by one of the regular
expressions in order. If an element of the pattern is an environment variable, it is
expanded before the match begins. If .. is one of the elements of the pattern, it will
match zero or more directory levels. For example, /usr/local/../foo will match
/usr/local/foo and /usr/local/gunk/junk/bunk/foo. Using more than one ..
element in a file pattern should not be necessary and is discouraged because it
makes the pattern very slow to expand. Because of the confusion with its use in
regular expressions \ cannot be used as a separator as it might be in Windows.

Additionally, the Guardium Unix/DB2 Assessment: UNIX - DB2 for Unix set
includes the following templates:

Db2govd Setuid Bits Is Not Set

This test monitors that the SETUID bit on DB2GOVD has been disabled

Db2start Setuid Bits Is Not Set

This test monitors that the SETUID bit on DB2START has been disabled

Db2stop Setuid Bits Is Not Set

This test monitors that the SETUID bit on DB2STOP has been disabled

File ownership

This test monitors file ownership, and changes thereto, of DB2 files.

File permissions

This test monitors file permissions, and changes thereto, of DB2 files.

CAS templates - Informix

OS Script

Designates an OS script to be executed. Must begin with the variable $SCRIPTS,
which refers to the scripts directory beneath the CAS home directory, and identify
the script to be executed, e.g., $SCRIPTS/informix_rootpath_owner.sh. The script
itself must, of course, reside in the CAS $SCRIPTS directory. Output from the
script is stored in the Guardium database to be used by security assessments. This
can be either a shell/batch script to be run, or a set of commands that could be
entered on the command line. Because of the fickle nature of Java's parsing it is
suggested that any but the simplest commands be put into a script rather than run
directly. On Unix the script is run in the environment of the OS user entered. Three
environment variables will be defined for the run environment which the user
could use in writing scripts: $UCAS is the DB username, $PCAS is the DB
password, and $ICAS is the DB instance name. For Windows these three values
will be appended as the last three arguments to the batch file execution. For
example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1
my-arg2, then %3, %4 and %5 would be the DB username, password, and instance
name respectively.

File

Designates a file to be tracked and monitored by security assessments. The path to
the file can be absolute, or relative to the $INFORMIXDIR variable. Set the value
of the $INFORMIXDIR variable in Database Instance Directory on the Datasource
Definition panel. This is assumed to name a single file. Environment variables from
the OS user environment can be used in the file name and will be expanded. For
example, $HOME/START.sh will name the startup script in the Informix user's home
directory.

Additionally, the Guardium Unix/Informix Assessment for Unix set includes the
following templates:

Scan log files for errors

This test monitors for errors in the online.log file.

File ownership

This test monitors file ownership, and changes thereto, of Informix files.

File permissions

This test monitors file permissions, and changes thereto, of Informix files.


CAS templates - MongoDB


MongoDB is typically used as an operational system and as a backend for web
applications because of the ease of programming with non-relational data formats
like JSON documents.

Use the Unix/MongoDB template to specify multiple paths and multiple
directories in the datasource to scan various components as specified in the
MongoDB datasource definition.

Scan a file pattern by selecting template items beginning with a “$”.

Do not select the $SCRIPTS/mongodb_unmask_value.sh item - it is a Guardium
reserved item.

If the template item is not specified as part of the Database Instance Directory in
the MongoDB datasource definition, the item will be skipped over and not
scanned.

Note: For CAS scripts to work, you must enable login for the MongoDB account
on the MongoDB server. To enable login, log in as root, run the command chsh
mongod, and when prompted for the new shell, enter /bin/bash.

Note: You can create your own template with multiple file paths for any type of
datasource. When creating your own template, we recommend that you use the
Unix/MongoDB as a reference. To create a new template for a MongoDB
datasource, you can clone and modify the Unix/MongoDB template.

Note: MongoDB datasources support SSL server and client/server connections
with SSL client certificates. MongoDB connections use a Java driver, instead of a
JDBC database connection.

Note: The VA solution for MongoDB clusters can be run on mongos, a primary
node and all secondary nodes for replica sets.

CAS templates - Netezza

File Ownership

This test checks whether the files are owned by the correct owner and belong to
the correct group according to the definition within the CAS template.

File Permission

This test checks whether the file permission is properly set according to the
definition within the CAS template.

Scan Log files for errors

This test checks for these events (FATAL, ERROR, DEBUG, ABORT, and PANIC) in
these two log files: /nz/kit/log/postgres/pg.log and
/nz/kit/log/startupsvr/startupsvr.log.

CAS templates - Oracle

OS Script

Designates an OS script to be executed. Must begin with the variable $SCRIPTS,
which refers to the scripts directory beneath the CAS home directory, and identify
the script to be executed, e.g., $SCRIPTS/oracle_user.sh. The script itself must, of
course, reside in the CAS $SCRIPTS directory. Output from the script is stored in
the Guardium database to be used by security assessments. (This can be either a
shell/batch script to be run, or a set of commands that could be entered on the
command line. Because of the fickle nature of Java's parsing it is suggested that
any but the simplest commands be put into a script rather than run directly. On
Unix the script is run in the environment of the OS user entered. Three
environment variables will be defined for the run environment which the user
could use in writing scripts: $UCAS is the DB username, $PCAS is the DB
password, and $ICAS is the DB instance name. For Windows these three values
will be appended as the last three arguments to the batch file execution. For
example, if you had an OS Script template $SCRIPTS/mysql_mysqld_user.sh, then
%3, %4 and %5 would be the DB username, password, and instance name
respectively. )

File

Designates a file to be tracked and monitored. The path to the file can be absolute,
or relative to the $ORACLE_HOME variable. The value of the $ORACLE_HOME
variable is the value you set in the Database Instance Directory field of the
Datasource Definition panel. (This is assumed to name a single file. Environment
variables from the OS user environment can be used in the file name and will be
expanded. For example, $HOME/START.sh will name the startup script in the Oracle
user's home directory.)

File Pattern

Designates a group of files to be tracked and monitored. The path to the files can
be absolute, or relative to the $ORACLE_HOME variable. Set the value of the
$ORACLE_HOME variable in Database Instance Directory on the Datasource
Definition panel. A .. in the path indicates one or more directories between the
portion of the path before it and the portion of the path after it. A .+ in the path
indicates exactly one directory between the portion of the path before it and the
portion of the path after it. For example: $ORACLE_HOME/oradata/../*.dbf (This is
just a short-hand for creating many single file identifications from a single
identification string, a file pattern. A file pattern can be viewed as a series of
regular expressions separated by /'s. A file is matched if each element of its full
path can be matched by one of the regular expressions in order. If an element of
the pattern is an environment variable, it is expanded before the match begins. If ..
is one of the elements of the pattern, it will match zero or more directory levels.
For example, /usr/local/../foo will match /usr/local/foo and
/usr/local/gunk/junk/bunk/foo. Using more than one .. element in a file pattern
should not be necessary and is discouraged because it makes the pattern very slow
to expand. Because of the confusion with its use in regular expressions \ cannot be
used as a separator as it might be in Windows. The file pattern shown previously
is not correct because *.dbf is not a valid regular expression. It should be .*dbf.

Additionally, the default Guardium Unix/Oracle template set includes the
following templates:

ADMIN_RESTRICTIONS Is On

This test monitors that the listener.ora parameter ADMIN_RESTRICTIONS is set
properly.

File ownership

This test monitors file ownership, and changes thereto, of the Oracle data files,
logs, executables, etc.

File permissions

This test monitors file permissions, and changes thereto, on the Oracle data files,
logs, executables, etc.

Scan log files for errors

This test scans the Oracle log files for occurrences of error strings.

SPOOLMAIN.LOG Does Not Exist

This test checks the existence of the Oracle SPOOLMAIN.LOG.

Configuration for Oracle RAC systems


This is the required configuration for Oracle RAC systems.

Change guard_tap.ini on each node installed with S-TAP:

unix_domain_socket_marker=<key>

where <key> value can be found in listener.ora in the IPC protocol definition

Example 1:

If the following is a description in the listener.ora

LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ORCL))))

Then change the following parameter accordingly

unix_domain_socket_marker=ORCL

Example 2:

In the case where there is more than one IPC line in listener.ora, use a common
denominator of all the keys:
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LIST

Guardium uses a string search in the path so LISTENER will work for all four and
should be used in this case:

unix_domain_socket_marker=LISTENER

CAS templates - PostgreSQL


Note: It is very important that PostgreSQL_BIN and PostgreSQL_DATA environment
variables are defined correctly. An invalid setting will cause other CAS assessment
tests not to work properly or at all.

File Ownership

This test checks whether the files are owned by the correct owner and belong to
the correct group according to the definition within the CAS template.

File Permission

This test checks whether the file permission is properly set according to the
definition within the CAS template.

PostgreSQL_BIN environment variable defined

This test checks whether the $PostgreSQL_BIN environment variable is defined on
your database server. This variable must be defined under the root account for
Unix/Linux; you can add it to the .profile for the root login. For Windows, it must
be defined for the Administrator login. For Red Hat Linux, the PostgreSQL BIN
folder is usually /usr/bin. For Solaris, it is usually something like
/data/postgres/postgres/8.3-community/bin/64. Setting this environment variable
is very important because other assessment tests rely on the location of this folder.

PostgreSQL_DATA environment variable defined

This test checks whether the $PostgreSQL_DATA environment variable is defined
on your database server. This variable must be defined under the root account for
Unix/Linux; you can add it to the .profile for the root login. For Windows, it must
be defined for the Administrator login. For Red Hat Linux, the default DATA folder
is usually /var/lib/pgsql/data. For Solaris, there is no consistent location. Setting
this environment variable is very important because other assessment tests rely on
the location of this folder to find the correct configuration files.
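For example, the root .profile entries might look like the following sketch (the
paths are the Red Hat Linux defaults mentioned here, and the export syntax
assumes a Bourne-style shell):

export PostgreSQL_BIN=/usr/bin
export PostgreSQL_DATA=/var/lib/pgsql/data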

CAS templates - SQL Server


OS Script

Designates an OS script to be executed. Output from the script is stored in the
Guardium database. This can be either a shell/batch script to be run, or a set of
commands that could be entered on the command line.

Registry Variable

Searches the Windows registry for specific key values that are required by security
assessment tests.

CAS templates - Sybase

OS Script

Designates an OS script to be executed. Must begin with the variable $SCRIPTS,
which refers to the scripts directory beneath the CAS home directory, and identify
the script to be executed, e.g., $SCRIPTS/sybase_sysdevice_type_test.sh. The script
itself must, of course, reside in the CAS $SCRIPTS directory. Output from the
script is stored in the Guardium database to be used by security assessments. This
can be either a shell/batch script to be run, or a set of commands that could be
entered on the command line. Because of the fickle nature of Java's parsing it is
suggested that any but the simplest commands be put into a script rather than run
directly. On Unix the script is run in the environment of the OS user entered. Three
environment variables will be defined for the run environment which the user
could use in writing scripts: $UCAS is the DB username, $PCAS is the DB
password, and $ICAS is the DB instance name. For Windows these three values
will be appended as the last three arguments to the batch file execution. For
example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1
my-arg2, then %3, %4 and %5 would be the DB username, password, and instance
name respectively.

File

Designates a file to be tracked and monitored by security assessments. The path to
the file can be absolute, or relative to the $SYBASE variable. The value of the
$SYBASE variable is the value you set in the Database Instance Directory field of
the Datasource Definition panel. This is assumed to name a single file.
Environment variables from the OS user environment can be used in the file name
and will be expanded. For example, $HOME/START.sh will name the startup script in
the Sybase user's home directory.

File Pattern

Designates a group of files to be tracked and monitored by security assessments.


The path to the files can be absolute, or relative to the $SYBASE variable. The
value of the $SYBASE variable is the value you set in the Database Instance
Directory field of the Datasource Definition panel. A .. in the path indicates one or
more directories between the portion of the path before it and the portion of the
path after it. A .+ in the path indicates exactly one directory between the portion of
the path before it and the portion of the path after it. For example:
$SYBASE/../.*dat. This is just a short-hand for creating many single file
identifications from a single identification string, a file pattern. A file pattern can
be viewed as a series of regular expressions separated by /'s. A file is matched if
each element of its full path can be matched by one of the regular expressions in
order. If an element of the pattern is an environment variable, it is expanded before
the match begins. If .. is one of the elements of the pattern, it will match zero or
more directory levels. For example, /usr/local/../foo will match /usr/local/foo
and /usr/local/gunk/junk/bunk/foo. Using more than one .. element in a file
pattern should not be necessary and is discouraged because it makes the pattern
very slow to expand. Because of the confusion with its use in regular expressions
\ cannot be used as a separator as it might be in Windows.

Additionally, the Guardium Unix/Sybase Assessment: UNX - SYBASE set includes
the following templates:

Scan log files for errors

This test monitors for errors in Sybase log files.

sysdevice Owner is sybase

This test monitors for ownership of sysdevice.

File ownership

This test monitors file ownership, and changes thereto, of Sybase files.

File permissions

This test monitors file permissions, and changes thereto, of Sybase files.

CAS templates - Teradata


File ownership
This test checks whether the files are owned by the correct owner and belong
to the correct group according to the definition within the CAS template.
File permission
This test checks whether the file permission is properly set according to the
definition within the CAS template.
Aster Data
Aster Data was acquired by Teradata in 2011 and is typically used for data
warehousing and analytic applications (OLAP). Aster Data created a
framework called SQL-MapReduce that allows the Structured Query
Language (SQL) to be used with Map Reduce. Aster Data is most often
associated with clickstream kinds of applications.
An Aster nCluster includes a Queen Node Group, a Worker Node Group,
and a Loader Node Group. A CAS agent is installed on all three node
groups.

A security assessment should be created to execute all tests on the queen
node. All database connections for Aster Data go through the queen node
only.
Testing on worker and loader nodes is only required when performing
CAS tests (File permission and File ownership).
Privilege tests loop through all the databases in a given instance.
When running VA tests that require CAS access and filling in the CAS
datasource configuration choices, specify the username that Aster is
installed under for Database Instance Account. This username is typically
beehive.
For Database Instance Directory, this is the home directory of the beehive
user. The default typically is /home/beehive.
When running VA tests that do not use CAS, create the datasource pointing
to the queen node within the cluster.
When running VA tests that are CAS dependent, if the node you are
testing is one of the workers, you must set up a “Custom URL” in the
datasource to point to the queen node, because that is where the database
is listening.
Example
Host Name/IP = Worker.guard.xxx.xxx..com or 1xx.1xx.111.111 (This is the
actual worker host even though worker is not listening to this. CAS needs
this so it can send and receive data from the Worker's node)
Port = 2046 (or whatever port is used).
Database = beehive
Custom URL= jdbc:ncluster://aster6q:2406/beehive (This JDBC example
shows that we are actually connecting to the aster6q which is the queen
node on port 2406 and beehive database)
Database instance account = beehive
Database instance directory = /home/beehive

Working with CAS Templates


This section describes how to maintain CAS templates

Define a Template/Template Set


v Create a New Template Set
v Modify a Template Set
v Clone a Template Set
v Delete a Template Set

Create a New Template Set


1. Open the CAS Configuration Navigator by clicking Harden > Configuration
Change Control (CAS Application) > CAS Template Set Configuration.
2. Click New to open the Monitored Item Template Definitions panel.
3. Select OS Type.
4. Select DB Type. If the template set does not require any specific DB type then
select N_A as the DB Type.
5. Enter a unique name for Template Set Name.

Note: Template Set Names over 128 characters will be truncated
6. Click Apply to save the CAS Template Set Definition.
7. To add items to the new template set, click Add to Set and see Define a
Template Set Item.

Finding the Guardium CAS Panel

Access to CAS Configuration Functions, by default, is restricted to the admin user
and to users who have been assigned the CAS role.

Click Harden. The list of CAS functions is listed within the Configuration Change
Control (CAS Application) header.

Opening the CAS Configuration Navigator

The CAS Configuration Navigator panel is the starting point for creating or
modifying CAS Template Sets.

Open the CAS Configuration Navigator panel by clicking Harden > Configuration
Change Control (CAS Application) > CAS Template Set Configuration.

The list can be filtered by OS type and DB type.

Modify a Template Set

Use the CAS Configuration Navigator panel to modify an existing CAS template
set. Once a template set is in use on any CAS host, the modifications that you can
make to that template set are limited. You will be able to make minor changes to
various elements of the definition, but you will not be able to add or remove
templates.
1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to modify and click Modify to open the
CAS Template Set Definition panel.
4. Make your desired changes and click Apply to save them.

Clone a Template Set


1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to clone and click Clone to open the CAS
Template Set Definition panel.
4. Once cloned, modify the clone to suit your needs.

Delete a Template Set


1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to delete and click Delete.

Define a Template Set Item
Once a template set is in use on any CAS host, the modifications that you can
make to that template set are limited. You will be able to make minor changes to
various elements of the definition, but you will not be able to add or remove
templates.
v Create a New Template Set Item
v Modify a Template Set Item
v Delete a Template Set Item

Create a New Template Set Item


1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Click New to open the Monitored Item Template Definitions panel.
3. Enter a Template Set Name, select an OS Type and DB Type, and click
Apply.
4. Click Add To Set to create a new item.

Modify a Template Set Item


1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to modify and click Modify to open the
CAS Template Set Definition panel.
4. Select the items you want to modify, and click Edit Selected.... Make your
desired changes and click Apply to save them.

Delete a Template Set Item


1. Open the CAS Configuration Navigator panel by clicking Harden >
Configuration Change Control (CAS Application) > CAS Template Set
Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to modify and click Modify to open the
CAS Template Set Definition panel.
4. Select the items you want to delete, and click Delete Selected.

CAS Item Template Definition Panel


Component Description
OS Type The operating system type: Windows or
Unix. You can change this selection when
the template set is empty, but you cannot
change it if the template set contains one or
more items.
DB Type The database type (Oracle, MS-Sql, DB2,
Sybase, Informix, etc.) or N/A for an
operating system template set. You can
change this selection when the template set
is empty but you cannot change it if the
template set contains one or more items.

Description An optional name for the item used in
reports and to identify the item in other
CAS panels (the CAS Template Set
Definition for example). If omitted, the item
name defaults to the file name or pattern,
variable name, or script (as appropriate for
the type).
Type One of the following: SQL Query, OS Script,
Environment Variable, Registry Variable,
Registry Variable Pattern, File, and File
Pattern.

See Template and Audit Types for further information.

Note: If being used with CAS-based assessment tests, this must be of type OS Script.
Content Type dependent text defining the specific
item to monitor, or how to generate it.

See Template and Audit Types for further information.

Note: For an OS script, CAS waits for the script to complete. To limit the time allowed for an OS script to run and allow CAS to terminate the script, use the cas_command_wait parameter in guard_tap.ini (see the sketch after this table). The default wait time is 300 seconds (5 minutes). When changing this parameter there is no need to restart CAS.
Permission Limit For File and File Pattern Type only.

Used for Unix only - the permissions that this file should not exceed.
File Owner For File and File Pattern Type only. The
owner of the file(s).
File Group For File and File Pattern Type only. The
group owner of the file(s).
Period The maximum interval between tests,
specified as a number of minutes(m),
hours(h), or days(d). Data becomes available after the initial period elapses and before the next period begins.
Keep Data If selected, a copy of the actual data is saved
with each change. For example: for a file
item, a copy of the file is saved. If selected,
but the size of the raw data for the item is
greater than the Raw Data Limit configured
for this CAS host, no data will be saved.

Use MD5 Indicates whether or not an additional
comparison is done by calculating a
checksum of the raw data using the MD5
algorithm. Computing the MD5 checksum is
time consuming for large character objects.
However, it is a better indicator of change
than just the size. The default is not to use
MD5. If MD5 is used, but the size of the raw
data is greater than the MD5 Size Limit
configured for the CAS host, the MD5
calculation and comparison will be skipped.
Enabled Selected by default; indicates whether or not
the item will be checked for changes.
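
The cas_command_wait parameter mentioned in the Content row is set in the guard_tap.ini file on the database server where CAS runs. A minimal, hedged sketch of the entry (the value 600 is an illustrative example; where the line sits within your guard_tap.ini depends on your S-TAP/CAS installation):

cas_command_wait=600

With this setting, CAS would terminate any OS script still running after 600 seconds instead of the default 300. As noted in the table, changing this parameter does not require a CAS restart.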

Template and Audit Types


Type Description
SQL Query The content should be a valid SQL
statement. The result returned by the
statement will be compared to the result
returned the last time the query was run.
The query will be run with the parameters
specified in the datasource that is being
used: username, password, DB port, and so
forth. Care should be taken when filling out
these parameters in the datasource or the
query will fail to return a result.
OS Script The content can be a valid command line
entry, or the name of a file containing an OS
executable script. The script is executed in
the environment of the OS user specified in
the Database Instance Account field of the
datasource definition.
Environment Variable The content should name an environment
variable that is defined in the context of the
OS user specified in the Database Instance
Account field of the datasource definition.
Registry Variable The content is interpreted as the path to a
variable in the Windows Registry of the
host. The value found on that path is
compared to the value found the last time
the path was traced.

Registry Variable Pattern The content is a sequence of regular
expressions that is used to match the
components of paths in the Windows
Registry. The pattern is used to develop
registry variable type monitored items which
will be treated as described previously.

The regular expressions are joined by / so that the pattern resembles a registry path. The more familiar \ character cannot be used, since that is a special character in the syntax of Java regular expressions. If a / is needed in one of the regular expressions, it must be escaped with a \. (e.g. U\/235 would be used to match U/235).

The pattern .. can be used to match zero or more components within a path. For example, HKLM/Software/../buzz will match HKLM\Software\buzz, or HKLM\Software\one\two\three\buzz. This type of pattern can lead to a computationally expensive registry search, so use it carefully.

Other than these exceptions, the regular expressions follow the syntax of Java regular expressions.
File The content is interpreted as an absolute file
path on the host. The characteristics of the
file found on the path will be compared to
the characteristics found the last time the
path was traced. The path may include
environment variables which will be
expanded in the context of the OS user
specified in the datasource. The path may
also begin with a substitution variable, like
“$SYBASE_HOME”, which will be replaced
by the value entered in the Database
Instance Directory field of the datasource
definition.
File Pattern The content is a sequence of regular
expressions that is used to match the
components of file paths and to generate
File type monitored items. The regular
expressions are joined by / so that the
pattern resembles an actual file path. As
with registry patterns, the \ cannot be used
for Windows files because of the regular
expression syntax. If the pattern begins with
?: on a Windows machine, the pattern match
will be started on each of the drives of a
multi-drive machine. The .. construction
described with registry patterns can also be
carefully used in a file pattern. Environment
variables from the context of the OS user
can be used in a file pattern and will be
expanded before the expansion of the
regular expressions.
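
As an illustration only (the directory, environment variable, and expression are hypothetical and not taken from a shipped template), a File Pattern item intended to match every .ora file in an Oracle network/admin directory might use content such as:

$ORACLE_HOME/network/admin/.*\.ora

Here $ORACLE_HOME is expanded from the OS user's environment before the remaining components are evaluated as Java regular expressions, and .*\.ora matches any file name ending in .ora in that directory; each matching file becomes a separate File type monitored item.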


GuardAPI commands
create_cas_template_set

create_cas_template

create_datasource

create_cas_host_instance
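
These commands are run from the Guardium CLI with grdapi. The following is a hedged sketch only; the parameter names and values are illustrative assumptions rather than the documented signatures, so consult the GuardAPI reference (or the CLI help for each command) for the actual parameters:

grdapi create_cas_template_set setName=MyOracleUnixSet osType=UNIX dbType=ORACLE
grdapi create_cas_template setName=MyOracleUnixSet auditType=FILE accessName=/etc/oratab

The GUI procedures described earlier remain the primary way to build template sets; the API form is mainly useful for scripting the same definitions across several Guardium systems.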

CAS Hosts
A Configuration Auditing System (CAS) host configuration defines one or more
CAS instances.

Once you have defined one or more CAS template sets, and have installed CAS on
a database server, you are ready to configure CAS on that host. A CAS host
configuration defines one or more CAS instances. Each CAS instance specifies a
CAS template set, and defines any parameters needed to connect to the database.
For each database server on which CAS is installed, there is a single CAS host
configuration, which typically contains multiple CAS instances - for example, one
CAS instance to monitor operating system items, and additional CAS instances to
monitor individual database instances.
v Define a CAS Instance
v Modify a CAS Instance
v Delete a CAS Instance
v Disable a CAS Instance

Define a CAS Instance


1. Open the CAS Configuration Navigator by clicking Harden > Configuration
Change Control (CAS Application) > CAS Host Configuration.
The menu lists all database servers where CAS has been installed and which have connected to this Guardium system.
2. Use list filtering to filter by OS Type or DB Type and find the host you would
like to work with.
3. Highlight the host you want to modify and click Modify.
4. Select a Template Set from the menu.

Note: A CAS instance cannot be defined if the host is offline or if this is a secondary Guardium system for the host.
5. Click Add Datasource to open the Datasource Finder panel.

Note: If no compatible datasource is available for this template set on this host
you may click New to open the Datasource Definition panel and add a
datasource.
6. Select the datasource that you want to add to the template set, and click Add
to add it to the template set.

Finding the Guardium CAS Panel

Access to CAS Configuration Functions is restricted to the admin user and to users who have been assigned the CAS role.


Click Harden. All the CAS functions are listed within the Configuration Change
Control (CAS Application) header.

Open the CAS Configuration Navigator


The CAS Configuration Navigator panel is the starting point for creating or
modifying CAS Hosts.

Open the CAS Configuration Navigator panel by clicking Harden > Configuration
Change Control (CAS Application) > CAS Host Configuration.

Modify a CAS Instance


1. Open the CAS Configuration Navigator
2. Use list filtering to filter by OS Type or DB Type and find the instance you
would like to work with.
3. Highlight the host you want to modify and click Modify.
A list of defined CAS instances associated with the selected host will be
displayed with the following information and editing options:
Table 219. Modify a CAS Instance
Component Description
Disable/Enable Instance Icon Click the Disable Instance icon to disable/enable the CAS instance
Delete Instance Icon Click the Delete Instance icon to delete the CAS instance
Datasource Identifies the datasource used by the instance. Click the
Datasource to open the Datasource Definition panel to edit the
datasource definition
Template Set Identifies the CAS template set used by the instance. Click this
link to open the Monitored Item Template Definitions panel to
view or modify the template set definition.

See “Working with CAS Templates” on page 606 for more information.
Monitored Items A count of items currently monitored by the instance. Click this
link to open the Monitored Items Definitions panel which
displays the list of all items currently monitored.

See Viewing Monitored Items Lists for more information.

Note: There is a default limit of 10,000 monitored items that are viewable for reports, regardless of the number of monitored items defined. It is suggested that multiple instances be defined when the number of monitored items approaches this limit.

Delete a CAS Instance


1. Open the CAS Configuration Navigator
2. Use list filtering to filter by OS Type or DB Type and find the instance you
would like to work with.
3. Click Delete Instance to delete a CAS instance. All collected change data will
be deleted as well.

Disable a CAS Instance


1. Open the CAS Configuration Navigator.

2. Use list filtering to filter by OS Type or DB Type and find the instance you
would like to work with.
3. Highlight the host you want to modify and click Modify, or double-click to
open the Host Instance Definitions panel.
4. Click the Disable Instance icon to disable a CAS Instance. Change data will not
be collected until the instance is enabled by again clicking on the icon.

View Monitored Item Lists

In the Host Instance Definitions panel, click a Monitored Items link to view the
complete list of items monitored in the Monitored Items Definitions panel. The
following table describes the components seen on the Monitored Items Definitions
panel for this Host Configuration.

All monitored items refer to raw data: a character object on the host, such as the result of an SQL query, the output of an OS script, or the contents of a file. The size of that character object is computed. If the item is a file, then the permissions, owner, group, and last modified time are also checked. If any of these have changed since the last time the item was checked, the change will be noted.
Table 220. View Monitored Item Lists
Component Description
Select Box Check the Select Box if you'd like to edit a monitored item
individually or as a group.

Double click any monitored item to edit that item.


Item The name of the monitored item from the description in the CAS
Item Template Definition panel
Type One of the following: OS Script, SQL Query, File, Environment
Variable, or Registry Variable

OS Script or SQL Script: The actual text or the path to an operating system or SQL script, whose output will be compared with the output produced the next time it runs

File or File Pattern: A specific file or a pattern to identify a set of files

Environment Variable or Registry Variable: An environment variable or a (Windows) registry variable
Period The average interval between tests, specified as a number of
seconds(s), minutes(m), hours(h), or days(d).
Keep Data If marked a copy of the actual data is saved with each change.
For example, for a file item, a copy of the file is saved. If marked
but the size of the raw data for the item is greater than the Raw
Data Limit configured for this CAS host, no data will be saved
Use MD5 Indicates whether or not the comparison is done by calculating a
checksum of the raw data using the MD5 algorithm. Computing
the MD5 checksum is time consuming for large character objects.
However, it is a better indicator of change than just the size. The
default is not to use MD5. If MD5 is used but the size of the raw
data is greater than the MD5 Size Limit configured for the CAS
host, the MD5 calculation and comparison will be skipped.


GuardAPI Commands
delete_cas_host

list_cas_hosts

create_cas_host_instance

delete_cas_host_instance

list_cas_host_instances

update_cas_host_instance
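
As with the template commands, these are run from the Guardium CLI with grdapi. A hedged sketch of inspecting the current CAS host definitions (shown without arguments on the assumption that the list commands need none; consult the GuardAPI reference for the actual parameters of each command):

grdapi list_cas_hosts
grdapi list_cas_host_instances

The create, update, and delete commands take parameters identifying the host, datasource, and template set involved, mirroring the fields described in the Define a CAS Instance procedure.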

CAS Reporting
This section describes Configuration Auditing System (CAS) reporting.

The admin user has access to all query builders and default reports. The admin
role allows access to the default CAS reports, but not to the CAS query builders.
The CAS role allows access to both the default CAS reports and the query builders.
v Accessing CAS Query Builders
v Accessing Default CAS Reports
v CAS Reporting Domains

Accessing CAS Query Builders

This section describes how to access the CAS Query Builders from the
administrator and user portals. For help on how to use the query builders or
report builders, see Queries or Reports.

From the administrator portal:


1. Open the Report Builder by clicking Setup > Report Builder.
2. Click New, choose a Query from the menu, specify a report title, click Next,
and fill out the rest of the Report Builder to suit your needs.

Accessing Default CAS Reports

View the default reports related to CAS by clicking Harden > Reports.

CAS Reporting Domains


Domain Description
CAS Templates Track CAS template definitions. Templates
identify items to be monitored for changes.
Monitored items can be files, environment or
registry variables, OS or SQL script output
sets, or the set of logged on users.

CAS Config Tracks CAS host configurations, where a
configuration is the application of one or
more template sets to a specific database
server host. From configuration instances
you can see which items within template
sets are enabled or disabled, or exactly
which files are selected and monitored (or
not) by file name pattern templates.
CAS Host History Tracks CAS host events, including servers or
clients going in or out of service.
CAS Changes Tracks changes to monitored items (files,
registry variables, etc.)

CAS Templates Domain


Entity Description
Template Set Describes a template set definition
Template Describes a template item within a template
set

Template Set Entity


Attribute Description
Template Set ID A unique identifier for the template set,
numbered sequentially
OS Type Operating system: Unix or Windows
DB Type Database Type (Oracle, MS-SQL, DB2,
Sybase, Informix, etc.) or N/A for an
operating system template
Template Set Name The template set name
IsDefault Indicates whether or not this template is the
default for the specified OS type and DB
type combination
Editable Indicates whether or not this template can
be modified. The default Guardium
templates cannot be modified. In addition,
once a template set has been used in a CAS
instance, it cannot be modified. However, a
template set can always be cloned and the
cloned set can be modified.
Timestamp Date and time the template was last updated

Template Entity
Attribute Description
Template ID A unique identifier for the template, numbered sequentially
Access Name Depending on the Audit Type, this is the OS
or SQL script, environment or registry value,
or a file name or a file name pattern

Audit Type The type of monitored item
Audit Frequency (minutes) The maximum interval (in minutes) between
tests
Use MD5 Indicates whether or not the comparison is
done by calculating a checksum using the
MD5 algorithm and comparing that value
with the value calculated the last time the
item was checked. The default is to not use
MD5. If MD5 is used but the size of the raw
data is greater than the MD5 Size Limit
configured for the CAS host, the MD5
calculation and comparison will be skipped.
Regardless of whether or not MD5 is used,
both the current value of the last modified
timestamp for the item and the size of the
item are compared with the values saved the
last time the item was checked.
Save Data Indicates if the Keep Data checkbox has
been marked. If so, previous versions of the
item can be compared with the current
version.
Description Optional description of the template
Timestamp Date and time the template was last
updated.

CAS Templates Domain Default Reports


Default Report Description
CAS Templates Report Lists CAS templates

CAS Templates Report


Entity Attribute Operator Default Value
Template Access_Name Like %
Template Set Template_Set_Name Like %
Template Audit_Type Like %

CAS Config Domain


Entity Description
Host Identifies a CAS host (a database server) and
the current status of CAS (online/offline).
This entity is also available in the CAS Host
History domain
Instance Config For each host, an Instance Config entry
describes a CAS instance, which contains
database connection parameters (if needed)
and identifies the template set used by the
instance. It provides current status of the
instance (in use, enabled, or disabled) and
the date of the last revision.

Monitored Item Details Identifies an item (a file or an environment
variable, for example) monitored by a CAS
instance. It contains the item definition and
indicates whether or not the item is enabled.

Host Entity
Attribute Description
Host Name Database server host name (may display as
IP address)
OS Type Operating system: UNIX or WIN
Is Online Online status (yes or no) when record was
written

Instance Config Entity


Attribute Description
DB Type Database Type (Oracle, MS-SQL, DB2,
Sybase, Informix, etc.) or N/A for an
operating system instance
Instance The name of the instance
User The user name that CAS uses to log onto the
database; or N/A for an operating system
instance.
Port The port number CAS uses to connect to the
database; this can be empty for an operating
system instance
DB Home Dir The home directory for the database; this
can be empty for an operating system
instance
Template Set ID Identifies the template set used by this
instance

Monitored Item Details Entity


Attribute Description
Template ID Identifies the template that this monitored item is based on
Monitored Item The name of the monitored item
Audit Type The type of monitored item
Enabled Indicates whether or not the item is checked for changes
In Synch Indicates whether or not the item configuration on the monitored
system matches the configuration on the Guardium system

Audit Frequency The maximum interval (in minutes) between tests
Use MD5 Indicates whether or not the comparison is
done by calculating a checksum using the
MD5 algorithm and comparing that value
with the value calculated the last time the
item was checked. The default is to not use
MD5. If MD5 is used but the size of the raw
data is greater than the MD5 Size Limit
configured for the CAS host, the MD5
calculation and comparison will be skipped.
Regardless of whether or not MD5 is used,
both the current value of the last modified
timestamp for the item and the size of the
item are compared with the values saved the
last time the item was checked.
Save Data When marked, previous versions of the item can be compared with the current version
Description Optional description of the instance
Template Content The template entry that is the basis for this
monitored item, set from the Template entity
Access Name attribute when the instance
was created. Typically this will be the same
as the monitored item, but in the case where
a file pattern was used in the template, this
will be the file pattern

CAS Config Domain Default Reports


Default Reports Description
CAS Instances Lists CAS instances
CAS Instance Config Lists CAS instance configuration changes

CAS Instances Report


Entity Attribute Operator Default Value
Host Host_Name Like %
Host OS_Type Like %
Instance Config DB_Type Like %
Instance Config Instance Like %

CAS Instance Config Report


Entity Attribute Operator Default Value
Host Host_Name Like %
Host OS_Type Like %
Monitored Item Details Template_Id Like %


Drill-Down Reports
Report Description
Report Details Displays the monitored items included in
the count of monitored item column

CAS Host History Domain


Entity Description
Host Identifies a CAS host (a database server) and
the current status of CAS (online/offline).
This entity is also available in the CAS
Config domain.
Host Event Date and time of an event in the CAS client/server relationship, such as a client or server going in or out of service.

Host Entity
Attribute Description
Host Name Database server host name
OS Type Operating system: Unix or Windows
Is Online Current online status (Yes/No)

Host Event
Attribute Description
Event Time Date and time that the event was recorded
Event Type Identifies the event being recorded:

“Client Down”: CAS stopped on database server host

“Client Up”: CAS started on database server host

“Failover Off”: A server is available (following a disruption), so CAS data is being written to the server

“Failover On”: The server is not available, so CAS data is being written to the failover file

“Server Down”: The database server stopped

“Server Up”: The database server started

CAS Host History Domain Default Reports


Default Report Description
CAS Host History Report Lists CAS events for each CAS host


CAS Host History Report
Entity Attribute Operator Default Value
Host Host_Name Like %
Host OS_Type Like %
Host Event Event_Type Like %

CAS Changes Domain


Entity Description
Monitored Changes Created each time a monitored item changes
Host Configuration Identifies the host, instance, and monitored item associated with a change
Saved Data Contains saved data for the changes made

Monitored Changes Entity


Attribute Description
Change Identifier Unique identifier for the change
Sample Time Timestamp (date and time on host) that
sample was taken
Saved Data ID Identifies the Saved Data entity for this
change
Audit State Label ID Identifies the Host Configuration entity for
this change
Timestamp Date and time this change record was
created on the server (Guardium appliance
server clock)
Owner UNIX only. If the item type is a file, the file
owner
Permissions UNIX only. If the item type is a file, the file
permissions
Size File size, but there are special values as
follows:

-1: File exists, but has zero bytes

0: File does not exist, but this file name is being monitored (it never existed or may have been deleted)
Last Modified Timestamp for the last modification, taken
from the file system at the sample time
Last Modified Date Date for the last modification
Last Modified Weekday Day of the week for the last modification
Last Modified Year Year for the last modification
Group UNIX only. If the item type is a file, the
group owner


Host Configuration Entity
Attribute Description
Audit State Label ID Unique numeric identifier for the
configuration item
Host Name Database server host name or IP address
OS Type Operating system: Unix or Windows
DB Type Database Type (Oracle, MS-SQL, DB2,
Sybase, Informix, etc.) or N/A if the change
is to an operating system instance
Instance Name Name of the template set instance
Type Type of monitored item that changed.

OS Script or SQL Script: A change triggered by the OS script contained in the monitored item template definition.

Environment Variable: An environment variable (Unix only)

Registry Variable: A registry variable (Windows only)

File: A specific file. There is no host configuration entity for a file pattern defined in the template set used by the instance. Instead, there is a separate host configuration entity for each file that matches the pattern.
Monitored Item The name of the changed item, from the
Description (if entered), otherwise a default
name depending on the Type (a file name,
for example).

Saved Data Entity


Attribute Description
Saved Data ID Unique numeric identifier for the saved data
item
Saved Data The actual data saved
Timestamp Timestamp for when the saved data entity
was recorded in the server database
Change Identifier Identifies the monitored changes entity for
this saved data entity

CAS Changes Domain Default Reports


Default Report Description
CAS Change Details For each monitored item, lists changes by
owner. This report lists changes to the
properties of the file, such as the owner or
access permissions. It does not list changes
to the contents of the file.

CAS Saved Data For monitored items with the optional Keep
data box checked, lists the data for each
change detected. This report lists changes
to the contents of the file, not to its
properties.

CAS Changes Details


Entity Attribute Operator Default Value
Host Configuration DB_Type Like %
Host Configuration Host_Name Like %
Host Configuration Instance_Name Like %
Host Configuration Monitored_Item Like %
Host Configuration OS_Type Like %
Host Configuration Type Like %

Drill-Down Reports
Report Description
Record Details Displays the saved data included in the
Count of Saved Data column

CAS Saved Data


Entity Attribute Operator Default Value
Host Configuration Host_Name Like %
Host Configuration Monitored_Item Like %
Monitored Changes Saved_Data_Id Like %

Drill-Down Reports
Report Description
View Difference Displays the difference between the selected
data and prior version

CAS Status
Open the Configuration Auditing System Status by clicking Manage > Change
Monitoring > CAS Status

For each database server where CAS is installed and running, and where this
Guardium system is configured as the active Guardium host, this panel displays
the CAS status, and the status of each CAS instance configured for that database
server.

If you have trouble distinguishing the colors of the status indicator lights, hover
your mouse over status lights, and a text box will display the current status.


Component Description
CAS System Status indicator light The light found on this panel indicates
whether CAS is actively running on the
Guardium system.

Red: CAS is not running on this Guardium system.

Green: CAS is active on this Guardium system.
CAS agent status indicator lights These status lights indicate whether the
individual CAS agent is connected to a
Guardium system. Identify each CAS agent
by referencing the IP address that appears
before the row of status indicator lights

Red: Host and/or the CAS agent is offline or unreachable.

Green: Host and CAS agent are online.

Yellow: The Guardium system is a secondary for the CAS host.
Reset Reset the CAS agent on this monitored
system. This stops and restarts the CAS
agent on the database server.

Note: This will also reset checkpoint files, allowing for a fresh start and re-scan of files from scratch.
Delete (X) Remove this monitored system from CAS and also delete the data on the Guardium system that was associated with the CAS client.

This button is disabled if the CAS agent is running on this system. You must stop the CAS agent to be able to delete. See Stopping and Starting the CAS Agent for more information.
Red/Yellow/Green light Each set of lights indicates the status of a
CAS instance on the monitored system. If
the owning monitored system status is red
(indicating that the CAS agent is offline),
ignore this set of status lights.

Red: The instance is disabled.

Green: The instance is enabled and online, and its configuration is synchronized with the Guardium system configuration.

Yellow: The instance is enabled, but the instance configuration on the Guardium system does not match the instance configuration on the monitored system (it has been updated on the Guardium system, but that update has not been applied on the monitored system).

Refresh Click Refresh to re-check the status of all
servers in the list. This button does not stop
and/or restart CAS on a database server – it
only checks the connection between CAS on
the Guardium system and CAS on each
database server.

Note: The TAP_IP entry in the guard_tap.ini file is required. If TAP_IP is missing, CAS will not start and an error message will be logged in the log file on the CAS client.
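
A hedged illustration of the relevant guard_tap.ini entries (the section name, companion parameter, and addresses are examples only, not taken from a specific installation):

[TAP]
tap_ip=10.10.9.57
sqlguard_ip=10.10.9.248

Here tap_ip identifies the database server on which the S-TAP/CAS agent runs and sqlguard_ip identifies the Guardium system it reports to; if the tap_ip entry is missing, CAS does not start.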

Stopping and Starting the CAS Agent

There are several situations where you may need to stop or start the CAS agent on
a monitored system.

Note: If you want to stop and restart the CAS agent, you can do so by clicking
Manage > Change Monitoring > CAS Status.

Stopping CAS on a UNIX Host


1. Edit the file /etc/inittab.
2. Find the CAS respawn line:
cas:2345:respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/guar
3. Comment out the line by inserting # in the first character position.
4. Save the file.
5. Enter the following command: init -q
6. Enter the following command: ps -ef | grep cas
7. Note the PID of each of the processes listed.
8. For each of the processes listed, issue the following command: kill -9 <pid>
9. In the Configuration Auditing System Status panel of the Guardium
administrator portal, the status light for this CAS host should be red, and the
Remove button should be enabled. This enables you to remove data from this
CAS host from the Guardium system internal database.
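
A condensed sketch of the same stop sequence, assuming the default installation path shown in step 2 (the PID value is a placeholder to be taken from the ps output):

# after commenting out the cas respawn line in /etc/inittab:
init -q              # make init re-read /etc/inittab
ps -ef | grep cas    # note the PID of each CAS process
kill -9 <pid>        # repeat for every PID listed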

Starting CAS on a Unix Host

Use this procedure to restart the CAS agent only when it has been stopped by
editing the /etc/inittab file as described previously.
1. Edit the file /etc/inittab.
2. Find the line:
#cas:2345:respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/gua
3. Uncomment the line shown in step 2 by removing the # in the first character position. Depending on the operating system, the comment character may be something else.
4. Save the file.
5. Enter the following command to restart the CAS agent: init -q

Starting and Stopping CAS on a Windows Host

On Windows CAS runs as a System Service.

1. In the Services panel, highlight the Configuration Auditing System Client item.
2. Select either Start or Stop from the Action menu.

Amazon RDS Discovery


Use this Guardium feature to discover Amazon Relational Database Systems (RDS),
create datasource definitions for each discovered datasource, and run Vulnerability
Assessment (VA) tests automatically for discovered RDS.

This feature works only with MySQL, MS SQL and Oracle databases.

Prerequisites
1. An Amazon account.
2. One or more RDS instances under the Amazon account.
3. Amazon credentials, including:
Access Key ID
Identifies the user as the party responsible for service requests. It needs to be included in each request. It is not confidential and does not need to be encrypted.
Secret Access Key
Is associated with the Access Key ID and is used to calculate a digital signature included in each request. The Secret Access Key is a secret, and only the user and AWS should have it.

Amazon RDS requires the clock time of the Guardium system to be correct (within
15 minutes). A larger discrepancy results in an Amazon error. If there is too large a
difference between the request time and the current time, the request is not
accepted.

If the Guardium system time is not correct, set the correct time by using the
following CLI commands:
show system ntp server
store system ntp server (An example is ntp server: ntp.swg.usma.ibm.com)
store system ntp state on
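
For example, assuming the CLI accepts the server name on the same line (some releases may prompt for the value instead), the sequence might look like:

store system ntp server ntp.swg.usma.ibm.com
store system ntp state on
show system ntp server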

See the terminology terms section at the end of this help topic for Amazon
definitions.

Step procedures
1. Configure discovery of Amazon Relational Databases Systems (RDS).
2. Create datasource definitions for each discovered datasource.
3. Run Guardium Vulnerability Assessment (VA) tests automatically for
discovered RDS.

Buttons
Table 221. Menu buttons
Menu screen buttons Description
Discover Use this button after adding Access Key and Secret Access
Key values.
Errors Use this button to read all error messages.

Associate with Guardium Security Group In order for Guardium to access the RDS, the IP address of the Guardium system needs to be recognized by the security group defined for the RDS. Use this button to associate the selected RDS with the Guardium security group.
Add IP Range to Guardium Security Group Use this button to add the IP address range to the Guardium Security group.
Create/Update Datasources Clicking this button creates a new datasource definition or updates the existing one with the new user name and password.
Datasource Definition Use this button to open the Datasource Definition panel.
Launch Vulnerability Assessment Use this button to create a Vulnerability Assessment, add it to an Audit process, and submit the execution of the Audit Process.

Discover Amazon Relational Database Systems (RDS)

Use this menu to discover Amazon relational databases in different regions.

Amazon regions are distributed in different geographical locations. All available Amazon regions are shown in a list, and the user can select any or all of the existing regions from that list. Currently there are nine regions.

The Amazon credentials (Access Key ID and Secret Access Key) are required in order to access RDS.

The Discover button stays disabled if the Access Key ID or Secret Access Key field
is empty.

Entering any text in Secret Access Key enables the discover button. There is no
validation for a valid secret access code at this point.

Discovery of RDS without a valid Access Key ID and Secret Access Key results in errors, which can be seen by clicking the Errors button.

One error message for each selected RDS indicates the problem with invalid access
key ID or secret access key.

Filter text field is designed to limit the number of regions shown in the list.

Example - entering “west” in the filter text displays only regions with the word
“west” in their names.

Clicking the check box Amazon region selects all the shown regions.

Select at least one region to enable the Discover button.

All RDS belonging to the account, with the access key and secret access key for the
selected regions, will be displayed in a list.

The number of discovered RDS can be reduced by entering a text in the filter field.


Associate an RDS with a Guardium security group
In order for Guardium to access the RDS, the IP address of Guardium system
needs to be recognized by the security group defined for the RDS. The security
groups can be defined either in Amazon console (RDS console or VPC console
depending where the database is located – the RDS can be created in a virtual
private cloud) or from Amazon RDS panel using the Associate with Guardium
Security Group button. This button becomes enabled as soon as an RDS is
selected.

Associating an RDS with the Guardium Security Group does any or all of the following three actions:
1. Create the Guardium security group if it does not exist.
2. Assign the Guardium system IP address to the security group, as determined
by Amazon.
3. Add this security group to the list of RDS security groups which can be viewed
on Amazon RDS console.

The Associate with Guardium Security Group button does nothing if the security group already exists, the RDS already has that security group, and the IP address of the Guardium system has already been assigned to the security group.

The default name for the Guardium security group is “ibm-use-only-guardium-rds-sg” and it will be displayed in the RDS Discovery Page for that specific RDS. Currently only one Guardium security group is supported.

If the selected RDS does not have any Guardium Security Group assigned with it,
clicking on the button creates one.

When the Guardium security group is created and its information shows up under the Guardium Security Group column, the Add IP Range to Guardium Security Group button becomes enabled, and users have the option to add additional IP address ranges, allowing access to this database from additional IP addresses.

Clicking the Add IP Range to Guardium Security Group button opens a dialog
box.

Use the menu shown to add the IP address range to the Guardium Security group.

The IP address nomenclature is based on CIDR-IP (Classless Inter-Domain Routing). CIDR notation is a syntax of specifying IP addresses and their associated routing prefix. It appends to the address a slash character and the decimal number of leading bits of the routing prefix, for example, 192.168.2.0/24 for IPv4, and 2001:db8::/32 for IPv6.
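
As a worked illustration (the address block is an example from the documentation range, not a recommendation), a /28 suffix leaves 32 - 28 = 4 host bits, so the rule covers 2^4 = 16 addresses:

203.0.113.0/28 -> 203.0.113.0 through 203.0.113.15

A /32 suffix (for example, 203.0.113.7/32) restricts the rule to a single address.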

Unlike adding IP addresses, which can be done from both the RDS console and the Amazon RDS discovery page, deleting an IP address can be done only from the Amazon RDS console.

Note: Currently there is no support for deleting IP addresses from the Guardium Amazon RDS Discovery page.


Create datasource definition for an RDS
To create a datasource definition for the RDS, the user name and password for that RDS should be entered in the Datasource User and Datasource Password fields. Entering these two values enables the Create / Update Datasources button.

Clicking on the button will create a new datasource definition or will update the
existing one with the new user name and password.

When the datasource definition is created, the new datasource definition info will be displayed in the Guardium Datasource column, and the Datasource Definition and Launch Vulnerability Assessment buttons become enabled.

Configure/update datasource definition for an RDS

After creating a datasource for the RDS you can go to the Guardium Datasource
Definition page and modify the configuration.

The Datasource Definition button opens the Datasource Definition panel. All
necessary information has already been filled. The information on this panel is the
same as the one that exists in Guardium Datasource Definition for a non Amazon
database. You can modify existing information or add additional information for
the datasource. Use the Test Connection button on this page to test the connection
to Amazon RDS.

Note: Amazon controls the port number; do not change it.

Note: The security group must allow Internet access. Click on the Errors button to
read any error messages.

Run Vulnerability Assessment for an RDS

The Launch Vulnerability Assessment button provides the option to create a Vulnerability Assessment, add it to an Audit process, and submit the execution of the Audit Process.

In the Result dialog, the user can give a description for the Vulnerability Assessment and Audit Process (otherwise default names are used), enter email addresses to be added as receivers for the audit process, and determine whether the user executing it should also be added as a receiver.

Once the user submits the execution, a Vulnerability assessment is created with all
the datasources selected as well as all the relevant tests for the datasource types
included. An Audit process is created that contains the Vulnerability assessment
and the execution is submitted.

Once the Vulnerability assessment is complete, the result is distributed to all receivers.

The submitted job can be viewed in Guardium Job Queue report.

Note: The description (default or user defined) is used to identify the Security Assessment. If a security assessment with that description is already defined, the existing assessment is used; otherwise a new security assessment is created.


If a new security assessment is created, it is created with all the datasources the user checked and all the available tests relevant to the database types of the checked datasources.

If the security assessment is already present, the datasources the user checked are added (if not already in the assessment). If the assessment has no tests at all, all available tests are added (the same as for a new assessment); however, if the assessment already has some tests, the tests remain untouched (no tests are added or removed).

Finally, an audit process is created (unless one already exists) with the same description and one task, which is the security assessment, and execution of the process is submitted.

All this is done with the GuardAPI command create_ad_hoc_audit_for_security_assessment, which receives the following parameters:
v emails (optional)
v users (optional)
v vulnerability assessment name (optional, use a default if none supplied)
v datasource ids - these will be the IDs for the new Amazon datasource, ready for
vulnerability assessment
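
A hedged sketch of how such a call might look from the CLI; the parameter names and values are illustrative assumptions based on the conceptual list, not the documented signature, so check the GuardAPI reference before use:

grdapi create_ad_hoc_audit_for_security_assessment assessmentDescription="Amazon RDS VA" emails=dba@example.com datasourceIds=101,102

In normal use this command is driven by the Launch Vulnerability Assessment button rather than entered by hand.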

For additional information on how to view and work with VA results, go to the
“Viewing assessment results” on page 582 help topic.

Note: Not all the functionality available in Guardium VA is matched in the cloud-based Amazon RDS.

Terminology terms

AWS (Amazon Web Services): A set of services delivered by Amazon that can be
used to meet the needs for a cloud-based application.

Regions: Compute power you use from Amazon (EC2 and EBS volumes) runs in a physical datacenter, in one of several geographic regions (for example, Northern Virginia, Northern California, Ireland, Singapore, and Tokyo).

Availability Zones: Each physical region is further broken down into zones, whereby
a zone is an independent section of a datacenter that adds redundancy and fault
tolerance to a given region.

Amazon Virtual Private Cloud (VPC): Amazon Virtual Private Cloud (Amazon
VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual
network that you've defined. This virtual network closely resembles a traditional
network that you'd operate in your own data center, with the benefits of using the
scalable infrastructure of AWS.

Amazon Relational Database Service is a web service that makes it easy to set up,
operate, and scale a relational database in the cloud.

RDS: Relational Database Service: A relational database (MySQL) that is hosted and
managed by Amazon, and made available to developers that do not want to
manage their own database platform.


Access Key ID: The Access Key ID (a 20-character, alphanumeric sequence) is associated with the user’s AWS account. It is included in AWS service requests to identify the user as the sender of the request.

Secret Access Key: A 40-character sequence. Each Access Key ID
has a Secret Access Key associated with it. This key is just a long string of
characters (and not a file) that is used to calculate the digital signature that needs
to be included in the request. Secret Access Key is a secret, and only the user and
AWS should have it.

Amazon RDS Security Group: Security groups control the access that traffic has in
and out of a DB instance. Three types of security groups are used with Amazon
RDS: DB security groups, VPC security groups, and EC2 security groups. In simple
terms, a DB security group controls access to a DB instance that is not in a VPC, a
VPC security group controls access to a DB instance (or other AWS instances)
inside a VPC, and an EC2 security group controls access to an EC2 instance.

Additional Information

docs.aws.amazon.com

aws.amazon.com/rds



IBM

Administration
Contents
Chapter 1. Configuring your Guardium Manage Custom Classes . . . . . . . . .. 129
system . . . . . . . . . . . . . .. 1 Uploading a Key File . . . . . . . . . .. 130
System Configuration . . . . . . . . . .. 1 SSH Public Keys . . . . . . . . . . .. 130
Inspection Engine Configuration . . . . . . .. 5 How to install an appliance certificate to avoid a
Portal Configuration . . . . . . . . . .. 11 browser SSL certificate challenge . . . . . .. 131
Generate New Layout . . . . . . . . . .. 12 Express Security Setup . . . . . . . . .. 132
Configure Authentication . . . . . . . . .. 13 GRC Heatmap . . . . . . . . . . . .. 136
Global Profile. . . . . . . . . . . . .. 14 Self Monitoring. . . . . . . . . . . .. 138
Alerter Configuration . . . . . . . . . .. 22 How to monitor the Guardium system via alerts 147
Anomaly Detection . . . . . . . . . . .. 23 Monitoring with SNMP . . . . . . . .. 156
Session Inference . . . . . . . . . . .. 24 Running Query Monitor . . . . . . . .. 158
IP to Hostname Aliasing . . . . . . . . .. 25 Groups . . . . . . . . . . . . . .. 158
System Backup . . . . . . . . . . . .. 25 Groups Overview . . . . . . . . . .. 158
Configuring patch backup . . . . . . . .. 31 Using groups in queries and policies . . .. 160
Configure Permission to Socket connection . . .. 32 Example: Using groups to create rules and
policies . . . . . . . . . . . . .. 161
Creating a new group . . . . . . . .. 162
Chapter 2. Access Management Modifying a group . . . . . . . . .. 162
Overview . . . . . . . . . . . . .. 33 Predefined Groups . . . . . . . . .. 164
Understanding Roles . . . . . . . . . .. 34 Populating groups. . . . . . . . . .. 171
Managing roles and permissions . . . . . .. 38 Security Roles . . . . . . . . . . . .. 179
How to create a role with minimal access . . .. 39 Notifications. . . . . . . . . . . . .. 180
Manage Users . . . . . . . . . . . .. 41 How to create a real-time alert . . . . . .. 181
How to create a user with the proper entitlements Custom Alerting Class Administration . . . .. 183
to login to CLI . . . . . . . . . . . .. 46 Predefined Alerts . . . . . . . . . . .. 183
Importing Users from LDAP . . . . . . .. 48 Scheduling . . . . . . . . . . . . .. 185
Data Security - User Hierarchy and Database Aliases . . . . . . . . . . . . . .. 186
Associations . . . . . . . . . . . . .. 51 Dates and Timestamps . . . . . . . . .. 188
How to define User Hierarchies . . . . . .. 54 Time Periods . . . . . . . . . . . .. 191
Time Periods . . . . . . . . . . . .. 191
Chapter 3. Aggregation and Central Comments . . . . . . . . . . . . .. 192
management . . . . . . . . . . .. 57 How to install patches . . . . . . . . .. 193
Aggregation . . . . . . . . . . . . .. 57 Support Maintenance . . . . . . . . . .. 197
Central Management . . . . . . . . . .. 68
Guardium Component Services . . . . . .. 69 Chapter 5. Product integration . . .. 199
Implementing Central Management . . . .. 72 Configure BIG-IP Application Security Manager
Using Central Management Functions . . .. 78 (ASM) to communicate with Guardium system .. 199
Investigation Center . . . . . . . . . .. 91 Guardium Integration with BigInsights . . . .. 199
OPTIM to Guardium Interface. . . . . . .. 203
Chapter 4. Managing your Guardium Combining real-time alerts and correlation analysis
system. . . . . . . . . . . . . .. 95 with SIEM products . . . . . . . . . .. 204
How to transfer sensitive data . . . . . . .. 209
Guardium Administration . . . . . . . .. 95
CEF Mapping . . . . . . . . . . . .. 212
Certificates . . . . . . . . . . . . .. 96
LEEF Mapping . . . . . . . . . . . .. 215
Unit Utilization Level . . . . . . . . . .. 99
Customer Uploads . . . . . . . . . .. 101
Services Status panel . . . . . . . . . .. 105 Chapter 6. Troubleshooting problems 219
Archive, Purge and Restore. . . . . . . .. 105 Techniques for troubleshooting problems . . .. 219
Guardium catalog . . . . . . . . . . .. 113 Searching knowledge bases . . . . . . .. 221
Archiving a catalog . . . . . . . . .. 114 Getting fixes from Fix Central . . . . . .. 222
Exporting a catalog . . . . . . . . .. 115 Contacting IBM Support. . . . . . . .. 222
Importing a catalog . . . . . . . . .. 115 Basic information for IBM Support . . . .. 223
How to manage backup and archiving . . . .. 115 Exchanging information with IBM . . . .. 228
Exporting Results (CSV, CEF, PDF) . . . . .. 121 Subscribing to Support updates . . . . .. 229
Export/Import Definitions . . . . . . . .. 122 Problems and solutions . . . . . . . . .. 230
Distributed Interface . . . . . . . . . .. 127 User Interface . . . . . . . . . . .. 230

Policies . . . . . . . . . . . . .. 233 S-TAPs and other agents . . . . . . .. 251
Reports . . . . . . . . . . . . .. 236 GIM . . . . . . . . . . . . . .. 260
Assess and Harden . . . . . . . . .. 241 Installing Your Guardium System . . . .. 260
Configuring your Guardium system . . . .. 242
Access Management . . . . . . . . .. 246 Index . . . . . . . . . . . . . .. 265
Aggregation . . . . . . . . . . . .. 247
Central Management . . . . . . . . .. 249

Chapter 1. Configuring your Guardium system
You can configure several aspects of your Guardium system to enable you to meet
your business goals effectively and efficiently.

System Configuration
Most of the information on the System Configuration panel is set by using the CLI
at installation time.

For instructions on how to configure the system, or to modify any other System
Configuration settings, see Modify the System Configuration.

There must be a valid license to use various functions within the appliance. When
a license is entered after the system starts, a restart of the GUI is needed.

About System Shared Secret

The Guardium® administrator defines the system shared secret in the System
Configuration. The system shared secret is used for two general purposes:
v To encrypt files that are exported from the appliance by archive/export activities
v To establish secure communications between Central Managers and managed
units

If you are using Central Management and/or aggregation, you must set the System
Shared Secret for all related systems to the same value.

The system shared secret value is null at installation time. Depending on a company’s security practices, it may be necessary to change the system shared secret on a periodic basis. Each appliance maintains a shared secret keys file, containing an historical record of all shared secrets defined on that appliance. The same system thus will have no problem at a later date decrypting information that has been encrypted on that system.

When information is exported or archived from one system, and imported or restored on another, the latter must have access to the shared secret used by the former. For these cases, there are CLI commands that can be used to export the system shared secrets from one Guardium system, and import them on another.

See the following commands in the CLI appendix:
v aggregator backup keys file
v aggregator restore keys file

Modifying the System Configuration


1. Click Setup > Tools and Views > System to open the System Configuration.
2. Make your changes.
3. Click Apply to save the updated system configuration.

Note: The applied changes do not take effect until the Guardium system is
restarted. After you apply configuration changes, click Restart to stop and restart
the system.

Table 1. System Configuration Panel Reference

Unique Global Identifier
   This value is used for collation and aggregation of data. The default value is a unique value that is derived from the MAC address of the machine. Do not change this value after the system begins monitoring operations.

System Shared Secret
   Any value that you enter here is not displayed. Each character you type is masked.
   The system shared secret is used for archive/restore operations, and for Central Management and aggregation operations. When used, its value must be the same for all units that will communicate. This value is null at installation time, and can change over time.
   The system shared secret is used:
   v When secure connections are being established between a Central Manager and a managed unit.
   v When an aggregated unit signs and encrypts data for export to the aggregator.
   v When any unit signs and encrypts data for archiving.
   v When an aggregator imports data from an aggregated unit.
   v When any unit restores archived data.
   Depending on your company’s security practices, you might be required to change the system shared secret from time to time. Because the shared secret can change, each system maintains a shared secret keys file, containing a historical record of all shared secrets defined on that system. This allows an exported (or archived) file from a system with an older shared secret to be imported (or restored) by a system on which that same shared secret has been replaced with a newer one.
   Caution: When used, be sure to save the shared secret value in a safe location. If you lose the value, you will not be able to access archived data.

Retype Secret
   When you enter or change the system shared secret, retype the new value a second time. Any value that you enter here is not displayed. Each character you type is displayed as an asterisk.

License Key
   The license key is inserted in the configuration during installation. Do not modify this field unless you are instructed to do so by Technical Support. You might need to paste a new product key here if optional components are being added.
   If you install a new product key on the central management unit, when you click Apply, you will receive a warning message that reads: Warning: changing the license on a Central Management Unit requires refreshing all managed units. After you click OK to close the message window, you must click Apply a second time to install the new product key. You will know that the new license has been installed when you receive the message: Data successfully saved.
   If you install a new product key on a Central Management Unit, you might get a warning that states that the license applied to the CM must be refreshed on the managed unit. This refresh is done from the Central Manager by pressing the refresh icon from the Central Manager to each of the collectors listed.
   The license entitles the user to access products and the corresponding features. A license can be appended or overridden. The active license is stored in LICENSE_KEY in ADMINCONSOLE_PARAMETER.
   Product types: DAM; FAM; VA; GFA. Editions for product types: Express; Standard; Advanced.

Number of Datasources
   If a limited license is applied, the maximum number of datasources permitted per datasource license is displayed.

Metered Scans Left
   If a limited license is applied, the number of vulnerability assessment scans permitted (datasource metering) per metering license is displayed. Each time a vulnerability assessment is triggered, the scan counter decreases by one.

License valid until
   If a limited license is applied, a fixed date when the license will be disabled is displayed.

# of Licenses
   This value indicates the number of licenses remaining.

Note: Configure Network Address, Secondary Management Interface, and Routing settings using the CLI
   These settings cannot be configured through the GUI and appear grayed out on the System Configuration user interface.

System Hostname
   The resolvable host name for the Guardium system. This name must match the DNS host name for the primary System IP Address.

Domain
   The name of the DNS domain on which the Guardium system resides.

System IP Address
   The primary IP address that users and S-TAP® or CAS agents use to connect to the Guardium system. It is assigned to the network interface labeled ETH0.

SubNet Mask
   The subnet mask for the primary System IP Address.

Hardware (MAC) Address
   The MAC address for the primary network interface.

System IP Address (Secondary)
   Optional: A port can also be configured to team with the primary interface in order to provide high-availability failover IP teaming. Alternatively, a port on the device can be configured as a secondary management interface with a different IP address, network mask, and gateway from the primary. These two options are mutually exclusive.
   There are two different, and mutually exclusive, kinds of secondary management connections, both controlled by options to the same CLI command:
   Bonding or teaming
      Turns eth0 and another specified network interface card (NIC) into a bonded pair with standby failover. To implement this option, use the CLI command store network interface high-availability on <nic>, where nic is an available NIC.
   Secondary interface
      Allows the GUI and CLI to be accessible from another NIC in the Guardium system. To implement this option, use the CLI command store network interface secondary on <nic> <ip> <mask> <gateway> to specify the secondary NIC, its IP address and network mask, and optionally a gateway.
   Both physical and VM systems have the same capabilities, dependent on the number of NICs installed on the Guardium system or VM.
   To display the network interfaces installed on the unit, use the show network interface inventory CLI command. For example:
   show network interface inventory
   Current network card configuration:
   Device | Mac Address |Member of
   -----------------------------------------
   eth0 | 00:50:56:3b:c3:73 |
   eth1 | 00:50:56:8a:0d:fa |
   eth2 | 00:50:56:8a:0d:fb |
   eth3 | 00:50:56:8a:00:c1 |
   Note: The “Member of” column shows which NICs are in a bond pair, if a bonding exists.
   To locate the eth connectors on your appliance, use the show network interface port CLI command, which will blink the orange light on that port 20 times. For example:
   guard14.xyz.com> sho net int port 3
   The orange light on port eth5 will now blink 20 times.
   Note: The secondary IP address and its associated port are NOT related to the high availability feature, which provides fail-over support via IP teaming for the primary connection. For more information about the high-availability option, see the store network interface commands in the CLI Appendix.

SubNet Mask (Secondary)
   Optional. The subnet mask for the secondary System IP Address.

Default Route / Secondary Route
   The IP address of the default router for the system. / The IP address of the secondary router.

Primary Resolver, Secondary Resolver, Tertiary Resolver
   The IP address for the Primary Resolver (DNS) is required. The secondary and tertiary resolvers are optional.

Test Connection
   Click Test Connection to test the connection to the corresponding DNS (Domain Name System) server. This only tests that there is access to port 53 (DNS) on the specified host. It does not verify that this is a working DNS server. You will receive a message box indicating if the DNS server responded.

Stop
   Click Stop to shut down the system.

Restart
   Click Restart to stop and then restart the system. You will be prompted to confirm the action.

Apply
   Click Apply to save the changes. The changes will be applied the next time the system restarts.

Inspection Engine Configuration


An inspection engine monitors the traffic between a set of one or more servers and
a set of one or more clients using a specific database protocol (Oracle or Sybase,
for example).

The inspection engine extracts SQL from network packets; compiles parse trees that
identify sentences, requests, commands, objects, and fields; and logs detailed
information about that traffic to an internal database.

You can configure and start or stop multiple inspection engines on the Guardium
appliance.

Inspection engines cannot be defined or run on a Central Manager unit. However,
you can start and stop inspection engines on managed units from the Central
Manager control panel.

Inspection engines are also defined on S-TAPs. If S-TAPs report to this Guardium
appliance, be sure the appliance does not monitor the same traffic as the S-TAP. If
that happens, the analysis engine will receive duplicate packets, will be unable to
reconstruct messages, and will ignore that traffic.

Selecting IP addresses

Each inspection engine monitors traffic between one or more client and server IP
addresses. In an inspection engine definition these are defined using an IP address
and a mask. You can think of an IP address as a single location and a mask as a
wild-card mechanism that allows you to define a range of IP addresses.

IP addresses have the format: n.n.n.n, where each n is an eight-bit number (called
an octet) in the range 0-255.

For example, an IP address for your PC might be: 192.168.1.3. This address is used
in the examples. Since these are binary numbers, the last octet (3) can be
represented as: 00000011.

The mask is specified in the same format as the IP address: n.n.n.n. A zero in any
bit position of the mask serves as a wildcard. Thus, the mask 255.255.255.240
combined with the IP address 192.168.1.3 matches all values from 0-15 in the last
octet, since the value 240 in binary is 11110000. But it only matches the values
192.168.1 in the first three octets, since 255 is all 1s in binary (in other words, no
wildcards apply for the first three octets).

Specifying binary masks can be a little confusing. However, for the sake of
convenience, IP addresses are usually grouped in a hierarchical fashion, with all of
the addresses in one category (desktop computers, for example) grouped together
in one of the last two octets. Therefore, in practice, the numbers you see most often
in masks are either 255 (no wildcard) or 0 (all).

Thus a mask 255.255.255.255 (which has no zero bits) identifies only the single
address specified by IP address (192.168.1.3 in the example).

Alternatively, the mask 255.255.255.0, combined with the same IP address matches
all IP addresses beginning with 192.168.1.

Selecting all addresses

The IP address 0.0.0.0, which is sometimes used to indicate all IP addresses, is not
allowed by Guardium. To select all IP addresses when using an IP address/mask
combination, use any non-zero IP address followed by a mask containing all zeroes
(for example: 1.1.1.1/0.0.0.0).
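
For example, an inspection engine intended to monitor all clients on one subnet
connecting to a single database server might use values like the following (the
addresses and port shown are illustrative only):

   DB Client IP/Mask: 192.168.1.1 / 255.255.255.0
   DB Server IP/Mask: 10.10.9.57 / 255.255.255.255
   Port: 1521

The client mask 255.255.255.0 matches every address beginning with 192.168.1,
while the server mask 255.255.255.255 matches only the single address 10.10.9.57.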

Configure Settings that apply to all Inspection Engines


1. Click Manage > Activity Monitoring > Inspection Engines to open the
Inspection Engine Configuration.
2. Refer to the table and make any changes desired.
3. Click Apply to save the updated system configuration when you are done
making changes.
4. Optionally add comments to the Inspection Engine Configuration.
5. Click Restart Inspection Engines.

Note: The applied changes do not take effect until the inspection engines are
restarted. After applying inspection engine configuration changes, click Restart
Inspection Engines to stop and restart the inspection engines using the new
configuration settings.

Note: For HTTP support, there are Inspection Engine configuration limitations.
The following Inspection Engine settings are not supported for HTTP: Default
Capture Value; Default Mark Auto Commit; Log Request Sql String; Log
Sequencing; Log Exception Sql String; Log Records Affected; Compute Avg.
Response Time; Inspect Returned Data; Record Empty Sessions.

Table 2. Settings that Apply to All Inspection Engines

Default Capture Value
   Default value is false. Used by the Replay function to distinguish between transactions and capture values, meaning that if you have a prepared statement, assigned values will be captured and replayed. If you want to replay your captured prepared statements as prepared statements, the check box should be checked for the captured data.

Default Mark Auto Commit
   Default value is true. Due to various auto-commit models for different databases, this value is used by the Replay function to explicitly mark up the transactions and auto commit after each command.
   Note: If the check box is checked, then commits and rollbacks will be ignored. Databases currently supported include DB2®, Informix®, and Oracle.

Log Request Sql String
   If enabled, this option will automatically log DB2 application events which use the procedure WLM_SET_CLIENT_INFO. These events will only be logged if there is an application issuing them in the environment. They can be added to reports by using attributes from the Application Events entity.

Log Sequencing
   If marked, a record is made of the immediately previous SQL statement, as well as the current SQL statement, provided that the previous construct occurs within a short enough time period.

Log Exception Sql String
   If marked, when exceptions are logged, the entire SQL statement is logged.

Log Records Affected
   Records affected - the result set of the number of records that are affected by each execution of SQL statements.
   If marked, the number of records affected is recorded for each SQL statement (when applicable). The default value for Log Records Affected is FALSE (0).
   Note: When using JDBC, this must be marked to properly log Oracle bind variable traffic.
   Note: Enabling Log Records Affected is important within Capture/Replay in order to provide comparison results.
   Note: The records affected option is a sniffer operation which requires the sniffer to process additional response packets and postpone logging of impacted data, which increases the buffer size and might potentially have an adverse effect on overall sniffer performance. Significant impact comes from very large responses. To prevent a large amount of overhead associated with this operation, Guardium uses a set of default thresholds that allows the sniffer to skip the processing operation when they are exceeded.
   Note: Usually, Records Affected is set correctly when the user turns on Log Records Affected via Inspection Engines > Log Records Affected. However, using MS-SQL via stored procedures will set Records Affected as -1.
   Example of result set values:
   v Case 1, record affected value, positive number - this represents the correct size of the result set.
   v Case 2, record affected value, -2 - the number of records exceeded a configurable limit (this can be tuned through CLI commands).
   v Case 3, record affected value, -1 - this shows any cases of packet configurations not supported by Guardium.
   v Case 4, record affected value, -2 - the result set is sent by streaming mode.
   v Case 5, record affected value, -2 - an intermediate result during the record count to update the user about the current value; it ends up with a positive number of total records.
   Note: The Records Affected feature is not supported in DB2 when streaming is used to send the results.

Log timestamp per second
   If marked, allows you to display the distribution of requests down to the second, regardless of the default logging granularity.

Compute Avg Response Time
   When marked, for each SQL construct logged, the average response time will be computed.
   Note: Enabling Compute Avg Response Time is important within Capture/Replay to see response times between statement executions.

Inspect Returned Data
   Mark to inspect data returned by SQL requests as well as update the ingress and egress counts.
   If rules will be used in the security policy, this checkbox must be marked.

Record Empty Sessions
   When marked, sessions containing no SQL statements will be logged. When cleared, these sessions will be ignored.

Parse XML
   The Inspection Engine will not normally parse XML traffic. Mark this checkbox to parse XML traffic.

Logging Granularity
   The number of minutes (1, 2, 5, 10, 15, 30, or 60) in a logging unit. If requested in a report, Guardium summarizes request data at this granularity. For example, if the logging granularity is 60, a certain request occurred n times in a given hour. If the check box is not marked, exactly when the command occurred within the hour is not recorded. But, if a rule in a policy is triggered by a request, a real time alert can indicate the exact time. When you define exception rules for a policy, those rules can also apply to the logging unit. For example, you might want to ignore 5 login failures per hour, but send an alert on the sixth login failure.

Max. Hits per Returned Data
   When returned data is being inspected, indicate how many hits (policy rule violations) are to be recorded.

Ignored Ports List
   A list of ports to be ignored. Add values to this list if you know your database servers are processing non-database protocols, and you want Guardium to not waste cycles analyzing non-database traffic. For example, if you know the host on which your database resides also runs an HTTP server on port 80, you can add 80 to the ignored ports list, ensuring that Guardium will not process these streams. Separate multiple values with commas, and use a hyphen to specify an inclusive range of ports. For example:
   101,105,110-223

Buffer Free: n %
   Display only. n is the percent of free buffer space available for the inspection engine process. This value is updated each time the window is refreshed. There is a single inspection engine process that drives all inspection engines. This is the buffer used by that process.

Restart Inspection Engines
   Click Restart Inspection Engines to stop and restart all inspection engines.

Add Comments
   Click Comment to add comments to the Inspection Engine Configuration.

Apply
   Click Apply to save the configuration.
   Note: Any global changes made (and saved by using Apply) do not take effect until you restart the inspection engines. However, individual inspection engine attributes, such as exclude, sequence order, etc., take effect immediately.

Create an Inspection Engine


1. Click Manage > Activity Monitoring > Inspection Engines to open Inspection
Engines.
2. Click Add Inspection Engine to expand the panel.
3. Enter a name in the Name box. It must be unique on the appliance. We
recommend that you use only letters and numbers in the name, as the use of
any special characters prevents working with this inspection engine via the
CLI.
4. From the Protocol box, select either the protocol to be monitored (Cassandra,
CouchDB, DB2, DB2 Exit, exclude IE, FTP, GreenPlumDB, Hadoop, HTTP,
ISERIES, Informix, KERBEROS, MongoDB, MS SQL, Mysql, Named Pipes,
Netezza, Oracle, PostgreSQL, SAP Hana, Sybase, Teradata, or Windows File
Share) or the keyword exclude IE. Select exclude IE if you want all traffic
between the specified clients and servers to be ignored.

Note: Exclude IE only works on ports; the IP address does not matter. Enter a range of
ports to ignore. To exclude a specific IP for this port, the exclude DB Client IP
can be used within the inspection engine created. If there is a need not to pick
up packets on a certain port range, define a separate inspection engine of the
type Exclude IE (IGNORE). The only values that have to be defined in that
engine are PORT_RANGE_START and PORT_RANGE_END. This kind of
exclusion might be needed, for instance, when an all-inclusive Oracle
Inspection Engine is defined with ports range 1024-65535, but certain ports
have to be excluded. When using Oracle for Windows, expand the port range
to 1000 to 65535.

Note: When sending IPC traffic from the GreenPlum database, it will be
logged on the Guardium system as PostgreSQL traffic. When sending TCP
traffic from the GreenPlum database, it will be logged as GreenPlum
database traffic by the inspection engine. For TCP traffic, Guardium determines the
database according to the port (port 5432 for GreenPlum). For IPC traffic,
Guardium uses named pipes, and for the GreenPlum database, the Guardium
system uses PostgreSQL as the name of the database. When both
PostgreSQL and GreenPlum databases are on the same system, their IPC
traffic will be logged in DB_PROTOCOL according to the first PostgreSQL/
GreenPlum database IE set in the guard_tap.ini file.
5. In the DB Client IP/Mask boxes, enter a list of clients (a client host from
which the database connection was initiated) to be monitored (or excluded if
the Exclude DB Client IP box is marked). The clients are identified by IP
addresses and subnet masks. There are detailed instructions on how to use
these fields in the overview.
Click the plus sign to add additional IP address and subnet mask. Click the
minus sign to remove the last IP address and subnet mask.
6. In the DB Server IP/Mask boxes, enter a list of database servers (where a
database sits) to be monitored. The servers are identified by IP addresses and
subnet masks. There are detailed instructions on how to use these fields in the
overview.
Click the plus sign to add additional IP address and subnet mask. Click the
minus sign to remove the last IP address and subnet mask.
7. In the Port box, enter a single port or a range of ports over which traffic
between the specified clients and database servers will be monitored. Most
often, this should be a single port.
Warning: Do not enter a wide range of ports, just to be certain that you have
included the correct one! You may cause the inspection engine to bog down
attempting to analyze traffic on ports that carry no database traffic or traffic
that is of no interest for your environment.
8. Mark the Active on startup box if this inspection engine should be started
automatically on start-up.
9. Mark the Exclude DB Client IP box if you want the inspection engine to
monitor traffic from all clients except for those listed in the DB Client
IP/Mask list. Be sure that you understand the difference between this and the
Ignore protocol selection. This option includes all traffic except traffic from the listed IP
addresses. To ignore a specific set of clients without including all other clients,
define a separate inspection engine for those clients and use the Ignore
protocol.
10. Click Add to save the definition.
11. Optionally reposition the inspection engine in the list of inspection engines.
Filtering mechanisms defined in the inspection engines are executed in the
order in which they are listed. If necessary, reposition the new inspection engine configuration, or any
existing configurations, using the Up and/or Down buttons in the border of
the definition.
12. Optionally click Start to start the inspection engine just configured. The Start
button will be replaced by a Stop button, once the engine has been started.
13.

Note: If you provide a value for TAP_IDENTIFIER and the value contains
spaces, Guardium will automatically replace the spaces with hyphens. For
example, the value “Sample description” will become “Sample-description”.

Start or Stop an Inspection Engine


Click Manage > Activity Monitoring > Inspection Engines to open the Inspection
Engines. To start an inspection engine, click Start. To stop an inspection engine,
click Stop.

Remove an Inspection Engine

If you are no longer using an inspection engine, we suggest that you remove the
definition, so that it is not restarted accidentally.
1. Click Manage > Activity Monitoring > Inspection Engines to open the
Inspection Engines.
2. If the inspection engine to be removed has not been stopped, click Stop.
3. To remove an inspection engine, click Delete.

Portal Configuration
You can keep the Guardium appliance Web server on its default port (8443) or
reset the portal. We strongly recommend that you use the default port.
1. Click Setup > Tools and Views > Portal to open the Portal.
2. If it is not marked, mark the Active on Startup checkbox (this should never be
disabled).
3. Set the HTTPS Port to an integer value between 1025 and 65535.
4. Click Apply to save the value. (The Guardium security portal will not start
listening on this port until it is restarted.) Or click Revert to restore the value
stored by the last Apply operation.
5. Click Restart to restart the Guardium Web server if you have made and saved
any changes. You can now connect to the unit on the newly assigned port.

Note: To re-connect to the unit after it has restarted with the new port number,
you must change the URL used to open the Guardium Login Page on your
browser.
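
For example, if the HTTPS Port is changed from 8443 to 8444 on a system
reachable as guard14.xyz.com (an illustrative hostname; substitute your own), the
login page URL changes accordingly:

   https://guard14.xyz.com:8443   (before the change)
   https://guard14.xyz.com:8444   (after the change)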

The Guardium Portal Configuration is used to define the way user passwords are
authenticated when logging into the Guardium appliance. There are three choices.

These choices are Local (Guardium Default), RADIUS or LDAP.

The Portal configuration screen under Setup > Tools and Views > Portal is used
for the following:
1. To define the best way to authenticate a user password.
2. To restart GUI to reset the authentication type.

The Local connection will work when a password for a given user is defined from
a login. The login is defined using the accessmgr role. By default, log in to the
accessmgr account, which has the accessmgr role. This role gives a user the ability
to add or upload user accounts and create passwords.

When you define your username and password using the accessmgr role type, the
defined password for each user will be used when logging in to the Guardium
appliance.

The RADIUS connection allows login authentication through a radius server. The
Radius/RSA server can be defined using both a password and a SecurID token
number. The SecurID token numeric password is displayed via a hardware token.

The Radius/RSA server is defined on a Windows server. The security RSA SecurID
token is also defined and stored on the Radius server and does not have to be
downloaded in order for the Radius portal to work.

In addition, a Radius server connection can be defined using a UNIX platform.
Radius is also defined as FreeRadius. User accounts and passwords are defined on
the Radius servers and do not have to be downloaded. In order to use FreeRadius,
the client (the Guardium server), usernames, and passwords are defined on the
FreeRadius UNIX servers and used when the Radius Portal connection is defined.

The default portal is set to Local.

The LDAP connection will work when the password is defined and stored on a
given LDAP server. In order for a user to use the LDAP portal and to login, a user
account name must be imported from the LDAP server first. Use the User LDAP
Import function available from the accessmgr account to define the LDAP location
and then import the LDAP users. The password does not have to be uploaded.

Generate New Layout


Generate a new layout for a role based on a user layout
The Guardium administrator or access manager can generate, via CLI, a default
layout for a role. After that, any new user who is assigned that role will have that
layout after logging in for the first time.

Note: Default .psml structures for user and role can be defined, via the GUI, by
the admin user. See Portlet Editor for further information.

Use the generate-role-layout CLI command to generate a new layout for an
existing role, based on the layout for the specified user. Once the new role layout
has been defined, any users who are assigned that role before they log in for the
first time will receive the layout for that role.

generate-role-layout

Syntax generate-role-layout <user> <role>

Note: user (login name) and role are not case-sensitive.

Parameters

If either of the following parameters contains spaces (for example, John Doe as the
user, or DBA Managers as the role), replace the space characters with underscore characters.

For example:

generate-role-layout John_Doe DBA_Managers

user - The name of the user whose layout will be used as a model for the role
layout. If the user does not exist, you will receive the following error message: No
such user '<user>'.

role - The role to which the new layout will be attached.

Configure Authentication
By default, Guardium user logins are authenticated by Guardium, independent of
any other application.

For the Guardium admin user account, login is always authenticated by Guardium
alone. For all other Guardium user accounts, authentication can be configured to
use either RADIUS or LDAP. In the latter cases, additional configuration
information for connecting with the authentication server is required.

Note: FreeRadius client software is supported.

When an alternative authentication method is used, all Guardium users must still
be defined as users on the Guardium appliance. It is only the authentication that is
performed by another application.

While user accounts and roles are managed by the accessmgr user, the
authentication method used is managed by the admin user. This is a standard
separation-of-duties best practice.

To configure authentication, use the procedures that follow.

Configure Guardium Authentication


1. Click Setup > Tools and Views > Portal to open the Authentication
Configuration.
2. Select the Guardium radio button in the Authentication Configuration panel.
3. Click Apply.

Configure RADIUS Authentication


1. Click Setup > Tools and Views > Portal to open the Authentication
Configuration.
2. Select the RADIUS radio button in the Authentication Configuration panel.
Additional fields will appear in the panel.
3. In the Primary Server box, enter host name or IP address of the primary
RADIUS server.
4. Optionally enter the host name or IP address of the secondary and tertiary
RADIUS servers.
5. Enter the UDP Port used (1812 or 1645) by RADIUS.
6. Enter the RADIUS server Shared Secret, twice.
7. Enter the Timeout Seconds (the default is 120).

8. Select the Authentication Type:
v PAP - password authentication protocol
v CHAP - Challenge-handshake authentication protocol
v MS-CHAPv2 - Microsoft version 2 of the challenge-handshake
authentication protocol
9. Optionally click Test to verify the configuration. You will be informed of the
results of the test. The configuration will also be tested whenever you click the
Apply button to save changes.
10. Click Apply. Guardium will attempt to authenticate a test user, and inform
you of the results.

Configure LDAP Authentication


1. Click Setup > Tools and Views > Portal to open the Authentication
Configuration.
2. Select the LDAP radio button in Authentication Configuration.
3. In the Server box, enter the host name or IP address of the LDAP server.
4. Enter the Port number (the default is 636 for LDAP over SSL).
5. Enter the User RDN Type (relative distinguished name type), which is
uid by default.

Note:

This attribute identifies a user for LDAP authentication. The Access Manager
should be made aware of what attribute is used here, since the Access
Manager performs the LDAP User Import operation. Click on this help link
LDAP User Import for further information on Importing LDAP Users.

If a user is using SamAccountName as the RDN value, the user must use
either a =search or =[domain name] in the full name.

Examples: SamAccountName=search, SamAccountName=dom


6. Enter the User Base DN (distinguished name).
7. Mark or clear the Use SSL checkbox, as appropriate for your LDAP Server.
8. Optional. To inspect one or more trusted certificates, click Trusted Certificates
and follow the instructions in that panel.
9. Optional. To add a trusted certificate, click Add Trusted Certificates and
follow the instructions in that panel.
10. Optional. Click Test to verify the configuration. You will be informed of the
results of the test. The configuration will also be tested whenever you click
Apply to save changes.
11. Click Apply. Guardium will attempt to authenticate a test user, and inform
you of the results.

Global Profile
The Global Profile panel defines defaults that apply to all users.

Override the Default Aliases Setting

By default, for any new report, or for any report that is contained in a default
layout, aliases are not used.

An alias provides a synonym that substitutes for a stored value of a specific
attribute type. It is commonly used to display a meaningful or user-friendly name
for a data value. For example, Financial Server might be defined as an alias for IP
address 192.168.2.18.

If you want to see aliases by default, you can change the default aliases setting for
all reports, as follows:
v Click Setup > Tools and Views > Global Profile to open the Global Profile.
v Mark the Use Aliases in Reports unless otherwise specified check box.
v Click Apply.

Customize the PDF Page Footer

PDF files created by various Guardium components (audit tasks, for example) have
a standard page footer. To customize that footer:
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the PDF Footer Text field, enter the text to be printed at the foot of each
page.
3. Click Apply.

Edit the Alert Message Template

To customize the message template used to generate alerts:


1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the Message Template text box, edit the alert template text.
You can mark the no wrap check box to see where the line breaks appear in the
message.
3. Click Apply when you are done.
4. Changes will not take effect until the inspection engines are restarted. To do
that now, click Manage > Activity Monitoring > Inspection Engines to open
the Inspection Engines. Click Restart Inspection Engines.
Table 3. Alert Message Template Variables

%%addBaselineConstruct - To add to baseline
%%AppUserName - Application user name
%%AuthorizationCode - Authorization code
%%category - Category from the rule definition
%%classification - Classification from the rule definition
%%clientHostname - Client host name
%%clientIP - Client IP address
%%clientPort - Client port number
%%DBName - Database name
%%DBProtocol - Database protocol
%%DBProtocolVersion - Database protocol version
%%DBUser - Database user name
%%lastError - Last error description; available only when a SQL error request triggering an exception rule contains a last error description field

%%netProtocol - Network protocol; for K-TAP on Oracle, this may display as either IPC or BEQ
%%OSUser - Session information (OS_USER in GDM_ACCESS)
%%receiptTime - Timestamp representing the time when the alert occurred
%%receiptTimeMills - Numeric representing the time when the alert occurred, in milliseconds since the fixed date of Jan 1 1900
%%requestType - Request type
%%ruleDescription - The rule description from the policy rule definition
%%ruleID - The rule number from the rule definition
%%serverHostname - Server hostname
%%serverIP - Server IP address
%%serverPort - Server port number
%%serverType - The database server type
%%serviceName - Service name
%%sessionStart - Session start time (login time)
%%sessionStartMills - Numeric representing the start of the session where the alert occurred, in milliseconds since the fixed date of Jan 1 1900
%%severity - Severity from the rule definition
%%SourceProgram - Source program name
%%SQLNoValue - SQL string with masked values. The value of SQL will be replaced by ? in the syslog.
%%SQLString - SQL string (if any)
%%SQLTimestamp - The time on the packet/request (TIMESTAMP in GDM_CONSTRUCT_TEXT)
%%Subject[ ] - If this variable is used in the message template, all that appears between [ ] (for example, file name, email sender, description) will be the subject line of the email sent to user.
%%violationID - Numeric representing the POLICY_VIOLATION_LOG_ID of this alert in GDM_POLICY_VIOLATION_LOG (this is the same as the Violation Log ID in the Policy Violations / Incident Management report)
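
For example, a custom message template for real-time alerts might combine
several of these variables (a sketch only; adapt the wording and the variables to
your own alerting conventions):

   %%Subject[Guardium policy violation: %%ruleDescription]
   Severity: %%severity
   Server: %%serverIP (%%serverHostname), type %%serverType
   Client: %%clientIP, DB user: %%DBUser, OS user: %%OSUser
   Source program: %%SourceProgram
   SQL: %%SQLNoValue

Each variable is replaced with the corresponding value when the alert is generated.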

Named Template
Message templates are used to generate alerts.

The feature defines multiple message templates and facilitates the use of different
templates on different rules. In the past, only a single message template was
available for all rules, all receiver types, etc.

To add, modify and delete named message templates, click Edit. When creating a
new named template, the starting value of the string is a copy of whatever is
currently in the Message template of the Global Profile. "R/T Alert" is the only
level of severity permitted.

Predefined message templates have been created for the SIEM solutions, ArcSight,
EnVision, and QRadar. The Guardium system comes preloaded with two certified
(agreed upon) templates to integrate with these two SIEM solutions.

The Named Template builder can select from two template types - Real-time Alerts
and Audit Process Report.

Use the Audit Process Report to audit process tasks. The CSV generated will use
the Named Template to adjust the content.

Click Edit Named Templates. Choose an SIEM and then click Modify. Select
Real-time Alerts or Audit Process Report.

After editing, the multiple message templates can be selected from within the
Policy Builder menu.

Adding the QRadar template allows sending real-time alerts or Audit Process
Report to QRadar using the LEEF Format (this is QRadar's format).

Follow the steps to send real-time alerts or Audit Process Results to the QRadar
SIEM.
Real-time alert, Guardium to QRadar
1. Create a real-time alert.
2. Write to syslog
3. Select Template type (Real-time Alert)
4. Forward to Q1 Labs QRadar SIEM (via LEEF mapping/ predefined
message template) - choose QRadar Named Template from Global
Profile
5. From the CLI, run the CLI command "store remotelog" to forward the
syslog messages to QRadar.
Audit Process Report, Guardium to QRadar
Click Harden > Vulnerability Assessment > Audit Process Builder to
open the Audit Process Builder.
1. Create an Audit Process report (Audit Process Builder)
2. Write to syslog
3. Select Template type (Audit Process Report)
4. Forward to Q1 Labs QRadar SIEM (via LEEF mapping/ predefined
message template) – choose QRadar Named Template from Global
Profile
5. From the CLI, run the CLI command "store remotelog" to forward the
syslog messages to QRadar.
For example, here is the default LEEF template for the Databases
Discovered report:
LEEF:0|IBM|Guardium|9.0|Databases Discovered|Time Probed=${1}|Server IP=${2}|Server Host N
Here are the report columns that are mapped to the template:
Time Probed, Server IP, Server Host Name, DB Type, Port, Port Type
1. Check Export to CSV file and Write to Syslog.
2. Select the Named Template, LEEF Discovered Databases
3. Configure Remote Syslog by using the store remotelog command. For
example:

store remotelog add user.info 9.70.145.68 udp
This will now push all records from the audit process to the supplied
IP address.
Sender Encoding
To encode outgoing messages (email and SNMP traps) in an encoding
scheme other than UTF8, use the CLI command, store sender_encoding.
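For example (a sketch; the encoding name shown is illustrative and must be an encoding that the appliance supports):
store sender_encoding ISO-8859-1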
Filter templates of one type
There is a filter mechanism to select all Real Time Alerts or Audit Process
Report. Check or clear each selection.
Envision 2 message template
GUARDIUM_ALERT:
rule-id=%%ruleID^^category=%%category^^classification=
%%classification^^severity=%%severity^^session-start-time=
%%sessionStart^^client-hostname=%%clientHostname^^client-ip=
%%clientIP^^server-type=%%serverType^^server-ip=%%serverIP^^src-
program=%%SourceProgram^^os-user=%%OSUser^^db-user=
%%DBUser^^app-user=%%AppUserName^^service-name=
%%serviceName^^req-type=%%requestType^^rule-desc=
%%ruleDescription^^sql=%%SQLNoValue
Threshold Default Template
As in real-time alerts, you can choose a template for the message that is
sent when the threshold is reached. The template uses a predefined list of
variables that are replaced with the appropriate value for the specific alert.
Those variables are:
%%alertName - alert name
%%description - alert description
%%alertQueryValue - query value that caused the alert
%%alertThreshold - alert threshold
%%alertQueryFromDate - start of the query period
%%alertQueryToDate - end of the query period
%%alertBaseQueryValue - base query value of the alert
%%classification - alert classification
%%category - alert category
%%severity - alert severity
%%recommendation - recommended action for the alert
%%Subject[] - subject of the message
The default template for threshold alerts is as follows (can be cloned and
edited):
%%Subject[Guardium Alert. Severity: (%%severity), Alert Name:
%%alertName]
Alert Name: %%alertName. Alert Description: %%description.
Current value: %%alertQueryValue
Base query value: %%alertBaseQueryValue
Threshold: %%alertThreshold
Query period: %%alertQueryFromDate - %%alertQueryToDate
Alert Classification: %%classification
Category: %%category
Severity: %%severity
Recommended Action: %%recommendation
Customize real-time alerts and email
Control appearance of Prefix email subject with Guardium appliance name.
Control appearance of email subject in email body.
Add naming template parameter %%applianceHostName so Guardium
users can add appliance hostname to Name Templates (any position
subject or body).
To accomplish this, use two fields in ADMINCONSOLE_PARAMETERS
table:
APPEND_APPLIANCENAME_SUBJECT
APPEND_SUBJECT_IN_BODY
Use the following CLI commands to control the content of these fields:
show alerter email append_name_subject
store alerter email append_name_subject
   Show or store the flag to append the appliance name in the email subject.
show alerter email append_subject_body
store alerter email append_subject_body
   Show or store the flag to append the email subject at the beginning of the email body.
Each time the value in CLI changes, it takes effect immediately on the
outgoing emails.
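For example (a sketch; the on and off arguments illustrate how the flags are typically toggled):
show alerter email append_name_subject
store alerter email append_name_subject on
store alerter email append_subject_body off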

CSV Separator

To define a separator to be used in the audit process:


1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. Choose Comma, Semicolon, Tab, or define your own in Other box to define the
CSV Separator that is used.
3. Click Apply.

Add other HTML content to the Guardium Window

To add other HTML content to the Guardium window:


1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the HTML - Left and HTML - Right text boxes, enter the HTML for the text
or any other items you want to include on the window.
3. Optionally click the preview button to verify that your HTML is displayed as
you expect.
4. Click Apply.

Add or Disable a Login Message
To add a message to display in a message box, each time a user logs in:
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the Login Message text box, enter the text that you want to display when
each user logs in.
3. Mark the show login message box to enable the display of the login message
(or clear the box to disable the display).
4. Click Apply.

Enable or Disable Concurrent Same-user Logins

By default, the same Guardium user can log in to an appliance from multiple IP
addresses. You can disable concurrent logins from the same user. When disabled,
each Guardium user will be allowed to log in from only one IP address at a time.
If a user closes their browser without logging out, the connection will time out due
to inactivity, so the user account will not be blocked for long.

To change this setting:


1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. Locate the field Concurrent login from different IP.
3. Click Enable or Disable, depending on the current status, to change the setting.

Note: When the feature is disabled, an Unlock button appears next to the
Enable button. You can click Unlock to allow a second user to log in with this
user account, from a different IP address. This is provided for support
purposes.

Enable Data Level Security at the Observed Data Level

This feature assumes that specific Guardium users are responsible for certain
specific databases. Therefore a mechanism exists that will filter results,
system-wide, in a way that each user will only be able to see the information
from those databases that the user is responsible for.

To change this setting:


1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. Click the Enable or Disable button for the Data level security filtering option

Note: The datasec-exempt role is activated when data level security is enabled
and the datasec-exempt role has been assigned to a user.
3. Additional choices include:
v Show-all - Permits the logged-in viewer to see all the rows in the result
regardless of who these rows belong to. When used with the Datasec-exempt
role permits an override of the data level security filtering.
v Include indirect records - Permits the logged-in viewer to see the rows that
belong to the logged-in user, but also all rows that belong to users under the
logged-in user in the user hierarchy.

Note: If data level security at the observed data level is enabled, then audit
process escalation is allowed only to users at a higher level in the user hierarchy.

Default Filtering
Online viewer default setting and for audit process results distribution.

Show-all. The default setting is disabled.

Escalate result to all users


Escalate result to all users - A check mark in this check box escalates audit process
results (and PDF versions) to all users, even if data level security at the observed
data level is enabled. The default setting is enabled. If the check box is disabled
(no check mark in the check box), then audit process escalation is allowed only to
users at a higher level in the user hierarchy and to users with the datasec-exempt
role. If the check box is disabled, and there is no user hierarchy, then no escalation
is permitted.

Custom database table maximum size

Set the size of the custom database table (in MB). The Default value is 4000 MB.

At this point in the Global Profile menu is a button to see Current usage. Click on
the Current Usage button to show values for INNODB, MYISAM and Total.

Note: The custom size limit is tested before importing data. The import can exceed
the maximum size limit. After the limit is exceeded, the next import will be
prevented.

SCP and FTP files via different ports

Change the ports that can be used to send files over SCP and FTP.

In the Global Profile, the ports for Export and Patch Backup can be changed. The default port for
ssh/scp/sftp is 22. The default port for FTP is 21.

Note: Seeing a zero 0 in the Guardium GUI as the port indicates that the default
port is being used and that there is no need to change.

Add a logo to the Guardium Window


To add a company logo graphic to the Guardium window, or to add other HTML
content to the Guardium window:
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In Upload Logo Image, if you want to include a logo image in the portal
window, enter an image file name or click Browse to select a file to upload to
the Guardium appliance, and then click Upload.
3. Refresh your browser window. The new logo appears.

Note: The name of the uploaded logo file cannot contain a single quotation mark,
double quotation mark, less than sign, or greater than sign.

Encrypt Must Gather

Encrypt Must Gather was added to the Global Profile. Default value is cleared
(Do not encrypt). If it is cleared, must gather output is just compressed and not
encrypted. When the check box is checked, all future must gather output will be
encrypted. Encryption can also be set on by using the store encrypt_must_gather
on CLI command and set off by using store encrypt_must_gather off.

Alerter Configuration
No e-mail messages, SNMP traps, or alert related Syslog messages will be sent
until the Alerter is configured and activated.

Other components create and queue messages for the Alerter. The Alerter checks
for and sends messages based on the polling interval that has been configured for
it.

For correlation alerts and appliance alerts to be produced, Anomaly Detection must
also be started. For real-time alerts to be produced, a security policy must be
installed.

Mail/SNMP/SYSLOG messages are sent out according to their priority.

Automatically activate the Alerter on startup


1. Click Setup > Tools and Views > Alerter to open the Alerter or click Protect >
Database Intrusion Detection > Alerter to open the Alerter.
2. Mark the Active on Startup checkbox. Each time the appliance restarts, the
Alerter will be activated automatically.
3. Click Apply.
4. If the Alerter is not running, and you want to start it, click Restart.

Set the frequency that the Alerter checks for and sends
messages
1. Click Setup > Tools and Views > Alerter to open the Alerter or click Protect >
Database Intrusion Detection > Alerter to open the Alerter.
2. Enter the Polling Interval, in seconds.
3. Click Apply.

Configure the Alerter to send SMTP (email) messages


1. Click Setup > Tools and Views > Alerter to open the Alerter or click Protect
> Database Intrusion Detection > Alerter to open the Alerter.

Note: All remaining items in this topic are in the SMTP section of the Alerter
panel.
2. Enter the IP address for the SMTP gateway, in the IP Address box.
3. Enter the SMTP port number (it is almost always 25) in the Port box.
4. Optional: Click the Test Connection hypertext link to verify the SMTP address
and port. This only tests that there is access to specified host and port. It does
not verify that this is a working SMTP server. A dialog box is displayed,
informing you of the success or failure of the operation.

Note: If this SMTP server uses authentication, you must supply a valid User
Name and Password for that mail server in the following two fields.
Otherwise, those fields can be blank.
5. Enter a valid user name for your mail server in the User Name box if your
SMTP server uses authentication.
6. Enter the password for the user in the Password box if your SMTP server uses
authentication. Re-enter it in the Re-enter Password box.

7. In the Return E-mail Address box, enter the return address for e-mail sent by
the system. This address is usually an administrative account that is checked
often.
8. Select Auth in the Authentication Method if your SMTP server uses
authentication. Otherwise, select None. When Auth is selected, you must
specify the user name and password to be used for authentication.
9. Click Apply to save the configuration.

Note: The Alerter will not begin using a new configuration until it is
restarted.
10. Click Restart to restart the Alerter with the new configuration.

Configure the Alerter to send SNMP traps


1. Click Setup > Tools and Views > Alerter to open the Alerter or click Protect >
Database Intrusion Detection > Alerter to open the Alerter.

Note: All remaining items in this topic are in the SNMP section of the Alerter
panel.
2. In the IP Address box, enter the IP address to which the SNMP trap will be
sent.
3. Optional: Click the Test Connection hypertext link to verify the SNMP address
and port (162). This only tests that there is access to specified host and port. It
does not verify that this is a working SNMP server. A dialog box is displayed,
informing you of the success or failure of the operation.
4. In the "Trap" Community box, enter the community name for the trap. Retype
the community in the Retype Community box.
5. Click Apply to save the configuration.

Note: The Alerter will not begin using a new configuration until it is restarted.
6. Click Restart to restart the Alerter with the new configuration.

Anomaly Detection
The Anomaly Detection process runs every polling interval to create and save, but
not send, correlation alert notifications that are based on an alert's query.

This notification is run according to the schedule defined for each alert. See
“Alerter Configuration” on page 22 for more information about sending
notifications.

The Anomaly Detection process uses the results of a correlation alert's query, which
looks back over a specified period of time, and the correlation alert's threshold, to
determine whether a condition is satisfied (an excessive number of failed logins,
for example).

In a Central Manager environment, the Anomaly Detection panel for each
Guardium system can be used to turn off correlation alerts that are not appropriate
for that particular Guardium system. Under Central Management, all correlation
alerts are defined on the Central Manager, regardless of the Guardium system on
which they were created or updated. These correlation alerts are the same for all
Guardium systems and, when activated, are activated on all Guardium systems by
default.

Note: The Alerter component must be configured and started to send a saved alert
message to SYSLOG, email, or an SNMP trap.

Note: Anomaly Detection does not play a role in the production of real-time alerts,
which are produced by security policies.

Automatically activate Anomaly Detection on startup


1. Click Setup > Tools and Views > Anomaly Detection to open Anomaly
Detection.
2. Mark the Active on Startup check box. Each time the Guardium system
restarts, Anomaly Detection is activated automatically.
3. Click Apply.

Set the frequency that Anomaly Detection checks for appliance issues
1. Click Setup > Tools and Views > Anomaly Detection to open Anomaly
Detection.
2. Enter the Polling Interval in minutes.
3. Click Apply.

Enable or Disable Active Alerts

To disable an alert globally in a Central Manager environment, it is easier to clear
the Active check box in the Modify Alert panel.

To enable or disable an alert on a single Guardium system in a Central
Management environment, follow these steps:
1. Log in to the UI of the Guardium system on which you want to disable one or
more alerts.
2. Click Setup > Tools and Views > Anomaly Detection to open Anomaly
Detection.
3. To disable an alert, select it from the Active Alerts box, and click Disable.
4. To enable an alert, select it from the Locally Disabled Alerts box, and click
Enable.

Stop or Restart Anomaly Detection


1. Click Setup > Tools and Views > Anomaly Detection to open Anomaly
Detection.
2. Click Stop to stop Anomaly Detection, or click Restart to restart it.

Session Inference
Session Inference checks for open sessions that have not been active for a specified
period of time, and marks them as closed.

To configure the Session Inference options:


1. Click Setup > Session Inference to open Session Inference.
2. Mark the Active On Startup box to start Session Inference on startup of the
Guardium system.
3. In the Polling Interval box, enter the frequency (in minutes) with which Session
Inference checks for open sessions. The default is 120 (minutes).

4. In the Max Inactive Period box, enter the number of minutes of inactivity after
which a session is marked closed. The default is 720 (minutes).
5. Click Apply to store the values in the configuration database. Session Inference
will not begin using a new configuration until it is restarted.
6. Click Restart to restart Session Inference with the new configuration.

To stop Session Inference, open the Session Inference panel and click Stop.

IP to Hostname Aliasing
The IP-to-Hostname Aliasing function accesses the Domain Name System (DNS)
server to define hostname aliases for client and server IP addresses.

There are two separate sets of IP addresses: one for clients, and one for servers.
When IP-to-Hostname Aliasing is enabled, alias names will replace IP addresses
within Guardium where appropriate.
1. Click Protect > Database Intrusion Detection > IP-to-Hostname Aliasing to
open IP-to-Hostname Aliasing.
2. Mark the check box for Generate Hostname Aliases for Client and Server IPs
(when available) to enable hostname aliasing.
A second check box can now be accessed. The name of this check box is
Update existing Hostname Aliases if rediscovered.
3. Mark the check box to update a previously defined alias that does not match
the current DNS hostname (usually indicating that the hostname for that IP
address has changed). You may not want to do this if you have assigned some
aliases manually. For example, assume that the DNS hostname for a given IP
address is dbserver204.guardium.com, but that server is commonly known as
the QA Sybase Server. If QA Sybase Server has been defined manually as an
alias for that IP address, and the check box for Update existing Hostname
Aliases if rediscovered is marked, that alias will be overwritten by the DNS
hostname.
4. Click Apply to save the IP-to-Hostname Aliasing configuration.
5. Do one of the following:
v Click Run Once Now to generate the aliases immediately.
v Click Define Schedule to define a schedule for running this task.

To view the aliases defined, see “Aliases” on page 186.

System Backup
Use the System Backup function to define a backup operation that can be run on
demand or on a scheduled basis. Use the Patch Backup function to create the
backup profile settings.

This help file details System Backup and Patch Backup.

System Backup
System backups are used to back up and store all the necessary data and
configuration values to restore a server in case of hardware corruption.



All configuration information and data is written to a single encrypted file and
sent to the specified destination, using the transfer method configured for backups
on this appliance.

To restore backed up system information, use the restore system CLI command.
The CLI command diag can also be used, provided that diag is defined as a role for the given user.

System backup supports the following methods:


v SCP - defined by default and accessible via CLI and the GUI
v FTP - defined by default and accessible via CLI and the GUI
v Centera - can be added to the GUI by logging into CLI and running the
following command, store storage centera backup on
v TSM - can be added by logging into CLI and running the following command,
store storage tsm backup on
v AMAZON S3 - is defined by default and accessible via CLI and GUI. It is
accessible from CLI as long as it is defined in the GUI.
v Softlayer - Softlayer cloud backup

Note: System restore must be done to the same patch level of the system backup.
For example, if a customer backed up the appliance when it was on Version 7.0,
Patch 7 and then wants to restore this backup into a newly-built appliance, then
there is a need to first install Version 7.0, Patches 1 to 7 on the appliance and only
then to restore the file.

To back up system information:


1. Click Manage > Data Management > System Backup to open System Backup.
2. Mark one or both of the Backup check boxes:
v Mark the Configuration check box to back up all definitions.
v Mark the Data check box to back up all data. (If you are archiving data on a
regular basis, this is unnecessary.)
3. Select a storage method radio button from the list. Depending on how the
Guardium system has been configured, one or more of these buttons may not
be available.
v EMC CENTERA
v TSM
v SCP
v FTP
v AMAZON S3
v Softlayer
4. Perform the appropriate procedure depending on the storage method selected:
v Configure SCP or FTP Archive or Backup
v Configure EMC Centera Archive or Backup
v Configure TSM Archive or Backup
v Configure AMAZON S3 Archive or Backup
v Configure Softlayer object storage cloud backup
5. Click Apply to verify and save the configuration changes. The system will
attempt to verify the configuration by sending a test data file to that location.
v If the operation fails, an error message will be displayed and the
configuration will not be saved.

v If the operation succeeds, the configuration will be saved.
6. To run or schedule the system backup operation, do one of the following:
v Click Run Once Now to run the operation once.
v Click Modify Schedule to schedule the operation to run on a regular basis.
7. Click Done when you are finished.

Note: During a SCP/FTP/TSM/Centera/AMAZON S3/Softlayer file transfer, if the backup file transfer fails, the last file of each set of backup/archive files
(system backup, configuration backup, archive, CSV archive, etc.) will be saved
in the diag/current folder. Then when the backup file destination is again
online, a manual transfer of the backup files can be made from the
diag/current folder to the destination. The set of backup/archive files will only
be saved in the diag/current folder if the file transfer is unsuccessful. If during
another backup file transfer there is a file transfer failure, the set of
backup/archive files will again be saved in the diag/current folder. However,
in order to avoid saving too many files and running out of disk space, ONLY
the latest file of each type will be saved. The earlier backup files will be
overwritten.

Note: When performing a system backup and restore from one server, which has GIM defined, to another server, the user must configure a GIM failover to the restore server. This GIM configuration applies to a Backup Central Manager or a System backup and restore.

SCP and FTP files via different ports

Change the ports that can be used to send files over SCP and FTP.

For System Backup or Patch Backup - Set the protocol (SCP or FTP) and specify
Host, Directory and Port. The default port for ssh/scp/sftp is 22. The default port
for FTP is 21.
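
If a transfer fails, it can help to confirm from a separate workstation that the backup host is reachable on the configured port. A minimal sketch, assuming the nc (netcat) utility is available on that workstation and using a hypothetical host name:
# Check SSH/SCP/SFTP (default port 22) and FTP (default port 21) connectivity
nc -vz backup-host.example.com 22
nc -vz backup-host.example.com 21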

Prevent backup/archive scripts from filling up /var


The backup process checks for room in /var before running, and fails and warns the user if there is insufficient space for the backup.

The archive process will check the size of the static tables and make sure there is
room in /var to create the archive.

An error is logged in the log file and the GUI if /var usage exceeds 50%. For example:
ERROR: /var backup space is at 60% used. Insufficient disk space for backup.

Patch Backup

Patch Backup in the GUI copies the functions available in the CLI command, store
backup profile. Use this function to maintain the backup profile data (patch
mechanism).

All four fields must be filled in - backup destination host, backup destination
directory, backup destination username, and backup destination password.

Enter 0 or press the Enter key to use the default port. Then, click Apply.
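
For reference, the CLI command named above can also be run directly from a Guardium CLI session. A minimal sketch follows; the command prompts interactively for the same values as the GUI panel (destination host, directory, user name, password, and the port, where 0 selects the default), and the exact prompts may vary by release:
CLI> store backup profile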



Amazon S3 Archive and Backup in Guardium
Use this feature to archive and back up data from Guardium to Amazon S3.

Amazon S3 (Amazon Simple Storage Service) provides a simple web service interface that can be used to store and retrieve any amount of data, at any time,
from anywhere on the web. It gives any developer access to the same highly
scalable, reliable, secure, inexpensive infrastructure that Amazon uses to run its
own websites.

Prerequisites
1. An Amazon account.
2. Register for S3 service
3. Amazon S3 credentials are required in order to access Amazon S3. These
credentials are:
v Access Key ID - identifies the user as the party responsible for service requests. It needs to be included in each request. It is not confidential and does not need to be encrypted (20-character alphanumeric sequence).
v Secret Access Key - the Secret Access Key is associated with the Access Key ID and is used to calculate a digital signature that is included in the request. The Secret Access Key is a secret, and only the user and AWS should have it (40-character sequence). This key is just a long string of characters (not a file).

There are two archive operations available on the Administration Console, in the
Data Management section of the menu:
v Data Archive backs up the data that has been captured by the appliance, for a
given time period.
v Results Archive backs up audit task results (reports, assessment tests, entity
audit trail, privacy sets and classification processes) as well as the view and
sign-off trails and the accommodated comments from work flow processes.

When Guardium data is archived, there is a separate file for each day of data.

Archive data file name format:


<time>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc

The archive function creates signed, encrypted files that cannot be tampered with.
The names of the generated archive files should not be changed. The archive
operation depends on the file names created during the archiving process.
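
For illustration only, a hypothetical file name that follows this pattern (the exact timestamp formats may differ on your system) could look like:
034501-guard1.example.com-w20151012.040501-d2015-10-11.dbdump.enc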

System backups are used to back up and store all the necessary data and
configuration values to restore a server in case of hardware corruption.

All configuration information and data is written to a single encrypted file and
sent to the specified destination, using the transfer method configured for backups
on this appliance.

Backup system file format:


<data_date>-<time>-<hostname.domain>-SQLGUARD_CONFIG-9.0.tgz
<data_date>-<time>-<hostname.domain>-SQLGUARD_DATA-9.0.tgz

The Aggregation/Archive Log report can be used to verify that the operation
completes successfully. There should be multiple activities listed for each Archive
operation, and the status of each activity should be Succeeded.

Regardless of the destination for the archived data, the Guardium catalog tracks
where every archive file is sent, so that it can be retrieved and restored on the
system with minimal effort, at any point in the future.

A separate catalog is maintained on each appliance, and a new record is added to the catalog whenever the appliance archives data or results.

Catalog entries can be transferred between appliances by one of the following methods:
v Aggregation - Catalog tables are aggregated, which means that the aggregator
will have the merged catalog of all of its collectors
v Export/Import Catalog - These functions can be used to transfer catalog entries
between collectors, or to backup a catalog for later restoration, etc.
v Data Restore - Each data restore operation contains the data of the archived day,
including the catalog of that day. So, when restoring data, the catalog is also
being updated.

When catalog entries are imported from another system, those entries will point to
files that have been encrypted by that system. Before restoring or importing any
such file, the system shared secret of the system that encrypted the file must be
available on the importing system.

Enable Amazon S3 from the Guardium CLI

Amazon S3 archive and backup option is enabled by default in the Guardium GUI.
To enable Amazon S3 via Guardium CLI, run the following CLI commands:
store storage-system amazon_s3 archive on
store storage-system amazon_s3 backup on

Amazon S3 requires that the clock time of the Guardium system be correct (within 15 minutes); otherwise, Amazon returns an error. If there is too large a difference between the request time and the current time, the request will not be accepted.

If the Guardium system time is not correct, set the correct time using the following
CLI commands:
show system ntp server
store system ntp server (An example is ntp server: ntp.swg.usma.ibm.com)
store system ntp state on

User Interface

Use the System Backup screen (Manage > Data Management > System Backup) to
configure the backup. After enabling Amazon S3 through the CLI commands,
Amazon S3 will appear in the list of protocols.

User input requires:


v S3 Bucket Name (Every object stored in Amazon S3 is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. Within a bucket, you can use any names for your objects, but bucket names must be unique across all of Amazon S3.)
v Access Key ID
v Secret Access Key

If the bucket name does not exist, it will be created.

The Secret Access Key is encrypted when it is saved in the database.

Check that files got uploaded on Amazon S3


1. Log onto the AWS Management Console (http://aws.amazon.com/console/) using your email address and password.
2. Click on S3.
3. Click on the bucket that you specified in the Guardium UI.
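
Alternatively, if the AWS command line interface is installed on a workstation (it is not part of the Guardium appliance), the bucket contents can be listed from a shell; a minimal sketch with a hypothetical bucket name:
# List the objects in the bucket that was specified in the Guardium UI
aws s3 ls s3://my-guardium-backups/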

Softlayer Object Storage


SoftLayer Object Storage is a redundant and highly scalable cloud storage service.
Use it to easily store, search, and retrieve data across the Internet. It is based on
the OpenStack Swift platform and may be accessed through a RESTful API and
Web Portal.

Information needed beforehand:


v Authentication Endpoints - Authentication requests should be sent to the
endpoint associated with the location of your Object Storage account.
https://dal05.objectstorage.softlayer.net/auth/v1.0
v Container - The basic storage unit for all the data within Object Storage is a
container. It stores data/files and must be associated with an Object Storage
account.
v X-Auth-User - Username to authenticate with: Tenant value:username
v X-Auth-Key - API key (Password) to authenticate with.

Account credentials can be retrieved by logging onto https://control.softlayer.com/
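
Before configuring Guardium, the endpoint and credentials can be verified directly from a workstation, because SoftLayer Object Storage follows the OpenStack Swift v1.0 authentication convention. A minimal sketch using curl with placeholder credentials (a successful request returns X-Storage-Url and X-Auth-Token headers):
curl -i \
  -H "X-Auth-User: tenant:username" \
  -H "X-Auth-Key: your_api_key" \
  https://dal05.objectstorage.softlayer.net/auth/v1.0
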
System Backup by Softlayer from GUI
1. Click Manage > Data Management > System Backup or Manage >
Data Management > Data Archive and Results Archive.
2. Select the Softlayer protocol.
3. Fill in Authentication Endpoint URL (example, https://dal05.objectstorage.softlayer.net/auth/v1.0)
4. Specify an Object Storage container name (example,
yourname_Container)
5. Specify the X-Auth-User (Tenant value: Username) (example, username)
6. Fill in the X-Auth Key (example, password)
7. Specify what to Backup - Configuration or Data
8. Modify Scheduling or Run Once Now.
System Backup via CLI (Configuration)
Access CLI.
CLI> backup system.
1. DATA
2. CONFIGURATION
Please enter the number of your choice: (q to quit) 1
1. SCP
2. CONFIGURED DESTINATION
Please enter the number of your choice: (q to quit) 2
Make sure destination is configured in the GUI under the <System
Backup> option
Please wait, this may take some time.
Performing a DEFAULT backup, config=
System Backup and System Restore
Access CLI.
CLI> restore system
1. SCP
2. FTP
3. TSM
4. CENTERA
5. AMAZONS3
7. SOFTLAYER
8. SFTP
Please enter the number of your choice: (q to quit) 7
Enter the SoftLayer Authentication Endpoint URL:
Enter Softlayer Object Storage Container name:
Enter Softlayer X-Auth-User:
Enter X-Auth-Key:
Enter a file name from list:
Authenticate success!
Download file success!
Select your recovery type, for most cases, use the normal option:
1. normal
2. upgrade

Configuring patch backup


Use this feature to store backup profile information.

Procedure
1. Click Setup > Patch Backup to open the Patch Backup panel.
2. Choose the method of file transfer.
3. Enter the name of the host and the directory where the information is to be
stored.
4. Enter a user name and password to own the file on the destination host.

5. Click Apply when you are finished.

Configure Permission to Socket connection


This topic applies to Custom Alerting Classes.

Follow this procedure to configure permissions for all socket connections that are used by custom classes.
1. Click Setup > Evaluations > Communication Permissions to open the
Communication Permissions.
2. Click Add permission To Socket Connection to expand that pane.
3. Enter the IP address or Host name for the host.
4. Enter a Port number for the socket connection.
5. Enter a description.
6. Click Save.

Chapter 2. Access Management Overview
Access management consists of four tasks: account administration, maintenance,
monitoring, and revocation.

Access Management is separate from system administration duties.

There are two predefined users on a Guardium appliance: accessmgr and admin.
v accessmgr is the user name assigned to the access manager. By default, the access
manager is the only user authorized to manage user accounts and security roles.
v admin is the user name assigned to the (primary) Guardium administrator. By
default, the administrator does not have authority to manage user accounts or
security roles. The admin user has a more extensive set of privileges.

Note:

The admin and accessmgr roles cannot be assigned to the same user. A user might hold both of these roles because of a legacy situation or as a result of an upgrade; however, current releases do not allow the two roles to be assigned to the same user.

In the past, when a unit was upgraded, the accessmgr role was assigned to the
admin user, and the accessmgr user was disabled. In this upgrade situation, it was
necessary to first log in as admin and enable the accessmgr user, then log in as
accessmgr (with initial password “accessmgr”, the system prompted the user to
change it), and remove the accessmgr role from the admin user.

Access Management Selection


v User Browser - Manage users
v Role Browser - Manage permissions and customize layouts for roles
v Role Permissions - Manage application permissions
v LDAP User Import - Import users from LDAP

Data Security Selection


v Datasources Associated
v Datasources Not Associated
v Servers Associated
v Servers Not Associated
v User Hierarchy
v User-DB Association

Predefined Reports from Accessmgr

The following predefined reports are available from the Accessmgr user.

User and Role Reports
Defining and modifying users (see Manage Users) involves deciding both who will
be using the Guardium system and to what roles (see Manage Roles) they will be
assigned. A role is a group of users, all of whom are granted the same access
privileges.

The User and Role Reports consist of the following reports:

v User - Role -- a report that shows, by user, the number of roles that user belongs to.
v All Roles - User -- a report that shows, by role, the number of users that belong to that role.

Note: The admin and access manager roles are pre-existing; other roles are created by the access manager.

The following reports are available on a Central Manager or a standalone unit. If you try to use them on a managed unit, an error message appears: This Report can not Run on a Managed Unit. Servers Not Associated will show servers from ALL managed units in Central Manager systems.

Datasources Associated

This report identifies Datasource Name, Host, Service Name, Login Name and
Association Type. This information comes from the choices made in the
User-Database Associations activity. See the Data User Security - Hierarchy and
Associations help topic.

Datasources Not Associated

This report is a list of datasources not associated with any users. This report
identifies Datasource Name, Datasource Type, Host, and Service Name. This
information comes from the choices made in the User-Database Associations
activity. See the Data User Security - Hierarchy and Associations help topic.

Servers Associated

This report identifies Server IP, Service Name, Login Name and Association Type.
This information comes from the choices made in the User-Database Associations
activity. See the Data User Security - Hierarchy and Associations help topic.

Servers Not Associated


This report is a list of servers not associated with any users. This report identifies
Server IP and Service Name. This information comes from the choices made in the
User-Database Associations activity. See the Data User Security - Hierarchy and
Associations help topic.

Understanding Roles
Assign a role to a Guardium user to grant them specific access privileges. Some
examples of roles are: CLI, admin, accessmgr, CAS, and user.

The access manager defines roles, and assigns them to users and applications.
When a role is assigned to an application or the definition of an item (a specific
query, for example), only those Guardium users who are also assigned that role
can access that component.

If no security roles are assigned to a component (a report, for example), only the
user who defined that component and the admin user can access it. At installation
time, Guardium is configured with a default set of roles, and a default set of user
accounts.

When user definitions are imported from an LDAP server, the groups to which
they belong can optionally be defined as roles. For more information, see
“Importing Users from LDAP” on page 48.

Object types that can be assigned to roles: Alert; Audit process (Discover Sensitive
Data scenario); Baseline; Custom domain; Custom table; Classifier policy (Discover
Sensitive Data scenario); Custom workflow; Data source; Group; Query; Policy;
Privacy set; Report; Security assessment; or SQL application.

Each default role comes with a default layout. When a user logs in for the first
time, that user's initial layout is determined by the roles assigned. After the initial
login, adding or removing roles will not alter the user's layout. After a role is
removed, if the user attempts to access reports or applications that are no longer
authorized, a not authorized message will be produced.

Note: When assigning roles to a user, the admin and access manager role cannot
be assigned to the same user.

Note: Custom-created roles cannot be combined with default-provided roles (examples are user, admin, accessmgr, cli, inv, datasec-exempt, review-only).

Note: Admin role and object owner have access to all objects by default.

Note: Taking a base role and customizing (with additional navigation items), and
then copying this customized role, will result in a loss of the customization if the
customized or copied role is reset to default.

Default Roles
The Guardium system is pre-configured to support users who fall into four
broadly defined default roles: admin, user, access manager, and investigations. The
Guardium access manager can create new roles as well. Users must always be
assigned one of the default roles, but might be assigned any number of other roles,
as well.

Note: If data level security at the observed data level is enabled (see Global
Profile settings), then audit process escalation is allowed only to users at a higher
level in the Data Hierarchy (see Access Manager). The Datasec-exempt user can
escalate, without restrictions, to anyone.
Table 4. Default Roles
Default Role Description
user Provides the default layout and access for all common users. This role can
not be deleted.

admin Provides the default layout and access for Guardium administrators. Do
not confuse the admin role with the admin user, which is a special user
account having the admin role, but also having additional powers that are
reserved for the admin user account only. This role can not be deleted.
accessmgr Provides the default layout and access for the access manager. This role
can not be deleted.
cli Provides access to CLI. The admin user has default access to CLI.
Everyone else must be given permission when users are created by access
manager and roles specified. The access manager can define any number of users in the system and give them the CLI role. These users have access
to the CLI and all activities of their CLI sessions are associated with this
user.

To run GrdAPI or CLI commands without admin rights, click the role CLI
for Admin Console in the User Role Permissions selection.

See the topic, diag CLI Command, on how to manage the diag role.
inv Provides the default layout and access for investigation users. An
investigation user must have the restore-to database name of INV_1,
INV_2 or INV_3, as the Last Name in their user definition. This is not
enforced by the GUI, but is required for the application to function
properly. When assigned, the user role must also be assigned. This role
can not be deleted.

Note: The Run an Ad-Hoc Audit Process button is available on all report
screens for all users except investigation (INV) user.
datasec-exempt Data Security - Exempt. This role is activated when Data level security is
enabled (see Global Profile in Administration Console) and the
datasec-exempt role has been assigned. If the user has this role, a Show
all check box appears in all reports. If checked, all sniffed data records are
shown (no filter is applied). This role cannot be deleted in the Role
Browser.
review-only A user that is specified by this role can view only results (Audit,
Assessment, Classifier), Audit Results and the To Do List. This role cannot
be deleted in the Role Browser.

Users with this role are allowed to enter comments in the audit process viewer (not workflow or comments/data per row, but comments at process/result level).

Users with this role cannot perform any changes/actions on any workflow automation result (escalate, reassign, etc.).

Sample Roles
In addition to the default roles, a set of sample roles is also defined.
Table 5. Sample Roles
Sample Role Description
dba Users who have a database-centric view of security, allowing access to
database-related reports and tracking of database objects
infosec Users who have an information security focus, including tracking access
to the database, and handling network requests, audits, and forensics

netadm Users who have a network-centric view, including IP sources for database
requests
appdev Application developers, architects, and QA personnel who have an
application-centric focus and want to track and report on SQL streams
generated by an application
audit Auditors and others who need to view audit reports
Note: If trying to copy this role, an embedded message will appear
explaining that not all aspects of this role can be copied. The message is:
"Create a new role using the layout and permission from the "audit" role.
Special privileges and actions associated with the "audit" role will not be
copied."
audit-delete This role is used to track or log when an audit process result has been
deleted. Users with the audit-delete role can delete reports. Admin users
can also delete reports. Tracking is done through the User Activity Audit
Trail report.
admin-console-only A user that is specified by this role can only access the admin console tab.
cas Configuration Auditing System (CAS)
vulnerability-assess A user that is specified by this role can view only vulnerability results.
diag A user that is specified by this role can access and run the diag commands in CLI.
workload-replay-admin A user that is specified by this role can define and modify the workload-replay functions.
workload-replay-user A user that is specified by this role can run the workload-replay functions.
fam A user that is specified by this role can define and modify the File Activity Monitor functions.
BaselII Accelerator - Basel II. This role can not be deleted.

Basel II Part 2 Sections 4 and 5 require that banking institutions define a Securitization Framework around financial information and estimate the associated operational risk.
DataPrivacy Accelerator - DataPrivacy. This role can not be deleted.

The Data Privacy Accelerator delivers a portfolio of pre-configured policies, real-time alerts, and audit reports that are specifically tailored to the challenges of identity theft and based on industry best practices. With the Data Privacy Accelerator, security managers, privacy officers, and database administrators begin by defining combinations of data elements – called "privacy sets" – whose access may indicate hacking or inappropriate activities by internal users.
pci Accelerator - PCI. This role can not be deleted.

The PCI DSS is a set of technical and operational requirements designed to protect cardholder data and applies to all organizations who store,
process, use, or transmit cardholder data. Failure to comply can mean loss
of privileges, stiff fines, and, in the case of a data breach, severe loss of
consumer confidence in your brand or services. The IBM Guardium
accelerator helps guide you through the process of complying with parts
of the standard using predefined policies, reports, group definitions, and
more.

sox Accelerator - SOX. This role can not be deleted.

SOX Section 404 requires that companies must establish and maintain an
adequate internal control structure and procedures for financial reporting.

Roles in a Central Manager Environment

In Central Manager environments, all User Accounts, Roles, and Permissions are
controlled by the Central Manager. To administer any of these definitions, you
must be logged in to the Central Manager (and not to a managed unit).

Create a Role
1. Log in as accessmgr, and open the User Role Browser by clicking Access >
Access Management > Role Browser.
2. Click Add Role to open the Role Form panel.
3. Enter a unique name for Role Name and click Add Role.

Remove a Role
1. Open the User Role Browser by clicking Access > Access Management > Role
Browser.
2. Click Delete for any role (some roles cannot be removed, and do not have the
Delete option). This opens the Role Form for the role.
3. Click Confirm Deletion. A message displays informing you that all references
to the role are removed, and you will be asked to confirm the action.
4. Click OK to confirm the deletion, or Cancel to abort the operation.

Managing roles and permissions


Roles and permissions provide different levels of access to users based on their job
duties.

Examples of roles include user, admin, and audit. Using roles allows you to easily
define permissions for an entire group of users. Only access managers can create
new roles and assign users to that role. As part of role creation, access managers
can also customize the navigation menu and permissions for that role.

Creating customized roles involves several processes:


v Creating a new role
v Managing permissions for the role to limit what users can access
v Optionally customizing the navigation menu for the role to further limit what
users can see
v Adding users to the role

There are two ways to limit access to specific applications:


Limit access from the application
Limit access from the application by deselecting the All Roles check box
on the Role Permissions > Edit Application Role Permissions screen.
Next, select the individual roles that should have access to the application.

38 Administration
The process is the same if you find that the All Roles check box is already
deselected: simply select or deselect the individual roles to grant or revoke
access to the application.
When All Roles is selected for a particular application, every
currently-defined role will have access to that application.
Limit access from the role
Limit access from the role by navigating to the Role Browser > Manage
Permissions screen and move individual applications from the Accessible
applications list to the Inaccessible applications list.
When managing permissions or customizing the navigation menu for a new role, the defaults shown in the Accessible applications list reflect any application with the All Roles check box selected on the Role Permissions > Edit Application Role Permissions screen.

When working with roles and permissions, removing permissions for an application also changes the default permissions for new roles. That is, removing
permissions for an application means that any subsequent roles you create will
also lack permissions for that application. If you want a new role to have
permissions for an application that no longer appears in the Accessible
applications list by default, you will need to move the desired application from
the Inaccessible applications list to the Accessible applications list for the new
role.

It is also possible to restrict access to specific tools by hiding menu items using the
Role Browser > Customize Navigation Menu tool. This approach limits access
without altering the default application permissions, but it may be less secure than
a permissions-based approach.

Best Practice: Copy and edit predefined roles to establish the desired permissions
and navigation menu. This approach allows you to revert to the original role if
needed.
Related tasks:
“How to create a role with minimal access”
This topic explains how to create a new role with minimal access permissions, for
example an auditor role that can only access the Audit Process To-Do List and
view specific reports.

How to create a role with minimal access


This topic explains how to create a new role with minimal access permissions, for
example an auditor role that can only access the Audit Process To-Do List and
view specific reports.

Procedure
1. Create a new role.
a. Log in as accessmgr, navigate to Access > Access Management, and select
the Role Browser.
b. Click the Add Role button, give the role a name, and click the Add Role
button to create the new role.
2. Manage permissions so the new role can only access the Audit Process To-Do
List and the Report Builder (which is required for viewing reports).



a. From the Role Browser, click the Manage Permissions link for the new
role.
b. Select the checkbox in the header of the Accessible Applications list and
use the arrow to move all items to the Inaccessible applications list. When
creating a highly restricted role, it is easier to begin by removing
permissions.
c. In the Inaccessible applications list, select the Audit Process To-Do List
and the Report Builder, and use the arrow to move them back to the
Accessible applications list. The new role now has access to only these two
specific applications.
d. Click the OK button to commit your changes.
3. Customize the menus and navigation by defining which reports and
applications are available to the new role.
a. From the Role Browser, click the Customize Navigation Menu link for the
new role.
b. In the Navigation Menu list, select the Reports group so it is highlighted.
The selected group acts as the destination for menu items added in
subsequent steps.
c. In the Available Tools and Reports list, expand the Reports section or use
the Filter to identify specific reports, select the check box next to each item
that should be available to the new role, and use the arrow to add the items
to the Navigation Menu list. Items moved into the Navigation Menu list
will become visible to users assigned to this role.
d. In the Navigation Menu list, remove access to the Report Builder by
clicking the icons next to the Reports > Report Configuration Tools and
Investigate groups. This further simplifies the menu structure for this role
and removes access to the Report Builder tool without also removing
application permissions that are required to access reports.
e. Click the OK button to commit your changes. You have now created a new
role with very minimal privileges that can be assigned to users.
4. Optionally specify a custom home page for the new role.
a. From the Role Browser, click the Customize Navigation Menu link for the
new role.
b. In the Navigation Menu list, specify a new default home page by selecting
Comply > Tools and Views > Audit Process To-Do List and clicking the
icon in the toolbar. Users assigned to this role will now see the Audit
Process To-Do List as the default screen after logging in.
c. Click the OK button to commit your changes.
5. Create a new user and add that user to the new role.
a. Navigate to Access > Access Management and select User Browser.
b. Click Add User, provide the required information, and click Add User to
create the new user. You will now see the user you created listed in the
User Browser.
When a new user is created, the account is disabled by default. Deselect the
Disabled check box if you want the user to have immediate access to their
account.
c. From the User Browser, click the Roles link for the new user to view a list
of available roles.
d. Select the Assign check box next to the custom role you created earlier. This
will assign the user to the new role.

e. Deselect the Assign check box next to the user role. Deselecting the user role
prevents the new user from inheriting the default user access and
permissions.
f. Click Save to commit your changes.
Related concepts:
“Managing roles and permissions” on page 38
Roles and permissions provide different levels of access to users based on their job
duties.

Manage Users
Use the access manager, assigned the user name accessmgr, to add user accounts,
enable or disable user accounts, import members from LDAP, or edit user
permissions. Open the User Browser and browse the user accounts by clicking
Access > Access Management > User Browser

Defining and modifying users involves deciding both who will be using the
Guardium system and to what roles they will be assigned. A group of users can all
have the same role and the same access privileges if you so choose. For more
information on roles, see “Understanding Roles” on page 34.

Note: A default layout can be defined for a role, so that any new user assigned
that role will have that layout. See Generate New Layout in the CLI Reference.

User definitions can be imported from an LDAP server, on demand or on a schedule.

Regardless of how users are defined to the Guardium system, the Guardium
administrator can configure the system to authenticate users via Guardium, LDAP,
or Radius.

When getting started with your Guardium system, an important early task is to
identify which groups of users will use the system, and what their function will
be. For example, an information security group might use Guardium for alerting
and troubleshooting purposes while a database administrator group might use
Guardium for reporting and monitoring. When deciding who will access the
Guardium system, keep in mind that sensitive company data can be picked up by
the system. Therefore, be very aware of who will be able to access that data.

Once you decide which groups of users will use the Guardium system (and for
what purpose), collect the following information for each user:
v User’s first and last name
v User account name (the name they will use to log in)
v User’s email address
v User’s function/role with Guardium

User Account Security


Several settings can be changed to provide additional security for user accounts.
You can enable or disable these settings using the show and store password CLI
commands (see User Account, Password and Authentication CLI Commands in the
CLI Reference).



v By default, password validation is enabled. This means that a minimum of eight characters is required, and the password must contain at least one character from each of the following categories (a small shell sketch for checking a candidate password against these rules appears after this list):
– Uppercase letters: A-Z
– Lowercase letters: a-z
– Digits: 0-9
– Special characters: @#$%^&.;!-+=_

Note: If password validation is disabled, any characters are allowed.


v By default, password expiration is enabled. Passwords can be configured to
expire after a designated number of days.
v By default, account lockout following a specified number of failed login attempts
is enabled. Lockout can be configured to occur after a fixed number of attempts
in a given time, or after a total number of attempts for the life of the account.
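
The validation rules can also be checked outside of Guardium; the following is a small illustrative shell sketch (not a Guardium tool) that tests a candidate password against the documented length and character-category requirements:
# Illustrative only: test a candidate password against the documented rules
pw='Examp1e@Pass'
if [ ${#pw} -ge 8 ] \
   && printf '%s' "$pw" | grep -q '[A-Z]' \
   && printf '%s' "$pw" | grep -q '[a-z]' \
   && printf '%s' "$pw" | grep -q '[0-9]' \
   && printf '%s' "$pw" | grep -q '[@#$%^&.;!+=_-]'; then
    echo "meets the documented complexity rules"
else
    echo "does not meet the documented rules"
fi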

Locked Accounts
1. Open the User Browser by clicking Access > Access Management to view the
list of users.
2. Click Edit for any user, clear the Disabled check box, and click Update User to
save changes.

Note: If the admin user account becomes locked, use the unlock admin CLI
command to unlock it (see Configuration and Control CLI Commands in the
CLI Reference).

Create a User Account


1. Open the User Browser and click Add User to open the User Form panel.
2. Enter a unique name for Username. Do not include apostrophe characters in
the name. User names are not case sensitive.

Note: When adding a user manually, from either the Add User panel or User
LDAP Import, if there is no first name and/or last name, the login name will
be used.
3. Enter a password and confirm it again in the Password (confirm) box. The
password you assign will be temporary, and the user will be required to change
it following their first login.

Note: Passwords are case sensitive. When password validation is enabled (the
default), the password must be eight or more characters in length, and must
include at least one uppercase alphabetic character (A-Z), one lowercase
alphabetic character (a-z), one digit (0-9), and one special character from the
following set: @$%^&.;!-+=_
4. Enter the user’s first and last name in the respective fields.

Note: Restrictions apply to the last name for those users assigned the
Investigation Data Restore role (inv). If you want to assign a user the
investigator role, their last name must be INV_1, INV_2, INV_3. The UI will
not restrict you from entering something different in this field, but the
application will not function properly unless the last name is entered as shown.
Further, the investigator cannot be assigned any additional roles - they must be
inv only. This is the only case where it is not required to have a user or admin
role.
5. (Optional) Enter the user’s email address.

6. (Caution) The Disabled check box is checked by default. We suggest that you
defer clearing the check box and enabling the account until after the correct set
of roles have been assigned for the user.
It is much simpler to assign the roles first, so that the user has all components
in their layout the first time they log in. When a user logs in for the first time,
their layout is built using all of the roles assigned at that time. If roles are
added later, the user has access to everything available to that role, but will
have to add reports or applications particular to that role manually.
7. Click Add User to save the new user account definition and close the panel.

This completes the user definition. We suggest that you add the appropriate roles
for the user before informing them of their password for the initial login. See
“Understanding Roles” on page 34 for more information.

Enable/disable many users

Open the User Browser and click Search Users to easily filter users by role. When
you select a user, you have the option to enable or disable the user. Because users
are disabled by default, this menu can be very useful to easily change the status of
many users.

Update a User Account


1. Open the User Browser and click Edit for the user you want to modify.
2. Replace any values in the User Form panel.
3. Click Update User to save changes.

Note: Changing a user's password will require the user to change it following
their next login.

Enable a Disabled User Account


1. Open the User Browser and click Edit for the user you want to enable.
2. Clear the Disabled check box.
3. If the user has forgotten their password, enter a new password in both the
Password and Password (confirm) boxes.
4. Click Update User.

Remove a User Account


1. Open the User Browser by clicking Access > Access Management .
2. Click Delete for the user you want to remove.
3. Click Confirm Deletion.

Note: Alerts that were sent to a deleted user will now be sent to the admin user; however, this will not take effect until the access policy is re-installed.

Define the Data Security User Hierarchy


1. Click Data Security > User Hierarchy.
2. Select a user from the User menu to refresh the screen and display the selected
user's current hierarchy in the user pane.
3. Right-click a user node for the following options:
v Add User - Clicking Add User displays the Add User dialogue. Search or
filter by role, and add a user as a descendent of the selected user.



This can create a measure of data-level security, by permitting the parent of a
hierarchy to look at specified servers and databases, but not the children of
the hierarchy. Depending on the configuration, inheritance can also take
place in that the parent inherits the data-level security of the child.

Note: Many-to-many relationships are permitted where a user may have more than one parent and a parent may have more than one user.
v Unlink User from parent - will sever the descendent from the parent
v Remove all descendents - will sever all descendents from the parent
4. Click Refresh Cached Hierarchy to apply the recent changes to the user
hierarchy map.
5. Click Full Update Active User-DB Map to fully apply all recent changes to the
active User-DB association map.

Note: Best practices dictate a Full Update Active User-DB Map after changing
the User Hierarchy.
When you make a change to a hierarchy or to a database association (via UI or
GuardAPI), this change DOES NOT take effect automatically. The Periodic
Update will NOT pick up this change, unless it is the FIRST time the Periodic
Update has run. Otherwise, the user MUST click Full Update or run the Full
Update GuardAPI command for their changes to take effect.
A periodic update of the user hierarchy is run every 10 minutes automatically.
This cannot be run manually. This is an incremental update, meaning that it is
only looking at new server IPs or Service Names that have been sniffed since
the last time the periodic update was run. It compares the existing hierarchy
and associations against the new IPs/Service Names and determines what
users should have access to these IPs/Service Names.
A full update of the user hierarchy is NOT run automatically. It is only run
when the user executes it, either via the UI or GuardAPI function. This
compares ALL IPs/Service Names to the existing hierarchy and associations to
determine who has access to what.

Define the Data Security User to Database Association

Use the Data Security User-DB Association to find, assign, or remove users from
available servers and service names (databases).
1. Open the User-DB Association panel by clicking Data Security > User-DB
Association.
2. Select the check boxes of the Server & Service Name Suggestion to find
databases and service names to associate to users. Choices include:
v Observed Accesses - Observed traffic from Guardium internal database table
GDM_Access
v Datasource Definitions - Existing datasource definition information such as
name, database type, authentication information, and location of datasource.
v S-TAP Definitions - Existing S-TAP definition information such as the IP
address of the database server and the IP address of the Guardium host that
will receive data from S-TAP.
v Auto-Discovered Hosts - Hosts discovered by the Guardium Auto-discovery
process that were not previously known. Guardium's Auto-discovery
application can be configured to probe the network, searching for and
reporting on all databases discovered.

v Guardium Install Manager (GIM)-Discovered Systems - Hosts discovered by
the GIM that were not previously known.
3. Click Go to find and display available servers, service names, and currently
associated users.

Note: When traversing the node tree, numerical indicators are displayed next
to each server and service name to provide a count of direct and descendant
users that have been associated. The indicators take the format of [nn] for
direct association and (mm) for descendant association (a server or service
name within the current server has a user associated to it for example).
Likewise, when viewing the users associated to a server or service name, if
there is a user associated to a larger level node in the tree, that user will be
displayed.
4. Click a server or service name node to display associated users. With any node
selected, you can do one of the following:
v Click Add User to add a new user-DB association, click any users you want
to add, and then click Add.
v Click Add Group to add a new group-DB association. When Add Group is
selected, groups that were created using the Group Builder for group type
Guardium Users will be displayed. Select the group you'd like to add and
click Add.
5. Right-click any server or service name node, and you are presented with
options to do one of the following:
v Highlight the server
v Expand or collapse the server
v Find a server
v Add server, service name, or unnamed service
v Delete the server
6. Add an IP or IP/Service Name pair using the IP and Service Name fields
before the tree structure.

Note: The Find button can be used to search the IP/Service Name tree
structure. IP strings may be entered as partials or include the wild card * such
that 192.168 and 192.168.*.* are both valid. Numeric values cannot trail the use
of any wild card or be used with the wild card to form an octet. Service names may include the wild card % anywhere within their name.
7. Click Full Update Active User-DB Map to fully apply all recent changes to the
active User-DB association map.

Note: Best practices dictate a full update of the active User-DB map after
changing the User-DB Association.
A full update of the user hierarchy is NOT run automatically. It is only run
when the user executes it, either via the Full Update Active User-DB Map
button or the GuardAPI function. This compares ALL IPs/Service Names to the
existing hierarchy and associations to determine who has access to what.
A periodic update of the user hierarchy is run every 10 minutes automatically
(cannot be run manually). This update is only looking at new server IPs or
Service Names that have been sniffed since the last time the periodic update
was run. It compares the existing hierarchy and associations against the new
IPs/Service Names and determines what users should have access to these
IPs/Service Names.



When you make a change to a database association (via UI or GuardAPI), this
change DOES NOT take effect automatically. The periodic update will NOT
pick up this change, unless it is the FIRST time the periodic update has run.
Otherwise, the user MUST click the Full Update Active User-DB Map button,
or run the full update GuardAPI command for the changes to take effect.

How to create a user with the proper entitlements to login to CLI


Use this task to create a user who has the proper roles and entitlements to use CLI
to run GuardAPI commands.

About this task


This how-to topic is important because (1) GuardAPI commands can be executed only through CLI, and (2) most GuardAPI commands are associated with a specific application and therefore with its roles, meaning that the standard CLI user (who has a hard-coded "admin" role) cannot run many of the GuardAPI commands because that user does not have the appropriate roles.

Procedure
1. Log in as the accessmgr and open the User Browser by clicking Access > Access
Management > User Browser.
2. Click Add User from the User Browser panel

3. Fill in the User Form, clear the Disabled check box to enable the user upon
creation, and click Add User.

When a user is initially created, they do not have the privilege to log in to CLI and execute any of the GuardAPI commands. As an example, if we try to use
one of the CLI accounts (guardcli1,...,guardcli5) under the newly created user
we are quickly disconnected and told that the user does not have the necessary
role defined.
$ ssh -l guardcli1 192.168.1.89
guardcli1@192.168.1.89’s password:
Last login: Tue Aug 10 18:37:25 2010 from 192.168.1.14
Welcome guardcli1 - your last login was Tue Aug 10 18:37:26 2010
Please enter your GUI login (one with ADMIN or CLI role defined):johnsmith
No such user or user does not have the necessary role defined.
Connection to 192.168.1.89 closed.
4. From the User Browser panel, click Roles for any user to bring up the User
Role Form panel.
5. Check the CLI check box, and click Save to grant the user CLI access

Now when the user tries to use one of the CLI accounts (guardcli1,...,guardcli5)
under the newly created user we are asked for a password and granted access
to the CLI.
$ ssh -l guardcli1 192.168.1.89
guardcli1@192.168.1.89’s password:
Last login: Tue Aug 10 18:39:01 2012 from 192.168.1.14
Welcome guardcli1 - your last login was Tue Aug 10 18:39:02 2011



The ’set guiuser’ command must be run (successfully) before any other commands will work
set guiuser admin
Enter current password
192.168.1.89>
6. Grant any additional roles, if desired, to allow access to the user to execute
GuardAPI functions.
For example, if the user johnsmith were to issue the following GuardAPI
command, he would find out he does not have any API commands to execute:
192.168.1.89 >grdapi commands user
ID=0
Matching API Function list:
ok
But if we were to grant johnsmith the accessmgr role (previously in step 5) the
same GuardAPI command would result in the following API commands being
available:
192.168.1.89> grdapi commands user
ID=0 Matching API Function list :
create_db_user_mapping
create_user_hierarchy
delete_allowed_db_by_user
delete_db_user_mapping
delete_user_hierarchy_by_entry_id
delete_user_hierarchy_by_user
execute_ldap_user_import
list_allowed_db_by_user
list_db_user_mapping
list_user_hierarchy_by_parent_user
update_user_db
ok
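
To see which parameters a particular function expects before running it, GuardAPI functions generally accept a --help=true argument that prints the parameter list; a minimal sketch using one of the functions listed above (consult the GuardAPI reference for your release):
192.168.1.89> grdapi create_user_hierarchy --help=true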

Importing Users from LDAP


You can import Guardium user definitions from an LDAP server by configuring an
import operation to obtain the appropriate set of users.

You can run the import operation on demand, or schedule it to run on a periodic
basis. You can elect to have only new users imported, or you can have existing
user definitions replaced. In either case, LDAP groups can be imported as
Guardium roles.

When importing LDAP users:


v The Guardium admin user definition will not be changed in any way.
v Existing users will not be deleted (in other words, the entire set of users is not
replaced by the set imported from LDAP).
v Guardium passwords will not be changed.
v New users being added to Guardium:
– Will be marked inactive by default
– Will have blank passwords
– Will be assigned the user role

Note:

Special characters in a user name are not supported.

When adding a user manually via Access Management (either from Add User or
LDAP user import), if there is no first name and/or last name, the login name will
be used.

This LDAP configuration menu screen has tool tips for certain menu choices. Move
the cursor over a menu choice (such as Object Class for user), and a short
description will appear.

Guardium CLI users can not authenticate in the LDAP environment, as there is no
privilege separation for the CLI users.

Configure LDAP User Import

The attribute that will be used to identify users is defined by the Guardium
administrator, in the User RDN Type box of the LDAP Authentication
Configuration panel. See Configure LDAP Authentication for further information.
The default is uid, but you should consult with your Guardium administrator to
determine what value is being used. If a user is using SamAccountName as the
RDN value, the user must use either a =search or =[domain name] in the full
name. Examples: SamAccountName=search, SamAccountName=dom

Note: In order to configure LDAP user import, accessmgr user must have the
privilege to run Group Builder. In certain situations, when changes are made to the
role privilege, accessmgr's privilege to Group Builder can be taken away. This results in an inability to save or successfully run the LDAP user import. Go to the
access management portal, select Role Permissions from the choices. Choose the
Group Builder application and make sure that there is a checkmark in the all roles
box or a checkmark in the accessmgr box.
1. Open the LDAP User Import panel by clicking Access > Access Management
> LDAP User Import.
See Example of Tivoli® LDAP Configuration at the end of this help topic for
reference in filling out the required information.
2. For LDAP Host Name, enter the IP address or host name for the LDAP server
to be accessed.
3. For Port, enter the port number for connecting to the LDAP server.
4. Select the LDAP server type from the Server Type menu.
5. Check the Use SSL Connection check box if Guardium is to connect to your
LDAP server using an SSL (secure socket layer) connection.
6. For Base DN, specify the node in the tree at which to begin the search. For
example, a company tree might begin like: DC=encore,DC=corp,DC=root
7. For Attribute to Import, enter the attribute that will be used to import users
(for example: cn). Each attribute has a name and belongs to an objectClass.
8. Check the Clear existing group members before importing check box if you
want to delete all existing group members before importing.
9. For Log In As and Password, enter the user account information that will
connect to the Guardium server.
10. For Search Filter Scope, select One-Level to apply the search to the base level
only, or select Sub-Tree to apply the search to levels beneath the base level.
11. For Limit, enter the maximum number of items to be returned. We
recommend that you use this field to test new queries or modifications to
existing queries, so that you do not inadvertently load an excessive number of
members.



12. Optional: For Search Filter, define a base DN, scope, and search filter.
Typically, imports will be based on membership in an LDAP group, so you
would use the memberOF keyword. For example:
memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root
13. Click Apply to save the configuration settings.

Note: The Status indicator in the Configuration - General section changes to
"LDAP import currently set up for this group as follows", and the Modify
Schedule and Run Once Now buttons are enabled. You can now import from
your LDAP server.
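
For reference, the object class and group-membership conditions can be combined into
a single LDAP search filter. The following is a minimal sketch that reuses the group
DN from the example in step 12 and a generic person object class; both are
assumptions to adapt for your own directory:

   (&(objectClass=person)(memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root))

The & operator requires both conditions, so only person entries that are members of
syyTestGroup are returned by the import query.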

Schedule LDAP User Import

If LDAP Import has not yet been configured, you must perform Configure LDAP
User Import before performing this procedure.
1. Open the LDAP User Import panel by clicking Access > Access Management >
LDAP User Import.

Run LDAP User Import


When you run LDAP user import on demand, you have the opportunity to accept
or reject each of the users returned by the query. This is especially useful for
testing purposes. If LDAP Import has not yet been configured, you must perform
Configure LDAP User Import before performing this procedure.
1. Open the LDAP User Import panel by clicking Access > Access Management >
LDAP User Import.
2. Click Run Once Now. After the task completes, the set of members satisfying
your selection criteria will be displayed in the LDAP Query Results panel.
3. In the LDAP Query Results panel, mark the check box for each user you want
added, and click Import (or click Cancel to return without importing any
users).
4. To view the added users, open the User Browser by clicking Access > Access
Management > User Browser. Verify that the correct user accounts have been
added.

Example of Tivoli LDAP Configuration


Table 6. Example of Tivoli LDAP Configuration
Field                                  Value
LDAP Host Name
Port                                   389
Server Type                            Tivoli Directory
Use SSL connection
Base DN                                cn=sample realm,o=sample
Import Mode                            Choose Override existing attributes
Disable user if not on import list
Enable new Imported Users
Log in as                              cn=root
Password
Search filter scope                    Sub-Tree
Limit
Attribute to Import as User Login      cn (Configurable through Portal)
Search filter
Object Class for User                  Fill with Default Value - |(objectClass=organizationalPerson)(objectClass=inetOrgPerson)(objectClass=person)
Import Roles                           Add a Checkmark
Attribute to Import as Role            cn
Role Search Base DN                    Fill with Default Value - cn=sample realm,o=sample
Role filter
Object Class for Role                  Fill with Default Value - |(objectClass=groupOfNames)(objectClass=group)(objectClass=groupOfUniqueNames)
Attribute in User to Associate Role    Fill with Default Value - memberOf
Attribute in Role to Associate User    Fill with Default Value - member

Data Security - User Hierarchy and Database Associations


You can use data security features to create a hierarchy of users and to associate
users with specific databases and servers. Guardium data security features report on
which users accessed what information, and ensure that users see only the
information that they are responsible for.

Follow these steps to enable and use Guardium data security features:
1. Enable Data Security
2. Create a User Hierarchy
3. Create a User to Database Association
4. Filter Results

When data security features are used with the Classification feature (which
discovers and classifies sensitive data found in multiple places of the database), the
Data Level Security prevents a specified user from seeing classifier results from a
specified datasource (datasource definition). Using Data Level Security can also
prevent a specified user from seeing Audit Task results when the task type is
Classifier.

Enable Data Security


1. Log in as the admin user and open the Global Profile by clicking Setup >
Global Profile.
2. Click Enable for Data level security filtering.

Note: The status indicator icon for Data level security filtering now shows that the
feature is enabled.

You can verify that Data level security filtering is enabled by referencing the
Services Status panel (Setup > Services Status).

v With data level security filtering enabled, log in as the accessmgr to use the User
Hierarchy and User-DB Association features.

Create a User Hierarchy


The User Hierarchy shows you the parent-child relationships between all users.
User hierarchies permit the parent of the relationship to look at specified servers
and databases, but not the children.

Log in as accessmgr and open the User Hierarchy by clicking Data Security > User
Hierarchy.

Do one of the following:


v Click Full Update Active User-DB Map to view the full hierarchy of users.
v Use the Roles and Users filters to view the hierarchy for a specific user or role.
Right-click a node in the hierarchy to expand or collapse the tree, or add a user
to a specific hierarchy.
v Click Refresh Cached Hierarchy to update the hierarchy.

Note: Depending on the configuration, inheritance can also take place where the
parent inherits the data-level security of the child.

Create a User to Database Association

The User-DB Association feature maps users to specific databases to ensure that
users see only data that they are permitted to view.

Log in as accessmgr and open the User-DB Association by clicking Data Security >
User-DB Association.

Do one of the following:


1. View the current mapping of users to databases by clicking Full Update Active
User-DB Map.
2. Create a new User-DB association map by selecting options from the Server &
Service Name Suggestion list and clicking Go.

Note: Once the map is fully updated, you will see a tree listing all your
servers. Click any node in the tree to view which users are currently associated
with that node.

If you are using dual-stack configuration, there is a root node, and two trees of
addresses to choose from. One tree is for the IPV4 address, and the longer tree
is for the IPV6 address.

Add a user or group to a node by selecting the node and clicking Add user or
Add group.

Central Management
On a Central Management appliance, there is also a box on the User-Database
Associations screen that allows a user to create database associations based on data
from a managed node. Select a remote source from this box, which appears only on
Central Management appliances. There is also a check box to get data from ALL
managed nodes.

Filter Results
Data level security at the observed data level requires the filtering of data for
specific users and the specific databases they are responsible for.

Filtering at the system level is based on the User Hierarchy and User-DB
Association so that users will see only information from their assigned databases
for the various reports, audit processes, security assessments, and so on, within the
Guardium system.

Log in as the admin user and use the Global Profile to filter results. Open the
Global Profile by clicking Setup > Global Profile.
v Default filtering:
– Show all - This option is available only if the user logged in has the special
role datasec-exempt defined, which allows the user to see all data as if there
was no data level security.
– Include indirect records - This check box shows the viewer not only the rows
that belong to the user logged in, but also all the rows that belong to other
users within that hierarchy.
v Audit Process Escalation: Escalation is allowed for tasks of this type only to
users who have the datasec-exempt role. Users without the datasec-exempt role
are not shown in the escalation list.
Escalate results to all users - A check mark in this check box escalates audit
process results (and PDF versions) to all users, even if data level security at the
observed data level is enabled. The default setting is enabled. If the check box is
disabled (no check mark in the check box), then audit process escalation is
allowed only to users at a higher level in the user hierarchy and to users with
the datasec-exempt role. If the check box is disabled and there is no user
hierarchy, then no escalation is permitted.
v PDF and CSV generation for results (attached to email) distribution will use the
default global profile values set in Administration Console parameters.
v PDF and CSV generated from the viewer will use the same filtering as in the
screen.

Note:

The Data Security User to Database Association filters reports only from the
following domains: Access; Exception; and, Policy Violations (as well as custom
domains using these domains or tables from these domains). All other domains
(reports) are not filtered by the Data Security User to Database Association.

Users with admin role will be able to see event types on all roles (the information
will still be filtered based on observed data level security parameters).

If Data Level Security is turned on, predefined entities added to a custom domain
need to be in the same domain(s) for the data level security filtering to work
properly.

If Data Level Security is on, and two predefined entity subjects are trying to send
data from two domains (not Custom Domains) that are using a filtering policy,
then the sending of the two predefined entity subjects will not be permitted. Data
Level Security can only enforce one kind of filtering policy (for example, there can
be only one policy depending on server_ip/service_name and one policy
depending on datasource).

How to define User Hierarchies
Use the UI from an access manager account to easily define user hierarchies.

About this task

The Data Security User Hierarchy represents the parent-child relationships between
users, allowing for the creation and enforcement of data-level security by permitting
the parent of a hierarchy to look at specified servers and databases, but not the
children. Depending on the configuration, inheritance can also take place in that the
parent inherits the data-level security of the child.

Procedure
1. Log in as accessmgr and click Data Security > User Hierarchy.
2. Select a user from the Users drop-down menu to display it in the Data Security
User Hierarchy pane. This example uses john smith as a user.

3. To add a user to john smith's hierarchy, right-click on the user in the Data
Security User Hierarchy pane, and select Add user from the drop-down menu.

4. After clicking Add user from the drop down list, the Add user dialog appears.
Select one or more users that you would like to add to the user's hierarchy, and
then click Add.

5. After adding the users to a hierarchy, the Data Security User Hierarchy panel
is refreshed, allowing you to drill down and see the new hierarchy.

6. Repeat the steps until all required users are defined to the data security user
hierarchy.

Chapter 3. Aggregation and Central management
Aggregation enables you to bring together data from multiple Guardium systems
for a consolidated view. Central management enables you to maintain consistency
among your Guardium systems.

Aggregation
Collect and merge information from multiple Guardium units into a single
Guardium Aggregation appliance to facilitate an enterprise view of database usage.

Aggregation Process
v Accomplished by exporting data on a daily basis from the source appliances to
the Aggregator (copying daily export files to the aggregator).
v The Aggregator then processes the uploaded files, extracting each file and merging
it into the internal repository on the aggregator.

For example, if you are running Guardium in an enterprise deployment, you may
have multiple Guardium servers monitoring different environments (different
geographic locations or business units, for example). It may be useful to collect all
data in a central location to facilitate an enterprise view of database usage. You can
accomplish this by exporting data from a number of servers to another server that
has been configured (during the initial installation procedures) as an aggregation
appliance. In such a deployment, you typically run all reports, assessments, audit
processes, and so forth, on the aggregation appliance to achieve a wider (though not
necessarily enterprise-wide) view. Note: The Aggregator does not collect data itself;
it presents the data from the collectors.

Pre-defined aggregation reports can be located on the Guardium Monitor tab
(Enterprise Buffer Usage Monitor) and the Daily Monitor tab (Logging Collectors).

Appliance Types
Collector
Used to collect database activity, analyze it in real time and log it in the
internal repository for further analysis and/or reacting in real-time
(alerting, blocking, etc.).
Use this unit for the real-time capture and analysis of the database activity.

Aggregator (see notes 1, 2)


Used to collect and merge information from multiple appliances (collectors
and other aggregators) to produce a holistic view of the entire environment
and generate enterprise-level reports. The Aggregator does not collect data
itself; it just aggregates data from multiple sources.
Central Manager (see notes 1, 3, 4)
Use this Appliance to manage and control multiple Guardium appliances.
With Central Manager (CM), manage the entire Guardium deployment (all
the collectors and aggregators) from a single console (the CM console).
This includes patch installation, software updates and the management and
configuration of queries, reports, groups, users, policies, etc.

Note:
1. In many environments, the Central Manager is also the Aggregator. Central
Manager and Aggregator can be installed on the same appliance.
2. A Guardium appliance needs to be configured as an Aggregator at install time.
3. One Central Manager per federated environment.

Terminology
Table 7. Terminology
Term                 Description
Guardium Appliance   The physical or virtual Guardium box; can be either a "collector" or an "aggregator" (with or without central management)
Guardium Unit        See Guardium Appliance
Manager Unit         An appliance configured as Central Manager
Managed Unit         An appliance managed by the Central Manager
Standalone Unit      An appliance not in a Central Manager environment
Purge                For the best performance, purge all data that is not needed. Purge to free disk space.
Archive              Compress the data of a single day into an encrypted file and send it to the aggregator.

Hierarchical Aggregation
Guardium also supports hierarchical aggregation, where multiple aggregation
appliances merge upwards to a higher-level, central aggregation appliance. This is
useful for multi-level views. For example, you may need to deploy one aggregation
appliance for North America aggregating multiple units, another aggregation
appliance for Asia aggregating multiple units, and a central, global aggregation
appliance merging the contents of the North America and Asia aggregation
appliances into a single corporate view. To consolidate data, all aggregated
Guardium servers export data to the aggregation appliance on a scheduled basis.
The aggregation appliance imports that data into a single database on the
aggregation appliance, so that reports run on the aggregation appliance are based
on the data consolidated from all of the aggregated Guardium servers.

About the System Shared Secret


The Guardium administrator defines the System Shared Secret on the System
Configuration panel, which is described in the following section. The system
shared secret is used for archive/restore operations, and for Central Management
and Aggregation operations. When used, its value must be the same for all units
that will communicate. This value is null at installation time, and can change over
time.

The system shared secret is used:


v When secure connections are being established between a Central Manager and
a managed unit.

v When an aggregated unit signs and encrypts data for export to the aggregator.
v When any unit signs and encrypts data for archiving.
v When an aggregator imports data from an aggregated unit.
v When any unit restores archived data.

Depending on your company's security practices, you may be required to change the
system shared secret from time to time. Because the shared secret can change, each
system maintains a shared secret keys file, containing an historical record of all
shared secrets defined on that system. This allows an exported (or archived) file
from a system with an older shared secret to be imported (or restored) by a system
on which that same shared secret has been replaced with a newer one. Shared secrets
(current and historic ones) can be exported from one appliance and imported to
another through the CLI.

For aggregation to work, the shared secret must be set and be the same for
aggregator and all aggregated collectors.
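
If your security practices call for setting the secret from the CLI instead of the
System Configuration panel, the sequence typically looks like the following sketch.
The store system shared secret command is assumed to be available on your release
(check the CLI reference for the exact syntax), and the value shown is only a
placeholder; run it with the identical value on the aggregator and on every collector
that exports to it:

   CLI> store system shared secret MySharedSecret123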

Aggregating, Archiving, and Purging Operations

Scheduled export operations send data from Guardium collector units to a Guardium
aggregation appliance. On its own schedule, the aggregation appliance executes an
import operation to complete the aggregation process. On either or both units,
archive and purge operations are scheduled to back up and purge data on a regular
basis (both to free up space and to speed up access operations on the internal
database). The export, archive, and purge functions can work on the same data, but
not the same date ranges. For example, you may want to export and archive all
information older than one day and purge all information older than one month,
thereby always leaving one month of data on the sending unit.

Note:

When setting the schedule of import on an aggregator, it should be planned to run
after export is completed on all collectors.

CAS data is also aggregated and archived.

Note: The alert for no traffic is inactive for aggregator servers.

Managing Data on an Aggregator


v Exporting Data
– Stopping Export
v Importing Data
– Stopping Import
v Archiving and Purging
v Stopping Archiving and Purging
v Verify Archiving and Purging Process
v Reporting on Aggregation and Archiving Activity
v Restoring

Exporting Data
Table 8. Exporting Data
Function
   Compress the data of a single day (midnight to midnight, typically yesterday)
   into an encrypted file and send it to the aggregator (or to an external
   repository on Archive).
Schedule
   Executed on a daily basis. Starts immediately after midnight (00:10) to include
   a full day's data. Assumed to take up to 2 hours to complete (average; dependent
   on the amount of data).
High Level Process
   1. Create a temporary database.
   2. Load the relevant data (the last day's activity) into the temporary database.
   3. Update auto-increment IDs in the temporary database to ensure uniqueness.
   4. Create an encrypted, compressed export file of the temporary database.
   5. Copy the export file to the aggregator (or to an external repository on
      Archive).

To export data to an aggregation appliance, follow the procedure. You can define a
single export configuration for each Guardium unit.
1. Click Manage > Aggregation & Archive > Data Export to open Data Export.
2. Check the Export data box as this will open additional options for exporting
data.
3. In the boxes following Export data older than, specify a starting day for the
export operation as a number of days, weeks, or months prior to the current
day, which is day zero. These are calendar measurements, so if today is April
24, all data captured on April 23 is one day old, regardless of the time when
the operation is performed. To archive data starting with yesterday’s data, enter
the value 1.
4. Optionally, use the boxes following Ignore data older than to control how many
days of data will be archived. Any value specified here must be greater than
the Export data older than value, so you always export at least two days of
data. If you leave Ignore data older than blank, you export data for all days
older than the value specified in the Export data older than row. It is
recommended to always set the Ignore data older than value; otherwise you will
be exporting the exact same days over and over again, overloading the network
and the aggregator with redundant data (that will be ignored). A worked example
follows this procedure.
5. The Export Values box is checked by default. In some cases, where the collector
resides in a country that prohibits the export of data, and the aggregation
appliance resides in another country, you would want to clear the Export
Values check box, which would mask all fields containing database values.
6. In the Host box, enter the IP address or DNS host name of the aggregation
appliance to which this system's encrypted data files will be sent. There is also
an option to enable a secondary aggregator, so that data can be exported to more
than one aggregator: two Host boxes are available, the first one is required, and
the Secondary Host is optional. This unit and the aggregation appliance to which
it is sending data must have the same System Shared Secret. If not, the export
operation works, but the aggregation appliance that receives the data is not able
to decrypt the exported file and the Import will fail. See System Shared Secret
in "System Configuration" on page 1 for more information. The Shared Secret is
required to be identical on both the exporting system and the receiving system;
unless they have the same shared secret, the configuration on the exporting
system will not be set and a message reports that a test file cannot be sent to
the receiving system.
7. Click the Apply button to save the export and purge configuration for this unit.
When you click the Apply button, the system attempts to verify that the
specified aggregator host will accept data from this unit. If the operation fails,
the following message is displayed and the configuration will not be saved: A
test data file could not be sent to this host. Please confirm the hostname or IP
address is entered correctly and the host is online. If the Apply operation
succeeds, the buttons in the Scheduling panel become active.
8. Click Run Once Now to run the operation one time.
9. Click Modify Schedule to schedule this operation to run on a regular basis.
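
As a worked example of how the export and purge settings interact (the values are
illustrative only, not recommendations):

   Export data older than : 1 day
   Ignore data older than : 2 days
   Purge data older than  : 30 days

   On the night of April 24, data captured on April 23 (one day old) is exported;
   data captured on April 22 or earlier (two or more days old) is ignored because
   it was already exported on previous nights; and data captured on or before
   March 25 is purged. Each day's data is therefore exported exactly once, and
   roughly one month of data remains on the sending unit.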

Stopping Export

To stop the export of data to an aggregation appliance:


1. Click Manage > Aggregation & Archive > Data Export to open Data Export.
2. Clear the Export checkbox.
3. Click Apply.

Note: Stopping an export after the Run Once Now button has been clicked is
impossible.

Importing Data

The Guardium collector units export encrypted data files to another Guardium
appliance configured as an aggregation appliance. The encrypted data files reside
in a special location on the aggregation appliance until the aggregation appliance
executes an import operation to decrypt and merge all data to its own internal
database.

Note: To avoid the possibility of importing files that have not completely arrived,
the aggregation appliance will not import files that have changed in the last two
minutes.
Table 9. Importing Data
Function
   Import and merge the imported data into the internal databases of the
   Aggregator.
Schedule
   Executed on a daily basis. Starts at 02:00 (or after export has ended). Assumed
   to take up to 3 hours to complete.
High Level Process (for each purged day)
   1. Construct the delete command for each purged table (the tables and the purge
      conditions are defined in AGG_TABLES).
   2. Execute the delete commands for each of the tables.

Follow the procedure to define the Data Import operation on an aggregation
appliance. You can define only a single Data Import configuration on each unit.

1. Click Manage > Aggregation & Archive > Import to open Import.
2. Check the Import check box. An additional non-modifiable field appears,
indicating the location of the data files to be imported.
3. Click Apply to save the configuration. The Apply button is only available when
you toggle the Import data from checkbox on or off.
4. Click Run Once Now to run the operation once.
5. Click Modify Schedule to schedule the operation to open the general-purpose
task scheduler and run on a regular basis. This aggregation appliance and all
units exporting data to it must have the same System Shared Secret. If not, the
export operations will still work, but the aggregation appliance will not be able
to decrypt the files of exported data.

Stopping Import

To stop importing data sent from other Guardium units:


1. Click Manage > Aggregation & Archive > Import to open Import.
2. Clear the Import data box.
3. Click Apply to save the configuration. Stopping importing does not stop other
Guardium units from exporting data to this system. To stop that, you must stop
the Export operation on each sending unit.

Note: Stopping an import once the RUN ONCE NOW button is clicked is
impossible.

Archiving and Purging

Archiving and purging data on a regular basis is essential for the health of your
Guardium system. For the best performance, we strongly recommend that you
archive and purge all data that is not needed. Important - purge to free disk space.
For example, if you only need three months of data on the Guardium appliance,
archive and purge all data that is older than 90 days.

The archive and purge process frees space and preserves information for future
use. You should periodically archive and purge data from standalone units and
from aggregation units. Guardium's archive function creates signed, encrypted
files that cannot be tampered with. Archive files are transferred and stored on
external systems such as file servers or storage systems.

Note:

If both Archive and Purge are scheduled, Purge will run after Archive.

Data that was archived on a collector can be restored either on another collector or
an aggregator server. Restoring of data that was archived on an aggregator to a
collector machine is not supported.

Archiving data on aggregator system - on the first day of the month, all static
tables are archived. On all other days, only additional data added to archived data
will be archived. This methodology is the same as used by collectors. Adding the
static tables to the normal purge process eliminates the existence of orphans,
freeing up disk space and improving report performance.

Archive and export of static tables on an aggregator includes full static data only
on the first day of the month (archive) or when the export configuration changes
(export). Use the CLI commands store archive_table_by_date [enable |
disable] or show archive_table_by_date. Other relevant CLI commands are store
aggregator clean orphans and show aggregator clean orphans.
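
A minimal CLI sketch of checking and changing these settings follows. Only the
commands named in this topic are used; the show commands are run first to display
the current values, and any additional arguments accepted by your release should be
confirmed in the CLI reference:

   CLI> show archive_table_by_date
   CLI> store archive_table_by_date enable
   CLI> show aggregator clean orphans
   CLI> store aggregator clean orphans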

Scheduling Data Management tasks - Default schedule times are supplied when the
unit is built, and these can be amended accordingly. The Data Management tasks
should be scheduled at less busy times, for example, overnight. They should also be
spaced out so that they do not overlap (for example, one task should not start
before the previous task has finished).

Aggregator Data Archive - special care is needed when an Aggregator/Central Manager
performs both Data Imports and Data Archives. A default or common setting is to have
the Data Archive archive data older than one day, ignoring data older than two days.
If the Data Archive is scheduled to run BEFORE the Data Imports from the other
collectors or aggregators, then the archive will NOT contain the imports meant for
that day's archive. Imagine the following schedule: Data Archive runs at 30 minutes
past midnight; Data Imports run at 6:00 AM for data older than 1 day, ignoring data
older than 2 days. When the archive runs, it will not archive any relevant data from
yesterday, because no imports for that day's data have yet occurred. In this example,
the Data Archive should be rescheduled to occur AFTER the Data Imports have finished.
This way the archive correctly contains data for yesterday.
Table 10. Archiving and Purging Data
Purge Function
   Delete old records from the appliance (typically, records older than 60 days) to
   free up space and speed up access operations on the internal database. Purging
   is based on dates (deleting whole days' worth of data), but it will not delete
   records that are still "in use" (for example, open sessions).
Schedule
   The default purge activity is scheduled every day at 5:00 AM. On collectors, it
   runs after the export/archive; on an aggregator, after the import. Assumed to
   take up to 2 hours to complete.
High Level Process (for each purged day)
   The purge configuration is used by both Data Archive and Data Export. Use the
   Purge data older than field to specify a starting day for the purge operation as
   a number of days, weeks, or months prior to the current day, which is day zero.
Default Purging
   The default value for purge is 60 days. The default purge activity is scheduled
   every day at 5:00 AM. For a new install, a default purge schedule is installed
   that is based on the default value and activity. When a unit type is changed
   between manager, managed, or back to standalone, the default purge schedule is
   applied. The purge schedule is not affected during an upgrade.

It may be necessary to run reports or investigations on this data at some point. For
example, some regulatory environments may require that you keep this information
for three, five, or even seven years in a form that can be queried within 24 hours.
This functionality is supported by the Guardium restore capability, which allows you
to restore archived data to the unit.

The following sections describe how to define and schedule archiving and how to
restore from an archive.

Note: The archive and restore operations depend on the file names generated
during the archiving process. DO NOT change the names of archived files.

Archive data files can be sent to an SCP or FTP host on the network, or to an EMC
Centera or TSM storage system (if configured). You can define a single archiving
configuration for each unit. To archive data to another host on the network and
optionally purge data from the unit, follow the procedure.
1. Click Manage > Aggregation & Archive > Data Archive to open Data
Archive.
2. Check the Archive checkbox to expose additional fields for the archive
process.
3. In the boxes following Archive data older than, specify a starting day for the
archive operation as a number of days, weeks, or months prior to the current
day, which is day zero. These are calendar measurements, so if today is April
24, all data captured on April 23 is one day old, regardless of the time when
the operation is performed. To archive data starting with yesterday’s data,
enter the value 1.
4. Optionally, use the boxes following Ignore data older than to control how
many days of data will be archived. Any value specified here must be greater
than the value in the Archive data older than field. If you leave the Ignore
data older than row blank, you archive data for all days older than the value
specified in the Archive data older than row. This means that if you archive
daily and purge data older than 30 days, you archive each day of data 30
times (before it is purged on the 31st day). Depending on the archive options
configured for your system (using the store storage-system CLI command),
you may have EMC Centera or TSM options on your panel. If you select one
of those archive destinations, see the appropriate topic.
a. EMC Centera Archive and Backup
b. TSM Archive and Backup
5. Enter the IP address or DNS Host name of the host to receive the archived
data
6. In the Directory box, identify the directory in which the data is to be stored.
How you specify this depends on whether the file transfer method used is
FTP or SCP. For FTP, specify the directory relative to the FTP account home
directory. For SCP, specify the directory as an absolute path. (See the example
after this procedure.)
7. In the Username box, enter the user name to use for logging onto the host
machine. This user must have write/execute permissions for the directory
specified in the Directory box.
8. In the Password box, enter the password for the user, then enter it again in
the Re-enter Password box.
9. Data Purge
10. Check the Purge checkbox to purge data, whether or not it is archived. When
this box is marked, the Purge data older than fields display. It is important to
note that the Purge configuration is used by both Data Archive and Data
Export. Changes made here will apply to any executions of Data Export and
vice-versa. In the event that purging is activated and both Data Export and
Data Archive run on the same day, the first operation that runs will likely

purge any old data before the second operation's execution. For this reason,
any time that Data Export and Data Archive are both configured, the purge
age must be greater than both the age at which to export and the age at which
to archive.
11. If purging data, use the Purge data older than fields to specify a starting day
for the purge operation as a number of days, weeks, or months prior to the
current day, which is day zero. All data from the specified day and all older
days will be purged, except as noted otherwise. Any value specified for the
starting purge date must be greater than the value specified for the Archive
data older than value. In addition, if data exporting is active (see Exporting
Data to an aggregation appliance), the starting purge date specified here must
be greater than the Export data older than value. There is no warning when
you purge data that has not been archived or exported by a previous
operation. The purge operation does not purge restored data whose age is
within the do not purge restored data timeframe specified on a restore
operation. For more information, see Restoring Archived Data.
12. Click Apply to verify and save the configuration changes. When you click the
Apply button, the system attempts to verify the specified Host, Directory,
Username, and Password by sending a test data file to that location.
13. Click Run Once Now to run the operation once.
14. Click Modify Schedule to schedule the operation to run on a regular basis.
The general-purpose task scheduler is opened.
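
As an illustration of the Directory value in step 6, the two transfer methods expect
different forms of the location (the paths are hypothetical; use directories that
actually exist on your archive host):

   FTP:  guardium/archives          (relative to the FTP account's home directory)
   SCP:  /data/guardium/archives    (absolute path)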

EMC Centera Archive and Backup

To use EMC Centera:


1. Click Manage > Aggregation & Archive > Data Export to open Data Export.
2. Click on the Data Archive or System Backup in the Data Management section.
Initially, the Network radio button is selected by default, and the Network
backup parameters are displayed
3. Select the EMC Centera radio button. The EMC Centera parameters will be
displayed on the panel.
4. In the Retention box, enter the number of days to retain the data. The
maximum is 24855 (68 years). If you want to save it for longer, you can restore
the data later and save it again.
5. In the Centera Pool Address box, enter the Centera Pool Connection String; for
example: 10.2.3.4,10.6.7.8/var/centera/profile1_rwe.pea
6. Click Upload PEA to upload a Centera PEA file to be used for the connection
string.
7. Click Apply to save the configuration. The system will attempt to verify the
Centera address by opening a pool using the connection string specified. If the
operation fails, you will be informed and the configuration will not be saved.

TSM Archive and Backup


When you select TSM as an archive or backup destination, the TSM portion of the
archive or backup configuration panel expands. Before setting TSM as an archive
or backup destination, the Guardium system must be registered with the TSM
server as a client node. A TSM client system options file (dsm.sys) must be created
(on your PC, for example) and uploaded to Guardium. Depending on how that file
is defined, you may also need to upload a dsm.opt file. For help creating a dsm.sys
file for use by Guardium, consult with your company’s TSM administrator. To
upload a TSM configuration file, use the CLI command, import tsm config.

To use TSM:
1. Click Manage > Aggregation & Archive > Data Archive to open Data Archive.
2. Select the TSM radio button. The TSM parameters will be displayed on the
panel.
3. In the Password box, enter the TSM password that this Guardium unit uses to
request TSM services, and re-enter it in the Re-enter Password box.
4. Optionally enter a Server name matching a servername entry in your dsm.sys
file.
5. Optionally enter an As Host name.
6. Click Apply to save the configuration. When you click the Apply button, the
system attempts to verify the TSM destination by sending a test file to the
server using the dsmc archive command. If the operation fails, you will be
informed and the configuration will not be saved.

Stopping Archiving and Purging


1. Click Manage > Aggregation & Archive > Data Archive to open Data Archive.
2. Clear the Archive or Purge box.
3. Click Apply.

Verify Archiving and Purging Process


1. Click Reports > Guardium Operational Reports > Aggregation/Archive Log to
open the Aggregation/Archive Log.
2. Check to ensure that each Archive/Purge operation has a status of Succeeded.

Reporting on Aggregation and Archiving Activity


1. Click Manage > Reports > Activity Monitoring > Aggregation & Archive >
Aggregation/Archive Log to open the Aggregation/Archive Log.
2. Define a query and build a report.

Restoring

As described previously, archives are written to an SCP or FTP host, or to a Centera
or TSM storage system. To restore archives, you must copy the appropriate file(s)
back to the Guardium system on which the data is to be restored. There is a
separate file for each day of data. Depending on how your archive/purge
operation is configured, you may have multiple copies of data archived for the
same day. Archive and export data file names have the same format:
<daysequence>-<hostname.domain>-w<run datestamp>-d<data_date>.dbdump (or TAR) file.
To restore archived data (and not a system backup), you need to use the GUI screen
called Catalog Archive. The archive and restore operations depend on the file names
generated during the archiving process. DO NOT change the names of archived files.
If a generated file name is changed, the restore operation will not work.

For example: 732423-g1.guardium.com-w20050425.040042-d2009-04-22.dbdump
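
As a quick reference, the example file name breaks down into the format's components
as follows:

   732423             <daysequence>
   g1.guardium.com    <hostname.domain>
   w20050425.040042   w<run datestamp> (wYYYYMMDD.HHMMSS)
   d2009-04-22        d<data_date> (the calendar day of the data in the file)
   .dbdump            the file type (a TAR file may appear instead, as noted in the format)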

Unless you are restoring data from the first archive created during the month, you
will need to restore multiple days of data. That is because when restoring data,
Guardium needs to have all of the information that it had when the data being
restored was archived. After the archive was created, some of that information may
have been purged due to a lack of use. All information needed for a restore
operation is archived automatically, the first time that data is archived each month.

So, when restoring data, you can restore the first day of the month and all the
following days until the desired day, or restore the desired day and then the first
day of the following month.

For example, to restore June 28th, either restore June 1st through June 28th, or
restore June 28th and July 1st.

To restore archived data:
1. Click Manage > Aggregation & Archive > Data Restore to open Data Restore.
2. Enter a date in the From box, to specify the earliest date for which you want
data.
3. Enter a date in the To box, to specify the latest date for which you want data.
4. In the Host Name box, optionally enter the name of the Guardium appliance
from which the archive originated.
5. Click Search.
6. In the Search Results panel, mark the Select box for each archive you want to
restore.
7. In the Don't purge restored data for at least box, enter the number of days that
you want to retain the restored data on the appliance.
8. Click Restore.
9. Click Done when you are finished.

Troubleshooting

Log files are located in /var/log/guard


v On a collector, the basic log file is launch_agg.log
v On an aggregator, it is agg_progress.log

When there is a problem, more detailed logging should be started using the CLI
command: CLI> agg debug start

This causes execution of archive/export/import/purge operations to produce
detailed logging.

On an aggregator, most of the detailed logging is in agg_debug.log

On a collector, there are guard_agg.out and aggregator_err.txt

For purge process:


v on a collector, the debug logging goes to the files specified,
v on an aggregator, application debug logging should be started in the diag utility
and the log is written in /var/log/guard/debug-log/.
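
A typical troubleshooting sequence, pulling together the commands and log locations
named above (how you retrieve the files from /var/log/guard depends on your
environment):

   CLI> agg debug start
   (re-run, or wait for, the failing export/import/archive/purge operation)
   Collector logs : launch_agg.log, guard_agg.out, aggregator_err.txt
   Aggregator logs: agg_progress.log, agg_debug.log, debug-log/ (purge debug)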

On any escalation to Technical Support, please supply detailed log files of the time
when the problem occurred.

Examine two Internal database tables: GDM_Exception and
GDM_CONSTRUCT_VALUE. Look at the amount of data in these tables. This is
important when troubleshooting problems, in particular, if the amount of data in
these tables is very large.

Use the Support-based CLI commands to organize and sort material important to
troubleshooting.

Central Management
In a central management configuration, one Guardium unit is designated as the
Central Manager. That unit can be used to monitor and control other Guardium
units, which are referred to as managed units. Un-managed units are referred to as
stand-alone units.

The concept of a local machine can refer to any machine in the Central
Management system. There are some applications (Audit Processes, Queries,
Portlets, etc.) which can be run on both the Managed Units and the Central
Manager. In both cases, the definitions come from the Central Manager and the
data comes from the local machine (which might also be the Central Manager).

Once a Central Management system is set up, customers can use either the Central
Manager or a managed unit to create or modify most definitions. Keep in mind
that most of the definitions reside on the Central Manager, regardless of which
machine does the actual editing.

Note:
v Using the Remote Source function, a user on the Manager can run any report on
the managed unit (the user must have the correct role privileges) and view data
and information of that managed unit.
v CAS template definitions are shared between all units of a federated
environment just like all other definitions (reports, policies, alerts, etc.)
v It is recommended that a user run CAS Reports on a manager, especially CAS
Reports relating to CAS configurations, hosts, and templates.
v If you use the Custom Domain Builder to create a report that uses some or all
remote tables (tables that live on the manager in a Central Manager
environment, such as Datasource or Comments), this report does not work on a
managed node. No data will be returned.
v The Central Management page of a manager will no longer automatically refresh
itself based on a certain interval. This page will timeout based on the GUI
timeout of the system.
v After some time of inactivity, the system will log you out automatically and ask
you to sign in again. The length of the GUI timeout can be set via the CLI
command show/store session timeout (default is 900 seconds); see the sketch after
this list. Status lights will refresh every five minutes when the session is active.
v If a user is attempting to synchronize or upload any data from the Central
Manager to managed nodes, all nodes that are involved in this type of activity
MUST be on the SAME version of Guardium.
v During the Central Management Redundancy Transition, it can take up to five
minutes for the Unit type Sync to occur depending on how many units are
defined in the Central Management environment.
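
A minimal sketch of checking and changing the GUI timeout with the commands
mentioned in the notes; the value is in seconds, and 1800 is only an example:

   CLI> show session timeout
   CLI> store session timeout 1800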

Guardium Component Services
Identify Guardium components and the locations from which they are taken in a
central management environment.

The Central Manager can be used to monitor and control other Guardium units, which
are referred to as managed units. Unmanaged units are referred to as stand-alone
units.
Table 11. Guardium Component Services
Users, Roles and Permissions
   Central Manager controls the definition of users, roles, groups, and datamart tables for all managed systems. The Central Manager exports the complete set of user, security role, group, and datamart table definitions on a scheduled basis or on demand. The managed units update their internal databases on an hourly basis. As a result, there might be a delay of up to an hour between the time users, roles, permissions, or datamart tables are added or modified on the Central Manager and the time that the managed unit applies those updates.
   Note: If you have Guardium users or security roles that are defined on an existing stand-alone unit that is about to be registered for central management, those definitions will not be available after the system is registered, unless those users and security roles have also been defined on the Central Manager. You cannot administer users or security roles on a managed unit. Those definitions can be administered only when logged on to the Central Manager. When a unit is unregistered for central management, all added users and security roles are removed, leaving only the default users (admin, accessmgr). When installing an Accelerator add-in product (PCI, SOX, etc.) in a Central Manager environment, install it first on the Central Manager and then on the managed unit. Add any roles and users as required for the Accelerator on the Central Manager (and those will be synchronized with the managed unit from there). Accelerator documentation is contained within the Accelerator module. See an overview of the PCI Accelerator at the end of this Component Services table.
Aliases and Groups
   On all processes that automatically generate aliases or groups (for example, import of user groups from LDAP, group generation from queries, alias generation from queries, classifier, etc.), if the same group or alias is automatically generated on more than one managed machine (managed by the same manager), then it might conflict with an existing group or alias, which will not be replaced.
Audit Processes
   The definitions of the Audit Process itself and all of its corresponding tasks are saved to the Central Manager and available to all managed units. However, Schedules, Results, and To-Do lists are saved on the local machine. This means that the same Audit Process tasks can be run on all Managed Units, plus the Central Manager, but at different times on different machines, which can be useful if the Managed Units have different peak load periods. Each machine has its own set of results, which are based on the data that the machine has collected, and each machine has its own set of To-Do lists for all users. Audit Process definitions are exported from the Central Manager to the managed units as part of the user synchronization process (see Synchronizing Portal User Accounts). When audit process results have been produced, the results are available to users, but on managed units there might be a delay of up to an hour before reports or monitors such as Outstanding Audit Process Reviews are updated.
Queries
   Each query can get database information from only a single machine. Queries that require access to information including both Central Manager definitions and Managed Unit data show no data, or missing data.

Policies
   Policy definitions are saved on the Central Manager. However, when you install a policy on a Managed Unit, a local copy is made and saved on the Managed Unit. The reason is that the Managed Unit needs to keep monitoring database activity and using the policy even when the Central Manager is not available for any reason.
   Note: Installing a policy on a managed node will not upload this policy to the Central Manager until Refresh on the Central Manager is clicked. Versions must be the same between Central Manager and Managed Unit when installing policies; otherwise, policies will not install and errors are generated.
Reports
   Report definitions are saved on the Central Manager. When regenerate portlet is called on a Central Manager, it also sends a management (https) request to all managed units to regenerate the portlet (with the report ID). When regenerate is called on a managed unit from the screen (not from a management request), it sends a management request to the manager to refresh the portlet (this would also send it to all units). There is a persistence mechanism for management requests for the case that a unit is down; see the sections within this topic on registration and policy installation.
   From the Central Manager, reports and audit processes can use data from a managed unit but not from managed aggregators. The managed unit is selected as a run-time parameter, is referred to as a remote datasource, and is presented as a filtered drop-down selection list containing only managed units. When an audit process references a remote datasource, that audit process can be run from the Central Manager only, so it will not appear in a list of audit processes that are displayed on a managed unit.
   Note: Certain reports on a Central Manager of the Sniffer Buffer Usage domain (for example, Request Rate, CPU Usage, Buffer Usage Monitor) will NOT display any data. The reports will be empty.
Security Assessment
   Like the Audit Process, the definition of the Security Assessment itself is saved to the Central Manager, but the results are saved on the local machine. This means that the same Security Assessment can be run on all Managed Units, plus the Central Manager.
Baselines
   Baselines are always saved on the Central Manager. However, baselines are GENERATED using the logged data that is local to the machine on which the baseline is generated. Therefore, if you want to include constructs from all Managed Units, you must regenerate the baseline on ALL Managed Units and merge the new results into the existing baseline.
Comments
   Comments can be saved on either the local machine or the Central Manager, depending on what the comment is associated with. If the Comment is associated with a definition that resides on the Central Manager, then it is also saved on the Central Manager. If the Comment is associated with a Result on the local machine, OR something specific to a Managed Unit (like an Inspection Engine), the Comment is also saved on the local machine.
Schedules
   Schedules are always saved on the local machine, even when the definition is saved on the Central Manager.
Non-Central Manager Tasks
   When a server is configured as a Central Manager, you must be aware of the tasks that cannot be performed on that unit, but rather must be performed on other (non-Central Manager) units. Inspection engines cannot be defined on the Central Manager and can be created only on the Managed Units. But Inspection engines can be viewed from the Central Manager.
Upgrade Considerations
   It is recommended to have your Central Manager and managed units on the same version. The Central Manager should be upgraded first, and then the managed units should follow. Having a manager on a different version than its managed units should be a temporary situation, and it is highly recommended to upgrade all managed units to the same version as the manager. Run Sync (Refresh) on all managed nodes after upgrading, in order for these managed nodes to recognize the proper software version that they are running.

PCI Accelerator for Compliance
   The PCI Data Security Standard consists of twelve basic requirements. Many of the requirements are focused on protecting physical infrastructure (for instance, Requirement 1: Install and maintain a firewall configuration to protect data) or implementing procedural best practices (for instance, Requirement 5: Use and regularly update anti-virus software). However, an extra emphasis is placed on real-time monitoring and tracking of access to cardholder data and continuous assessment of database security health status (for instance, Requirement 10: Track and monitor all access to network resources and cardholder data).
   Guardium's PCI Accelerator for Database Compliance is tailored to simplify the organizational processes that are needed to support these monitoring and tracking mandates and to allow for cardholder data security. The Accelerator report templates can be customized to directly reflect specific organizational and regulatory requirements. You can access these templates using the tabs that are provided:
   v PCI Data Security Standard overview
   v Plan and Organize
   v PCI Req. 10: Track and Monitor Access
   v PCI Req. 11: Regularly Test and Validate
   v PCI Policy Violations Monitoring
   Other tools in the Guardium family of solutions that are available to help meet regulations include the following:
   v Cardholder Database Access Map - A graphical map of access between cardholder database access clients and servers. This map, which is located under the access map capabilities, provides an at-a-glance view of activities by access type, content, and frequency.
   v PCI Compliance Report Card - A detailed view of cardholder database access security health that is used to automate the compliance processes with continuous real-time snapshots customized for user-defined tests, weights, and assessments. The Report Card can be generated using security assessment.
   v Full Audit Trail - The non-intrusive generation of a full audit trail for data usage and modifications that are required by regulatory compliance.
   v Automated Scheduling - Automated scheduling of PCI work flows, audit tasks, and dissemination of information to responsible parties across the organization.

The following table can help identify which components are taken from which
location in a central management environment.
Table 12. Components and Location in Central Manager Environment
Central Manager                   Managed Unit
Users                             System Configuration
Security Roles                    Inspection Engines
Application Role Permissions      Alerter (configuration)
Queries                           Anomaly Detection
Reports                           Session Inference
Time Periods                      IP-to-Hostname Aliasing
Alerts                            System Backup
Security Assessments              Aggregation / Archiving
Audit Process Definitions         Custom Alerting
Privacy Sets                      Custom Identification Procedures
Baselines                         Exported csv Output
Policies                          Schedules
Groups                            DB Auto-discovery Configurations
Aliases                           Audit Process Results

Users, Security Roles, Audit Process Definitions, and Groups are exported from the
Central Manager to all managed units on a scheduled basis, as described later.

From the Central Manager, the administrator can:


v Register Guardium units for management
v Monitor managed units (unit availability, inspection engine status, etc.)
v View system log files (syslogs) of managed units
v View reports using data on managed units
v View main statistics for managed units
v Install Guardium security policies on managed units
v Restart managed units
v Manage Guardium inspection engines on managed units
v Maintain the complete set of Users, Security Roles, Groups, and Application Role
Permissions that are used on all managed systems
v Patch distribution
v Distribute Uploaded JAR files
v Distribute Patch Backup Settings
v Distribute Authentication Config
v Distribute Configurations

Note: Application Role Permissions can also be changed by the administrator from
any managed unit. When this happens, the permissions are changed for all
managed units.

Implementing Central Management


Make one machine into a Central Manager, connect the other machines into a
Central Management system, and register the Managed Units to communicate with
the Central Manager.
v Implementing Central Management in a New Installation
v Implementing Central Management in an Existing Installation
v If the Central Management Unit is unavailable

Implementing Central Management in a New Installation


Make one Machine the Central Manager, use the same shared secret, register units,
and group managed units.

Make one machine the Central Manager

The first step is to make one machine into a Central Manager. Select a machine,
and then complete the following steps.
1. Log in to the CLI of the machine that you want to make the Central Manager.
2. Enter store unit type manager. This step makes the machine a Central
Manager; however, it is not yet managing anything.
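
For example, from the CLI of the designated machine (the show unit type command is
included here only as an assumed way to confirm the change; the store command is
the one described in this procedure):

   CLI> store unit type manager
   CLI> show unit type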

Use the Same Shared Secret

After you have a Central Manager, you must connect the other machines into a
Central Management system. For security reasons, it is a requirement that the
communications between the machines be encrypted by using the same shared
secret. To do this step, do the following action items.
1. Click Setup > Tools and Views > System to open System.
2. Set the shared secret to the same string on all systems.

Registering Units:

Register managed units to communicate with the Central Manager.

You can register Guardium units for central management either from the Central
Manager or from the unit itself. Regardless of how the registration is done, the
Central Manager and all managed units must have the same system shared secret.
If the unit to be managed is already registered for central management with
another manager, unregister that unit from that manager before you register it with
the new manager. Be sure to understand exactly what happens to that unit when it
is registered and unregistered for central management.

Note: If the user that is logged in to a managed unit does not exist on the Central
Manager, the session is invalidated. It remains invalidated until the unit is
registered with a Central Manager.

What Happens during Registration

The following actions happen on registration.


v The unit type is set to managed and the manager IP is stored.
v The product key of the manager is applied. (The license key is not propagated
with Ping or User sync. It is sent on registration or when the system refreshes.)
v All job scheduling is reset to default.
v All psml files (portal GUI customizations) are removed.
v All local users and roles are removed.
v The list of threshold alerts that are not evaluated is reset.
v User roles and permissions from the manager are loaded.
v Custom classes, user-uploaded JAR files, and the LDAP truststore from the
manager are uploaded.
v The database connection from the managed unit to the manager is enabled.
v The database connection from the manager to the managed unit is enabled.
v The CAS listener is started if needed.

After registration, all definitions of reports, queries, groups, policies, audits, and
more are retrieved from the Central Manager.

If the Registered Unit Status Remains Offline

If you know the unit that is registered is online and accessible from the Central
Manager, but its status remains offline, then complete the following steps.
v Verify that the unit to be managed is online, accessible, and operational by using
a browser window to log in to the Guardium system on that unit.
v Click Refresh for the unit.
v Check that you entered the correct IP address for the unit.
v Check that the unit has the same shared secret as the Central Manager.

Note: If the registered unit is offline, the registration request persists. It is
resent to the specified IP and port at a set interval until the unit registers. A
registration request that does not succeed expires after seven days.

Registering from a Managed Unit

On a managed unit, you can use the GUI to register the unit with the Central
Manager. Otherwise, you can use the CLI register command as described in
Registering a Managed Unit with the CLI.
1. Click Manage > Install Management > Registration to open Registration.
2. For Central Management Host IP, enter the IP address of the Central Manager.
3. For Port, complete the https port for the Central Manager (usually 8443).
4. Click Register.

After you register on the managed unit, it initiates communication with the Central
Manager, and nothing more needs to be done.

Note: The central management unit must be online and accessible by this unit
when you register for central management. In contrast, when you register units for
management from the central management unit, you can register units that are not
currently accessible.

Registering a Managed Unit with the CLI


1. On the managed unit, log in to the CLI.
2. Type register management <Manager IP> <Manager Port>

After you register on the managed unit, it initiates communication with the Central
Manager, and nothing more needs to be done.
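
For example, a minimal sketch of the CLI registration (the IP address is a placeholder; 8443 is the usual https port noted earlier):

    # Register this unit with the Central Manager at 10.10.9.248, port 8443
    register management 10.10.9.248 8443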

Unregistering from a Managed Unit:

When a unit is unregistered, always unregister from the Central Manager. This
method is the only way that the Central Manager decrements its count of managed
units.

Unregistering from the managed unit does NOT unregister the unit on the Central
Manager. The Central Manager still counts that unit as a managed unit for
licensing purposes and thinks the unit is managed. It might not allow another unit
to be registered with the Central Manager. The unregister function on the managed
unit is included for emergency use ONLY. If a manager is no longer in service,
then you must unregister the unit before you can register it to another manager.

If you unregister a unit from the managed unit itself, it still shows on the Central
Manager screen. Clicking Refresh for that unit re-registers it. Selecting any other
operation for that unit displays a message that the unit is no longer managed and
removes it from the manager.

On a managed unit, you can use the GUI to unregister the unit with the Central
Manager. Also, you can use the CLI unregister command as described in
Unregistering a Managed Unit with the CLI.
1. Log in to the Guardium GUI of the unit to be managed as the admin user.
2. Click Manage > Install Management > Registration to open Registration.
3. Click Unregister.

What Happens during Unregistration

The following actions take place upon unregistration.


v The unit type is set to stand-alone.
v The manager IP is cleared.
v The product key is cleared (license is null until registration to new manager or a
license is loaded manually).
v The list of threshold alerts that are not evaluated is reset.
v All job scheduling is reset to default.
v Psml files are removed.
v All users but the default users (admin, accessmgr) are removed.
v The database connection from managed to manager is disabled.
v The GUI is restarted.

After unregistration, all definitions of reports, queries, groups, policies, audits, and
more are retrieved from the local database; the definitions that are stored on the
Central Manager are no longer accessible.

If you are unsure about how to verify, contact Guardium Support before you
unregister the unit.

Unregistering a Unit from the Central Manager


1. Log in to the Guardium GUI of the Central Manager as the admin user.
2. Click Manage > Install Management > Registration to open Registration.
3. Mark the check box for the managed unit you want to unregister.
4. Click Unregister.

Unregistering a managed unit from the Central Manager screen removes it from
the managed unit list and sets the unit to be a stand-alone unit.

Note: The product key of the unit is removed; unless the unit is registered to
another manager, the product key must be entered manually.

Unregistering from a Managed Unit

On a managed unit, you can use the GUI to unregister the unit with the Central
Manager. Also, you can use the CLI unregister command as described in
Unregistering a Managed Unit with the CLI.
1. Log in to the Guardium GUI of the unit to be managed as the admin user.

2. Click Manage > Install Management > Registration to open Registration.
3. Click Unregister.

To unregister a Managed Unit by using the CLI, complete the following steps.
1. On the Managed Unit, log in to the CLI.
2. Type unregister management.

After you have unregistered from the Managed Unit, it severs communication with
the Central Manager, and nothing more needs to be done.
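
As an illustrative sketch, and keeping in mind the emergency-use caution earlier in this topic:

    # Run from the managed unit's CLI; this severs communication with the Central Manager
    unregister management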

Synchronizing Portal User Accounts:

Manage portal user synchronization by using the Central Manager.

About this task

As mentioned earlier, the Central Manager controls the definition of Users, Security
Roles, Groups, and datamart tables for all managed units. The Central Manager
makes an encrypted and signed copy of its complete set of User and Security
Roles. In addition, the Central Manager transmits that information to all managed
units. Furthermore, some other definitions that are required for local processing
(Groups and Group members, Audit processes, Aliases, and more) are also copied.
The managed units then update their internal databases on an hourly basis. This
process means that there might be a delay of up to an hour before using these
roles or datamart tables.

A full user synchronization cycle occurs on registration or by pressing Refresh
from the Central Management screen. In both cases, the synchronized information
is sent from the manager and loaded on the managed units immediately.

Note: Use caution when setting the schedule so that it does not interfere with
other scheduled jobs like Import which can fail to start.

Procedure

Click Manage > Central Management > Portal User Sync to manage portal user
synchronization.
1. Click Modify Schedule to change the user synchronization task schedule by
using the standard task scheduler.
2. If the task is actively scheduled, click Pause to stop further scheduled
executions.
3. If the task is paused, click Resume to start running the task again (according to
the defined schedule).
4. Click Run Once Now to run the synchronization task immediately.

Note: The task that is scheduled or Run Once Now refers to the collection of
data and its transmission to the managed units only. The managed units might
not use that data to update their user tables until up to 1 hour after it is
received.

Implementing Central Management in an Existing Installation


Implement Central Management in an existing Guardium environment and
migrate a CAS collector with active instances to be managed.

In an existing Guardium environment, refer to the procedure outlined to develop a
plan for implementing central management. If you are converting an existing
Guardium unit to a Central Manager, keep in mind that a Central Manager cannot
monitor network traffic. For example, inspection engines cannot be defined on a
Central Manager.
1. Select a system shared secret to be used by the Central Manager and all
managed units. For more information, see the system shared secret in System
Configuration.
2. Install the Central Manager unit or designate one of the existing systems as the
Central Manager. In either case, use the store unit type command to set the
manager attribute for the Central Manager.
3. Any definitions from the stand-alone unit that you want to have available in
the central management environment must be exported before the stand-alone
unit is registered for management. Later, those definitions are imported on the
Central Manager. BEFORE exporting or importing any definitions, follow the
procedure that is outlined for each stand-alone unit that is to become a
managed unit. Read through the introductory information under
Export/Import Definitions.
v Decide which users, security roles, queries, reports, groups, time periods,
alerts, security assessments, audit processes, privacy sets, baselines, policies,
and aliases from the stand-alone system you want to have available after the
system becomes a managed unit. Ignore any components on the stand-alone
system you do not want to have available.
v Compare the security roles and groups that are defined on the stand-alone
unit with those defined on the Central Manager. Under central management,
a single version of these definitions applies to all units. If a security role with
the same name exists on both systems and it is used for different purposes,
add a new role on the Central Manager and assign the new role to the
appropriate definitions after they are imported.
v If the same group name exists on the stand-alone unit and the Central
Manager but it has different members, create a new duplicate group on the
stand-alone system, taking care to select a group name that does not exist on
the Central Manager. In all of the definitions to be exported, change the old
group name references to new group name references.
v Note all security roles that are assigned to the definitions that are exported from
the stand-alone system. When definitions are imported, they are imported
WITHOUT roles, so you must add the roles manually.
v Check the application role permissions on each system. If any security roles
assigned to an application on the stand-alone unit are missing from the
Central Manager, add them to the Central Manager.
v Export all queries, reports, groups, time periods, alerts, security assessments,
audit processes, privacy sets, baselines, policies, and aliases from the
stand-alone system that you want to have available after the system becomes
a managed unit. (See Export/Import Definitions) Do not export users or
security roles. If you are unsure about a definition, export it in a separate
export operation so that you can decide in the future whether to import that
definition to the Central Manager. After you register for central management,
none of the old definitions from the stand-alone unit are available.
v On the stand-alone unit, create PDF versions of audit process results and store
them in an appropriate location. Under central management, only the audit
results produced under central management are available.

v On the stand-alone unit, instruct all users to remove all portlets that contain
custom reports, and to not create any new reports until the conversion to
central management is complete.
v On the Central Manager, manually add all users from the stand-alone unit.
v On the stand-alone unit, delete all user definitions except for the admin user
(which cannot be deleted).
v Register the stand-alone unit for central management. See Registering Units
for Central Management.
v On the Central Manager, import all definitions that are exported from the
stand-alone system. Check to make sure that references to included items
(receivers in alert notifications, for example) are correct. Reassign security
roles, as necessary, to all imported definitions.
v Inform users of the managed unit that they must use the Report Builder
application to regenerate the portlets for any custom reports they want to
display in their layouts.

Migrating a stand-alone CAS collector to managed

Use the following steps when you migrate a CAS collector with active instances to
managed.
1. Export the CAS host definitions from the stand-alone collector.
2. Register the stand-alone collector for central management.
3. Restart the CAS host from the GUI of the now managed collector.
4. Import the CAS host definition to the manager.
5. Restart the CAS host from the GUI of the managed collector again.

After these steps are performed, the CAS collector has the same instances and
monitors the same files that it did when it was stand-alone.

Note: The CAS data that was collected when it was a standalone is deleted. There
is no collected CAS data unless a file changes. There is no collected baseline data
for each file.

Using Central Management Functions


Use Central Management functions to synchronize portal user accounts, monitor
managed units, and install security policies on managed units.

Monitoring Managed Units


Monitor managed units by using Central Management.

To monitor managed units:


1. Log in to the Guardium GUI of the Central Manager as the admin user.
2. Click Reports > Guardium Operational Reports > Managed Units to open
Managed Units.

Each component of the Central Management pane is described in the table.


Table 13. Monitoring Managed Units
Control Description
Select all check box Mark this box in the shaded area of column one to select all managed units.

Unselect all Clear all managed units.
Check box Mark this box to select the unit for wanted operation.
Refresh unit information Refreshes all information that is displayed in the expanded view of that unit and
issues new requests to that unit. This action also causes a full user
synchronization cycle.
Reboot unit Reboots the unit at the operating system level. By default, the Guardium portal
is started at startup.
Restart unit portal Restarts the Guardium application portal on the managed unit. You can then log
in to that unit to do Guardium tasks (defining or removing inspection engines,
for example).
View unit SNMP attributes Opens the SNMP Viewer pane in a separate window. Clicking the refresh icon in
the SNMP Viewer pane refreshes the data in the window.
View unit syslog Opens the Syslog Viewer in a separate window, displaying the last 64 KB of
syslog messages. Clicking the Refresh icon in the Syslog Viewer pane refreshes
the data in the window.
Shortcut to unit portal Opens the Guardium login page for the managed unit, in a separate browser
window.
Unit Name The host name of the managed unit. If you hold the mouse pointer over the unit
name, its IP address displays as a tooltip. If the host name changes on the unit,
the Central Manager no longer sees that unit when automatically refreshing the
Online status. If you suspect the host name was changed, use Refresh on the
toolbar to obtain the changed host name and update the displayed current Online
status and other information for that unit.
Online Indicates whether the unit is online. If the green indicator is lit, the unit is
online; if the red indicator is lit, the unit is offline. The Central Manager
refreshes this status at the refresh interval that is specified in the central
management configuration (1 minute by default). If an error occurred connecting
to a unit, the error description can be viewed as a tooltip by hovering the mouse
pointer over that unit's record in the management table.

Inspection Engines Click the icon to expand the list of inspection engines; click the icon to hide
the list of inspection engines.

From here, depending on status, you might stop or start the inspection engine.

The information that is displayed for each inspection engine is as follows (This
information is fetched from the managed unit when the Refresh is pressed, not
on every ping):

Name - The name of the inspection engine.

Protocol - The protocol that is monitored by the inspection engine: Oracle,
MSSQL, Sybase, Informix, or DB2.

Active on Startup - Indicates if the inspection engine starts on system startup.

Exclude From IP - Indicates if the list of from-IP addresses is to be excluded (not
examined).

From-IP/Mask - A list of the IP addresses and subnet masks of the clients whose
database traffic to the To-IP/Mask addresses the inspection engine monitors.

Ports - The ports on which database clients and servers communicate; can be a
single port, a list of ports, or a range of ports.

To-IP/Mask - A list of IP addresses and subnet masks of servers whose traffic
from the corresponding client machine (From-IP/Mask) is monitored.
Installed Security Policy The name of the security policy that is installed on the managed unit. This field
is updated on every ping.
Model The Guardium model number of the managed unit.
Version The Guardium version number of the managed unit.
Last Patch The last patch installed.
Last Ping Time The last time that the unit was pinged by the Central Manager to determine the
managed unit's online/offline status.
Selected Units
Group Setup Group Setup opens a new window that allows the user to maintain groups;
creating new groups, removing groups, and associating managed units with
groups.
Unregister Unregister all selected units.
Restarting
Reboot Reboot the selected units.
Restart portal Restart the selected portal.
Restart Inspection Engines Restart the inspection engines of the selected units.
Distribution
Refresh Refresh the selected units.
Install Policy The policy name is a link that opens a new window with the policy's detail.
Patch Distribution Patch Distribution opens a new screen that displays an available patch list with
dependencies and allows you to select a patch and install it on all selected units.
A patch can be scheduled up to one year in the future.

Distribute Uploaded JAR files Click Harden > Configuration Change Control (CAS Application) > Customer
Uploads. Then, enter the name of the file to be uploaded, or click Browse to
locate and select that file. Upload one driver at a time.

Click Upload. You are notified when the operation completes, and the file that is
uploaded is displayed. This action brings the uploaded file to the Central
Manager.

Select a check box of the managed unit or units where these JAR files are to be
distributed. Click Distribute Uploaded JAR files.
Distribute Patch Backup Settings This setting distributes the following to selected units:

PATCH_BACKUP_FLAG; PATCH_AUTOMATIC_RECOVERY_FLAG;
PATCH_BACKUP_DEST_HOST; PATCH_BACKUP_DEST_DIR;
PATCH_BACKUP_DEST_USER; PATCH_BACKUP_DEST_PASS
Distribute Authentication Config Select the managed units that receive the distribution of the Central Management
authentication.

Click Distribute Authentication Config to distribute the authentication
configuration to all managed units selected.

Distribute Configurations The following configurations are distributed to sync parameters between the
Central Manager and the managed units:
v Anomaly Detection - Active on startup, Polling interval
v Alerter - all fields
v Data Archive - all fields
v Global profile - Concurrent Logins, Data Level Security, all fields except
Named Templates (which are already synced), PDF footer text, and logo image
v IP-to-Hostname Aliasing - both check boxes
v Results Archive - all fields
v Results export - all fields
v Session Inference - all fields
v System Backup - all fields
v Data export - all fields

Some of these configurations do not take effect until the portal is restarted
(Anomaly Detection, Session Inference). Other processes, such as the Alerter,
need to be restarted, either directly through the admin portal of the managed
unit, or by rebooting all relevant managed units from the manager.

The Distribute Configurations does not restart the managed units. There is a
separate icon for each managed unit to be restarted.

Restart Portal restarts all of the selected units.

After Distribution, a message will display saying that the managed units will
need to be restarted for all the configurations to take effect on managed units.

Each parameter that has scheduling has a second check box. When this second
box is checked, this parameter's scheduling is distributed.

See Distribute Configuration for information on selectively distributing
configurations.

Reboot or restart portal?

Alerter

Active on Startup check box. Each time the appliance restarts, the Alerter is
activated automatically.

GUI restart does not take the Active on Startup value.

Distributing configuration from Central Manager to managed units needs a
reboot on managed units to take full effect.

The Alerter needs to be restarted manually on the managed units through the
admin portal (Admin Console > Alerter). Since this restart cannot be done from
the Central Manager, restarting the managed units from the Admin Console
achieves the same effect.

Anomaly Detection

Active on Startup check box. Each time the appliance restarts, Anomaly
Detection is activated automatically.

GUI restart takes the Active on Startup value.

Distributing configuration from Central Manager to managed units needs a
portal restart on managed units to take full effect.
Register New Opens the Unit Registration pane to register a new unit for management.
Patch Installation Status The Patch Installation Status screen displays, for each unit, failed installations
and discrepancies, for example, a patch that is installed on only some of the
units, regardless of whether it failed on the other units or was not installed.
Show Distributed Map Displays a map of the Central Manager unit and all managed units.

Installing Security Policies on Managed Units


Install a security policy on a managed unit.

About this task

To install a security policy on a managed unit:

Procedure
1. Click Setup > Tools and Views > Policy Installation to open Currently
Installed Policies and the Policy Installer.
2. From the Policy list, select the policy that you want to install.
3. From the list, select an installation action. After you select an installation action,
you are informed of the success (or failure) of each policy installation. If a
selected unit is not available (it might be offline or a link might be down), the
Central Manager informs you of that fact. It continues attempting to install the
new policy for a maximum of seven days (on the condition that the unit remains
registered for central management).
4. The available installation actions include the following items:
a. Install and Override - deletes all installed policies and installs the selected
one instead
b. Install last - installs the selected policy as the last one in the sequence;
the policy is installed after all currently installed policies and has the
lowest priority
c. Install first - installs the selected policy as the first one in the sequence;
the policy is installed before all currently installed policies.

Note: If you install a policy from the Central Manager, the selection of Run
Once Now (and scheduler) updates existing groups within the installed
policies.
To load changes to rules, including addition and subtraction of groups, you
must either:
a. Initially install policies from the Collector, or
b. Reinstall policies from the Collector or Central Manager.

Viewing Management Maps


Select Distributed Map to display a map of the Central Manager and all managed
units.

To view a map that shows all managed units, click Manage > Central
Management to open Central Management. Then, click Show Distributed Map to
display a map of the central manager unit and all managed units.

The following table describes the symbols that are used in the map.

Distributed Map Symbols


Table 14. Distributed Map Symbols
Symbol - Description
Desktop Computer - The Central Manager Unit, labeled with its host name.
Rack Mount Computer - A managed unit, labeled with its host name.
Disk with CPU - An aggregator unit, labeled with its host name.
Blue Arrow - A blue arrow that is labeled with the letter M connects the Central
Manager Unit with all Managed Units (which are not also aggregation units).
Yellow Arrow - Yellow arrows that are labeled with the letter A connect
Aggregation Units with the units aggregated (unless the unit is also a Managed
Unit). The arrow indicates the direction of aggregation.
Green Arrow - Green arrows that are labeled with the letters A/M relate Managed
Aggregation Units to the Central Manager Unit. The arrows indicate the direction
of aggregation (and might be included on both ends if the Central Manager Unit
is also an aggregation unit).

Central Patch Management


Provide visibility and control over patch installation, status, and history.

About this task

On a Central Management cluster, Central Patch Management provides a way to
install patches on managed units from the Central Manager, with visibility and
control over patch installation, status, and history.

When you install a patch, a date and time request can be specified to indicate
when the patch is installed. If no date and time is entered or if now is entered, the
installation request time is immediate.

Note: A patch that is installed successfully can be installed again. This fact is
important for batched patches. A warning informs you if the patch is already
installed.

Log in to the Guardium GUI of the Central Manager as the admin user:

Procedure
1. Click Reports > Guardium Operational Reports > Installed Patches to open
the Installed Patches
2. Do one of the following steps:
a. Click Patch Distribution - Patch Distribution opens a new screen that
displays an available patch list with dependencies and allows you to select
a patch and install it on all selected units. The list of available patches is
constructed from the available patches, evaluated against the currently
installed patches on each of the selected units along with the dependency
list of available patches. Patches that are available but not installable (a
dependent patch is missing) are disabled and cannot be selected. Only one
patch can be selected and installed at a time. After a patch is selected and
the installation is pushed, a command is sent to all selected units to install
that patch. This process of installing patches happens in the background.
b. Click Patch Installation Status. The Patch Installation Status screen
displays, for each unit, failed installations and discrepancies, for example,
a patch that is installed on only some of the units, regardless of whether it
failed on the other units or was not installed.
c. Click Delete - Click this button to delete the patch file from the Central
Manager, and remove the patch from the Available Patches list.
See also the CLI commands store system patch installation and delete
scheduled-patch.
Patch Management troubleshooting - Problem: Patch is not showing in the
available patch list.
Check the following possible causes:
a. The patch file does not exist in /var/log/guard/patches/.
b. The Central Manager and the managed units are on different versions.

Distribute Configuration
Configurations and their schedules, can be distributed, either all or individually,
between the Central Manager and the managed units.

Procedure
1. Select the managed units that receive the configurations.
2. Click Distribute Configurations to display the Distribute Configurations
window.
3. Check the appropriate boxes for those Configurations that you would like
distributed. Use the check box in the header to select all configurations.
4. Check the appropriate boxes for those Schedules that you would like
distributed. Use the check box in the header to select all schedules. If a
configuration is not scheduled, there is no check box for it; 'n/a' displays
instead.
5. Click Distribute to distribute the configurations and schedules.
6. Optional: Click Cancel to abort distribution.

Results

If you use Central Management > Distribute Configurations >
Global Profile, the following values are distributed:
v ACTIVATE_ALIASES
v CUSTOM_DB_MAX_SIZE
v CHECK_CONCURRENT_LOGIN
v HTML_BOTTOM_RIGHT
v HTML_BOTTOM_LEFT
v DISPLAY_LOGIN_MESSAGE
v LOGIN_MESSAGE
v CSV_DELIMETER

v FILTERING_ENABLED
v INCLUDE_CHILDREN_ON_FILTER
v SHOW_ALL_RECORDS
v ACCORDION_DISABLED
v SCHEDULER_RESTART_INTERVAL
v SCHEDULER_RESTART_WAIT_SHUTDOWN
v ESCALATE_TO_ALL
v MESSAGE_TEMPLATE

Distribute Authentication Configuration


Instead of configuring authentication on each appliance separately, Central
Management authentication (Configure Authentication) can be configured once on
the central manager and then distributed to all managed units. This way,
information is entered once and it applies to some or all units; some of the units
may have a different type of authentication.

Procedure
1. Ensure that authentication (Configure Authentication) is configured on both the
central manager and the managed unit. For example, if LDAP authentication is
being used, ensure that LDAP is configured on both the central manager and
the managed unit.
2. Select the managed units to receive the distribution of the central management
authentication.
3. Click Distribute Authentication Config to distribute the authentication
configuration to all managed units selected.

Central Manager Redundancy


Use Central Manager Redundancy or Backup Central Manager (CM) to configure a
secondary or backup CM in case the Primary CM becomes unavailable.

Central Manager redundancy supports the following:


1. Backup Central Manager - the Make Primary CM link becomes available after
the Primary Central Manager loses connection.
2. User layouts will be retained.
3. Users and roles are included in the sync backup and do not rely on Portal User
Sync.
4. User Group Roles data will be retained.
5. A GuardAPI function, make_primary_cm, has been added to allow the switch to
primary Central Manager from the CLI (see the example after this list).
6. Data from Audit Process Builder processes is retained after switching from the
Primary Central Manager to the Backup Central Manager.
7. The Central Management backup includes all the definitions (reports, queries,
alerts, policies, audit processes, and so on), users, and roles, as before.
8. It includes the schedules for enterprise reports, distributed reports, and LDAP.
9. It includes schedules for all audit processes, and schedules and settings for data
management processes such as archive, export, backup, and import.
10. It includes settings for the Alerter and Sender.
11. Users' GUI customizations, custom classes, and uploaded JDBC drivers are
included.
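
As a sketch only for item 5, assuming that GuardAPI functions are invoked from the CLI with the grdapi prefix (verify the exact syntax for your release before relying on it; the # line is a comment, not CLI input):

    # Assumption: run on the Backup CM to make it the primary Central Manager
    grdapi make_primary_cm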

Note: Data (collected data, audit results, and custom table data) is not
included.

Note: Failover with Central Manager load balancing - After failover, if the new
Managed Units connect and then disconnect right away, the correct DB_USER will
not be sent until the failover message is received.

Perform these steps on your development or secondary servers and test. If
successful, then perform these steps on your Primary or live Guardium Servers.
Prerequisite
Retrieve the following patches from IBM:
v SqlGuard-9.0p306.tgz.enc - CM Redundancy Fix for Port and Restart CM
Redundancy process
v SqlGuard-9.0p312.tgz.enc – Fix to current CM Redundancy Sync issue
Install Patches on Central Manager
1. Copy the patches listed in the prerequisite to a server/directory that is
accessible from the now Primary Central Manager (CM).
2. From the now Primary CM, log in as CLI.
3. Install patch 306 and patch 312 on the Primary CM with the CLI
command store system patch install scp. This command copies the
files over to your Guardium server and gives you the ability to install
them (see the example command sequence after this list).
4. Watch these patches being installed with the CLI command show
system patch install.
5. Wait until the patch status shows “DONE: Patch installation
Succeeded.” for both patches.
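
The command sequence on the Primary CM might look like the following sketch (the # lines are comments, not CLI input; it is assumed that the scp variant prompts for the source host, user, path, and password):

    # Transfer and install the patches listed in the prerequisite
    store system patch install scp
    # Monitor the installation until both patches report
    # "DONE: Patch installation Succeeded."
    show system patch install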
Install Patches on Backup CM
1. Log in to the now Primary CM GUI as admin.
2. Select the Setup > Tools and Views and then choose Central Manager.
3. Click check boxes for the Backup CM managed unit ONLY on the
Central Manager.
4. Click Patch Distribution and install all of the patches that you just
installed onto the Primary CM.
Example to install a patch
1. Click Patch Distribution.
2. Click on radio button next to patch 306 description.
3. Click Install Patch Now.
4. Repeat steps for patch 312.
5. Wait approximately 15 minutes to be sure the patch is installed on all
managed servers.
6. To verify, log in as CLI on the Backup CM and run the CLI command
show system patch install from the Backup CM server.
Install Patches on all other managed servers (optional steps)
1. Repeat the previous steps to install both patch 306 and patch 312 on all
managed servers.
2. Verify that all patches have been installed before going to the next
procedure.
After all Patches have been installed on the CM and managed servers
1. Login as admin onto the now Primary CM.

2. Select Setup > Tools and Views and then choose Central Manager. Click
Designate Backup CM.
3. Select Backup CM server from the returned list of eligible Backup CM
candidates.
4. Click Apply.
5. Wait approximately two minutes for the Backup CM to sync and the
NEW Backup CM file to be created and copied to the Backup CM.
6. Wait for two complete rounds of backups to complete (approximately
one hour) for two Backup CM sync files that will be copied to the Backup
CM and can be viewed from the Guardium Monitor tab - Aggregation
Archive Log Report.
7. Select Guardium Monitor and select Aggregation/Archive Log Report
to view the progress of the creation of the Backup CM sync file.
8. Verify the Activity Backup has started and the cm_sync_file.tgz file has
been created from the Aggregation/Archive Log Report.
a. Login as Admin from the GUI.
b. Select Guardium Monitor tab.
c. Select Aggregation/Archive Report.
d. Look for Backup Types.
9. When complete:
a. The patches have been installed on the CM.
b. The patches have been installed on the Backup CM.
c. Option: The patches have been installed on all other managed units.
d. Two Backup CM Sync files have been completed (see
Aggregation/Archive Log file under Guardium Monitor Tab).
e. The following steps outline the process to convert the now Primary
CM and its managed nodes to the Backup CM.

Note:
v IMPORTANT: Wait approximately one hour to be sure at least TWO of
the Backup CM sync files supporting Backup CM have completed.
v The backup schedule for Backup CM sync files is approximately every
30 minutes.
v The process will run on the CM to create a backup CM file and copy
that file to the directory on the Backup CM.
Start the Backup CM process after the two sync file processes have completed
Shut down the Primary CM Guardium Server
If you have no access to shut down the Primary CM, then go directly to the
Backup CM, log in as admin (select Setup > Tools and Views and then
choose Central Management), and click Make Primary CM. Skip to the section
“Steps to start the Backup CM configuration to become the Primary CM”
in this document.
1. Wait approximately five minutes and log in again as admin in the GUI of
the Backup CM.
2. Once the Primary CM is shut down completely, you can continue to
the next step.

Note: If you are logged into the Primary CM and it goes down, you get a
message indicating that the connection has timed out.
Steps to start the Backup CM configuration to become the Primary CM
The secondary CM will not be responsive for approximately five minutes.
Log in after five minutes and the Make Primary CM link will be available.
The link is available under the admin login at Setup > Tools and Views
> Central Management.
1. When the Primary Server goes down, you will get a message on the
Backup CM “Unable to connect to Remote Manager, consider switching
to (the name of the backup CM)".
2. If you decide to switch:
a. Login as admin
b. Select Setup > Tools and Views.
c. Click Make Primary CM (do not click the “Make Primary CM” link
more than once. Also stay on this screen and do not select anything
else during the running of this process. A log file will be created
that you can view to see the progress and completion of this
process.) Be patient as this process will take a while to complete.
There is a safeguard: if you do click this button more than once,
nothing will change with the current running process.
d. Within seconds you should get a message “Are you sure you want
to make this unit the primary CM?” Click OK.
e. Within a few seconds more you will get a message stating “This
may take a few minutes”. The time it takes for the Backup CM to
become the primary CM depends on the amount of data backed up
from the Backup CM sync file and the amount of managed nodes
that switch to the Backup CM which will become the Primary CM.
Click OK.
As soon as you click OK, a log file called
load_secondary_cm_sync_file.log is created that allows you to view the
progress of the switch through to the completion of the Backup CM switch
process. This file can be viewed from your GUI. The following steps
indicate how to view this log file.
f. The last message will take a while to be presented to the screen. It
will be the last message before the Backup CM switch has
completed. The message is “GUI will restart now. Try to login again
in a few minutes and the Backup CM will now become the Primary
CM”. Click OK.
Wait a few minutes for the Backup CM to become Primary and for
all the managed nodes to complete switching over to the new
Primary CM.
While the CM Backup Process is running – viewing the progress log file
From the Backup CM while the Make Primary CM process is running, you
can do the following to view the progress of the Backup CM becoming the
Primary CM.
Prerequisite: You will need the IP of the server you are connected to in
order to view the log files.
1. Log in as CLI on your Backup CM server from a PuTTY (SSH) session.
2. From the CLI, run fileserver <your IP address> 3600, for
example: fileserver 9.70.32.122 3600

3. From a browser, enter the URL that is displayed in the CLI screen after
you enter the command, for example: http://joe.server.guardium.com
(the server name will be the Backup CM server).
The fileserver window opens in the UI to select a file – select Sqlguard
logs.
4. Select the file: load_secondary_cm_sync_file.log. (The file will display in
a list of files from Step #3.) This will allow you to view the progress of
the Backup CM becoming the Primary CM.
Locate log file for viewing
CM Backup Process is complete when you see this line in the
load_secondary_cm_sync_file.log
Import CM sync info - DONE
5. Wait approximately 10 minutes for all the Managed units to become
available to the New Primary CM.
After the Backup CM becomes the Primary and all Managed nodes are now
managed by the Backup CM server
You can now bring up the old CM server. Once it is up and running,
perform the following steps to add it as the Backup CM server.
1. Reboot Old Primary CM.
2. Once the Server is up, login as CLI.
3. Delete the manager unit type, enter delete unit type manager.
4. Wait until it completes and you get an OK message from the CLI.
5. VERY IMPORTANT: Wait approximately five minutes for the GUI to
completely restart even after the deleted unit type displays a
successful message and the GUI restart message.
6. After five minutes, log into the New Primary CM to register Old CM
as a managed unit.
7. Login as admin on New Primary CM.
8. Select Setup > Tools and Views > Central Management.
9. Click Register New.
10. Enter IP of the Old Primary CM that you just rebooted.
11. Enter 8443 as Port.
12. Click Save. (IMPORTANT: Be patient, do not click this button twice).
13. Wait a minute for the Old Primary CM to become registered.
14. Make the Old Primary CM a New Backup CM.
15. Click Designate Backup CM.
16. Click on Old Primary CM server.
17. Click Apply.
18. Old Primary CM server is NOW the New Backup CM server.
19. Refresh Central Management screen to see the New Unit type Backup
CM defined.
20. This task is complete.

Investigation Center
Investigation Center is an extension of the Aggregation Servers. Investigation Users
(once defined) can restore data and results of selected historic dates and perform
forensic investigation. Once the days (dates) are restored, the investigation users
can define and view reports using the standard Guardium UI, only in the scope of
the investigated dates.

Each Guardium appliance maintains a Catalog of all the data and results archived.
The Catalog contains information about the archive, its location and credentials to
access them. The Catalog is exported from the collectors and merged into a
complete Catalog on the Aggregation Server as part of the aggregation process.
With the Catalog in place, investigation users can now select the desired dates for
restoration and these dates will automatically be uploaded to the Investigation
Center and merged into that investigation user’s view. In addition to merging
collectors’ Catalogs through the Aggregation Server, it is also possible to Export
and Import Catalogs from Setup > Tools and Views.

Users and Roles


In a Guardium aggregation server there is a special investigation role (inv). Users
with the inv role can perform forensic investigations on historic data.

An investigation user for the most part utilizes the same query and report
definitions as any other user would. The biggest difference is that the investigation
user sees only data selected for his investigation database (multiple investigators
can be configured to share an INV database). Selected data can be restored from
archive or viewed from the current database in the case of data that was not
purged yet. An investigation user can also restore archived audit process results
and view them.

Caution: Role inv is a special role which will cause the user to be connected to a
separate, investigation-only internal database. It should be combined with the role
user and in general it is incompatible with all other roles.

Note: To correctly configure an investigation user, the user's Last Name must be
set to the name of one of the three investigation databases, INV_1, INV_2, or
INV_3 (case-sensitive).

When creating an investigation user, it is suggested that the user's name
correspond to, or have some representation that denotes, which investigation
database will be used. For instance, if a user will be using the INV_1 database, the
user's name could be john1 or inv1.

Note: The Run an Ad-Hoc Audit Process button is available on all report screens
for all users except investigation (INV) user.

Audit Process and INV role


If the user is INV, then the audit process finder will show audit processes
according to roles and ownership, but will only allow Clone or New for all audit
processes not owned by INV.

If the user is INV, then the audit process definition menu screen will permit the
following:

v Only Investigation users and/or specific email addresses are allowed as
receivers (no regular users, no groups, no roles other than INV are permitted as
receivers).
v The Events and Additional Columns button within a saved Report Audit Task is
always disabled. No API automation can be specified.
v No schedule can be specified. Audit processes on INV data can be run only
manually by using the Run Now button.
v Only audit tasks of type Report are allowed.
v Active is disabled, Keep Days and Keep Runs fields are disabled.

If the user is not INV, the audit process finder will not display any audit process
owned by an investigation user (regardless of the roles assigned).

When an audit process is run on INV data, the result title is appended with the
words Executed on Investigation center by and the name of the INV user.

A comment is attached to the results specifying the dates and source hosts of the
data mounted on the Investigation database at execution time.

The results can be viewed either from the Audit Process Builder or from the
results navigation list.

Results of audits run on Investigation center cannot be archived and the results are
discarded when investigation data is discarded.

Investigation Context
Guardium’s Investigation Center supports one to three concurrent investigation
periods, dubbed INV_1, INV_2, and INV_3; each can hold separate historic data
and provides the means for forensic investigation of that period. When creating an
investigation user, the user's last name must be either INV_1, INV_2, or INV_3
to associate that user with one of the investigation databases. When logged into
the Investigation Center (using one of the investigation users), a label specifies the
selected investigation period.

GUI

A user with the investigation role will see two additional tabs that are particular to
the Investigation Center.
v Auditing tab gives access to restored audit process results
v Volume management tab allows the user to set or modify the investigation
period, select audit process results to restore and discard data at the end of an
investigation.

Working with Investigation Center


v Restore an Investigation Period
v Restore Audit Results
v View Restore Log
v Viewing Restored Audit Results

Restore an Investigation Period

After logging into the Guardium interface as a user with the inv role:

1. Click Manage > Aggregation & Archive > Data Restore to open the Data
Restore Search Criteria.
2. C
3. Click Data Restore to open the Restored Data panel. If a prior restore was
performed, this panel will display the currently mounted data periods being
used. At this point, you may click Discard Data to un-mount all previously
mounted data periods.
4. Click Re-Select Investigation Period to open the Data Restore Search Criteria
panel.
5. Enter the start date in the From: box for the beginning time period you wish
to search
6. Enter the end date in the To: box for the ending time period you wish to
search
7. Optionally, enter a Host name to aid in filtering the result set on the host
name
8. Click Search to view the result set - this will search the catalog for all archives
matching the search criteria.
9. From the result set produced, check the Select box(es) of those periods you
wish to restore. You may also click Select All or Unselect All to speed the
selection process.
10. Click Restore to restore the selected periods. Depending on the number of
periods to restore, and whether the datasets are local to the system, the restore
process could take a long time.
11. You can monitor the progress of the restore process in the View Restore Log
panel.

Note: Data of any day restored to Investigation Center that falls within the merge
period is also merged into the Guardium application database and is visible by
non-inv users.

Restore Audit Results

A check box in the Audit Process Builder allows you to specify whether the results
of a process should be archived. Only results of processes marked for archive for
which all signers have signed are archived. Results of specific runs are packed,
zipped, and stored; the location is recorded in the catalog and is used by Restore
Audit Results for selection and restore. Archived results from the Guardium audit
process can be restored to an Investigation Center and contain the results, the view
and sign-off trails, as well as the comments associated with these results.

After logging into the Guardium interface as a user with the inv role:
1. Click the Volume Management tab.
2. Click Audit Results Restore to open the Restored Results panel. If a prior
restore was performed, this panel will display the currently restored results
being used. At this point, you may click Discard Data to un-mount all
previously mounted results.
3. Click Audit Results Restore to open the Results Restore Search Criteria panel.
4. Enter the start date in the From: box for the beginning time period you wish
to search.
5. Enter the end date in the To: box for the ending time period you wish to
search.

6. Optionally, enter a Host name, Audit Process, or Run No to aid in filtering the
result set.
7. Click Search to view the result set.
8. From the result set produced, check the Select box(s) of those results you wish
to restore. You may also click Select All or Unselect All to speed the selection
process.
9. Click Restore to restore the selected results. Depending on the number of
results to restore, and whether the datasets are local to the system, the restore
process could take a long time.
10. You can monitor the progress of the restore process in View Restore Log.

View Restore Log

The restore log provides a view of past and current Archive/Restore attempts,
filtered for the user currently logged in. This log enables the user to
validate a successful restore for both data and audit results.

After logging into the Guardium interface as a user with the inv role: Click Restore
Log to open My Restore Log. From this panel you will be able to see the status of
all restore attempts.

Viewing Restored Audit Results

After logging into the Guardium interface as a user with the inv role:
1. Click the Auditing tab.
2. Click the Results Navigation link to open the Audit Process Finder panel.
3. From the drop down list (if there are audit processes), select a process.
4. Click View to open another window and view the available reports for the
audit results.

Chapter 4. Managing your Guardium system
Management tasks include monitoring your system’s health and managing artifacts
such as groups, domains, and notifications.

Guardium Administration
Guardium administrators perform various administration and maintenance tasks.

Any user assigned the admin role is referred to as a Guardium administrator. This
is distinct from the admin user account.

Admin role Privileges

The Guardium admin role has privileges that are not explicitly assigned to that
role. For example, when a user with the admin role displays a list of privacy set
definitions, all privacy sets defined on the Guardium system display, and the user
with the admin role can view, modify, or delete any of those definitions. When a
user without the admin role accesses the list of privacy sets, that user will see only
those privacy sets that he or she owns (i.e. created), and all privacy sets that have
been assigned a security role that is also assigned to that user.

CLI diag Command Access

Use of the diag CLI command requires an additional password, which can be the
password of any user with the admin role.

If automatic account lockout is enabled (a feature that locks a user account after a
specified number of login failures), the admin user account may become locked
after a number of failed login attempts. If that happens, use the unlock admin CLI
command to unlock it.
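
For example, from the CLI of the affected system:

    # Unlock the admin user account after automatic account lockout
    unlock admin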

Note: The access manager (accessmgr) can unlock accounts from the User Browser.
Open the User Browser by clicking Access > Access Management > User Browser.

Admin user Privileges


The admin user has additional privileges that are not granted to the admin role, as
follows:
v Access to all users' to-do lists
v Owner of imported definitions
v Access management functions

Admin User To-Do List Powers


The To-do List is a workflow automation feature that controls the distribution of
audit process results to users. The admin user has special privileges and
responsibilities in this area. If a user account is disabled, all audit process results
for that user will be reassigned to the admin user automatically. If a user is
unavailable for any other reason, audit process results may be installed in that
user's to-do list, i.e., awaiting sign-off before being released to the next results
receiver. The admin user can open any user's to-do list, and take any actions
available to that user. When the admin user performs any actions on another user's
to-do list, that fact is noted in the audit process activity log, for example, User
admin signed results on behalf of user x.

Imported Definition Ownership

When definitions are exported, all roles are removed, and the owner is changed to
the admin user. This is the only way to control how the definition will be used on
the importing system.

Access Management and the Administrator


For security purposes, there is a separation of duties for the access manager and
admin. Admin users cannot have access manager privileges, and vice versa.

The next time the admin user logs in, access manager functionality will be
available to them. This is possible for the admin user only (and not for other users
having the admin role).

Note:

The same user may have both of these roles through a legacy situation or as a
result of an upgrade. However, current use does not allow the two roles to be
assigned to the same user.

In the past, when a unit was upgraded, the accessmgr role was assigned to the
admin user, and the accessmgr user was disabled.

In this situation, to configure the accessmgr and admin, log in as admin and enable
the accessmgr user, then log in as accessmgr (the default initial password
is guardium), and remove the accessmgr role from the admin user.

Certificates
Check certificates periodically to avoid loss of function. Use CLI commands to
obtain and install new certificates.

Certificate Expiration
Expired certificates will result in a loss of function. Run the show certificate
warn_expire command periodically to check for expired certificates. The command
displays certificates that will expire within six months and certificates that have
already expired. The user interface will also inform you of certificates that will
expire. To see a summary of all certificates, run the command show certificate
summary.
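
For example, as part of a periodic check (the # lines are comments, not CLI input; the warn_expire form follows the usage in this paragraph):

    # List certificates that are expired or that expire within six months
    show certificate warn_expire
    # Display a summary of all certificates on the system
    show certificate summary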

For more information, see the full list of Certificate CLI Commands.

New Certificates

To obtain a new certificate, generate a certificate signing request (CSR) and contact
a third-party certificate authority (CA) such as VeriSign or Entrust. Guardium does
not provide CA services and will not ship systems with different certificates than
the ones that are installed by default. The certificate format must be in PEM and
include BEGIN and END delimiters. The certificate can either be pasted from the
console or imported through one of the standard import protocols.

You can generate a certificate signing request (CSR) with one of the following
commands:
v create csr alias - This command creates a certificate request with an alias.
v create csr gui - This command creates a certificate request for the tomcat.
v create csr sniffer - This command creates a certificate request for the sniffer.

Note: Do not perform this action until after the system network configuration
parameters have been set.

To install a new certificate through the command line interface, use one of the
following commands:
v store certificate gim - This command stores GIM certificates in the keystore.
v store certificate gui - This command stores tomcat certificates in the
keystore.
v store certificate keystore - This command asks for a one-word alias to
uniquely identify the certificate and store it in the keystore.
v store certificate mysql - This command stores mysql client and server
certificates.
v store certificate stap - This command stores S-TAP certificates.
v store certificate sniffer - This command stores sniffer certificates.

To install a new certificate key through the command line interface, use one of the
following commands:
v store cert_key mysql - This command stores the certificate key of a mysql client
and server.
v store cert_key sniffer - This command stores the sniffer certificate key.
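As an example, a minimal sketch of replacing the GUI (Tomcat) certificate using only the commands listed above (the interaction with the CA itself happens outside the CLI):

create csr gui
(submit the generated CSR to your CA and obtain a signed certificate in PEM format)
store certificate gui
show certificate gui

The same pattern applies to the sniffer, GIM, and keystore certificates, using the corresponding create csr and store certificate commands.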

Backup and Default Options

You can choose to restore certificates and certificate keys with the backup or
default parameter. Use the backup parameter to restore a certificate to the last
saved certificate. Use the default parameter to restore a certificate to the original
certificate that Guardium supplied.
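For example (commands taken from the lists in this topic; which one you need depends on the certificate in question):

restore certificate sniffer backup
restore certificate keystore default

The first command rolls the sniffer certificate back to the last saved certificate; the second restores the original keystore certificate that Guardium supplied.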

Changes in Commands

Some certificate commands have been changed.


v csr is now create csr gui.
v create system csr is now create csr sniffer.
v restore keystore is now restore certificate keystore backup.
v restore system-certificate is now restore certificate sniffer default.
v show system certificate is now show certificate sniffer.
v store system certificate is now store certificate sniffer.
v store trusted certificate is now store certificate keystore.
v store certificate console is now store certificate gui.

New Commands

The following commands are available for use.


v create csr alias
v restore certificate keystore default



v restore certificate sniffer backup
v show certificate all
v show certificate gim
v show certificate gui
v show certificate keystore alias
v show certificate keystore all
v show certificate mysql client
v show certificate mysql server
v show certificate summary
v show certificate warn_expired

Deprecated Commands
The following commands have been deprecated.
v csr
v store certificate console
v store system key
v show system key
v store system certificate
v show system certificate

Full List of Commands

Use the following commands to create, restore, show, or store certificates.


v create csr gui
v create csr alias
v create csr sniffer
v restore certificate keystore default
v restore certificate keystore backup
v restore certificate sniffer backup
v restore certificate sniffer default
v show certificate all
v show certificate gim
v show certificate gui
v show certificate keystore alias
v show certificate keystore all
v show certificate mysql client
v show certificate mysql server
v show certificate sniffer
v show certificate summary
v show certificate warn_expired
v store certificate sniffer
v store certificate gui

Unit Utilization Level
Use unit utilization reports to identify under- and over-utilized collectors in your
Guardium system. Unit utilization reporting is not available on systems without a
Central Manager.

Open the unit utilization reports by clicking Manage > Reports > Unit Utilization,
and then selecting one of the reports.

There are four unit utilization reports that you can use:
1. Buff Usage Monitor
2. CPU Tracker
3. Enterprise Buffer Usage Monitor
4. Unit Utilization

Utilization Parameters

All parameters except for number of restarts are averaged for a specific unit over a
specific time range. The number of restarts is a count of the sniffer restarts during
a specific time range based on the different PIDs.

The parameters supported are:


v Number of restarts
v Sniffer memory
v Percent MySQL memory
v Free buffer space
v Analyzer queue
v Logger queue
v MySQL disk usage
v System CPU load
v System var disk usage

Thresholds
For each parameter, two thresholds are defined that separate three utilization
levels: Low, Medium, and High.

Utilization levels:
v Low: value is less than Threshold1
v Medium: value is greater than Threshold1, and less than Threshold2
v High: value is greater than Threshold2

There is also an overall utilization level for each unit. For each period of time, this
level is the highest level reached by any parameter during that period.
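As an illustrative example (the values here are hypothetical, not the shipped defaults): if Threshold1 for System CPU load were 50 and Threshold2 were 80, an average load of 65 for the period would be reported as Medium, and an average load of 85 as High.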

Reporting
View the four unit utilization reports by clicking Manage > Reports > Unit
Utilization.

The Unit Utilization Levels tracking option allows you to create custom queries
and reports.



Using aliases is recommended when using unit utilization data in custom and
predefined reports. Otherwise, utilization levels will display as numbers: 1, 2, 3,
instead of Low, Medium, High.

The attributes are:


v Host name
v Period start
v Number of restarts
v Number of restarts level
v Sniffer memory
v Sniffer memory Level
v Percent MySQL memory
v Percent MySQL memory level
v Free buffer space
v Free buffer space level
v Analyzer queue
v Analyzer queue level
v Logger queue
v Logger queue level
v MySQL disk usage
v MySQL disk usage level
v System CPU load
v System CPU load level
v System var disk usage
v System var disk usage level
v Overall unit utilization level

Note: Each parameter is classified into three levels based on the values of the
thresholds.

Throughput information available in Unit Utilization


Throughput data is collected on each collector unit. The CM consolidates all
throughput data and creates an enterprise custom table that is added to predefined
utilization reports.

Throughput information collected:


v Number of requests (for the period) (from construct instance)
v Number of full SQLs (for the period) (from construct text)
v Number of exceptions
v Number of policy violations

By default, throughput information is collected every hour.

GuardAPI and CLI commands for Unit Utilization

Guard APIs:
v listUtilizationThresholds
v updateUtilizationThresholds
v reset_unit_utilization

CLI commands:
v store monitor gdm_statistics
v show monitor gdm_statistics
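GuardAPI commands are run from the CLI with the grdapi prefix; as a sketch using the names exactly as listed above (the precise invocation syntax and the parameters of updateUtilizationThresholds may differ on your system):

grdapi listUtilizationThresholds
show monitor gdm_statistics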

Customer Uploads
Database Activity Monitor Content Subscription (previously known as Database
Protection Subscription Service) supports the maintenance of predefined
assessment tests, SQL based tests, CVEs, APARs, and groups such as database
versions and patches.

Uploads are used to keep information current and within industry best practices to
protect against newly discovered vulnerabilities. Distribution of updates is done on
a quarterly basis.

Use Customer Uploads to upload the following: DPS update files, Oracle JDBC
drivers, MS SQL Server JDBC drivers, and DB2 for z/OS license JAR files.

Note: If a custom group exists with the same name as a predefined Guardium
group, the upload process will add Guardium in front of the name for the
predefined group.
1. Open Customer Uploads by clicking Harden > Configuration Change Control
(CAS Application) > Customer Uploads.
2. For DPS Upload, click Browse to locate and select the file to be uploaded.

Note: See the Import DPS pane to check which files have been uploaded.
3. For Upload DB2 z/OS License jar, click Browse to locate and select the file.
4. Use Upload Oracle JDBC driver or Upload MS SQL Server JDBC driver to
upload open source drivers. After uploading, you will see the databases added
to the Datasource finder. Upload one driver at a time.

Note: There are two instances where open source drivers are recommended
over Oracle Data Direct drivers or MS SQL Data Direct drivers.
a. To support Windows Authentication for MS SQL Server. In all other uses,
the Data Direct driver pre-loaded in the Guardium appliance is sufficient.
b. When using the Value Change Tracking application for Oracle version 10 or
higher, the open source driver is recommended in order to support using
streams instead of triggers.

Use keywords to search and download open source JDBC drivers (for example:
open source JDBC driver for MS SQL).
5. Use the Central Manager to distribute the .jar file to managed units. After the
file is successfully uploaded, the GUI needs to be restarted on the Central
Manager and the managed units.

Note:

If you will be exporting and importing definitions from one unit to another, be
aware that subscribed groups are not exported. When exporting definitions that
reference subscribed groups, you must ensure that all referenced subscribed groups
are installed on the importing unit (or central manager in a federated
environment).



When uploading DB2 z/OS® license jar files, the license takes effect after the GUI
is restarted.

Note: If the DPS stops for any reason (for example, a server restart or a GUI
restart), it is recommended to wait 30 minutes before starting the DPS upload
process again.
Use a tab-delimited file (.TXT) when creating and saving a Datasource Upload
file from the Customer Upload functionality
If you choose to use a comma-delimited file structure (.CSV), it will not
behave as intended if any column value contains a comma.
Follow these steps:
1. If using Excel, save the file as a tab-delimited (.TXT) file.
2. If using OpenOffice or LibreOffice, save the file as a .CSV file with tab
as the field delimiter.
3. Log in as admin and open Customer Uploads by clicking Harden >
Configuration Change Control (CAS Application) > Customer
Uploads.
4. For Upload CSV to Create/Update Datasources, click Browse..., and
select the tab-delimited file.

Create Datasource for CSV uploaded via the Upload CSV menu
Follow these steps to create a tab-delimited .TXT file containing datasource
information. This tab-delimited .TXT file can then be used with the Customer
Upload function in the Guardium application to create or update many
datasource types.

The function used to import datasources was not always compatible with every
Guardium software release. This procedure enables the upload of any datasource type.

The following table lists the header columns that should be added to the Excel
spreadsheet when creating the .TXT tab-delimited datasource upload file, together
with the values that are accepted for each column.

Table 15. create_datasource
Parameter Description
application Required. Identifies the application for which the datasource is being
defined. It must be one of the following:

ChangeAuditSystem

Access_policy

MonitorValues

DatabaseAnalyzer

AuditDatabase

CustomDomain

Classifier

AuditTask

SecurityAssessment

Replay

Stap_Verification
compatibilityMode Compatibility Mode: choices are Default or MSSQL 2000. Tells the
processor what compatibility mode to use when monitoring a
table.
conProperty Optional. Use only if additional connection properties must be
included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.

For a Sybase database with a default character set of Roman8, enter
the following property: charSet=utf8
customURL Optional. Connection string to the datasource; otherwise the connection
is made using the host, port, instance, properties, and other fields
entered previously. For example, this is useful for creating Oracle
Internet Directory (OID) connections.
dbInstanceAccount Optional. Database Account Login Name (software owner) that will
be used by CAS
dbInstanceDirectory Optional. Directory where database software was installed that will
be used by CAS
dbName Optional. For a DB2 or Oracle datasource, enter the schema name.
For others, enter the database name.
description Optional. Longer description of the datasource.
host Required. Can be the host name or the IP address.
name Required. Provides a unique name for the datasource on the system.
owner Required. Identifies the Guardium user account that owns the
datasource.
password Optional. Password for owner. If used, user must also be used.
port Optional (integer). Port number.
serviceName Required for Oracle, Informix, DB2, and IBM® ISeries. For a DB2
datasource enter the database name, for others enter the service
name.



Table 15. create_datasource (continued)
Parameter Description
severity Optional. Severity Classification (or impact level) for the datasource.
shared Optional (boolean). Set to true to share with other applications. To
share the datasource with other users, you will have to assign roles
from the GUI.
type Required. Identifies the datasource type; it must be one of the
following:

DB2

DB2 for i

DB2 for z/OS

Informix

MS SQL Server

MS SQL Server (DataDirect)

MySQL

NA

Netezza

Oracle (DataDirect)

Oracle (Service Name)

Oracle (SID)

PostgreSQL

Sybase

Sybase IQ

Teradata

The following can be used when the application is CustomDomain
or Classifier:

TEXT

TEXT:FTP

TEXT:HTTP

TEXT:HTTPS

TEXT:SAMBA
user Optional. User for the datasource. If used, password must also be
used.

Notes:
1. Each of the column names must be included in the Excel spreadsheet that is SAVED
as a tab-delimited (.TXT) file.
2. The created datasource name (what is shown when looking for the datasource)
is made up of both the name column and the type column.
3. The upload file MUST be saved as a tab-delimited file type.
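The parameters in Table 15 are the same ones accepted by the GuardAPI create_datasource command, so a single datasource can also be created directly from the CLI. A minimal sketch with purely illustrative values (adjust the application, type, host, and credentials for your environment):

grdapi create_datasource application=Classifier type="Oracle (Service Name)" name=my_oracle_ds host=192.0.2.10 port=1521 serviceName=ORCL user=scott password=tiger owner=admin shared=true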

Steps to create and upload the tab-delimited .TXT file and add datasource data
1. Create the Excel spreadsheet with the headers and datasource data described in
the previous table, and save it as a tab-delimited .TXT file to support the
datasource import capability.
2. Save your .txt file to your PC or UNIX/Linux device for uploading
into the Guardium application.
3. Log in as admin and open Customer Uploads by clicking Harden >
Configuration Change Control (CAS Application) > Customer Uploads.
4. From Upload CSV to Create/Update Datasources, click Browse and select the
.txt file containing the tab-delimited datasource information.
5. Click Upload.

A message is displayed showing which values from the .txt file were uploaded:
1. New: datasource members added for the first time in this upload have the
status NEW.
2. Update: uploading the SAME datasource after making changes to it gives it the
status UPDATE.
3. Fail: lists the datasources that failed to upload and any errors.

Services Status panel


The Services Status panel is a centralized place to check the status of services such as
CAS or the alerter and, if necessary, investigate each service further. Open the Services
Status panel by clicking Setup > Tools & Views > Services Status. Each time the
Services Status panel is opened, the status of each service is refreshed.

Say that you set up a policy that sends a real-time alert whenever there are more
than three failed log-ins in 5 minutes. To protect against this possible intrusion,
you must make sure that the policy was installed, and that the alerter is on.

Use the Services Status panel to verify that both of these services are configured
properly. If for some reason the policy didn't install correctly, click Policy
Installation to go to Policy Installer, view the currently installed policies, and make
the necessary changes.

Note: Clicking any service takes you to its configuration page, where you can turn
the service off or on, and, also view the status of the service.

Each service displays one of the following icons:


v Service is running/scheduled:
v Service is paused:
v Service is off:

Archive, Purge and Restore


Archive and purge operations should be run on a scheduled basis. Use Data
Archive and Results Archive to store captured data and results for auditing.
Amazon S3 Archive and Backup in Guardium is also described at the end of this topic.



Data Archive and Results Archive can be found by clicking Manage > Data
Management.
v Data Archive backs up the data that has been captured by the Guardium system,
for a time period. When configuring Data Archive, a purge operation can also be
configured. Typically, data is archived at the end of each day to
ensure that in the event of a catastrophe, only one day of data is lost. The
purging of data depends on the application and is highly variable, depending on
business and auditing requirements. In most cases, data can be kept on the
Guardium systems for more than six months.
v Results Archive backs up audit tasks results (reports, assessment tests, entity
audit trail, privacy sets, and classification processes) as well as the view and
sign-off trails and the accommodated comments from workflow processes.
Results sets are purged from the system according to the workflow process
definition.

In an aggregation environment, data can be archived from the collector, from the
aggregator, or from both locations. Most commonly, the data is archived only once,
and the location from where it is archived varies depending on your requirements.

Scheduled export operations send data from Guardium collector units to a
Guardium aggregation server. On its own schedule, the aggregation server
executes an import operation to complete the aggregation process. On either or
both units, archive and purge operations are scheduled to back up and purge data
regularly (both to free up space and to speed up access operations on the internal
database).

Archive files can be sent using SCP or FTP protocol, or to an EMC Centera or TSM
storage system (if configured). You can define a single archiving configuration for
each Guardium system.

Guardium’s archive function creates signed, encrypted files that cannot be
tampered with. DO NOT change the names of the generated archive files. The
archive and restore operations depend on the file names that are created during
the archiving process.

Archive and export activities use the system shared secret to create encrypted data
files. Before information encrypted on one system can be restored on another, the
restoring system must have the shared secret that was used on the archiving
system when the file was created.

Whenever archiving data, be sure to verify that the operation completes
successfully. To do this, open the Aggregation/Archive Log by clicking Manage >
Reports > Data Management > Aggregation/Archive Log. There should be
multiple activities that are listed for each archive operation, and the status of each
activity should be Succeeded.

Perform System Backup tasks by clicking Manage > Data Management > System
Backup. You can also perform backup tasks from the CLI.

Default Purging
v The default value for purge is 60 days
v The default purge activity is scheduled every day at 5:00 AM.
v For a new install, a default purge schedule is installed that is based on the
default value and activity.

v When a unit type is changed to a managed unit or back to a standalone unit, the
default purge schedule is applied.
v The purge schedule will not be affected during an upgrade.
v When purging a large number of records (10 million or higher), a large batch
size setting (500k to 1 million) is the most effective way to go. Using a smaller
batch size or NULL causes the purge to take hours longer. Smaller purges finish
quickly, so a large batch size setting is only relevant for large purges.

Note: Setting batch size is not available in the UI. Use the GuardAPI command
grdapi set_purge_batch_size batchSize to set batch size.
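For example, to set a batch size of one million rows ahead of a large purge (the parameter syntax follows the form implied by the note; adjust the value for your environment):

grdapi set_purge_batch_size batchSize=1000000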

How to determine what days are not archived


Use the Report Builder to view the list of all files with archive dates. Open the
Report Builder by clicking Manage > Reports > Report Builder. From the Query
menu, select Location View. Dates not on this report indicate that those dates have
not been archived. Run archive for the dates not on the list, if required.

Configure Data Archive and Purge


1. Open the Data Archive by clicking Manage > Data Management > Data
Archive.
2. To archive, check the Archive check box. Additional fields will appear in the
Configuration panel.
3. For Archive data older than, enter a value and select a unit of time from the
menu. To archive data starting with yesterday’s data, enter the value 1, and
select Day(s) from the menu.
4. Use Ignore data older than to control how many days of data is archived.
Any value that is specified here must be greater than the Archive data older
than value.

Note: If you leave this field blank, you archive data for all days older than
the value specified in Archive data older than. This means that if you archive
daily and purge data older than 30 days, you archive each day of data 30
times (before it is purged on the 31st day).
5. Check the Archive Values check box to include values from SQL strings in the
archived data. If this box is cleared, values are replaced with question mark
characters on the archive (and hence the values will not be available following
a restore operation).
6. Select a Protocols option, and fill in the appropriate information. Depending
on how your Guardium system has been configured, one or more of these
buttons might not be available. For a description of how to configure the
archive and backup storage methods, see the description of the show and
store storage-system commands.
7. Perform the appropriate procedure, depending on the storage method
selected:
v Configure SCP or FTP Archive or Backup
v Configure EMC Centera Archive or Backup
v Configure TSM Archive or Backup
8. Check the Purge check box to define a purge operation.
IMPORTANT: The Purge configuration is used by both Data Archive and Data
Export. Changes that are made here apply to any executions of Data Export
and vice versa. In the event that purging is activated and both Data Export
and Data Archive are run on the same day, the first operation that runs will
likely purge any old data before the second operation's execution.
For this reason, any time that Data Export and Data Archive are both
configured, the purge age must be greater than both the age at which to
export and the age at which to archive.
9. If purging data, use the Purge data older than field to specify a starting day
for the purge operation as a number of days, weeks, or months before the
current day, which is day zero. All data from the specified day and all older
days are purged, except as noted. Any value that is specified for the starting
purge date must be greater than the value specified for the Archive data older
than value. In addition, if data exporting is active, the starting purge date that
is specified here must be greater than the Export data older than value. See
the IMPORTANT note.

Note:

There is no warning when you purge data that has not been archived or
exported by a previous operation.

The purge operation does not purge restored data whose age is within the do
not purge restored data timeframe that is specified on a restore operation.
10. Click Apply to save the configuration changes. The system attempts to verify
the configuration by sending a test data file to that location.
v If the operation fails, an error message is displayed and the configuration
will not be saved.
v If the operation succeeds, the configuration is saved.
11. To run or schedule the archive and purge operation, do one of the following:
v Click Run Once Now to run the operation once.
v Click Modify Schedule to schedule the operation to run on a regular basis.
12. Click Done when you are finished.
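As a concrete illustration of how these values relate (figures taken from the scheduling guidance later in this topic, not a requirement): a collector might use Archive data older than 1 Day(s), Ignore data older than 2 Day(s), and Purge data older than 15 Day(s), so that each daily run archives only yesterday's data while roughly two weeks of data remain available online.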

Configure SCP or FTP Archive or Backup

After selecting SCP or FTP in an archive or backup configuration panel, the
following information must be provided:
1. For Host, enter the IP address or host name of the host to receive the archived
data.
2. For Directory, identify the directory in which the data is to be stored. How you
specify this depends on whether the file transfer method used is FTP or SCP.
v For FTP: Specify the directory relative to the FTP account home directory.
v For SCP: Specify the directory as an absolute path.
3. For Port, enter the port that is used to send files over SCP or FTP. The default
port for ssh/scp/sftp is 22. The default port for FTP is 21.

Note: A zero (0) for the port indicates that the default port is being used
and that there is no need to change it.
4. For Username and Password, enter the credentials for the user logging on to
the SCP or FTP server. This user must have write/execute permissions for the
directory that is specified in Directory.
For Windows, a domain user is accepted with the format of domain\user
5. Click Apply to save the configuration.

Configure EMC Centera Archive or Backup
This backup or archiving task copies files to an EMC Centera storage system
off-site. A license is needed with user name and password from EMC. Four main
actions are needed for this task:
1. Establish an account with an EMC Centera on the network (IP addresses and a
ClipID are needed)
2. Configure the data and/or configuration files from a Guardium system
3. Define and export a library
4. Confirm that your files are stored on the EMC Centera storage system.

CLI action

From the CLI, run these commands:


store storage-system centera backup ON
show storage-system

Configure Centera Archive or Backup

Open System Backup by clicking Manage > Data Management > System Backup.
After you select EMC Centera, the following information must be provided:
1. For Retention, enter the number of days to retain the data. The maximum is
24855 (68 years). If you want to save it for longer, you can restore the data later
and save it again.
2. For Centera Pool Address, enter the Centera Pool Connection String; for
example: 10.2.3.4,10.6.7.8?/var/centera/us1_profile1_rwe.pea txt

Note: This IP address and the .PEA file comes from EMC Centera. The
question mark is required when configuring the path. The .../var/centera/...
path name is important as the backup might fail if the path name is not
followed. The .PEA file gives permissions, username, and password
authentication per Centera backup request.
3. Click Upload PEA File to upload a Centera PEA file to be used for the
connection string. The Centera Pool Address is still needed.

Note: If the message Cannot open the pool at this address.. appears, check
the size of the Guardium system host name. A timeout issue has been reported
with Centera when using host names that are fewer than four characters in
length.
4. Click Apply to save the configuration. The system attempts to verify the
Centera address by opening a pool using the connection string specified. If the
operation fails, you will be informed and the configuration will not be saved.
5. Click Run Once Now to perform the backup using the downloaded .PEA file.

Confirm that your files have been copied to the EMC Centera. The name of the
files and a ClipID are required for this task.

Configure TSM Archive or Backup


Before archiving to a TSM server, a dsm.sys configuration file must be uploaded to
the Guardium system, via the CLI. Use the import tsm config CLI command.
After you select TSM in an archive or backup configuration panel, provide the
following information:



1. For Password, enter the TSM password that this Guardium system uses to
request TSM services, and re-enter it in the Re-enter Password box.
2. Optionally, enter a Server name matching a servername entry in your dsm.sys
file.
3. Optionally, enter an As Host name.
4. Click Apply to save the configuration. When you click the Apply button, the
system attempts to verify the TSM destination by sending a test file to the
server using the dsmc archive command. If the operation fails, you will be
informed and the configuration will not be saved.
5. Return to the archiving or backup procedure to complete the configuration.

Configure Results Archive


1. Open the Results Archive by clicking Manage > Data Management > Results
Archive (Audit).
2. In the fields following Archive results older than, specify a starting day for the
archive operation as a number of days, weeks, or months before the current
day, which is day zero. To archive results starting with yesterday’s data, enter
the value 1, and select Day(s) from the list.
3. Optionally, use the fields following Ignore results older than to control how
many days of results are archived. Any value that is specified here must be
greater than the Archive results older than value.
4. Select a storage method from the radio buttons. Depending on how the
Guardium system has been configured, one or more of these buttons might not
be available.
v EMC CENTERA
v TSM
v SCP
v FTP
5. Perform the appropriate procedure depending on the storage method selected:
v Configure SCP or FTP Archive or Backup
v Configure EMC Centera Archive or Backup
v Configure TSM Archive or Backup
v Amazon S3 Archive and Backup in Guardium
6. Optionally, enter a Comment to be stored with the configuration.
7. Click Apply to verify and save the configuration changes. The system attempts
to verify the configuration by sending a test data file to that location.
v If the operation fails, an error message is displayed and the configuration
will not be saved.
v If the operation succeeds, the configuration is saved.
8. To run or schedule the archive and purge operation, do one of the following:
v Click Run Once Now to run the operation once.
v Click Modify Schedule to schedule the operation to run regularly.
9. Click Done when you are finished.

Restore Data

Before Restoring Data


v Before restoring from TSM, a dsm.sys configuration file must be uploaded to the
Guardium system, via the CLI. Use the import tsm config CLI command.

v Before restoring from EMC Centera, a pea file must be uploaded to the
Guardium system, via the Data Archive panel.
v Before restoring or importing a file that was encrypted by a different Guardium
system, make sure that the system shared secret used by the Guardium system
that encrypted the file is available on this system (otherwise, it will not be able
to decrypt the file). See About the System Shared Secret in “System
Configuration” on page 1.
v Before restoring on a Guardium collector run the CLI command stop
inspection-core to stop the inspection-core process.

Note: The data cannot be captured during the restore process.

To restore data:
1. Open Data Restore by clicking Manage > Data Management > Data Restore.
2. Enter a date in From to specify the earliest date for which you want data.
3. Enter a date in To to specify the latest date for which you want data.
4. For Host Name, optionally enter the name of the Guardium system from which
the archive originated.
5. Click Search.
6. In the Search Results panel, check the Select check box for each archive you
want to restore.
7. In the Don't purge restored data for at least field, enter the number of days
that you want to retain the restored data on the system.
8. Click Restore.
9. Click Done when you are finished.

Note: The restore of data archived from a collector should be done only to: the
same collector; an aggregator; or, a different collector dedicated to investigation
that is not part of an aggregation cluster. In the case of a crashed collector, a
system backup can be restored onto a new, clean collector.

Amazon S3 Archive and Backup in Guardium

Use this feature to archive and backup data, from Guardium, to Amazon S3.

Amazon S3 (Amazon Simple Storage Service) provides a simple web service
interface that can be used to store and retrieve any amount of data, at any time,
from anywhere on the web. It gives any developer access to the same highly
scalable, reliable, secure, inexpensive infrastructure that Amazon uses to run its
own web sites.

Prerequisites
1. An Amazon account.
2. Register for the S3 service.
3. Amazon S3 credentials are required in order to access Amazon S3. These
credentials are:
v Access Key ID - identifies the user as the party responsible for service requests.
It needs to be included in each request. It is not confidential and does not
need to be encrypted (20-character, alphanumeric sequence).
v Secret Access Key - the Secret Access Key is associated with the Access Key ID
and is used to calculate a digital signature that is included in the request. The
Secret Access Key is a secret, and only the user and AWS should have it
(40-character sequence).



This key is just a long string of characters (and not a file) that is used to
calculate the digital signature that needs to be included in the request.
v Data Archive backs up the data that has been captured by the system, for a
given time period.
v Results Archive backs up audit tasks results (reports, assessment tests, entity
audit trail, privacy sets, and classification processes) as well as the view and
sign-off trails and the accommodated comments from work flow processes.

When Guardium data is archived, there is a separate file for each day of data.

Archive data file name format:


<time>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
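A purely illustrative instance of this pattern (hypothetical values; the exact timestamp formats on your system may differ) would look something like:

704055-guardcoll1.example.com-w20160301.043000-d2016-02-29.dbdump.enc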

Guardium's archive function creates signed, encrypted files that cannot be
tampered with. The names of the generated archive files should not be changed.
The archive operation depends on the file names that are created during the
archiving process.

System backups are used to backup and store all the necessary data and
configuration values to restore a server in case of hardware corruption.

All configuration information and data is written to a single encrypted file and
sent to the specified destination, using the transfer method that is configured for
backups on this system.

Backup system file format:


<data_date>-<time>-<hostname.domain>-SQLGUARD_CONFIG-9.0.tgz
<data_date>-<time>-<hostname.domain>-SQLGUARD_DATA-9.0.tgz

Use the Aggregation/Archive Log report in Guardium to verify that the operation
completes successfully. Open the Aggregation/Archive Log by clicking Manage >
Reports > Data Management > Aggregation/Archive Log. There should be
multiple activities that are listed for each Archive operation, and the status of each
activity should be Succeeded.

Regardless of the destination for the archived data, the Guardium catalog tracks
where every archive file is sent, so that it can be retrieved and restored on the
system with minimal effort, at any point in the future.

A separate catalog is maintained on each system, and a new record is added to the
catalog whenever the system archives data or results.

Catalog entries can be transferred between appliances by one of the following
methods:
v Aggregation - Catalog tables are aggregated, which means that the aggregator
will have the merged catalog of all of its collectors
v Export/Import Catalog - These functions can be used to transfer catalog entries
between collectors, or to backup a catalog for later restoration, etc.
v Data Restore - Each data restore operation contains the data of the archived day,
including the catalog of that day. So, when restoring data, the catalog is also
being updated.

When catalog entries are imported from another system, those entries will point to
files that have been encrypted by that system. Before restoring or importing any
such file, the system shared secret of the system that encrypted the file must be
available on the importing system.

Enable Amazon S3 from the Guardium CLI

The Amazon S3 archive and backup option is not enabled by default in the Guardium
GUI. To enable Amazon S3 via the Guardium CLI, run the following CLI commands:
store storage-system amazon_s3 archive on
store storage-system amazon_s3 backup on
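To review the storage-system configuration afterwards, you can use the show storage-system command that appears earlier in this topic (a sketch; the exact output depends on your configuration):

show storage-system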

Amazon S3 requires that the clock time of the Guardium system be correct (within
15 minutes). Otherwise, Amazon returns an error: if there is too large a
difference between the request time and the current time, the request is not
accepted.

If the Guardium system time is not correct, set the correct time using the following
CLI commands:
show system ntp server
store system ntp server (An example is ntp server: ntp.swg.usma.ibm.com)
store system ntp state on

User Interface

Use the System Backup to configure the backup. Open the System Backup by
clicking Manage > Data Management > System Backup.

User input requires:


v S3 Bucket Name (every object that is stored in Amazon S3 is contained in a
bucket. Buckets partition the namespace of objects that are stored in Amazon S3.
Within a bucket, you can use any names for your objects, but bucket names
must be unique across all of Amazon S3.)
v Access Key ID
v Secret Access Key

If the bucket does not exist, it is created.

Secret Access Key is encrypted when saved into the database.

Check that files were uploaded to Amazon S3


1. Log on to the AWS Management Console (http://aws.amazon.com/console/) using
your email address and password.
2. Click S3.
3. Click the bucket that you specified in the Guardium UI.

Guardium catalog
When you archive data from your Guardium system, the Guardium catalog tracks
where every archive file is sent, so that it can be retrieved and restored.



About this task
A separate catalog is maintained on each Guardium system, and a new record is
added to the catalog whenever you archive data or results. Catalog entries can be
transferred between appliances by one of the following methods:
v Aggregation: catalog tables are aggregated, which means that the aggregator has
the merged catalog of all of its collectors.
v Export/Import Catalog: these functions can be used to transfer catalog entries
between collectors, or to back up a catalog for later restoration.
v Data Restore: each data restore operation contains the data of the archived day,
including the catalog of that day. When you restore data, the catalog is also
updated.

You can archive a catalog, export a catalog to external storage, or import a catalog
that has been stored.

When catalog entries are imported from another system, those entries point to files
that have been encrypted by that system. Before you restore or import any such
file, the system shared secret of the system that encrypted the file must be
available on the importing system. You can use the aggregator backup keys file
and aggregator restore keys file CLI commands to copy the shared secrets from
one Guardium system to another.

Archiving a catalog
Procedure
1. Click Manage > Data Management > Catalog Archive.
2. You can display available catalog entries for a range of dates, or add a catalog
entry. To display catalog entries:
a. Enter a date in From to specify the earliest date for which you want data.
b. Enter a date in To to specify the latest date for which you want data.
c. Optional: For Host Name, enter the name of the Guardium system from
which the archive originated.
d. Click Search.
To add a catalog entry:
a. Click Add.
b. Enter a File Name.
c. Enter a Host Name.
d. Enter the Path for the file.

Note:

For FTP: specify the directory relative to the FTP account home directory

For SCP: Specify the directory as an absolute path.

For TSM: Specify the directory as an absolute path of the original location.
e. Enter a User Name and Password for access to this location.
f. In the Retention field, enter the number of days this entry is to be kept in
the catalog (the default is 365).
g. Select an option from the Storage System menu on which the file is
contained.

h. Click Save.
3. To remove a catalog entry, open the catalog, select the entry, and click Remove
Selected.
4. Click Done when you are finished.

Exporting a catalog
Procedure
1. Click Manage > Data Management > Catalog Export.
2. Select a definition type from the Type dropdown list. The Definitions to Export
list is populated with definitions of the selected type.
3. Select all of the definitions of this type that you want to export and click
Export. Depending on your browser security settings, you might see a message
that asks whether you want to save the file or open it.
4. Choose a location to save the exported file.

Importing a catalog
Procedure
1. Click Manage > Data Management > Catalog Import.
2. Click Browse to locate and select the file.
3. Click Upload. You are notified when the operation completes and the
definitions that are contained in the file are displayed. Repeat to upload more
files.
4. Click Import to import the uploaded files or click Remove without Importing
to remove the uploaded files without importing the contents.

How to manage backup and archiving


Establish data retention practices; control activity volume; manage scheduling of
data archive and purge, and monthly backups.

Value-added: Best Practices. Protect your data from loss. Make your data readily
accessible for auditing purposes.

Use the System Backup function to define a backup operation that can be run on
demand or on a scheduled basis.

System backups are used to back up and store all the necessary data and
configuration values to restore a server in case of hardware corruption.

There are two archive operations available on the Administration Console, in the
Data Management section of the menu:
v Data Archive backs up the data that has been captured by the Guardium system,
for a given time period. When configuring Data Archive, a purge operation can
also be configured. Typically, data is archived at the end of the day on which it
is captured, which ensures that in the event of a catastrophe, only the data of
that day is lost. The purging of data depends on the application and is highly
variable, depending on business and auditing requirements. In most cases data
can be kept on the machines for more than six months.
v Results Archive backs up audit tasks results (reports, assessment tests, entity
audit trail, privacy sets, and classification processes) as well as the view and
signoff trails and the accommodated comments from workflow processes.
Results sets are purged from the system according to the workflow process
definition.

In an aggregation environment, data can be archived from the collector, from the
aggregator, or from both locations. Most commonly, the data is archived only once,
and the location from where it is archived varies depending on the customer's
requirements.

Whenever archiving data, be sure to verify that the operation completes
successfully. To do this, log in as admin user, and open the Aggregation/Archive
Log by clicking Manage > Reports > Aggregation & Archive >
Aggregation/Archive Log. There should be multiple activities listed for each
archive operation, and the status of each activity should be Succeeded.

Data backup

There are three types of recommended data backups:


1. Full/system backups:
a. Weekly or daily full backups of the Central Manager unit (assuming a
standalone Central Manager).
b. Monthly for aggregators and collectors during a quiet off-hour period.
2. Daily archives (think of these archives as incremental backups) for aggregators
and collectors. The archive files from the aggregators are much larger than
those from the collectors. For example, if an aggregator has ten collectors
sending data to it, the starting point for the size of the archive file is equal to
those of all ten collector archive files. However, it is much larger than the entire
combined collector archives because the aggregator archive files contain extra
data that is not sent by the collectors every day.
3. Results archive (this is a specialized subset of the data in the daily and full
backups) for aggregators. An alternative to using the Results archive is to save
a PDF file from the Audit Process after all users complete the review process.

Data retention

The data backup and archive files serve two purposes: disaster recovery, and
historical investigation or auditing.

The following suggestions can be modified based on your corporate data retention
policy. For example, some organizations are mandated to keep all backups for 18
months.



For disaster recovery
v Keep a rolling three-month full backup from each unit
v Keep a rolling two weeks' worth of daily archives from the managed collectors

Note: If you have stand-alone collectors, the daily archives should be kept
according to your data-retention policy.

For historical investigation or auditing purposes


v All daily archives from the aggregators for the period required by your auditing
or corporate data-retention policies.

Storage capacity

The following are only estimates/ranges of backup and archive file sizes for
auxiliary storage capacity planning purposes.

The actual sizes vary depending on (1) the volume and granularity of the database
activity that is logged on the Guardium collectors, and (2) the retention period of
the backup files.

Daily Archives

Collector: approximately 40 MB (privileged user monitoring) to 1 GB
(comprehensive monitoring with full details logged on all traffic).

Aggregator: a rough multiple of the number of collectors, for example, the number of
collectors multiplied by 40 MB.
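As a worked example using the figures above: an aggregator receiving data from ten collectors that each produce roughly 40 MB of daily archive would start at about 10 x 40 MB = 400 MB per daily archive, and in practice the file is noticeably larger because of the extra data that the aggregator itself adds.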

Monthly System Backups – assuming a 50% full database on a Dell R610 or IBM
xSeries 3550 M4 (600 GB Disks)

Note: The backup gets roughly a 1:8 compression for the backup file.

Collector: 7 – 10 GB

Aggregator: 16 – 20 GB

Central Manager (no aggregation): << 1 GB

Results Archives

Depends on the number and frequency of audit processes implemented.

Control activity volume


Controlling the volume of activity monitored (on the database server) and logged
(on the collector) helps to reduce network utilization, reduce the Guardium
system’s database disk consumption, and improve the overall capacity and
performance of the IBM Security Guardium infrastructure.

This control is primarily achieved in the policy rules, and via the inspection engine
configuration.

The following are general guidelines:


v Avoid using port ranges in inspection engines.
v Identify all trusted applications and batch programs (these programs generally
generate the bulk of the database activity) and, if possible, ignore/skip their
activity by using the Ignore STAP Session or Skip Logging actions.
v Unless necessary, avoid using the Log Full Details action.
v If possible, use the Selective Audit policy (with the Ignore S-TAP session rules)
to minimize network traffic.
v If no extrusion rules are used, for example, result sets are not examined,
consider using the Ignore Responses per Session action to eliminate result sets
being sent to the Guardium system.
v Establish a process to periodically review and update policy rules, including
groups, to accommodate new databases and applications.
v Establish a process to periodically monitor SQL errors and provide them to the
DBA and application development teams for remediation.

Scheduling
The following tables provide a summary of the key schedules to be configured on
your Guardium systems. Following the tables is a brief explanation of each
process.

Use the Aggregation/Archive log to record the time and status of these processes
to assist with adjusting your scheduling times.

The following table lists a schedule of tasks for a Guardium system that is
deployed as a collector.

Function Schedule
Data export (to the Aggregators) Daily*: 12:30 AM
Data Archive and Purge Daily: 01:30 AM AND Purge for 15 days
Audit/Workflow jobs Daily: 03:00 AM (if standalone)
CSV/CEF export to the SCP/FTP Server Daily: 05:00 AM, if configured in the Audit
jobs AND after the audit jobs complete.
Host name Aliasing Daily: 10:00 PM
Policy Reinstallation Daily: 11:00 PM
System Backups Monthly: First Sunday of each Month at 6:00
AM



The following table lists a schedule of tasks for a Guardium system that is
deployed as an aggregator.

Function Schedule
Data Archive and Purge Daily: 12:30 AM AND Purge for 30 days
Data Import (from the Collectors) Daily 1:15 AM
Audit/Workflow jobs Daily: 03:30 AM
CSV/CEF export to the SCP/FTP Server Daily: 05:15 AM, if configured in the Audit
jobs AND after the audit jobs complete.
Hostname Aliasing Daily: 10:00 PM
System Backups Monthly: First Sunday of each Month at 7:00
AM

Note: Avoid scheduling before 12:15 a.m. to avoid any conflicts with the internal
start-of-day processing on each Guardium system.

The daily Data Archive should be set to Archive data older than 1-Day and Ignore
data older than 2-days. The first run archives all data in the database and
subsequent processes will only archive yesterday's data.

The amount of data kept online is constrained by the size of the database on each
Guardium system, so the Purge process helps to manage how much data is kept
online, and it works with the Daily Archive. Guardium recommends keeping the
minimum amount of data necessary to avoid filling up the database and help with
database performance.

Guardium recommends keeping 15 days of data on a collector and 30 days on an
aggregator. The actual length, however, depends on how much data is recorded
(for example, the number of S-TAPs, policy rules, and collectors).

Data Export and Import

The previous day’s logged activities are exported daily (a push process) from the
collectors to their assigned aggregators for aggregated-reporting. This activity is the
counterpart to the Data Import on the aggregator.

Note: For convenience, purge can be configured on either the Archive or Export
setup screens.

The Data Import process is scheduled only on an aggregator. It imports and
processes the previous day’s data exported from the collectors.

Monthly Backups

As noted previously, the system backups are full backups and are used for disaster
recovery. For example, the monthly backup can be scheduled for the first Sunday of
each month starting at 6:00 AM.

Exporting Results (CSV, CEF, PDF)


CSV, CEF, and PDF files can be created by workflow processes. This function
exports all such files that are on the Guardium system.

CEF/CSV files that are created by workflow processes can also be written to
syslog. When that happens, those files are not available to be exported by the
means described here. Those files should be accessed from syslog by other means.

To export CSV, CEF, and PDF files:


1. Open the Results Export (files) by clicking Manage > Data Management >
Results Export Files.
2. Choose an option from the Protocols radio buttons: SCP, FTP, Amazon S3, or
Softlayer.
3. For Host, enter the IP address or DNS host name of the host to receive the files.
4. For Directory, identify the directory in which the data is to be stored. How you
specify this directory depends on the protocol you selected.
v For FTP: Specify the directory relative to the FTP account home directory.
v For SCP: Specify the directory as an absolute path.
5. Change the Port that is used to send files over SCP or FTP. The default
port for SSH, SCP, and SFTP is 22. The default port for FTP is 21.
6. For Username and Password, enter the credentials for the user logging in to
the host machine. This user must have write/execute permissions for the
directory that is specified in the Directory field.
7. Click Apply to save the configuration. The system attempts to verify the
configuration by sending a test data file to that location. If the operation fails, it
displays an error message. If the test file is transmitted successfully, the buttons
in the Scheduling section become active.
8. Do one of the following:
v To export the files now, click Run Once Now.
v To schedule the export operation, click Modify Schedule.



9. To verify that files have been exported, check the Aggregation/Archive Log.
There should be a Send activity for each CSV or CEF file exported.

To define a default separator, open the Global Profile by clicking Setup > Tools
and Views > Global Profile.

To enter a label to be included in all file names, go to Tools > Audit Process
Builder.

Note:

The Syslog maximum message size is 4000. CSV results are truncated if they
exceed this limit.

Set the encoding to UTF-8 no matter what application is used to read .CSV files.
Excel defaults to a different character set and can corrupt the .CSV files. Also,
when using Excel, import the .CSV file and select UTF-8 encoding instead of just
opening the file and having Excel launch based on file association.

Export/Import Definitions
If you have multiple systems with identical or similar requirements, and are not
using Central Management, you can define the components that you need on one
system and export those definitions to other systems, provided those systems are
on the same software release level.

You can export one type of definition (reports, for example) at a time. Each
element that is exported can cause other referenced definitions to be exported as
well. For example, a report is always based on a query, and it can also reference
other items, such as IP address groups or time periods. All referenced definitions
(except for security roles) are exported along with the report definition. However,
only one copy of a definition is exported if that definition is referenced in multiple
exported items. An export of policies or queries exports only the groups that are
referenced by the exported policies or queries. Previously an export of policies or
queries would export all groups.
Export/Import Definitions
Export and Import Definitions are used to save and then restore functional
data from a given Guardium system. For example, this function enables
you to create a report on one Guardium system and then import that same
report onto another server with the same Guardium installed version.

Note: This function is not the same as a full backup of the server. Backups
should still be defined and run on a scheduled or manual basis.
Export Definitions - Are used to save and share defined functional values
such as Reports/Queries, CAS data, Classifier Data, and so on. The export
types are saved onto your PC as a .sql file type.
Import Definitions - This function is used to import the exported
definitions onto servers that use the SAME Guardium Software version.
For example, if you export definitions from a Guardium V10 system, then
you can import those definitions only onto another V10 system.

Note:

v When you export graphical reports, the presentation parameter settings (colors,
fonts, titles, and so on) are not exported. When imported, these reports use the
default presentation parameter settings for the importing system.
v Subscribed groups are not exported. When you export definitions that reference
subscribed groups, the user must ensure that all referenced subscribed groups
are installed on the importing appliance (or Central Manager in a federated
environment).
v The logs of Export/Import Definitions have the same retention period as the
monitored database activity logs.
v Comments are not included in export.
v When audit process definitions of scheduled runs (including schedule time) are
exported to another system, the ACTIVE check box in Audit Process Builder is
not checked (INACTIVE).
v Schedule Start Time of an audit process defined on one appliance and exported
to another (unrelated) appliance - In the case that the original schedule start
time is defined, it is retained. If the original schedule start time is not defined
(empty), then the imported schedule start time is set to the time it was imported.
v When you export a datasource with an open source driver, the open source
driver is not included in the export. The user needs to first upload the open
source driver into the new system before importing the datasource definition
that was created using it, otherwise the data direct driver will be substituted for
the open source driver when it is imported.
v Large complex imports can take a very long time and can exceed the length of
the user's session. If this happens and the session times out, the import
continues to run in the background until it completes.
v When you export the definition of classifier policies, any custom evaluation
classes that are associated with the policies are not exported with the definition.
For the imported policies to work, the custom evaluation classes must be uploaded
separately.
v Exporting/Importing definitions between different languages does not work. For
example, trying to export a file from a Guardium system with a language of
Simplified Chinese and import that file to a Guardium system set to English will
not be successful.

Export to XACML Protocol


Guardium supports export of policy rules to an XACML file, and import of XACML
files into another Guardium system.

XACML (eXtensible Access Control Markup Language) is a declarative access
control policy language that is implemented in XML, together with a processing
model that describes how to interpret the policies.

The export/import to standard XACML is used as a bidirectional interface to
transfer policy rules between Optim Designer and Guardium.

Optim Designer can convert data values for various purposes and through various
means. In the core Optim runtime (z/OS and Distributed) this is achieved through
the invocation of data privacy functions that are declared within column maps. In
Optim Privacy this is specified, by the user, as the application of a data privacy
policy on an attribute, referenced by an entity within a data access plan.



Customers who bought both products, Optim Privacy and Guardium, will be able
to Export to XACML the policies and privacy information from one product and
Import to the other product.

Note: XACML imports from previous versions of Guardium are not supported.

To export Guardium policies to XACML, follow these steps:


1. Click Manage > Data Management > Export.
2. Select Policy from the Type menu.
3. Check the Export to XACML File check box.
4. Select definitions from the Definitions to Export menu.
5. Click Export.

To Import an XACML file from another Guardium system or Optim Privacy, open
the Definitions Import by clicking Manage > Data Management > Import.

Importing Groups

When you import a group that already exists, members may be added, but no
members will be deleted.

Importing Aliases

When you import aliases, new aliases may be added, but no aliases will be
deleted.

Ownership of Imported Definitions

When a definition is created, the user who creates it is saved as the owner of that
definition. The significance of this is that if no security roles are assigned to that
definition, only the owner and the admin user have access to it.

When a definition is imported, the owner is always changed to admin.

Roles for Imported Definitions


References to security roles are removed from exported definitions, so any
imported definitions will have no roles assigned.

Users for Imported Definitions

A reference to a user in an exported definition causes the user definition to be
exported. When definitions are imported, the referenced user definitions are
imported only if they do not exist on the importing system. In other words,
existing user definitions are never overwritten. This has several implications, as
described in Duplicate Group and User Implications.

In addition, imported user definitions are disabled. This means that imported users
can receive email notifications that are sent from the importing system, but they
are not able to log in to that system, unless and until the administrator enables that
account.

Duplicate Group and User Implications
If a group that is referenced by an exported definition exists on the importing
system, the definition of that group from the exporting system is not imported.
This may create some confusion if the group is not used for the same
purposes on both systems.

If a user definition exists on the importing system, it may not be for the same
person that is defined on the exporting system. For example, assume that on the
exporting system the user jdoe with the email address john_doe@aaa.com is a
recipient of output from an exported alert. Assume also that on the importing
system, the jdoe user already exists for a person with the email address
jane_doe@zzz.com. The exported user definition is not imported, and when the
imported alert is triggered, email is sent to the jane_doe@zzz.com address. In
either case, when security roles or user definitions are not imported, check the
definitions on both systems to see if there are differences. If so, make the
appropriate adjustments to those definitions.

Definition Types for Exporting


Table 16. Definition Types for Exporting

Can Be Exported:
v Access Map
v Alert (a check box in the Definitions export screen will Exclude group members;
see the description in the Group line item)
v Alias
v Audit Process
v Auto-discovery Process
v CAS Hosts
v CAS Template Sets
v Classification Process
v Classifier Policy
v Custom Class
v Custom Domain
v Custom Table
v Datasource
v Event Type
v Group (a check box in the Definitions export screen will Exclude group members.
This check box is visible only for data sets that have groups somewhere in the
export hierarchy - for example, export of an alert also includes the query of the
alert, and the query might include groups in the query conditions. If the export
of a datasource does not include groups, the checkbox is not visible. When that
checkbox is set, the export file includes groups, if groups are linked to the
exported definition, but members of the groups are not exported. The checkbox
is not set by default, its state is not persistent, and it applies only to the current
export.)
v Named Template
v Period (time period)
v Policy (but not an included Baseline)
v Privacy Set
v Query
v Replay
v Report (a check box in the Definitions export screen will Exclude group members;
see the description in the Group line item)
v Role
v Security Assessment
v User
v Users database mapping
v Users database permission
v Users Hierarchy

Cannot be Exported:
v Baseline or Baseline included in a Policy
v Custom Alerting Class
v Custom Assessment Test
v Custom Identification Procedure
v Access Rule
v Connection Permission

Export Definitions
1. Open the Definitions Export pane by clicking Manage > Aggregation &
Archive > Export.
2. Select an option from the Type menu. The Definitions to Export menu will be
populated with definitions of the selected type.
3. Select all of the definitions of this type to be exported.

Note: Do not export a Policy definition whose name contains one or more
quote characters. That definition can be exported, but it cannot be imported. To
export such a definition, make a clone of it, naming the clone without using
any quote characters, and export the clone.
4. Click Export. Depending on your browser security settings, you may receive a
warning message asking if you want to save the file or to open it using an
editor.
5. Save the exported file in an appropriate location.

Import Definitions
1. Open the Definitions Import pane by clicking Manage > Aggregation &
Archive > Import.
2. Click Browse to locate and select the file.
3. Click Upload. You are notified when the operation completes and the
definitions contained in the file are displayed. Repeat to upload additional files.
4. Use the Fully synchronize group members checkbox to set the behavior for
how new group members are added when they are imported directly or via
other datasets such as queries or policies. If the checkbox is not checked, new
members that are in the import are added, but members not in the import are
not removed. If it is checked, then group members not in the import are
removed. Use the Set as default button next to the checkbox to save the
checkbox setting. (A small illustrative sketch of these two behaviors follows
this procedure.)

5. Click Import this set of Definitions to import a set of definitions, or click
Remove this set of Definitions without Importing to remove the uploaded file
without importing the definitions.
6. You will be prompted to confirm either action.

Note: An import operation does not overwrite an existing definition. If you
attempt to import a definition with the same name as an existing definition,
you are notified that the item was not replaced. If you want to overwrite an
existing definition with an imported one, you must delete the existing
definition before performing the import operation.
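
The following is a small illustrative sketch, not product code, of the two
group-member merge behaviors described in step 4 of the import procedure: with
the Fully synchronize group members checkbox cleared, the import only adds
members; with the checkbox set, the imported membership replaces the existing
membership, so members missing from the import are removed. The class name
and member names are hypothetical.

import java.util.Set;
import java.util.TreeSet;

public class GroupImportMergeDemo {

    // Returns the resulting group membership for an import.
    // fullySync = false: additions only (existing members are never removed).
    // fullySync = true:  the imported membership replaces the existing membership.
    static Set<String> merge(Set<String> existing, Set<String> imported, boolean fullySync) {
        if (fullySync) {
            return new TreeSet<>(imported);
        }
        Set<String> result = new TreeSet<>(existing);
        result.addAll(imported);
        return result;
    }

    public static void main(String[] args) {
        Set<String> existing = new TreeSet<>(Set.of("scott", "hr_app", "legacy_user"));
        Set<String> imported = new TreeSet<>(Set.of("scott", "hr_app", "new_dba"));

        System.out.println("Checkbox clear: " + merge(existing, imported, false));
        // -> [hr_app, legacy_user, new_dba, scott]
        System.out.println("Checkbox set:   " + merge(existing, imported, true));
        // -> [hr_app, new_dba, scott]  (legacy_user is removed)
    }
}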

Distributed Interface
Use this configuration screen to define the Distributed Interface and upload the
Protocol Buffer (.proto) file to the DIST_INT database.

From this database, Query Domain metadata is built automatically. After the
metadata is built, the user can go to the Custom Domain Builder to modify or
clone the data and build custom reports. The distributed interface data uses
protocol buffers. Protocol buffers are a flexible, efficient, and automated mechanism
for serializing structured data.

For Universal Feed type 3, upload the protocol definition file for configuration of
DIST_INT database by clicking Manage > Aggregation & Archive > Distributed
Interface.

Note: Click Maintenance to manage the table engine type and table index. The
table engine types (InnoDB and MyISAM) appear for all universal feed tables
because the data stored in the Guardium internal database is MySQL-based.

Configure the Distributed Interface


1. Open the Distributed Interface Finder by clicking Manage > Aggregation &
Archive > Distributed Interface.
2. Click New to create a new Distributed Interface, or select an existing
Distributed Interface from the Distributed Interface Finder and click Modify or
Delete.
3. For Vendor ID, enter the ID of the vendor (for example, 20000).
4. For Domain name, enter the name of the domain that will be selectable from
Custom Domain Builder.
5. Check the Included in aggregation check box.
6. For File Name, click Browse to select a file.
7. Click Apply to save this configuration.
8. Build a custom report in the Custom Domain Builder. Open the Custom
Domain Builder by clicking Setup > Tools & Views > Custom Domain
Builder.

Example of a .proto file


package bim;
option java_package = "com.ibm.infosphere.bim.proto";
option java_outer_classname = "BimEvent";
// NOTE: AssetID and Property_type (== Property name!) are strings.
// For AssetID , it is safest to use a UUID since it provides world-wide unique ID.
// This will be the key to the table of current metrics and property values.

// per each asset, per each property , there will be one value (recent, or min, or max,etc)
message EventTypeID {
required string eventType = 1; //e.g. Schema change
}
message AssetID {
required string assetId = 1;
}
message InfoPropertyID {
required string assetId = 1;
required string propertyName = 2;
}
message MetricPropertyID {
required string assetId = 1;
required string propertyName = 2;
}
message AssetRelationID {
// These are asset "native" ids
required string sourceAssetId = 1;
required string targetAssetId = 2;
}
message RelationPropertyID {
required string assetRelationId = 1;
required string propertyName = 2;
}
message Event {
optional InnerEvent innerEvent = 1;
}
message InnerEvent {
// Common for all events
optional EventTypeID eventTypeId = 1;
optional string description = 2;
optional string time = 3;
optional string agentId = 4;
// Event can be for asset info, or metric property
optional AssetInfoEvent assetInfoEvent = 5;
optional MetricPropertyEvent metricPropertyEvent = 6;
optional AssetRelationEvent relationEvent = 7;
optional RuleEvent ruleEvent = 8;
}
message AssetInfoEvent {
optional AssetID unique_key__ = 1;
optional string assetType = 2;
optional string assetName = 3;
optional string gdm_server_ip = 4;
optional string gdm_service_name = 5;
repeated InfoProperty property = 6;
}
message InfoProperty {
optional InfoPropertyID unique_key__ = 1;
optional string value = 2;
}
message MetricPropertyEvent {
optional AssetID assetId = 1;
repeated MetricProperty property = 2;
}
message MetricProperty {
optional MetricPropertyID unique_key__ = 1;
optional AssetID assetId = 2;
optional string stringValue = 3;
optional double doubleValue = 4;

enum Data_type {
DOUBLE = 1;
LONG = 2;
INT = 3;
FLOAT = 4;
DATE = 5;
BOOLEAN = 6; // convention is to store it as 0 and 1 in the double_value
STRING = 7; // stored in string_value
}
optional Data_type dataType = 5;
optional string unit = 6; // unit for the value
}
message AssetRelationEvent {
optional AssetRelationID unique_key__ = 1;
required string relationshipType = 2;
repeated RelationshipProperty property = 3;
optional bool deleted = 4;
}
message RelationshipProperty {
optional RelationPropertyID unique_key__ = 1;
optional string value = 2;
}
message RuleEvent {
optional string ruleName = 1;
optional bool enabled = 2;
}
// --- Metadata --- All unique identifier must be defined here
message Identifier {
optional InfoPropertyID infoPropertyId = 1;
optional MetricPropertyID metricPropertyId = 2;
optional AssetID assetId = 3;
optional AssetRelationID assetRelationId = 4;
optional RelationPropertyID relationshipPropertyId = 5;
}
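
The following is a minimal sketch, not taken from the product documentation, of
how an application might build and serialize one event using the Java classes that
protoc generates from the .proto file above. It assumes the file has been compiled
with the protocol buffer Java code generator and that the protobuf runtime library
is on the classpath; the package and outer class name (BimEvent) come from the
options declared in the file, and the sample values are hypothetical.

import com.ibm.infosphere.bim.proto.BimEvent;

public class BimEventExample {
    public static void main(String[] args) {
        // Identify the asset with a UUID, as the comments in the .proto file recommend.
        String assetUuid = java.util.UUID.randomUUID().toString();

        // Describe the asset and one informational property (names and values are made up).
        BimEvent.AssetInfoEvent assetInfo = BimEvent.AssetInfoEvent.newBuilder()
                .setUniqueKey(BimEvent.AssetID.newBuilder().setAssetId(assetUuid))
                .setAssetType("DATABASE")
                .setAssetName("orders_db")
                .setGdmServerIp("192.0.2.10")
                .addProperty(BimEvent.InfoProperty.newBuilder()
                        .setUniqueKey(BimEvent.InfoPropertyID.newBuilder()
                                .setAssetId(assetUuid)
                                .setPropertyName("version"))
                        .setValue("11.2"))
                .build();

        // Wrap it in the top-level Event message defined in the .proto file.
        BimEvent.Event event = BimEvent.Event.newBuilder()
                .setInnerEvent(BimEvent.InnerEvent.newBuilder()
                        .setEventTypeId(BimEvent.EventTypeID.newBuilder()
                                .setEventType("Schema change"))
                        .setDescription("New column added")
                        .setTime("2015-09-01 12:00:00")
                        .setAssetInfoEvent(assetInfo))
                .build();

        // Serialize to the protocol buffer wire format.
        byte[] wireBytes = event.toByteArray();
        System.out.println("Serialized event size: " + wireBytes.length + " bytes");
    }
}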

Manage Custom Classes


Upload and maintain custom classes used in alerts or evaluations. Manage custom
classes by clicking Setup > Custom Classes.

After you compile a class, it must be uploaded to the Guardium system.

Uploading a Custom Class


1. You can upload a custom class for alerts or evaluations. Upload a custom class
by clicking Setup > Custom Classes, then either Alerts > Upload or
Evaluations > Upload
2. Enter a description for the custom class.
3. Click Browse to locate and select the class file that you want to upload.
4. Click Apply.

Updating a Custom Class


1. Select Setup > Custom Classes, then either Alerts > Update or Evaluations >
Update.
2. Select the description of the class to be updated.
3. Click Browse to locate and select the class file that is to be used for the update.
4. Click Apply.

Deleting a Custom Class


1. Select Setup > Custom Classes, then either Alerts > Delete or Evaluations > Delete.
2. Select the description of the class to be deleted.

Note: You cannot remove a class that is in use by some other component (the
installed policy, for example).

3. Click Delete.

Uploading a Key File


Under rare conditions, a Microsoft SQL Server key file must be uploaded to the
Guardium system, in order to monitor encrypted SQL Server traffic.

No key file is needed if an S-TAP has been installed on the SQL Server and
configured to handle encryption. This is the recommended and most common way
of configuring an S-TAP agent for MS SQL Server. To determine if an S-TAP is
configured to handle encrypted MS SQL Server traffic:
1. Open the S-TAP Control by clicking Manage > Activity Monitoring > S-TAP
Control.
2. Expand the Details pane for the S-TAP agent on the MS SQL Server host.
3. Verify that the SQL Server TAP Decrypted property has been set to either SSL
Only or Kerberos and SSL.
4. If the SQL Server TAP Decrypted property has been set to None, we recommend
changing that setting to either SSL Only or Kerberos and SSL.

Note: After changing the SQL Server TAP Decrypted property, you must restart
the S-TAP and MSSQL Monitor service for the change to take effect.
If for some reason you are not permitted to change the SQL Server TAP
Decrypted setting, use this procedure to upload a key file from the server.

If no S-TAP has been installed, or if it has been installed but is not configured to
handle encrypted SQL Server traffic, a key file is required to monitor SQL Server
traffic under the following conditions:
v If the server is configured using the force protocol encryption option.
v If the server in a SQL Server 2005 environment uses encrypted login sessions
with SQL Server mixed authentication.

Since a single Guardium system may be monitoring multiple SQL Server instances,
you may need to upload multiple key files. To upload a key file to the Guardium
system:
1. Click Setup > Tools & Views > Upload Key File.
2. Click Browse to locate the key file you want to upload.

Note: The key file name must be the fully qualified domain name of the SQL
Server. The key file cannot be renamed – it must be created with that name.
3. Click Upload Key File. You will be informed of the results of the operation.

SSH Public Keys


Use this information to create, modify or remove an SSH Public Key.
1. Click Manage > Activity Monitoring > SSH Public Key Management, and do
one of the following:
v To create a key, click New.
v To generate a key, click Generate.
v To modify a key, select it from the list and click Modify.
v To remove a key, select it from the list and click Remove.
2. Fill in the appropriate information on the SSH Public Key Edit panel and click
Apply to save.

How to install an appliance certificate to avoid a browser SSL
certificate challenge
Use IBM Security Guardium CLI commands to create a certificate signing request
(CSR), and to install server, certificate authority (CA), or trusted path certificates on
your Guardium system.

About this task

Eliminate the Certificate Error warning screens saying:

There is a problem with this website’s security certificate. The security certificate presented by...

See Certificate CLI Commands for more information on all the certificate
commands.

Note: One prerequisite is that you must provide a public certificate from a CA you
will be using to sign your certificates (Verisign, Thawte, Geotrust, GoDaddy,
Comodo, within-your-company, etc).

Note: Guardium does not provide CA services and will not ship systems with
different certificates than the one installed by default. A customer that wants their
own certificate will need to contact a third-party CA.

Procedure
1. Have available the public certificate from the CA (Certificate Authority) you
will be using to sign your certificates (from Verisign, Thawte, Geotrust,
GoDaddy, Comodo, in-house, etc).
2. Log into the CLI on the individual Guardium system you wish to have a
signed certificate on.
Before executing the command, obtain the appropriate certificate (in PEM
format) from your CA, and copy the certificate, including the Begin and End
lines, to your clipboard.
3. Enter the command, store trusted certificate. The following prompt will be
displayed:
What is a one-word alias we can use to uniquely identify this certificate?
Enter a one-word name for the certificate and press Enter.
The following instructions will be displayed:
Please paste your CA certificate, in PEM format. Include the BEGIN and END lines, and then press CTRL-D.
Paste the PEM-format certificate to the command line, then press CTRL-D. You
will be informed of the success or failure of the store operation.
Now the CA you will sign with is set as trusted on the Guardium system.
4. Next, from the CLI command prompt, type: csr.
Fill in the requested information. If the CN (common name) of the certificate is
not set to the hostname.domain of the box, certificate errors from the browser
will result.
There are no parameters, but you will be prompted to supply the
organizational unit (OU), country code (C), and so forth. Be sure to enter this
information correctly. The last prompt is as follows:
What encryption algorithm should be used (1=DSA or 2=RSA)?

DSA, or the Digital Signature Algorithm, is a federal information processing
standard (FIPS) for digital signatures. RSA is a public-key cryptosystem that
involves key generation, encryption, and decryption. The default encryption
algorithm is RSA.
After you respond to the last prompt, the system displays a description of the
request, followed by the request itself, and followed finally by additional
instructions. For example:
This is the generated CSR: Certificate Request: Data: Version: 0 (0x0) Subject: C=US, ST=MA, L=Lit
5. Copy and paste the generated request from ---Begin CSR---- to ---End CSR--- into
a text document. Now send this off to your CA for them to return the signed
key.
Before continuing, check the Subject line to verify that you have entered your
company information correctly. From this point forward, use whatever
procedure you would normally use to obtain a server certificate from your CA.
6. The CA signs the CSR and sends you back your signed key.
7. Now, go back to the CLI prompt on the Guardium system and have the signed
key from the CA handy. Type the following: store certificate console.
Enter the command exactly as shown. You will receive the following
information and prompt:
Please paste your new server certificate, in PEM format.
Include the BEGIN and END lines, and then press CTRL-D.
Paste the PEM-format certificate to the command line, then press CTRL-D. You
will be informed of the success or failure of the store operation.
-----BEGIN CERTIFICATE----- MIIDvTCCAqegAwIBAgIBATALBgkqhkiG9w0BAQUwcjELMAkGA1UEBhMCVVMxEzAR BgNVB
8. For the final step, restart the UI using the command restart gui.
You have now successfully installed one certificate for one Guardium unit.
Repeat the steps for every Guardium system on-site.

Express Security Setup


This user application permits a quick start to the Guardium solution. Based on a
profile (one profile per user), this application generates and installs a policy,
generates an assessment, and defines an audit process.

Before you use Express® Security Setup, complete these prerequisites:


1. Define your datasources. Click Setup > Tools and Views > Datasource
Definitions and select Security Assessment from the Application Selection
menu.
v Create a new datasource definition by clicking New, or modify an existing
datasource by cloning it and modifying the clone.
v Fill in the required Datasource Definition information and click Apply to
save the datasource.
v Assign roles for the datasource by clicking New.
2. Configure alert messages. Open the Alerter by clicking Protect > Database
Intrusion Detection > Alerter. Fill in the required SMTP (email) and SNMP
information. For further information on this procedure, see Alerter
Configuration.
3. Define and populate groups for your databases, servers, and objects. Open the
Group Builder by clicking Setup > Tools and Views > Group Builder. For
more information, see Groups.

Note: Express Security does not run on Aggregators or Central Managers.

Select Datasources

Open the Express Security Setup by clicking Setup > Tools and Views > Express
Security.

Select the datasources you want to use from the Available Datasources menu, and
move them over to the Selected Datasources menu by using the chevron buttons.

Datasources can be modified directly from this page by adding them to the
Selected Datasources menu and double-clicking them.

Audit Filters / Granularity

These policy choices define the exclusions per group (users and servers). The
default choice is “no exclusions”; however, the more users and servers that are
monitored, the more processing and data collection takes place.

Granularity is the relative level of detail that characterizes an object or activity.

The Express Security menu choices are:


v Exclude applications in group
v Exclude users in group
v Exclude users not in group
v Exclude Client IPs in group

The granularity of the policy is chosen by marking the check box for the policy,
and then choosing an option from the menu.

There are two radio buttons for Merge common access requests:
v Yes, maintaining counts - Merge common access requests and maintain counts
(this is the default and is also known as “Audit only”)
v No - log full detail

Note: Click the Groups icon to modify members of selected groups. Groups must
be defined by the admin user.

Alerting Options

An alert is a message that indicates that an exception or policy rule violation was
detected. These choices specify how to handle exceptions or policy rule violations.
They also define how to transmit the message.

The menu choices are:


v Alert On
– Signature violations (requires Database Activity Monitor Content Subscription
service)
– Failed logins
– SQL errors
v Alert Per
– Alert per occurrence (event)
– Alert once per session (default)

v Alert Using
– Syslog (default)
– SNMP traps
– Email (SMTP; sends all email to a specified user address)

Add policy rules

A security policy contains an ordered set of rules to be applied to the observed
traffic between database clients and servers. Each rule can apply to a request from
a client, or to a response from a server.

Each rule in a policy defines a conditional action. The condition that is tested can
be a simple test, for example, it might check for any access from a client IP address
that does not belong to an Authorized Client IPs group. Or the condition tested
can be a complex test that considers multiple message and session attributes
(database user, source program, command type, time of day, etc.), and it can be
sensitive to the number of times the condition is met within a specified timeframe.
See Policies.

Selecting a policy in this section of the menu screen takes all the rules from that
policy and appends them to the rules that Express Security Setup has collected in
the other sections of the menu screen. The user does not have to select additional
policies in the Add policy rules section.

The Express Security menu choices list rules from predefined or customized
policies. The following examples are Guardium predefined policies:
v Copy all rules from policy
– Allow all
– Basel II
– Data privacy
– Data privacy - PII
– HIPAA PCI
– PCI Oracle EBS
– PCI SAP
– Privileged users monitoring (black list)
– Privileged users monitoring (white list)
– SOX
– SOX Oracle EBS
– Vulnerability and threats management

Assessments
The security assessment function scans the database infrastructure for
vulnerabilities and provides evaluation of database and data security health, with
real time and historical measurements. For further information on this procedure,
see Vulnerability Assessment.

Choose test databases of type (choices):


v DB2
v Informix
v Microsoft SQL Server

v MySQL
v Netezza®
v Oracle
v PostgreSQL
v Sybase
v Teradata

Place a checkmark next to the security assessment tests:


v Include configuration tests
v Include version/vulnerability tests
v Include authentication tests
v Include privilege tests
v Include other tests
v Include file system tests (Requires CAS being installed and configured)

Auditing Of

This section of the Express Security menu screen includes selecting additional
policies that result in a selective audit policy.

To completely control the client traffic that is logged, a policy can be defined as a
selective audit trail policy. In this type of policy, audit-only rules and an optional
pattern identify all of the client traffic to be logged. For further information see
Using_Selective_Audit_Trail in Policies.

The Express Security menu choices are as follows; use the available drop-down
menus to see the policy group choices for each item:
v Privileged users
v Data definition language (DDL) commands
v Administrative commands
v Data manipulation language (DML)
v SELECT commands
v EXECUTE commands

Compliance Reporting/ Sign-off


The results of this entire process, including predefined reports, the period of time
displayed on reports, the sign-off trail, and the specified retention period for this
data, are selected in the following menu choices:
v SQL report
v Exception report
v Security assessment report
v Session report (Login/Logout/Ignored)
v Policy violations report
v Alerts sent report

Display report data for:


v One Month
v One Week

v One Day

Get sign-off from: access role, admin role, user-defined role, accessmgr or admin
user.

Retention period (in days; must be positive). The default is 30.

After checking off selections, click Install to install and save the policy choices, or
Save to save the choices without installing the policy choices.

A Done message appears when the choices have been successfully saved. When the
install is done, another menu screen appears. This menu defines the schedule on
which the audit will run. The choices are day, week, or month, and each choice
requires specific times.

The details of an Installed Policy can also be seen by clicking Setup > Tools and
Views > Policy Installation.

When the Revert button is clicked, the scheduling page is re-opened with the
expectation that the user will want to remove the schedule from this process.

Note: A comment field is available after the Express Security Setup is saved.

GRC Heatmap
This high-level management report shows a snapshot of the current state of the
Guardium system in terms of three areas that matter most: Governance, Risk, and
Compliance (GRC). Open the GRC Heatmap by clicking Setup > Tools and Views
> GRC Heatmap.

The GRC Heatmap allows you to quickly check on the most pertinent security
areas of your environment. There are 16 focus areas organized by Governance,
Risk, and Compliance, and color coded based on the level of activity for each. Each
area has a title and short description for what it reports on. Double-clicking on the
area produces a drill-down tabular report with full details.

Compliance has two rows - the first for the database environment and the second
for the individual unit (for example, whether data is being backed up or not).

A proper Governance strategy implements systems to monitor and record current
business activity, takes steps to ensure compliance with agreed policies, and
provides for corrective action in cases where the rules have been ignored or
misconstrued.

Risk Management is the process by which an organization sets the risk tolerance,
identifies potential risks and prioritizes the tolerance for risk based on the
organization’s business objectives. Compliance is the process that records and
monitors the policies, procedures and controls needed to enable compliance with
legislative or industry mandates as well as internal policies.

The speedometer views are as follows:

Table 17. Speedometer Views of GRC Heatmap
Governance: Active audit process, Processes with pending results, Pending to-do
list items, Open Incidents
Risk: Unpatched Databases, Critical tests failed, Access violations, Classification
violations
Compliance: Policy Installed?, Non-assessed data sources, Unmonitored Servers,
Inactive S-TAPs
Compliance (self): Data Archiving Performed?, Results Archiving Performed?,
Data Purged?, Backups Performed?

Table 18. Specifications for each View/Report:

Governance
v Active audit process
  Color-coding: Green >0; Red =0
  Data Used: Number of audit processes marked as Active
  Timeframe: ALL
v Processes with pending results
  Color-coding: Green <=10; Yellow >10 and <=20; Red >20
  Data Used: Total number of open audit results - meaning, results that are in a
  status other than “Reviewed” or “Signed”
  Timeframe: One week
v Pending to-do list items
  Color-coding: Green <=25; Yellow >25 and <=50; Red >50
  Data Used: Number of audit process-receiver pairings for all receivers who have
  been distributed audit processes results
  Timeframe: One month
v Open Incidents
  Color-coding: Green <=25; Yellow >25 and <=50; Red >50
  Data Used: Total number of incidents not in status “Closed”
  Timeframe: Three months

Risk
v Unpatched Databases
  Color-coding: Red >=0; Green <=1
  Data Used: Number of used datasources whose version and patch level do not
  match a version and patch level from a Group such as Oracle Database
  Version+Patches
  Timeframe: ALL
v Critical tests failed
  Color-coding: Green =0; Yellow >0 and <=5; Red >5
  Data Used: Number of failed security assessment test results of critical severity
  Timeframe: One month
v Access violations
  Color-coding: Green =0; Yellow >0 and <=10; Red >10
  Data Used: Number of policy violations
  Timeframe: One week
v Classification violations
  Color-coding: Green 0; Yellow >0 and <=10; Red >10
  Data Used: Number of classifier violations
  Timeframe: One week

Compliance
v Policy Installed?
  Color-coding: Green >0; Red =0
  Data Used: Number of installed policy rules
  Timeframe: From three years ago to one hour in the future
v Non-assessed data sources
  Color-coding: Green <=1; Red >=0
  Data Used: Number of datasources that have been assessed on that machine in
  the past but do not have any assessment results in the past three months. If a
  data source was never assessed, it will never appear in the Non-assessed Data
  Sources report.
  Timeframe: Three months
v Unmonitored Servers
  Color-coding: Green <=1; Red >=0
  Data Used: Number of server IPs where we have previously sniffed traffic but
  where we have sniffed no traffic in the past hour
  Timeframe: Two days
v Inactive S-TAPs
  Color-coding: Green <=1; Red >=0
  Data Used: Number of S-TAPs inactive for more than one hour
  Timeframe: ALL

Compliance (self)
v Data Archiving Performed?
  Color-coding: Green >0; Red =0
  Data Used: Number of successful data archives performed
  Timeframe: One month
v Results Archiving Performed?
  Color-coding: Green >0; Red =0
  Data Used: Number of successful results archives performed
  Timeframe: One month
v Data Purged?
  Color-coding: Green >0; Red =0
  Data Used: Number of successful data purges performed
  Timeframe: One month
v Backups Performed?
  Color-coding: Green >0; Red =0
  Data Used: Number of successful backups performed
  Timeframe: One week

Note: The Unmonitored servers report always returns 0 on aggregators, as this
report is not meaningful for aggregators.

Self Monitoring
The Guardium solution monitors itself to minimize disruptions and correct
problems automatically whenever possible.

Guardium uses a three-pronged approach to ensure that it is available, functioning
properly, and has not been tampered with, and to alert users of problems:
v Reports - Whether textual or graphical, reports are at the core of the Guardium
solution. By using Guardium’s Query Builder and Report Builder, a user can
effectively report on any of the self-monitoring data collected through associated
domains and entities. Many of the predefined reports can be enhanced through
more detailed effort to provide higher levels of granularity. A specific query
builder has been created (VA Test Tracking) to report on tests that are available
for security assessments.
v Alerts - In addition to building reports, a user can define an alert against those
reports through defined thresholds--indicating an exception or policy rule
violation. These alerts can either be real-time or determined through historical
analysis. These alerts can then trigger notification to users through SMTP, SNMP,
syslog, or a custom Java™ class.
v Self-Monitoring Utility - Guardium implements an internal, always-running
self-monitoring daemon on collectors and aggregators that wakes up every 5
minutes and performs a system scan, checking components for optimal
configuration and operational effectiveness, and repairing them when necessary.
For example, if the utility finds the Web Server down, it first validates a complete
shutdown of the service, restarts the service, and then alerts an administrative
user.

Components Monitored
Table 19. Components Monitored
Components Monitored
System
Disk space(%full)
See the System Monitor for more information - Manage > System View > System Monitor
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

CPU Load
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor. Refer to the System Monitor for more information.
Open the System Monitor by clicking Manage > System View > System Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

Uptime & Reboots


Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Memory Usage
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor. Refer to the System View for more information.

Open the System Monitor by clicking Manage > System View > System Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Failed Logins
Refer to the Guardium Logins section of the System Monitor.

Open the System Monitor by clicking Manage > System View > System Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Login
domain and Guardium Users Login entity to create alerts
Self-Monitoring: Is in use

Number of hosts that ceased to be monitored (based on your no-traffic alert)


Report: Refer to the GRC Heatmap.

Open the GRC Heatmap by clicking Setup > GRC Heatmap.

List of hosts that ceased to be monitored (based on your no-traffic alert)


Report: Refer to the GRC Heatmap. Open the GRC Heatmap by clicking Setup > GRC
Heatmap

Monitoring Engine (snif)


Status: up/down/stuck/overloaded
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

CPU Usage
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Memory Usage
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

Identify bottle-necks
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Overload & delays (Queues)
Report: Buff Usage Monitor - click Reports > Guardium Operational Reports > Buff
Usage Monitor.
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

Lost requests
Report: Dropped Requests - click Manage > Reports > Activity Monitoring > Dropped
Requests.
Alert: You can use the Queries and Correlation Alerts, utilizing the Exceptions domain and
Exceptions entity to create alerts
Self-Monitoring: Is in use

Monitored Data
Database types currently monitored
Report: See Daily Monitor > Databases by Type, or See Predefined admin Reports for
report : Databases by Type for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Auto-discovery domain
and Host Configuration entity to create alerts

Number of datasources with no assessment results in the past 3 months


Report: See Quick Start > GRC Heatmap > Compliance > Non-assessed Data Sources

List of the datasources with no assessment results in the past 3 months


Report: See Quick Start > GRC Heatmap > Compliance > Non-assessed Data Sources >
double-click for list

Number of Datasources with Non approved Version/Patch level


Report: See Quick Start > GRC Heatmap > Risk > Unpatched Databases

Datasources with Non approved Version/Patch level


Report: See Quick Start > GRC Heatmap > Risk > Unpatched Databases > double-click for
list

Change in data patterns


Report: See Daily Monitor > Values Changed, or See Predefined admin Reports for report :
Values Changed for more information
Alert: See Viewing an Audit Process Definition for alert: Data Source Changes - alert on
any data source changes

Packets rates
Report: Select Guardium Monitor > Buffer Usage Monitor

Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Requests rates
Report: Select Guardium Monitor > Buffer Usage Monitor, or See Predefined admin
Reports for report : Request Rate for more information

Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Ignored Data (due to selective-audit rules)


Report: Select Guardium Monitor > Buffer Usage Monitor

Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Web Service (tomcat) & Applications Status


Service Status: up/down
Self-Monitoring: Is in use

Scheduled Jobs Exceptions


Report: Select Guardium Monitor > Scheduled Job Exceptions, or See Predefined admin
Reports for report : Scheduled Job Exceptions for more information, or See Predefined
admin Reports for report : Scheduled Jobs for more information, or See Predefined admin
Reports for report : Guardium Job Queue for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Exceptions domain and
Exception Type entity to create alerts

Audit processes status


Report: Select Guardium Monitor > Number of Active Audit Processes, or See Predefined
admin Reports for report : Number of Active Audit Processes for more information, or See
Predefined admin Reports for report : Outstanding Audit Process Reviews for more
information
Alert: You can use the Queries and Correlation Alerts, utilizing the Audit Process domain
and Audit Process entity to create alerts

Inspection Engine Changes


Report: See Tap Monitor > S-TAP Configuration Change History
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP -
alert on any activity related to inspection engine and S-TAP configuration

Inspection Engine
Report: See S-TAP Reports

Policy Changes & Policy Installations
Alert: See Viewing an Audit Process Definition for alert: Policy Changes Alert - alert once a
day on policy related changes

Currently installed policy


Report: Select Administration Console > Policy Installation
Alert: You can use the Queries and Correlation Alerts, utilizing the Installed Policy domain
and Installed Policy entity to create alerts

Guardium Users Activity


Login/Logout
Report: Select Guardium Monitor > Logins to Guardium, or See Predefined admin Reports
for report : Logins to Guardium for more information, or See Predefined admin Reports for
report : Admin User Logins for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Login
domain and SQL Guard Login entity to create alerts

Failed Logins
Report: Select Guardium Monitor > Logins to Guardium, or See Predefined admin Reports
for report : Logins to Guardium for more information, or See Predefined admin Reports for
report : Admin User Logins for more information
Alert: See Viewing an Audit Process Definition for alert: Failed Logins To Guardium - alert
if have more than 5 failed logins in the last 11 minutes, or Select Tools > Report Building >
drop-down Report Title: Guardium Logins, See Reports for additional information

User Activity Audit Trail


Report: Select Guardium Monitor > User Activity Audit Trail, or See Predefined admin
Reports for report : User Activity Audit Trail for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Activity
domain and SQL Guard User Activity Audit entity to create alerts
Note: User activity includes those instances where a user changes to the root shell --
providing a log of their root activity.

Creation/Deletion of Users/Roles
Report: Select Guardium Monitor > User Activity Audit Trail, or See Predefined admin
Reports for report : User Activity Audit Trail for more information
Alert: See Viewing an Audit Process Definition for alert: Guardium - Add/Remove Users -
alert on any Addition or Removal of Guardium User

LDAP Configuration Changes


Alert: See Viewing an Audit Process Definition for alert: Guardium - Credential Activity -
alert on any Credential changes including LDAP configuration Changes

Permissions monitoring

Report: Select Guardium Monitor > Guardium Users, or Select Guardium Monitor >
Guardium Roles, or Select Guardium Monitor > Guardium Applications, or See Predefined
admin Reports for report : Guardium Group Details for more information, or See
Predefined admin Reports for report : Guardium Users for more information, or See
Predefined admin Reports for report : Guardium Roles for more information
Alert: You can use the Queries and Correlation Alerts, utilizing the Application domain
and Application Data entity to create alerts

Aggregation / Archive
Activity Log
Report: See Reporting on Aggregation and Archiving Activity
Alert: See Viewing an Audit Process Definition for alert: Aggregation/Archive Errors - alert
on any aggregation/archive error, runs once a day

Resolution -- Success/failure
Report: See Reporting on Aggregation and Archiving Activity
Alert: See Viewing an Audit Process Definition for alert: Aggregation/Archive Errors - alert
on any aggregation/archive error, runs once a day

Internal Database (TURBINE)


Status: up/down
Report: You can use Reports, utilizing the Sniffer Buffer domain and Sniffer Buffer Usage
entity to build a report
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

Disk Space (%Full)


Report: You can use Reports, utilizing the Sniffer Buffer domain and Sniffer Buffer Usage
entity to build a report
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts
Self-Monitoring: Is in use

CPU Usage
Report: You can use Reports, utilizing the Sniffer Buffer domain and Sniffer Buffer Usage
entity to build a report
Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Memory Usage
Report: You can use Reports, utilizing the Sniffer Buffer domain and Sniffer Buffer Usage
entity to build a report

Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain
and Sniffer Buffer Usage entity to create alerts

Currently running queries


Report: You can use Reports, utilizing the Access domain and Full SQL entity to build a
report

Queries Performance
Report: You can use Reports, utilizing the Access domain and Full SQL entity to build a
report

S-TAP
Status: up/down/synchronizing
Report: See S-TAP Reports, or See Predefined admin Reports for report : S-TAP Status
Monitor for more information
Alert: See Viewing an Audit Process Definition for alert: Inactive S-TAPs Since - alert if
have inactive S-TAPS

Database types monitored


Report: See S-TAP Reports
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP -
alert on any activity related to inspection engine and S-TAP configuration

Changes in data patterns from S-TAP


Report: See S-TAP Reports
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP -
alert on any activity related to inspection engine and S-TAP configuration

S-TAP Config Changes


Report: See S-TAP Reports
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP -
alert on any activity related to inspection engine and S-TAP configuration

Number of S-TAPs inactive in the last hour (based on the Inactive S-TAP Since report)
Report: See Quick Start > GRC Heatmap > Compliance > Inactive S-TAP

S-TAP Info (Central Manager)


Report: See S-TAP Reports. On a Central Manager, an additional report, S-TAP Info, is
available. This report monitors S-TAPs of the entire environment. Upload this data using
the Custom Table Builder. This report is the result of uploading data using remote sources
on a Central Manager and using that data to see a consolidated view of S-TAPs.
S-TAP info is a predefined custom domain which contains the S-TAP Info entity and is not
modifiable like the entitlement domain.


When defining a custom query, go to the upload page and click Check/Repair to create the
custom table in the CUSTOM database; otherwise, saving the query will not validate it. This
table loads automatically from all remote sources. A user cannot select which remote sources
are used - it pulls from all of them.
Based on this custom table and custom domain, there are two reports:
Enterprise S-TAP view shows, from the Central Manager, information on an active S-TAP
on a collector and/or managed unit (if there are duplicates for the same S-TAP engine, one
being active and one being inactive, then the report uses only the active one).
Detailed Enterprise S-TAP view shows, from the Central Manager, information on all active
and passive S-TAPs on all collectors and/or managed units.
If the Enterprise S-TAP view and Detailed Enterprise S-TAP view look the same, it is
because only one S-TAP on one managed unit is being displayed. The Detailed Enterprise
S-TAP view would look different if there were more S-TAPs and more managed units
involved.
These two reports can be chosen from the TAP Monitor tab of a standalone system, but
they will display no information.
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP -
alert on any activity related to inspection engine and S-TAP configuration

CAS
Status: up/down
Report: See CAS Status

Template changes
Report: CAS Templates - click Manage > Change Monitoring > CAS Templates.
Alert: See Viewing an Audit Process Definition for alert: CAS Template Changes - alert on
any CAS Template configuration

CAS Configuration Changes


Report: CAS Templates - click Manage > Change Monitoring > CAS Templates.
Alert: See Viewing an Audit Process Definition for alert: CAS Instance Config Changes -
alert on any CAS Instance configuration

CAS Event
Report: You can use Reports, utilizing the CAS Host History domain and Host Event entity
to build a report

External Data Correlation


Configuration Changes
Report: See External Data Correlation

Guardium nanny process
The Guardium nanny is an internal process that monitors the system's critical
resources and alerts when potential problems are emerging. Nanny alerts go to
syslog and can be forwarded and sent as emails to the administrator; in some
cases the nanny also takes remedial action.

The nanny watches key components and critical resources within the Guardium
system, helping to ensure their availability and reliability. These resources and
components include:
v Web service monitoring - service port (default 8443) not responding or tomcat
service is not up
– syslog message
– mail admin
– will issue restarts of the web service
v Inspection Engine activity - snif overloaded, not responding, or failure
– syslog message
– mail admin
– mail guardium support (optional)
– will try and fix by restarting the snif under certain conditions
– will try and respawn snif if process dies
v Diskspace utilization - alerts when > 75% on the critical partitions
– syslog message
– alert admin
– will perform preventive action by cleaning temporary files when over 95%
v Failed login (ssh) to the appliance - checks for ssh daemon's messages and alerts
on failed ssh login attempts
– mail admin (it's already in syslog)
v Monitor internal database (TURBINE) - verify service is up, status, and capacity
utilization monitoring
– syslog message
– mail admin
– restart service
v File System utilization - every five minutes, Nanny.pl checks file system at /var,
warning alert when > 75% in the /var directory, critical alert and services
stopped when >90% in /var directory
– syslog message
– alert admin
– Admin clean-up required, using CLI commands: show filesystem usage, clear
filesystem dir, and restart stopped_services

How to monitor the Guardium system via alerts


Monitor the capacity, performance and availability of the IBM Security Guardium
system using a combination of built-in and custom correlation alerts.

Alert users to issues that may affect system performance, such as CPU utilization,
database disk space, inactive S-TAPs, and no-traffic situations.

The Sniffer Buffer Usage domain is the basis for most of the following alerts.



Sniffer Restart Alert
An alert will be sent if the sniffer on a collector has restarted at least three times an
hour.

Create a Query using the Sniffer Buffer Usage domain with the appropriate
columns and fields; there are no conditions. Then define the alert.

High CPU Utilization
Using the Enterprise Buffer Usage domain, create an alert to monitor system CPU
utilization. For example, define a query for CPU utilization that exceeds 75%.



The alert is then set up to fire only if the utilization is exceeded 360 times in a
24-hour period, which is 25% of the day.

Note: The Sniffer Buffer Usage domain is populated once a minute, so there are
1440 entries in a 24-hour period.

Define the alert.

Database Disk Space Alerts
Use the Query Builder to build two reports (they are similar) and two alerts – one
for the collector and the other for the aggregator, since the database size is fixed on
the collector but dynamic on the aggregator (up to the size of the var partition).

Aggregator Disk Space Alert


1. Create a new Query with Sniffer Buffer Usage as the main entity.
2. Configure the fields and conditions.
3. Set up a new alert in the Alert Builder. Open the Alert Builder by clicking
Protect > Database Intrusion Detection > Alert Builder.

Collector Disk Space Alert

Repeat the previous steps to create an alert for monitoring disk space on the
collectors.
1. Create a Query.
2. Use the Alert Builder to set up a new alert.



Data Import, Merge (Aggregation), Archive or Backup Failure
Alerts
This is a built-in alert and must be activated and scheduled.

Inactive S-TAP Alerts


This is a built-in alert and needs to be activated and scheduled.

For S-TAPs configured with a primary and secondary collector, if the S-TAP cannot
communicate with the primary (for example, due to network issues), it will fail
over to the secondary. Unless the former primary collector is able to ping the
S-TAP, it will then generate an inactive S-TAP alert.

Note: S-TAPs in a cluster configuration can generate false alerts if misconfigured.

No Traffic Alerts

This is a built-in alert and needs to be activated and scheduled.

This alert checks for traffic from an active inspection engine, from which the
collector previously received traffic, AND for traffic that is processed by the policy.
If both conditions are not satisfied within 48 hours, an alert will be generated.

Application Monitoring via Ad-hoc Reports

As a general rule, avoid invoking ad-hoc queries/reports on the collector with time
spans > 1 hour. Large/long running queries should be invoked on the aggregator
and are best scheduled using the Audit Process.

The following two reports should be scheduled, from the Central Manager, to run
weekly on each collector.

Note: These reports also need to be scheduled individually on EACH aggregator.

Custom Sniffer Buffer Usage Report

Using the Sniffer Buffer Usage domain, create a report with the relevant fields.

STAP Status Report

This report displays the key parameters for ALL S-TAPs and inspection engines for
a given collector. The report cannot be modified, but it can be run on each collector,
or from the Central Manager pointing to each collector in turn, or scheduled via
the Audit Process on each collector.



Monitoring with SNMP
There is an SNMP agent installed on Guardium systems, and read-only access is
provided using the SNMP community name of guardiumsnmp.

When querying, a value of -1 (minus one) indicates a NULL in the database. The
table at the end of this section lists the available SNMP OIDs.

SNMP Examples
From a Unix session, you can display SQL Guard SNMP information using the
snmpget or snmpwalk commands. (Use snmpget -h or snmpwalk -h to display
command syntax.) Various UI-based software packages are available for displaying
SNMP information. Those alternatives are not described here.
Table 20. SNMP Examples
SNMP Examples
Disk space used and available:
> snmpget -v 2c -c guardiumsnmp a1.corp.com UCD-SNMP-MIB::dskAvail.1
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 1043856
> snmpget -v 2c -c guardiumsnmp a1.corp.com UCD-SNMP-MIB::dskUsed.1
UCD-SNMP-MIB::dskUsed.1 = INTEGER: 914856

To list total memory and used memory:


> snmpget -v 2c -c guardiumsnmp a1.corp.com
HOST-RESOURCES-MIB::hrStorageSize.101
HOST-RESOURCES-MIB::hrStorageSize.101 = INTEGER: 2067352
> snmpget -v 2c -c guardiumsnmp a1.corp.com HOST-RESOURCES-
MIB::hrStorageUsed.101
HOST-RESOURCES-MIB::hrStorageUsed.101 = INTEGER: 1017548

To list the available memory:


> snmpwalk -v 2c -c guardiumsnmp a1.corp.com memAvailReal
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 1049564

To list values relating to cpu usage:


> snmpwalk -v 2c -c guardiumsnmp a1.corp.com ssCpuRawUser
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 89240
> snmpwalk -v 2c -c guardiumsnmp a1.corp.com ssCpuRawSystem
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 195310

> snmpwalk -v 2c -c guardiumsnmp a1.corp.com ssCpuRawNice
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 11
Note: Adding the RawUser, RawSystem, and RawNice numbers provides a good
approximation of total CPU usage.
> snmpwalk -v 2c -c guardiumsnmp a1.corp.com ssCpuRawIdle
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 26734332

Guardium SNMP OID


Table 21. Guardium SNMP OID
SNMP OID                      MIB object                            Description
.1.3.6.1.4.1.2021.9.1.7.1     UCD-SNMP-MIB::dskAvail.1              Disk space available in / directory
.1.3.6.1.4.1.2021.9.1.7.2     UCD-SNMP-MIB::dskAvail.2              Disk space available in /var directory
.1.3.6.1.4.1.2021.9.1.8.1     UCD-SNMP-MIB::dskUsed.1               Disk space used in / directory
.1.3.6.1.4.1.2021.9.1.8.2     UCD-SNMP-MIB::dskUsed.2               Disk space used in /var directory
.1.3.6.1.2.1.25.2.3.1.5.1     HOST-RESOURCES-MIB::hrStorageSize.1   Total memory available
.1.3.6.1.2.1.25.2.3.1.6.1     HOST-RESOURCES-MIB::hrStorageUsed.1   Memory in use
.1.3.6.1.4.1.2021.8.1.101.1   UCD-SNMP-MIB::extOutput.1             Open monitored session count
.1.3.6.1.4.1.2021.8.1.101.2   UCD-SNMP-MIB::extOutput.2             Requests logged by the current sniffer
                                                                    process (set to zero for each restart)
.1.3.6.1.4.1.2021.8.1.101.3   UCD-SNMP-MIB::extOutput.3             Last session timestamp
.1.3.6.1.4.1.2021.8.1.101.4   UCD-SNMP-MIB::extOutput.4             Last construct timestamp
.1.3.6.1.4.1.2021.8.1.101.5   UCD-SNMP-MIB::extOutput.5             Memory used by the sniffer process
.1.3.6.1.4.1.2021.8.1.101.7   UCD-SNMP-MIB::extOutput.7             Packets in on ETH1/out on ETH2; usually
                                                                    only one number (inbound) when a SPAN
                                                                    port or TAP is used
.1.3.6.1.4.1.2021.8.1.101.8   UCD-SNMP-MIB::extOutput.8             Packets in on ETH3/out on ETH4; usually
                                                                    only one number (inbound) when a SPAN
                                                                    port or TAP is used
.1.3.6.1.4.1.2021.8.1.101.9   UCD-SNMP-MIB::extOutput.9             Packets in on ETH5/out on ETH6; usually
                                                                    only one number (inbound) when a SPAN
                                                                    port or TAP is used

Other MIBs accessible in the machine are: SNMPv2-MIB, IF-MIB, RFC1213-MIB,
and HOST-RESOURCES-MIB.
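
For unattended monitoring, the same OIDs can be polled from a script and fed into
your own monitoring tooling. The following is a minimal illustrative sketch (not
part of the Guardium product) that shells out to the snmpget command shown in the
examples above; it assumes the net-snmp command-line tools are installed and that
a1.corp.com is replaced with your own appliance host name.

#!/usr/bin/env python3
# Illustrative sketch: poll a few Guardium SNMP OIDs with snmpget and print the values.
import subprocess

HOST = "a1.corp.com"          # replace with your appliance host name
COMMUNITY = "guardiumsnmp"    # read-only community name documented above

OIDS = {
    "Disk space available in /var": ".1.3.6.1.4.1.2021.9.1.7.2",
    "Open monitored session count": ".1.3.6.1.4.1.2021.8.1.101.1",
    "Memory used by the sniffer":   ".1.3.6.1.4.1.2021.8.1.101.5",
}

for label, oid in OIDS.items():
    value = subprocess.run(
        ["snmpget", "-v", "2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Remember that a value of -1 indicates a NULL in the Guardium database.
    print(f"{label}: {value}")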

Running Query Monitor


The Running Query Monitor displays the status of active user queries, and enables
you to set a timeout value for all Report/Monitor queries.

Open the Running Query Monitor by clicking Manage > Activity Monitoring >
Running Query Monitor.

From the Running Query Monitor, you can:


v Set the query timeout for all reports and monitors that are running in a portlet.
Other query processes, such as policy simulations, audit processes, baseline
generations, and internal processes are not affected by this timeout value. The
default is 180 seconds (3 minutes).
v Kill any currently running user query. Some queries that are listed in this
panel (audit processes, for example) can exceed the query timeout specified. That
is expected, because the Report/Monitor query timeout applies only to reports
and monitors running in a portlet.

We do not recommend setting the Query Timeout higher than the default setting
(180 seconds) for an extended time. If you set this limit higher, it increases the
chances of overloading the system with ad-hoc reporting activity.

To change the timeout setting, type a number of seconds in the Report/Monitor
Query Timeout (seconds), and click Update. You will be informed when the
update finishes.

Groups
Using groups makes it easy to create and manage classifier, policy and query
definitions, as well as roll out updates to your S-TAPs and GIM clients. Rather
than having to repeatedly define a group of data objects for an access policy, put
the objects into a group to easily manage them.

Groups Overview
Group together similar data objects and use them in creating query, policy, and
classification definitions. Use one of the many predefined groups, or create your
own group using the Group Builder.

There are many places where groups are practical to use. By grouping together
similar data objects, you can use the whole set of objects in policies, classifications,
queries, and reports, rather than having to select multiple data objects individually.

If you need to make changes to a query or policy, rather than applying those
changes to each individual object, you can apply those changes to the group.

S-TAPs and GIM also use groups to make it easier to roll out updates across
managed servers.

Group Builder
The Group Builder allows you to create a new group or modify an existing group
from the user interface.

Open the Group Builder by clicking Setup > Group Builder.

The Group Filter screen allows you to easily sort through groups based on
application type, group type, description or category.

Types of groups

The field Group Type refers to the type of data that will be grouped together. For
example, Server IP expects data arranged as an IP address and Users expects to see
names of users on the application.

Tuple groups
A tuple group allows multiple attributes to be combined to form a single
composite group member. An ordered set of three values is called a 3-tuple; in
general, an n-tuple combines n attribute values. This simplifies the specification of
conditions for reporting and policy rules.

Examples of tuple groups are:


v Tuple groups - Object/Command, Object/Field, Client IP/DB User, Server
IP/DB User
v 3-tuple groups - Client IP/Source Program/DB User, DB User/Object/Privilege
v 5-tuple group - Client IP/Source Program/DB User/Server IP/Service Instance

Tuple supports the use of one slash and a wildcard character (%). It does not
support the use of a double slash (//).

Predefined groups
There are a number of predefined groups that are included with Guardium. Use
the Group Filter and Group Type menu to browse the list of groups and find the
one that best suits your needs.

Group types DB User/DB Password are by default only available to admin users.
Modify the group roles if you want to change this default setting.

Overlapping group memberships

Group members can be in more than one group.



For example, two predefined groups, Create Commands and DDL Commands, both
have a member named CREATE TABLE. If you are querying for either of these
groups, all of the CREATE TABLE members from the reporting period will be
counted in that group.

In some cases you may want to define a set of groups so that each member
belongs to only one group. For example, suppose that for reporting purposes you
need to group database users into one of two groups: employees or consultants.
You would define each of those groups with the same sub-group type
(Employee-Status, for example). When sub-groups are used, the system will not
allow you to add a member to a sub-group if that member has already been added
to another group with the same sub-group type.

Wildcards in members

Group members can include wildcard (%) characters when the group is used in a
query condition or policy rule.
Table 22. Wildcards in members
Member    Matches                    Does NOT Match
aaa%      aaa, aaazzz                zzzaaa, aaz
%bbb      bbb, zzbbb                 bb, bbbzzz
%ccc%     ccc, ccczz, zzzccczzz      cc
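
To make the wildcard rules concrete, the following is a minimal illustrative sketch
(not Guardium code) that emulates this LIKE-style matching by translating a member
containing % into a regular expression; the sample values are taken from Table 22.

import re

def like_match(member: str, value: str) -> bool:
    # % matches any run of characters; matching is case-insensitive,
    # as with LIKE GROUP comparisons.
    pattern = "^" + ".*".join(re.escape(part) for part in member.split("%")) + "$"
    return re.match(pattern, value, re.IGNORECASE) is not None

# Values from Table 22:
assert like_match("aaa%", "aaazzz") and not like_match("aaa%", "zzzaaa")
assert like_match("%bbb", "zzbbb") and not like_match("%bbb", "bbbzzz")
assert like_match("%ccc%", "zzzccczzz") and not like_match("%ccc%", "cc")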

Using groups in queries and policies


Short overview of conditional operators for queries and where to use groups in
policies.

Queries

Queries use conditional operators with groups. Here are examples of each
conditional operator:
v IN GROUP - If the value matches any member of the selected group, the
condition is true. IN ALIASES GROUP works on a group of the same type as IN
GROUP, but assumes that the members of that group are aliases. The IN GROUP
and IN ALIASES GROUP operators expect the group to contain actual values or
aliases, respectively; the Query Builder looks for records whose database values
match the alias values in the group.
v NOT IN GROUP - If the value does not match any member of the selected
group, the condition is true. NOT IN ALIASES GROUP works on a group of the
same type as NOT IN GROUP, but assumes that the members of that group are
aliases.
v IN DYNAMIC GROUP - If the value matches any member of a group that will be
named as a run-time parameter, the condition is true. IN DYNAMIC ALIASES
GROUP works on a group of the same type as IN DYNAMIC GROUP, but
assumes that the members of that group are aliases.
v NOT IN DYNAMIC GROUP - If the value does not match any member of a
group that will be named as a run-time parameter, the condition is true. NOT IN
DYNAMIC ALIASES GROUP works on a group of the same type as NOT IN
DYNAMIC GROUP, but assumes that the members of that group are aliases.

Note: A group may contain either aliases or actual values, according to the
operator used; IN GROUP and IN ALIASES GROUP cannot be used at the same
time.
v LIKE GROUP - If the value is like any member of the selected group, the
condition is true. This condition enables wildcard (%) characters in the group
member names.

Note: A like member value uses one or more wildcard (%) characters, and
matches all or part of the value. For a like comparison, alphabetic characters are
not case sensitive. For example, %tea% would match tea, TeA, tEam, or steam.

Policies and rules

When creating a rule as part of a policy, groups simplify the process of specifying
the parameters you want.

Anywhere there is a Group drop-down menu on the rule definition pane you can
select a group.

Further, if you want to create or modify a group on the fly, click the Groups icon
to open a Group Definition window and make your desired changes.

For example: if you want to capture activity occurring on your production servers,
rather than typing in full IP addresses each time, you could create a group
Production Servers and use that.

Example: Using groups to create rules and policies


Use different group types to easily create a rule as part of a policy.

About this task


Each policy is composed of one or more rules. Specify which conditions will enact
a rule, and then choose an action to correspond with that rule. This example will
show you how to use groups to identify unauthorized users, and block them from
accessing sensitive data objects.

Procedure
1. Login to your Guardium system, and open the Policy Builder by clicking
Setup > Policy Builder.

2. Create a new policy by clicking the icon to open the Policy Definition
window.
3. Fill out the policy definition, click Apply to save the policy, and then click
Edit Rules to start adding rules to the policy.
4. Enter a rule description, category, classification, and severity to begin.
5. Specify where to look. From the Server IP row, select the group (Public) PCI
Authorized Server IPs. The rule will apply to all activity from all PCI servers.



Note: You can view the members of any group or modify any group by going
to the Group Builder.
6. Specify who to look for. From the DB User row, mark the Not checkbox, and
select the group (Public) Authorized Users. The rule will apply to all
unauthorized users.
7. Specify which objects to look for. From the Object row, select the group
(Public) PCI Cardholder Sensitive Objects. The rule will now apply to all
unauthorized users on PCI servers looking to access PCI sensitive objects.
8. Add an action to the rule. Click Add Action, select S-GATE TERMINATE
from the menu, and click Apply. The rule is now set to terminate all sessions
for unauthorized users attempting to access PCI sensitive objects.
9. Click Save to save the rule.
10. Install the policy.
a. Find the policy that you created. Click Back twice, or click Policy Builder
to get to the Policy Finder and browse the list of policies.
b. With the policy selected, choose Install & Override from the installation
action menu.
c. Click OK to confirm the policy installation, and then check Latest Logs
and Violations to verify the policy was installed.
The policy is now installed and active. Any person not in the (Public)
Authorized Users group attempting to access an object in the (Public) PCI
Cardholder Sensitive Objects groups will have their session terminated.

Creating a new group


Use the Group Builder to manually create a group of data objects.

Procedure
1. Open the Group Builder by clicking Setup > Group Builder.
2. Click Next to bypass the filter and create a new group.
3. In the Create New Group panel, select an option from the Application Type
menu to determine which application you will use the group with.
4. Enter a unique Group Description for the new group - do not include
apostrophe characters in this field.
5. Select a Group Type Description to choose which type of data you are
grouping.
6. Enter a Category, an optional label that you can filter by and use to group
items (that the filter has isolated) in policy violations and reports.
7. Enter a Classification, which is another optional label that you can filter by
and use to group items for policy violations and reporting.
8. Select Hierarchical to create a group of groups, where the admin user has
access and then passes it along to users in groups in the hierarchy.
9. Click Add to add the group.

Modifying a group
Make modifications to your group, such as adding a member or changing the
category of the group. Exercise caution when modifying or deleting a group, as
changes made could possibly affect other users or policies.

Procedure
1. Open the Group Builder by clicking Setup > Group Builder.

2. Use the Group Filter to find the group you want to modify, or leave the filter
empty and click Next to look at the complete list of groups.

3. When modifying a group, a best practice is to clone the group, save it as a
new group, and then modify the clone to prevent undesired effects on the rest
of your Guardium system.
The Modify Existing Groups pane allows you to:
v Modify, clone, or delete any group
v Assign or modify roles
v Populate your group from a query, LDAP server, or using Auto Generated
Calling Prox functionality.

4. With any group selected, click Modify to be able to:


v modify the category of the group
v add a new member to the group
v rename a group member
v reset a group's membership to the predefined members
v add comments
v create an alias for a group
v populate a group from LDAP

Modifying group category


Procedure

Select a group from the Group Members list, enter the new category name into the
Category field and click Modify Category to save changes.

Adding a group member


Create a new member and add it to a group, or add an existing member to a
group.

Procedure

If you have a new member you want to add to a group, enter the member's name
into the Create & add a new Member named field and click Add.

Note: When adding to a group of objects, valid member names may be composed
of object_name, schema.object_name, a wildcard such as %object_name, or a
combination of all three.
The new member is now added to the Group Members list.

Renaming a group member


Procedure
1. Select the group member to be re-named from the Group Members list. This
will also display the current group member name in Rename Selected Member
to.
2. Change the name of the group member in the Rename selected Member to
field and click Update.

Resetting to the predefined group membership


Click Reset to Predefined for any group to replace the current group members
with the set of predefined group members.



Adding a comment to a group
Click Add Comments for any group to add comments for your future reference.

Creating an alias for a group


Procedure
1. Click Aliases to open the Alias Quick Definition window.
2. For each group member you want to create an alias for, enter a value into the
Alias column and click Apply.

Predefined Groups
This section details the predefined groups in Guardium.

The following table describes the predefined groups that are included with your
Guardium system. To view the list of all groups, open the Group Builder by
clicking Setup > Group Builder. Select SQL_APP_NAME from the Applications
menu, and click Next. From the next screen, manage members from Selected
Groups. The term Group Type refers to expectations on the type of data designated
by the label. For example, the group type Server IP expects data arranged as an IP
address (192.168.1.0) and the group type Users expects to see names of users of the
application.

Additional predefined groups are added periodically, and these additional
predefined groups may not be described here. Open the Group Builder to see all
existing groups.

Predefined groups of group type DB User/DB Password are available only to users
with the admin role. Users can, if preferred, add other roles or even make the
groups available to all roles.
Table 23. Predefined Groups
SQL_APP_NAME GROUP_DESCRIPTION MEMBERS
DB2 zOS Groups zOS Audit Dynamic SQL Group Type for DB2 commands
DB2 zOS Groups zOS Audit Query Group Type for DB2 commands
DB2 zOS Groups zOS Audit Updates Group Type for DB2 commands
DB2 zOS Groups zOS Audit Deletes Group Type for DB2 commands
DB2 zOS Groups zOS Audit Inserts Group Type for DB2 commands
DB2 zOS Groups zOS Audit Utilities Group Type for DB2 commands
DB2 zOS Groups zOS Audit Object Group Type for DB2 commands
Maintenance
DB2 zOS Groups zOS Audit User Group Type for DB2 commands
Maintenance
DB2 zOS Groups zOS Audit User Group Type for DB2 commands
Authorization Changes
DB2 zOS Groups zOS Audit DB2 Commands Group Type for DB2 commands
DB2 zOS Groups zOS Audit Plan/ Package Group Type for DB2 commands
Maintenance
IMS™ zOS Groups zOS IMS Audit Query Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Updates Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Deletes Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Inserts Group Type for IMS commands

IMS zOS Groups zOS IMS Audit DB Group Type for IMS commands
Commands
Policy Builder Cardholder Objects Group Type, Objects
Policy Builder Financial Objects Group Type, Objects
Policy Builder PHI Objects Group Type, Objects
Policy Builder Authorized Client IPs Group Type, Client IP
Policy Builder Production Users Group Type, Users
Policy Builder PII Objects Group Type, Objects
Policy Builder Production Servers Group Type, Server IP
Policy Builder Financial Servers Group Type, Server IP
Policy Builder Functional Users Group Type, Users
Policy Builder Sharepoint Servers Group Type, Server IP
Security          DB2 Database Version+Patches            Used for (specific) database version
Assessment        Informix Database Version+Patches       and patch level tests.
Builder           MS Sql Server Database Version+Patches
                  MySql Database Version+Patches
                  Netezza Version+Patches
                  Oracle Database Version+Patches
                  Postgress Version+Patches
                  Sybase Database Version+Patches
                  Teradata PDE Version+Patches
                  Teradata TDBMS Version+Patches
                  Teradata TDGSS Version+Patches
                  Teradata TGTW Version+Patches

Security          DB2 Allowed Grants to Public            TUPLE, Object/Command Application 8
Assessment        Informix Allowed Grants to Public       (Security assessment). List of
Builder           MS-SQL Allowed Grants to Public         objects/commands for which grants to
                  MYSQL Allowed Grants to Public          public are allowed. These objects will be
                  Netezza Allowed Grants to Public        skipped on MS-SQL and Sybase tests that
                  Oracle Allowed Grants to Public         check grants to public.
                  Postgres Allowed Grants to Public
                  Teradata Allowed Grants to Public       Note: An exceptions group can contain a
                                                          regular expression or just a member. If a
                                                          regular expression, the group member
                                                          must start with (R) (case sensitive), and
                                                          the records in the detail will be checked
                                                          against the regular expression after the
                                                          (R). For example, if a group member is
                                                          (R)SYSTEM.[a-z]+ each detail record will
                                                          be checked using the pattern
                                                          SYSTEM.[a-z]+. If the member does not
                                                          start with (R), the detail record will be
                                                          considered an exception only if it is
                                                          equal to the group member. A group
                                                          may contain a mix of regular expressions
                                                          and specific exceptions.
Security MS-SQL Extended Group Type is Objects
Assessment Procedures Allowed
Builder
Security MS-SQL Database Group Type is Users
Assessment Administrators
Builder
Security Teradata Profile Group Type is Objects
Assessment
Builder

Public Account Management Commands used to maintain accounts


Commands (users, roles, permissions), examples:
REVOKE, GRANT, ALTER/CREATE/
DROP USER
Public Account Management Account Management Objects, stored
Procedures Procedures used to maintain accounts
(users, roles, permissions)
Public Active Users Group Type is Users
Public Admin Users Default administrative users (DBAs and
SysAdmins)

Public Administration Objects Privileged Objects, objects that only
DBA or Sys Accounts should access.
These accounts are locked for "public"
by default.
Public Administrative Commands Privileged Commands, privileged
Commands, should be executed only by
DBAs. Examples: GRANT, BACKUP,
DDL commands
Public Administrative Programs Database utilities (clients) that come
with the database and usually reside on the
database server and could be used by the
server itself
Public ALTER Commands Examples, alter database, alter
procedure, alter profile, alter session,
alter user
Public Application Privileged Public privileged commands that should
Commands be revoked from "public", but not
revoked since they are used by the
application
Public Application Privileged Application Privileged Objects, public
Procedures privileged procedures that should be
revoked from "public" but not revoked
since they are used by the application
Public Application Schema Users Application Users, database user used
by the application to maintain/user the
application tables
Public Archive Candidates Group Type is Objects
Public Authorized Source Group Type is Source Programs
Programs
Public Authorized Users Group Type is Users
Public Connection Profiling List Group Type is Client IP/Src App/DB
User/Server IP/SVC. Name

List of allowed connections


Public CREATE Commands Examples, create context, create
database link, create function, create
statistics, create type, create user
Public Credentials Related Entities Guardium Audit Types, Self-Monitoring,
examples, allowed_role, LDAP_config,
Turbine_user_group_role
Public Data Transfer Commands Backup Commands, commands dealing
with backup/restore of database data
Public Data Transfer Procedures Data Transfer Objects, procedures
dealing with backup/restore of database
data (mostly on MSS and SYB)
Public DB Predefined Users Either non-admin predefined users or
all predefined users, including
administrative ones
Public DBCC Commands Group Type is Commands

Public DDL Commands Data Definitions Language,
schema-privileged commands,
examples, ALTER, CREATE, DROP
Public DML Commands DML Commands, examples, insert,
truncate, update
Public DROP Commands Examples, drop_context,
drop_event_monitor, drop_procedure,
drop_role
Public            DW All Object-Field                     There are five predefined reports that use
                  DW All Objects                          monitored data to show object names.
                  DW Execute Accessed Objects             These reports all start with the prefix
                  DW Select Accessed Objects              DW (Data Warehouse). See the help topic,
                  DW Select Accessed Objects/Fields       How to report on dormant
                                                          tables/columns, for further information
                                                          on how to use these predefined reports.
Public EBS App Servers Group Type is Client IP
Public EBS DB Servers Group Type is Server IP
Public EXECUTE Commands Examples, call, execute, execute function
Public GRANT Commands Examples, grant, grant objectives, grant
system privileges
Public Guardium Audit Categories Guardium patches,
for Detailed Reporting TURBINE_USER_GROUP_ROLE
Public ICM App Servers Group Type is Client IP
Public ICM DB Servers Group Type is Server IP
Public ImportLDAPUser Group Type is Objects
Public ImportLDAPUser_bindValues Group Type is Objects
Public Inspection Engine Entities Examples, adminconsole_sniffer,
software_tap_db_client,
software_tap_db_server
Public Java Commands Examples, alter java, create java, drop
java
Public KILL Commands Example, kill
Public            Masked_SP_Executions_MS_SQL_SERVER      For MS SQL Server, a group that includes
                                                          a collection of stored procedure (SP)
                                                          names. If there is an execution of an
                                                          included procedure, then everything will
                                                          be masked, even if in quotes. Predefined
                                                          as empty.
Public            Masked_SP_Executions_Sybase             For Sybase, a group that includes a
                                                          collection of stored procedure (SP)
                                                          names. If there is an execution of an
                                                          included procedure, then everything will
                                                          be masked, even if in quotes. Predefined
                                                          as empty.
Public MongoDB Skip Commands Group Type is Commands

Public MS-SQL Replication Group Type is Objects
Procedures
Public MS-SQL Security System Group Type is Objects
Procedures
Public MS-SQL System Procedures Group Type is Objects
Public Oracle EBS HRMS Sensitive Group Type is Objects
Objects
Public Oracle EBS-PCI Group Type is Objects
Public Oracle EBS-SOX Group Type is Objects
Public Oracle Predefined Users Group Type is Users
Public Peer Association Commands Commands dealing with
links/replications of data, examples,
links, log shipping, replications,
snapshots
Public Peer Association Procedures Peer Association Objects, procedures
dealing with links/replications of data

Examples: Links, log shipping,


replications, snapshots
Public PeopleSoft Objects Group Type is Objects
Public PeopleSoft Sensitive Objects Group Type is Objects
Public Performance Commands Examples, analyze, create statistics,
update all statistics
Public Policy Related Entities Examples, access_rule,
gdm_install_policy_header
Public Potential Overflow Objects Group Type is Objects
Public Procedural Commands Examples, begin, call, execute, exit,
repeat, set
Public PROCEDURE DDL Examples, alter procedure, create
procedure, drop procedure
Public PSFT App Servers Group Type is Client IP
Public PSFT DB Servers Group Type is Server IP
Public Public executable Execute-Only Objects,
procedures procedures/functions/Packages that by
default granted access to public
Public Public selectable object Select-only Objects, tables that by
default granted access to public
Public RESTORE Commands Examples, restore database, restore log
Public REVOKE Commands Examples, revoke object privileges,
revoke system privileges
Public Risk-indicative Error SQL errors related to security
Messages
Public Sharepoint Servers
Public SAP-PCI Group Type is Objects
Public SAP App Servers Group Type is Client IP
Public SAP DB Servers Group Type is Server IP

Public SAP HR Sensitive Objects Group Type is Objects
Public Select Command Examples, select, select list
Public Sensitive Objects Examples, activity, sales
Public SIEBEL App Servers Group Type is Client IP
Public SIEBEL DB Servers Group Type is Server IP
Public Siebel SIA Sensitive Objects Group Type is Objects
Public SPECIAL CASE Source Group Type is Source Programs
Program
Public Suspicious Objects Group Type is Objects
Public Suspicious Users Group Type is Users
Public System Configuration Database configuration commands
Commands (subset of Administrative Commands)

Examples: ALTER DATABASE, ALTER


SYSTEM
Public System Configuration System Configuration Objects (subset of
Procedures Administration Objects)
Public Terminated DB Users Group Type is Users
Public Vulnerable Objects (with Database objects with reported
wildcards) vulnerabilities
Public Windows File Share Verbs Group Type is Commands
Public            DB2 Default Users                       Group Type is DB User/DB Password
                  IBM iSeries Default Users
                  Informix Default Users
                  MS-SQL Server Default Users
                  MYSQL Default Users
                  Netezza Default Users
                  Oracle Default Users
                  PostgreSQL Default Users
                  Sybase Default Users
                  Teradata Default Users
Public            Hadoop Skip Commands                    Group Type is Command
                  Hadoop Skip Objects                     Group Type is Object
                  Not Hadoop Server                       Group Type is Server IP
Public            Replay - Exclude from Compare           Group Type is Objects
                  Replay - Include in Compare

Audit Process Predefined as empty.
Builder
Baseline Builder Predefined as empty.
Classifier Predefined as empty.
Express Security Predefined as empty.

Populating groups
After creating a group or finding the one you want to work with, populate the
group with members. Use the Group Builder to manually add members to a
group, or through several automated import methods.

How to populate a group from LDAP


How to import data from an LDAP server to use in Guardium groups.

About this task

Configure Guardium with your LDAP server, and then import on demand, or
schedule an import in the future.

When importing LDAP users:


v The Guardium admin user account will not be changed in any way.
v You have the option to clear existing members from a group before importing.
v Existing user passwords will not be changed.
v By default, new users are disabled when added, assigned the user role, and have
blank passwords.

Note:

Special characters are not supported in user names.

If you are scheduling an import, consider any other scheduled imports you may
have at that time, as this will affect the behavior of existing scheduled imports.

Procedure

Configure your LDAP server with your Guardium system. Open the Group
Builder by clicking Setup > Group Builder, and fill out the required information.
1. For LDAP Host Name, enter the IP address or host name for the LDAP server
to be accessed.
2. For Port, enter the port number for connecting to the LDAP server.
3. Select the LDAP server type from the Server Type menu.
4. Check the Use SSL Connection check box if Guardium is to connect to your
LDAP server using an SSL (secure socket layer) connection.
5. For Base DN, specify the node in the tree at which to begin the search. For
example, a company tree might begin like this: DC=encore,DC=corp,DC=root
6. For Attribute to Import, enter the attribute that will be used to import users
(for example: cn). Each attribute has a name and belongs to an objectClass.



7. Check the Clear existing group members before importing check box if you
want to delete all existing group members before importing.
8. For Log In As and Password, enter the user account information that will be
used to connect to the LDAP server.
9. For Search Filter Scope, select One-Level to apply the search to the base level
only, or select Sub-Tree to apply the search to levels beneath the base level.
10. For Limit, enter the maximum number of items to be returned. We
recommend that you use this field to test new queries or modifications to
existing queries, so that you do not inadvertently load an excessive number of
members.
11. Optional: For Search Filter, define a base DN, scope, and search filter.
Typically, imports will be based on membership in an LDAP group, so you
would use the memberOf keyword (a sketch showing how to preview such a
filter outside of Guardium appears at the end of this task). For example:
memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root
12. Click Apply to save the configuration settings.
The Status indicator in the Configuration - General section will change to
LDAP import currently set up for this group as follows and the Modify Schedule
and Run Once Now buttons will be enabled. You can now import from your
LDAP server.

What to do next

Run or schedule an import.


v Schedule an LDAP import by clicking Modify Schedule, filling out the schedule
information, then clicking Save.

v To run the import on demand, click Run Once Now. After the task completes,
the set of members satisfying your selection criteria will be displayed in the
LDAP Query Results panel.

Note:

When you import on demand, you have the opportunity to accept or reject each
entry returned from the LDAP server.

When you schedule an LDAP import, all of the LDAP entries that satisfy your
search criteria will be imported.

Verify that members have been added to a group by selecting the group in the
Group Builder, then clicking Modify, and looking at the group's membership.

For larger groups, it may be easier to verify members by using the Guardium
Group Details report (Reports > Guardium Group Details).
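
Before scheduling an import, it can be useful to preview which entries a search
filter would return. The following is an illustrative sketch (not part of Guardium)
that runs the same kind of search with the third-party Python ldap3 package; the
host name, bind DN, and password are placeholders, while the base DN and filter
reuse the examples from the procedure above.

from ldap3 import Server, Connection, SUBTREE

server = Server("ldap.example.com", port=389)          # placeholder LDAP host
conn = Connection(server,
                  user="CN=binduser,DC=encore,DC=corp,DC=root",  # hypothetical bind account
                  password="changeit",
                  auto_bind=True)

conn.search(search_base="DC=encore,DC=corp,DC=root",
            search_filter="(memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root)",
            search_scope=SUBTREE,
            attributes=["cn"],   # the attribute you plan to import
            size_limit=25)       # mirror the Limit field to avoid huge result sets

for entry in conn.entries:
    print(entry.cn)              # each value that would become a group member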

Populating a group from a query


Create a query, and use the results to populate a group. This option of populating
groups is most useful after the external data correlation has uploaded a custom
table to the Guardium system.

Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With a group selected, click the Populate From Query button to open the
Populate Group From Query Set Up panel.
3. From the Query menu, select the query to be run.
a. Depending on the type of group being populated, different fields will
appear. For most group types, the Fetch Member From Column menu will
appear.
b. For paired attribute groups (Object/Command, Object/Field, or Client
IP/DB User), two menus will appear: Choose Column for Attribute 1 and
Choose Column for Attribute 2.
c. Select the column (or columns) to be used to populate the group, and any
additional parameters for the query. The run-time parameters for the query
will then be added to the pane.
4. Select the Clear existing group members before importing box to delete
existing group content before importing new members.



5. Optional: Select a remote source (only available from a Central Manager).
6. Click Save to save the definition.
7. Click Run Once Now to run the query immediately, or click Modify Schedule
to set a schedule for the query in the future.

Populating a group from stored procedures


There are several different methods for populating command or object groups from
stored procedures. The auto-generated calling prox functionality in the Group
Builder allows you to analyze command or object groups for specific group
members and add those members into a new group.

About this task

The Group Builder can automatically populate command or object group types
through two ways:
v By analyzing stored procedure source code. To use this option, Guardium must
access the database on which the stored procedures have been defined, and the
stored procedures must not be stored in encrypted format.
v By analyzing stored procedures in database traffic that has been monitored and
logged by Guardium. To use this option, the Guardium appliance must be
inspecting the appropriate database streams, and logging the information (as
opposed to using ignore session or skip logging actions), and the analysis task
must run while the data is still on the unit (as opposed to, for example, after an
archive/purge operation).

There are two groups involved when populating a group from stored procedures:
v The receiving group is the one to which members will be added.
v The starting group which will be analyzed. This group must be an existing
commands or objects group. The search-and-add process is recursive. For
example, if the stored procedure named prox_one is added to the receiving
group, and prox_one is referenced in prox_two, prox_two will also be added to
the receiving group.

Note: Wildcards are not supported in the group members field for stored
procedures.

Procedure
1. Open the Group Builder by clicking Setup > Group Builder.
2. Choose a starting group to analyze that is either a commands or objects group
type.
3. With the starting group selected, click Auto Generated Calling Prox. You will
be presented with five options:
a. Using DB Sources: Populate a group by analyzing the stored procedure
definitions from one or more databases.
b. Using Database Dependencies: Populate a group of objects or a group of
qualified objects by analyzing Functions, Java classes, Packages, Procedures,
Synonyms, Tables, Triggers and/or Views.
c. Using Reverse Dependencies: Populate a group by computing a set of
objects used when starting from a set of objects.

Note: The Using Reverse Dependencies option is only available for Oracle.

d. Using Observed Procedures: Populate a group by analyzing the CREATE
PROCEDURE and ALTER PROCEDURE commands as they are observed in the
database traffic.
e. Generate Selected Object: Populate a group by reverse analysis of observed
stored procedures. Starting from a set of stored procedures, compute all the
tables that these procedures use (directly or indirectly).

Note: The Generate Selected Object option can only be used with object
group type.

Populating a group using database sources:

Before you begin

To use this option:


v You must know where the stored procedures of interest are defined.
v The sources must not be stored in encrypted format.
v You must have access to the stored procedure sources on those databases.

About this task

Guardium will analyze the stored procedure source code, on one or more database
servers. Select a group and then run the Auto Generated Calling Prox process to
scan your stored procedures. This process will check the selected group to see if
any of the objects in that group can be accessed or if any of the commands in that
group can be executed. Any matches will be added to a new group. To populate a
group using database sources:

Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.

Note: This option can only be used with commands or objects group types.
2. With the group selected, click Auto Generated Calling Prox, and select the
Using DB Sources option. This opens the Analyze Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The
selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters. Some fields only apply to certain
databases.
v For Sybase, MS SQL Server, and Informix, enter a database name to restrict
the operation to that database. If it is blank, all stored procedures in the
master database will be analyzed.
v For MySQL, Oracle or DB2 only, enter a schema name to restrict the
operation to databases owned by that schema. For MySQL only, the Schema
Owner is in the form user_name@host, where host can be a specific IP or it
can be a % to specify all hosts. To get all hosts, enter the schema name
followed by %.
v For MySQL, Oracle or DB2 only, enter a stored procedure name in Object
Name. Wildcard characters may be used. For example, if only interested in
the procedures beginning with the letters ABC, enter ABC% in the Object
Name box.
5. In the Source Detail Configuration section, do one of the following:



v Add members to an existing group by checking the Append check box, and
then selecting a group from the Existing Group Name menu.
v Add members to a new group by entering the new group name in New
Group Name.

Note: Do not include apostrophe characters in a group name.


6. Select Flatten Namespace to create member names using wildcard characters,
so that the group can be used for LIKE GROUP comparisons. For example, if
sp_1 is discovered, the member %sp_1% will be added to the group, and in a
LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, etc. would all
match.
7. Click Analyze Database to begin populating the group. The operation may
take an extended amount of time to complete.

Populating a group using database dependencies:

Use this option to populate groups based on Database Dependencies such as
Functions, Java classes, Packages, Procedures, Synonyms, Tables, Triggers and/or
Views. This option will only work with Oracle databases on object group types.
This option does not work on Command group types because dependency
information in the database is only related to objects.

About this task

When specifying the group type, keep in mind that only Object or Qualified Object
group types work with this option. A qualified object requires five value attributes:
server IP, instance, DB name, owner and object. This is also called a 5-tuple object.

An example of what a Qualified Objects group member looks like is
192.168.1.0+guardium+oracle+admin+financial object.
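
As a small illustration only (this is not Guardium code; the attribute values are
taken from the example above), a qualified-object member can be thought of as the
five attribute values joined by a plus sign:

# Build a 5-tuple (qualified object) member string of the form shown above.
attributes = ["192.168.1.0", "guardium", "oracle", "admin", "financial object"]
member = "+".join(attributes)
print(member)   # 192.168.1.0+guardium+oracle+admin+financial object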

Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With the objects or qualified objects group selected, click Auto Generated
Calling Prox, and select the Using Database Dependencies option. This opens
the Analyze Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The
selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters.
5. In the Source Detail Configuration section, do one of the following:
v Add members to an existing group by checking the Append box, and then
selecting a group from the Existing Group Name menu.
v Add members to a new group by entering the new group name in New
Group Name.

Note: Do not include apostrophe characters in a group name, and make sure
that the new group is fully qualified (includes five value attributes: server IP,
instance, DB name, owner and object).
6. Select Flatten namespace to create member names using wildcard characters, so
that the group can be used for LIKE GROUP comparisons. For example, if

sp_1 is discovered, the member %sp_1% will be added to the group, and in a
LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, etc. would all
match.
7. In the Include Types section, select database dependencies: Functions, Java
classes, Packages, Procedures, Synonyms, Tables, Triggers and/or Views.
8. Click Analyze Database to populate the group. You will be informed of the
results.

Populating a group using reverse dependencies:

Generate Selected Object populates the group through reverse analysis of observed
stored procedures.

About this task

These options from the Group auto-populate menu compute a set of objects used
when starting from a set of objects. For example, starting from a set of stored
procedures, compute all the tables that these procedures use (directly or indirectly).

Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.

Note: The Reverse Dependencies option is available only for Oracle.


2. With the group selected, click Auto Generated Calling Prox, and select the
Using Reverse Dependencies option. This opens the Analyze Stored
Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The
selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters.
5. In the Source Detail Configuration section, do one of the following:
v To add members to an existing group, select Append, and then select the
group from the Existing Group Name list.
v To add members to a new group, enter the new group name in New Group
Name.

Note: Do not include apostrophe characters in a group name.


6. Select Flatten namespace to create member names using wildcard characters, so
that the group can be used for LIKE GROUP comparisons. For example, if
sp_1 is discovered, the member %sp_1% will be added to the group, and in a
LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, etc. would all
match.
7. In the Include Types section, select database dependencies: Functions, Java
classes, Packages, Procedures, Synonyms, Tables, Triggers and/or Views.
8. Click Analyze Database to populate the group. You will be informed of the
results.

Populating a group using observed procedures:

Guardium will populate a group by inspecting all changes or additions to stored


procedures. This keeps the mapping information up-to-date through continuous
analysis of changes to stored procedures.



Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With the starting group selected, click Auto Generated Calling Prox, and select
the Using Observed Procedures option. This opens the Analyze Observed
Stored Procedures panel.
3. To edit an existing configuration, select it from the Source Details menu. To
create a new configuration, leave the selection on New.
4. In the Access Information section, select all of the database servers to be
analyzed. You can choose any combination of the check-boxes.
5. In the Source Detail Configuration section, do one of the following:
v Add members to an existing group by checking the Append box, and then
selecting a group from the Existing Group Name menu.
v Add members to a new group by entering the new group name in New
Group Name.

Note: Do not include apostrophe characters in a group name.


6. Select Flatten namespace to create member names using wildcard characters, so
that the group can be used for LIKE GROUP comparisons. For example, if
sp_1 is discovered, the member %sp_1% will be added to the group, and in a
LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, etc. would all
match.
7. Click Save to save the configuration.
8. Set a schedule for the group by doing one of the following:
v To run the query immediately and get results now, Click Run Once Now.
v To define a schedule for the operation, click Modify Schedule.

Populating a group using generate selected object:

The Generate Select Object option is a part of the Auto Generated Calling Prox
functionality that populates an objects group type through reverse analysis of
observed stored procedures.

About this task

Guardium will populate the group by inspecting all changes or additions to stored
procedures. This keeps the mapping information up-to-date through continuous
analysis of changes to stored procedures.

Procedure
1. Open the Group Builder by clicking Setup > Group Builder. Use the filter to
find the group you want to populate, or click Next and find the group from
the list of all groups.
2. With the starting group selected, click Auto Generated Calling Prox, and
select the Generate selected object option. This opens the Analyze Observed
Stored Procedures panel.
3. To edit an existing configuration, select it from the Source Details menu. To
create a new configuration, click New.
4. In the Access Information section, select all of the database servers to be
analyzed. You can choose any combination of the check-boxes.

5. In the Source Detail Configuration section, enter a name, and choose an
option from the Verb menu.
6. Do one of the following:
v Add members to an existing group by checking the Append box, and then
selecting a group from the Existing Group Name menu.
v Add members to a new group by entering the new group name in New
Group Name.

Note: Do not include apostrophe characters in a group name.


7. Select Flatten namespace to create member names using wildcard characters,
so that the group can be used for LIKE GROUP comparisons. For example, if
sp_1 is discovered, the member %sp_1% will be added to the group, and in a
LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, etc. would
all match.
8. Click Save to save the configuration.
9. Set a schedule for the group by doing one of the following:
v To run the query immediately and get results now, Click Run Once Now.
v To define a schedule for the operation, click Modify Schedule.

Security Roles
Security roles are used to grant access to data (groups, queries, reports, etc.) and to
grant access to applications (Group Builder, Report Builder, Policy Builder, CAS,
Security Assessments, etc).

By default, when a component is initially defined, only the owner (the person who
defined it) and the admin user (who has special privileges) are allowed to access
and modify that component.

You can allow other users to access the components you define by assigning
security roles. For example, if you assign a security role named DBA to an audit
process, all users assigned the DBA role will be able to access that audit process.

Note: In order to configure LDAP user import, the accessmgr user must have the
privilege to run the Group Builder. In certain situations, when changes are made to
role privileges, accessmgr's privilege to run the Group Builder can be taken away. This
results in an inability to save or successfully run an LDAP user import. Go to the
access management portal and select Role Permissions. Choose the Group Builder
application and make sure that there is a checkmark in the all roles box or a
checkmark in the accessmgr box.

Assign Security Roles


1. Open or select the item to which you want to assign one or more security roles
(a policy or report definition, for example).
2. Click Roles.
3. Check all of the roles you want to assign from the Assign Security Roles list.
You can only assign roles that are assigned to your account.
4. Click Apply.



Define a new Security Role
By default, only the special accessmgr user is allowed to create or remove security
roles.
1. Login as accessmgr and open the User Role Browser by clicking Access >
Access Management > User Role Browser.
2. At the end of the role browser, click Add Role.
3. In the Role Form panel, enter a new Role Name and click Add Role.

Remove a Security Role


By default, only the special accessmgr user is allowed to create or remove security
roles. To remove a role assigned to a component, see Assign security roles to a
component.
1. Login as accessmgr and open the User Role Browser by clicking Access >
Access Management > User Role Browser.
2. Click Delete for any role, and then click Confirm Deletion.

Notifications
Use the Alerter and Alert Builder to create notifications. When email or other
notifications are required for alerting actions, follow this procedure for each type of
notification to be defined.

Alerter configuration
1. Before you choose alerting actions, you must configure the email SMTP
settings in the Alerter. For a quick, independent check of SMTP connectivity, see
the sketch after these steps.
2. Open the Alerter by clicking Protect > Database Intrusion Detection > Alerter.
3. Fill out the SMTP and/or SNMP information.
4. After filling out each section, click Test Connection, and verify that the
connection is working. You will receive a message stating the connection is
unreachable if the connection is not working.
5. Click Apply to save the configuration.
6. At a minimum, IP Address/Host name, port, and return email address must be
specified.
7. Select Mail from the Notification Type menu. If the Severity of the message is
HIGH, the Urgent flag is set.
8. Select a user (which can be an individual or group) from the Alert Receiver
list. Additional receivers for real-time email notification are Invoker (the user
that initiated the actual SQL command that caused the trigger of the policy)
and Owner (the owner/s of the database). The Invoker and Owner are
identified by retrieving user IDs (IP-based) configured by using the Guardium
APIs.
9. Click Add.
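
The following is a minimal illustrative sketch (not part of Guardium) for checking,
from another host, that the SMTP relay you plan to enter in the Alerter accepts
connections on the configured port; the host name and port are placeholders.

import smtplib

SMTP_HOST = "smtp.example.com"   # the IP address/host name you will enter in the Alerter
SMTP_PORT = 25

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as smtp:
    code, banner = smtp.noop()   # simple round trip; 250 means the server is responding
    print(code, banner)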

Build an alert
1. After configuring the Alerter, open the Alert Builder by clicking Protect >
Database Intrusion Detection > Alert Builder.
2. Fill out the information in the Settings, Alert Definition, Alert Threshold, and
Notification sections and click Apply.
3. Choose who will receive the notifications by clicking Add Receiver... and
choosing a user.

How to create a real-time alert
Send a real-time alert to the database administrator whenever there are more than
three failed logins for the same user within five-minutes.

About this task

Generate real-time security alerts whenever suspicious activity is detected or access
policies are violated.

Follow these steps:


1. Create a policy
2. Add rules to the policy
3. Install the policy
4. Setup a real-time alert when the policy is enacted

Prerequisites

Configure SMTP in the Alerter. Open the Alerter by clicking Protect > Database
Intrusion Detection > Alerter, and then fill out the SMTP information.

Note: Policy violations can also be seen as a report in Incident Management. See
Policies for complete information.

Procedure
1. Create a policy.
a. Open the Policy Builder by clicking Setup > Tools and Views > Policy
Builder.
b. Click New, or modify an existing policy by selecting the policy from the
Policy Finder and clicking Modify.
c. Fill out the required information and click Apply to save the policy.
2. Add rules to the policy.
a. After saving the policy, click Edit Rules to see the existing policy rules.
b. Click Add Rules... and then you are presented with five rule options.
c. Choose Add Exception Rule and fill out the required information.
The Exception Rule Definition screen begins with the following items:



v Description - Enter a short, descriptive name for the rule.
v Category - The category will be logged with violations, and is used for
grouping and reporting purposes. If nothing is entered, the default for the
policy will be used.
v Classification - (optional) Enter a classification. Like the category, these
are logged with exceptions and can be used for grouping and reporting
purposes
v Severity - Select a severity code from the menu: INFO, LOW, NONE,
MED, or HIGH (the default is INFO).
d. Use the remaining fields to specify how to match the rule - where to search,
what to search for, who to search for, and when to search.
e. Enter a period “.” in the DB User field to count each individual value
separately.
f. From the Excpt. Type (exception type) menu, select LOGIN_FAILED.
g. Use the Minimum Count to set the minimum number of times the rule
must be matched before the action will be triggered. For this example,
choose 1. The count of times the rule has been met will be reset each time
the action is triggered or when the reset interval expires.
h. Use the Reset Interval to set the number of minutes after which the rule
counter will be reset to zero. The counter is also reset to zero each time that
the rule action is triggered. For this example, choose 5. (A sketch of this
counting behavior follows the procedure.)

i. Check the Cont. to next rule check box to continue testing rules once this
rule is satisfied and its action is triggered. If this is not selected, no
additional rules will be tested when this rule is satisfied.
j. Check the Rec. Vals. check box to indicate that when the rule action is
triggered, the complete SQL statement causing that event will be logged and
available in the policy violation report. If not marked, the SQL String
attribute will be empty.
3. Add an action when the rule is triggered.
a. From the Actions section of the Exception Rule Definition screen, click Add
Action.
b. Select an option from the Action menu and click Apply. For this example,
choose ALERT PER MATCH to get a notification every time the rule is
enacted.
c. Select an option from the Notification Type menu. You must configure the
Alerter for mail or SNMP notification types.
d. Add an alert receiver, and click Apply to save the action.
4. Install the policy.
a. Click Setup > Tools and Views > Policy Installation.
b. Find the policy from the Policy Installer menu, select an installation action,
and click Modify Schedule or Run Once Now. Your policy is now
installed. Your alert receiver will receive real-time notifications when the
policy rules are enacted.

Custom Alerting Class Administration


Use a custom alert class to send alerts to a custom recipient. Upload the custom
class, then use the Alert Builder to designate the custom class as an alert
notification receiver.
v Before you can use a custom class, you must upload it onto the Guardium
system. Click Comply > Custom Classes > Alerts > Upload to upload a custom
alerting class. Click Browse to select a file, then Apply to save.
v After uploading the custom class, use it in an alert with the Alert Builder. Open
the Alert Builder by clicking Manage > Database Intrusion Detection > Alert
Builder. Fill out the required information, select CUSTM from the Notification
Type menu, and click Save.

Predefined Alerts
Table describing the predefined alerts found in the Alert Builder.

Guardium comes with a set of predefined alerts that can be found in the Alert
Builder. Open the Alert Builder by clicking Protect > Database Intrusion Detection
> Alert Builder. When you open the Alert Builder, you are presented with a list of
all existing alerts in the Alert Finder. Select an alert from the finder and click
Modify to edit it.

In the Modify Alert screen, modify any part of the alert, such as receivers or
threshold.



You cannot modify the default queries that the alerts are based on. If you want to
modify a query, click the Edit this Query icon for any query to open the Query
Builder. Once in the builder, clone any query, and then modify the clone to suit
your needs.

After making changes to an alert, click Apply to save them.

The following table describes all predefined alerts.


Table 24. Predefined Alerts

Active S-TAPs Changed
Checks for changes to Active S-TAP inspection engines done during the last accumulation interval. The alert will trigger if at least one inspection engine has been changed during the period. By default, the alert checks every 1/2 hour and checks the last hour.

Aggregation/Archive Errors
Alert once a day on all aggregation or archive tasks that did not complete successfully.

CAS Instance Config Changes
Alert once a day on any CAS instance configuration changes.

CAS Templates Changes
Alert once a day on any CAS template configuration changes.

Data Source Changes
Alert once a day on any data source definition changes.

Database disk space
Alert every 10 minutes if the internal database is more than 80% filled. See the Self Monitoring help topic for more information on Disk Space (% full) and the Guardium Nanny process.

Enterprise No Traffic
Runs only on Central Manager systems. It is based on a query similar to the query of the No Traffic alert and retrieves the records with a timestamp between X and Y, where X is a query parameter and Y is the query from-date generated by the alert mechanism based on the accumulation interval (the same way the existing No Traffic alert works).

Enterprise S-TAPs changed
This alert runs only on Central Manager systems.

Failed Logins to Guardium
Alert every 10 minutes if there have been more than 5 failed login attempts on the Guardium appliance.

Guardium - Add/Remove Users
Alert once a day if any Guardium users have been added or removed.

Guardium - Credential Activity
Alert once a day if there have been any Guardium credential changes, including LDAP configuration changes.

Inactive S-TAPs Since
Alert once an hour on all S-TAPs that have not been heard from.

Inspection Engines and S-TAP
Alert once a day on any activity related to inspection engine and S-TAP configuration.

No Traffic
Indicates whether there is no traffic from specific database servers. This alert triggers when there is no traffic collected from a server from which the Guardium system was collecting traffic at some point during the last 48 hours, and no traffic within the period defined in the accumulation interval. For example, if the accumulation interval is 60 minutes, the alert sends an email if there was no traffic from a specific database server in the last hour but there was some traffic in the last 48 hours. The alert sends an email (by default) only every 24 hours. Parameters such as accumulation interval, notification interval, and run frequency can be customized. Parameters such as Threshold, Per Line, operator, and query should not be changed, as changes to these parameters will cause the alert not to work properly. The No Traffic query should not be cloned.

No Traffic by Server/Protocol
Similar to the regular No Traffic alert, with the following differences: the alert is per Service Name/Net Protocol and reports per line, and there is an additional parameter, Active Traffic Interval, that determines when the last request from each server was received. The alert triggers when there was no traffic during the alert interval from a server/net protocol combination, but there was traffic within the Active Traffic Interval for that combination. This is unlike the regular No Traffic alert, which triggers if there was no traffic during the alert interval but there was traffic in the previous 48 hours per server IP.

Policy Changes Alert
Alert once a day if there have been any security policy changes.

Scheduled Job Exceptions
Alert every 10 minutes on any scheduled job exception (including assessment jobs).

Scheduling
The general purpose scheduler is used to schedule many different types of tasks
(archiving, aggregation, workflow automation, etc.).

Depending on the type of task being performed, not all of the features described
here may be available - for example, the schedules for some types of tasks can be
paused, while others cannot be (they can only be stopped or started).

Note: Be aware of scheduling anomalies that can occur when scheduling tasks
during Daylight Savings Time.

Define or Modify a Schedule


1. In a task (for example, Audit Process Builder), click Define Schedule or
Modify Schedule to open the Schedule Definition panel.
2. Fill in the Start Time. The default is 12 a.m. (Midnight).
3. Optionally, to run the task more than once a day:



v Select a value from the Restart list (every hour up to every 12 hours). The
default is Run only once, meaning the task will not be restarted during the
day.
v Select a value from the Repeat list (every minute up to every 59 minutes).
The default is Do not repeat.
4. From the Schedule by list, select one of the following:
v Day/Week to define a schedule based on one or more days of the week
(Monday, Tuesday, Wednesday, etc.).
v Month to define a schedule based on one or more days of the month, for
every month or specific months.
If you selected Day/Week from the Schedule by list, mark each day of the
week you want the task run, or click Every day to select all days (or to clear all
days if they are already selected).
OR
If you selected Month from the Schedule by list, do one of the following:
v To select a numbered day (the 15th, for example):
– Select the Day button.
– Select a day: 1-31, depending on the month selected.
– Select Every month, or one or more specific months.
v To select a weekday occurrence within the month (the first Monday, for
example):
– Select the button.
– Select a week relative to the start of the month: First, Second, Third, etc.
– Select a weekday: Sunday, Monday, Tuesday, etc.
– Select either Every month, or one or more specific months.
5. From the Schedule Start Time list, select the hour and minute at which you
want to run the task. If a time is chosen earlier than NOW, the Scheduler Start
Time will revert to NOW.
6. Click Apply.

Pause a Schedule

Note: Not all types of scheduled tasks provide a pause option.
1. Click Pause.
2. Confirm the action.

Remove a Schedule
After a schedule has been defined, a Remove button appears in the Schedule
Definition panel.
1. Click Define Schedule or Modify Schedule to open the Schedule Definition
panel.
2. Click the Delete button.

Aliases
Create synonyms for a data value or object to be used in reports or queries.

Aliases Overview
An alias is used to display a meaningful or user-friendly name for a data value.

For example, Financial Server might be defined as an alias for IP address
192.168.2.18. Once an alias has been defined, users can display report results,
formulate queries, and enter parameter values using the alias instead of the data
value.

Aliases can be defined in a number of ways:


v Through the IP-to-Hostname Aliasing tool - use this tool to generate aliases for
discovered client and server IPs.
Click Protect > Database Intrusion Detection > IP-to-Hostname Aliasing to
open the IP-to-Hostname Aliasing tool.
v Through the Alias Builder – use this method to define aliases manually.
Open the Alias Builder by clicking Comply > Tools and Views > Alias Builder.
v Through a query.
v While using the Group Builder, with the Alias Quick Definition.

Note: Alias changes made on the Central Manager or managed units will not be
available on other systems until either the GUI is restarted or alias changes are
made through their GUI.

IP-to-Hostname Aliasing

One of the more common applications of aliases is to use them as synonyms for IP
addresses. Use this tool to schedule the discovery of client and server IPs and
generate aliases for them.
1. Open the IP-to-Hostname Aliasing tool by clicking Protect > Database
Intrusion Detection > IP-to-Hostname Aliasing.
2. Check the Generate Hostname Aliases for Client and Server IPs (when
available) check box.
3. Check the Update existing Hostname Aliases if rediscovered check box if you
want the tool to continually look for and update hostname aliases.
4. Click Apply to save your configuration, then schedule the operation.
v Click Run Once Now to start the tool immediately.
v Click Define Schedule... to schedule the tool in the future.
v Click Pause to pause the generation of client and server IP aliases.

Alias Builder
Use this method to manually create an alias.
1. Open the Alias Builder by clicking Setup > Tools and Views > Alias Builder.
2. Select the attribute type for which you want to define aliases.
3. Filter your search on that attribute type using the Value and Alias fields and
click Search.
4. If any results match your search, they will display in the value and alias table.
Click Apply for the search results, or add a new alias by specifying a Value
and Alias name, then clicking Add.



5. Add a comment to an alias by clicking the Item Comments icon. This can be
helpful for quickly referencing what an alias refers to in the future.

Define Aliases Using a Query


Use this method to create aliases from a query. When a custom table has been
uploaded to Guardium, that table can be used to map aliases to specific values.
1. Open the Alias Builder by clicking Setup > Tools and Views > Alias Builder.
2. Select the attribute type for which you want to define aliases from the Alias
Finder and click Populate from Query to open the Builder Alias From Query
Set Up panel.
3. Fill out the required information and click Save to save the alias.
v Select the query to be run from the Query menu.
v Choose a value for both Choose Column for Value Column and Choose
Column for Alias Column.
v After selecting column values, more fields display that you must fill in (From
Date, To Date, Remote Source, and any additional parameters for the selected
query).
v Check the Clear existing group members before Importing check box to
delete the existing content of the group before populating from query.
v Click Save to save.
v With the query saved, the Scheduling buttons become active. Click Modify
Schedule to run the query in the future, or click Run Once Now to run it
immediately.

Alias Quick Definition from Group Builder

Use this method to create an alias for a group on the fly while creating or
populating a group.
1. Open the Group Builder by clicking Setup > Group Builder. Select any group
from the list, and click Modify.
2. Click Aliases... to open the Alias Quick Definition window. Type in an alias for
any group(s), and save the alias by clicking Apply.

GuardAPIs for Aliases

Use these GuardAPI commands to create, update and delete alias functions:
v grdapi create_alias
v grdapi update_alias
v grdapi delete_alias
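
For illustration only, the following sketch shows how an alias for the Financial Server example from the overview might be created from the CLI. The parameter names used here (attributeLabel, value, and alias) are assumptions made for this example; confirm the exact signature in the GuardAPI reference before use.

grdapi create_alias attributeLabel="Server IP" value=192.168.2.18 alias="Financial Server"

The update_alias and delete_alias commands are then used to modify or remove an alias that was created this way.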

Dates and Timestamps


Use a calendar tool to select an exact date, and a relative date picker to select a
date that is relative to the current time.

There are two tools that are used to populate date fields: a calendar tool to select
an exact date, and a relative date picker to select a date that is relative to the
current time (now -1 day, for example). In addition, exact or relative dates can be
entered manually.

Be aware that when selecting or entering dates, the date on the system on which
you are running your browser may not be the same as the date on the Guardium
appliance to which you are connected.

Timestamps in Queries

Caution needs to be taken when including Timestamps in queries.

First, be aware of the distinction between a timestamp (lowercase t) and a
Timestamp (uppercase T).
v A timestamp (lowercase t) is a data type containing a combined date-and-time
value, which when printed displays in the format yyyy-mm-dd hh:mm:ss (e.g.,
2005-07-17 15:40:25). When creating or editing a query, most attributes with a
timestamp data type display with a clock icon in the Entity List panel.
v A Timestamp (uppercase T) is an attribute defined in many entity types. It
usually contains the time that the entity was last updated.

Including a Timestamp attribute value in a query will produce a row for every
value of the Timestamp. This may produce an excessive amount of output. To get
around this, use the count aggregator when including the Timestamp in a query,
and then drill down on a report row, to view the individual Timestamp values for
the items included in that row only, in a drill-down report. See Aggregate Fields in
Queries.

When displaying a Timestamp value in a query that contains Timestamp attributes
in multiple entities, be careful to select the Timestamp attribute from the
appropriate entity type for the report. For example, if the query will display
information from both the Client/Server and the Session entities, with the Session
selected as the main entity, you can display a Timestamp attribute from one or
both entities. If you include the Client/Server Timestamp, you will see the same
value printed for every Session for a given client-server connection – it will always
be the time at which that particular Client/Server was last updated. If you include
the Timestamp attribute from the Session, you will see the time that each Session
listed was last updated.

Tip: If your report displays times that are all the same when you expect them to be
different, you have probably included a Timestamp attribute from an entity too
high in the entity hierarchy for the level of detail you want on the report.

Select an Exact Date from Calendar

To use the Calendar Window to select an exact date:


1. Click the Calendar button for the field where you want to insert a date. This
opens a calendar in a separate window.
v Click the arrow buttons to display the previous or next month in the
calendar window.
2. Click on any date to select that day. The calendar window will close and the
selected date will be inserted into the date field next to the calendar tool that
was clicked.

Note: The default time for a date selected using the calendar is always 00:00:00
(the start of the day). To specify any other time of day, type over this value,
entering the desired time in 24-hour format: hh:mm:ss, where hh is the hour of
the day (0-23), and mm and ss are minutes and seconds respectively (both
0-59).
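For example, to select 5:30 PM on the chosen day, replace the default value
with 17:30:00.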



Enter an Exact Date Manually
1. Click the field where you want to enter the date and enter the date in
yyyy-mm-dd format, where:
v yyyy is optional and may be any positive integer value. If omitted, yyyy
defaults to the current year. If a one- or two-digit year is entered, the century
portion of the date defaults to 19.
v mm is the month (1-12)
v dd is the day of the month (1 to 28, 29, 30, or 31, depending on the month)
2. If no time is entered, the time defaults to 00:00:00 (the start of the day). To
specify any other time of day, type over this value, entering the desired time in
24-hour format: hh:mm:ss, where hh is the hour of the day (0-23), and mm and
ss are minutes and seconds respectively (both 0-59).

Select a Relative Date from Date Picker

Rather than specify an exact date, it is often more convenient to specify dates
relative to either the current date (now) or some other date (the first Monday, for
example). For example, to always include information from the previous seven
days in a query, it’s more convenient to define relative dates (e.g., start = now
minus seven days and end = now). The Relative Date Picker tool can be used to
select a relative date for many types of tasks.
1. Click the Relative Date Picker button next to any field where a relative date is
allowed. This opens the Relative Date Picker window.
2. Select Now, Start, or End from the list. Regardless of your choice, the display
changes to provide for additional selections.
3. From the middle list, select this, last, or previous, which is relative to the unit
(day, week, month, or day of the week selected in the next list) as follows:
v This is the current unit
v Last is the current unit minus one
v Previous is current unit minus two
4. Select the day, week, month, or a specific day: Monday-Friday.
5. Click the Accept button when you are done. The relative date will be inserted
into the field next to the Relative Date Picker button that was clicked.

Enter a Relative Date Manually

To enter a relative date manually, follow one of the following procedures. The
keywords are not case-sensitive, but each component must be separated from the
next by one or more spaces.

There are three general formats you can use to enter a relative date:

NOW minus a specified number of minutes, hours, days, weeks, or months

OR

The Start or End of the current, last or previous day, week, or month

OR

The Past or Previous day of the week (Sunday, Monday, Tuesday, etc.)

Relative to NOW
1. Click in the field where you want to enter the relative date.
2. Enter the keyword NOW.
3. Enter a negative integer specifying the relative number of hours, days, weeks,
or months (no space is allowed between the minus sign and the integer).
4. Enter a keyword for the units used: HOUR, DAY, WEEK, or MONTH. Be aware
that the plural (hours, days, etc.) is not allowed. Example: now -14 day

Relative to a Day, Week or Month


1. Click in the field where you want to enter the relative date.
2. Enter the keywords START OF or END OF.
3. Enter THIS or LAST, followed by DAY, WEEK, or MONTH. Example: end of
last week

Relative to a Day of the Week


1. Click in the field where you want to enter the relative date.
2. Enter the keywords START OF or END OF.
3. Enter LAST or PREVIOUS, followed by SUNDAY, MONDAY, TUESDAY,
WEDNESDAY, THURSDAY, FRIDAY, or SATURDAY. Example: start of previous
Tuesday

Time Periods
Use the Time Period Builder to create time periods that can be used for policy
rules and query conditions.

When monitoring database activity, use time periods to specify when you want to
monitor. Use the Time Period Builder to create new time periods or modify
existing ones.

Add a Time Period


1. Navigate to the Time Period Builder by clicking Setup > Tools and Views >
Time Period Builder.
2. Expand the Add Time Period pane by clicking the + button.
3. Fill in the information and click Add to add the time period.
v Do not include apostrophe characters in the Time Period Description.
v Check the Contiguous check box to define a single time period that may
span multiple days. A workweek is defined as contiguous, whereas a
workday is defined as non-contiguous.

Remove a Time Period


1. Navigate to the Time Period Builder by clicking Setup > Tools and Views >
Time Period Builder.
2. Check the check box for the time period you want to remove, and click Delete.

Time Periods
Policy rules and query conditions can test for events that occur (or not) during
user-defined time periods.



There is a set of pre-defined time periods (7x24, After Hours Work, Before Hours
Work, Evening, Regular Work Day, Saturday, Sunday, and Week End), and users
can define their own.

Add a Time Period


1. Navigate to the Time Period panel:
v Users: Monitor/Audit > Build Reports > Time Period builder.
v Administrators: Tools > Config & Control > Time Period Builder.
2. Expand the Add Time Period pane by clicking the + button.
3. Enter a unique description for the period in the Time Period Description box.
Do not include apostrophe characters in the description.
4. Optionally mark the Contiguous box to define a single time period that may
span multiple days. Leave this box cleared to define a fixed time period on
one or more days.
Example: Contiguous vs. Non-Contiguous Time Periods
The following two time periods both begin 09:00 Monday and end 17:00
Friday:
v Workweek is defined Contiguous.
v Workday is defined Non-Contiguous.
The first time period, Workweek, defines a single 104-hour period beginning
at 9 AM on Monday and ending at 5 PM on Friday, whereas the second time
period, Workday, defines five separate eight-hour time periods (9 AM – 5 PM)
on five consecutive days (Monday – Friday).
5. Enter a beginning time in hours (00-24) and minutes (00-59) in the Hour From
box.
6. Enter an ending time in hours (00-24) and minutes (00-59) in the Hour To box.
7. Select a beginning day of the week in the Weekday From box.
8. Select an ending day of the week in the Weekday To box.
9. Optionally click the Comments button to add comments (see Commenting).
10. Click the Add button.

Remove a Time Period


1. Navigate to the Time Period panel:
v Users: Monitor/Audit > Build Reports > Time Period builder.
v Administrators: Tools > Config & Control > Time Period Builder.
2. Mark the Select checkbox for the time period you want to remove.
3. Click the Delete button. You will be prompted to confirm the deletion. Note
that you cannot delete a time period that is used by an existing policy rule.

Comments
Comments apply to definitions and to workflow process results.

Comments can be added or viewed in several places throughout the UI. You can
add a comment to a group or alias for reference purposes, or add a comment to
a report to ease auditing requirements. For example, an auditor may want to know
why a configuration change was made on a certain date. Use a comment to easily
reference the reason why the change was made.

Comments apply to definitions (groups, aliases, reports, policies), and to workflow
process results. You can add multiple comments to a component, and you can add
comments to comments, but you cannot modify or delete existing comments.

There are two different kinds of comments:


v Comments Entities are stored on the Central Manager, and will be available
within that Central management environment, given the usual constraints
regarding roles and permissions.
v Local Comments Entities are defined on a single unit, and remain local to that
unit. Local Comments from the standalone or managed unit are not stored on
the Central Manager.

Add or View Comments


1. To view comments, open the User Comments window by clicking Comply >
Reports > User Comments.
2. Throughout the UI, there are different ways to add a comment to an entity or
report.
v Add a comment to a group by modifying the group, and clicking Add
Comments from the Manage Members for Selected Group screen.
v Add a comment to an alias by opening the Alias Builder and clicking the
Item Comments icon. Open the Alias Builder by clicking Comply > Tools
and Views > Alias Builder.

Report Comments

View a report of all user comments by clicking Comply > Reports > User
Comments.
v The Local Comments entity is used in a Central Manager environment only.
Local comments remain local to the system on which they were defined, and are
not stored on the Central Manager.
v The Comments entity contains comments that are stored on the Central
Manager.

How to install patches


Install a single patch or multiple patches as a background process.

About this task

This topic provides visibility and control over patch installation, status, and
history.

See Central Management for more information.

This how-to topic uses a combination of commands from the CLI and choices from
the GUI to help you install the latest Guardium patch.

Note: The Guardium system must be rebooted after installing a patch.

Follow these steps from the Guardium system that is designated and configured as
the Central Manager:
1. Back up the system profile, using the CLI command store backup profile.



2. Enter the CLI command store system patch install to install a single patch
or multiple patches to the Central Manager from a network location.
3. Click Setup > Tools and Views > Patch Distribution to move patches from the
CM to managed units.

Procedure
1. Back up the system profile.
Using an SSH client, log in to the IBM Security Guardium Central Manager as
the CLI user.
Enter the following command: store backup profile
The following dialog will appear.
Do you want to setup for automatic recovery? (Y/n)
Enter the patch backup destination host: Enter
Other related CLI commands for this step:
CLI> show backup profile
patch backup flag is 1 patch backup automatic recovery flag is 1 patch bac
Use the CLI command if the patch installation failed, the patch revert failed, and
the automatic restore failed or was disabled.
This procedure gets the pre-patch backup file and restores it on the system.
If the pre-patch backup file is currently located on the system, enter the file
name. Otherwise, the pre-patch backup profile information is used to get the file.
2. Install the patch(es) to the Central Manager
Enter the following command:
CLI>store system patch install [sys | ftp | scp | cd ] <date><time>
The ftp and scp options copy a compressed patch file from a network location
to the Guardium system.
Note that a compressed patch file may contain multiple patches, but only one
patch can be installed at a time. To install more than one patch, choose all the
patches that need to be installed, separated by commas. Internally the CLI
submits requests for each patch on the list (in the order specified by the user)
with the first patch taking the request time provided by the user and each
subsequent patch three minutes after the previous one. In addition, CLI will
check to see if the specified patch(es) are already requested and will not allow
duplicate requests.
The option (sys) is for use when installing a second or subsequent patch from a
compressed file that has been copied to the Guardium system by using this
command previously.
The option (cd) is for use in installing the patch from a DVD disk. To display a
complete list of applied patches, see the Installed Patches report on the
Guardium Monitor tab of the administrator portal. There is also an Available
Patches report on this same Guardium Monitor tab.
Syntax
store system patch install <type> <date> <time>
<type> is the installation type, sys | scp | ftp | cd
<date> and <time> are the patch installation request time, date is formatted as
YYYY-mm-dd, and time is formatted as hh:mm:ss
If no date and time is entered or if “now” is entered, the installation request
time is NOW.
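As an illustrative request (the date and time are placeholders), the following
command schedules installation of a patch file that was already copied to the
system by a previous transfer:
CLI> store system patch install sys 2015-10-31 23:00:00
To install immediately instead, enter now in place of the date and time, or omit
them.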

Table 25. Parameters

sys
Use this option to apply a second or subsequent patch from a patch file that has been copied to the IBM Guardium system by a previous store system patch execution. The patch is installed from /var/log/guard/patches.

ftp or scp
To install a patch from a compressed patch file located somewhere on the network, use the ftp or scp option, and respond to the prompts:

Please enter the following information for file transfer:
Host to import patch from: ____________
User on (host name): ________________
Full path to the patch, including name (file name may use wildcard *): _______________
Password: ________________ (LDAP password)
Enter the scp/ftp port if you need to use a special port, else just press Enter key to continue:

The file transfer process can take a while to complete. Leave the terminal open and do not answer any questions until the transfer is complete.

Starting transfer, please wait.
The file transfer is complete.
The backup profile is not set for saving the backup file when patch installation failed.
If you want to save the backup file, please answer NO to the question and run CLI command store backup profile to set up the parameters.
Do you want to continue (yes or no)? yes
List the files in the patches directory:
1. (name of file)
Please choose patches to install (1-1, or multiple numbers separated by ",", or q to quit): 1
Install item 1
Patch has been submitted, and will be installed according to the request time, please check installed patches report or CLI (show system patch installed).
Please don't forget to remove your media if necessary.

cd
To install a patch from a DVD, insert the DVD into the IBM Guardium DVD ROM drive before executing this command. A list of patches contained on the DVD will be displayed.



Note: The store system patch install command will not delete the patch file
from the IBM Guardium appliance after the install. While there is no real need
to remove the patch file, because the same patches can be reinstalled over existing
patches and keeping patch files around can aid in analyzing various problems, a
user may remove patch files by hand or use the CLI command diag. (Note that the
CLI command diag is restricted to certain users and roles.)
To delete a patch install request, use the CLI command delete scheduled-patch.
Other related CLI commands for this step
show system patch available
Displays the available patches.
show system patch installed
Displays the already installed patches and patches scheduled to be
installed—showing date/time and the install status.
fileserver
Use this command to start an HTTP-based (as opposed to HTTPS) file
server running on the Guardium appliance. This facility is intended to ease the
task of uploading patches to the unit, or downloading debugging information
from the unit. Each time this facility starts, it deletes any files in the directory
to which it uploads patches.

Note: Any operation that generates a file, that the fileserver will access, should
finish before the fileserver is started (so that the file is available for the
fileserver).
Example of fileserver
To start the file server, enter the fileserver command: CLI> fileserver
Starting the file server. You can find it at http://(name of unit)
Press ENTER to stop the file server.
Open the fileserver in a browser window, and do one of the following:
v To upload a patch, click Upload a patch and follow the directions.
v To download log data, click Sqlguard logs, go to the file you want, right-click
on it, and download as you would any other file.
When you are done, return to the CLI session and press Enter to terminate the
session.
3. Using the UI, move the patch(es) from Central Manager to managed units
Central Patch Management
a. Click Setup > Tools and Views > Patch Distribution.
The Patch Distribution button opens a new screen that displays the available
patches with their dependencies and allows you to select a patch and install it
on all selected units. The list of available patches is constructed by evaluating
the available patches, the patches currently installed on each of the selected
units, and the dependency list of the available patches. Patches that are
available but not installable (because a dependent patch is missing) are grayed
out in the list and cannot be selected. Only one patch can be selected and
installed at a time. Once a patch is selected and the install button is clicked, a
command is sent to all selected units to install that patch; this installation
process happens in the background.

b. Click on the Central Management link under Central Management.
c. Click on Patch Distribution.

d. Click Patch Installation Status.
The Patch Installation Status screen displays, for each unit, failed
installations and discrepancies - situations such as a patch being installed
on only some of the units, regardless of whether it failed on the other units
or was never installed.
e. Where to go from here

Note: The Guardium system must be rebooted after installing a patch.


The patched systems are now ready to be used.

Support Maintenance
The Support Maintenance feature is password protected and can be used only as
directed by Technical Support. Contact Technical Support if you require more
information.



Chapter 5. Product integration
You can integrate IBM Guardium with other products.

Configure BIG-IP Application Security Manager (ASM) to communicate with the Guardium system
Use the Big-IP ASM (from F5 Networks) together with Guardium’s real-time
database activity monitoring to solve the problem of identity propagation between
web application and database application server layers.

This solution uses Google’s protocol buffers (.protobuf) as the wire format between
BIG-IP ASM and the Guardium system.

Information about configuring the integration between Big-IP ASM and Guardium
real-time database activity monitoring is provided at the F5 website:
http://www.f5.com/pdf/deployment-guides/ibm-guardium-asm-dg.pdf.

Guardium Integration with BigInsights


IBM BigInsights®, when configured, can send audit log events to the IBM Guardium
application.

Once the BigInsights events are in the IBM Guardium repository, other Guardium
features will be available (for example, workflow to email and track report signoff,
alerting, reporting, etc.)

IBM BigInsights and Guardium use an open API (Universal Feed) format to
communicate audit data. Universal Feed is based on Google's standard data
interchange format, Protocol Buffers (protobuf).

IBM Hadoop Offering


BigInsights Hadoop uses the Universal Feed format and four types of Hadoop log
files: HDFS, MapReduce, Hadoop RPC, and Oozie.

Note: Guardium does intercept Hadoop HDFS when clients are local and the
setting, dfs.client.read.shortcircuit, is set to true.

Integration of Hadoop activity into IBM Guardium


With Hadoop environments, an S-TAP (or in the case of IBM BigInsights, a proxy)
is enabled on the NameNode. Event information, such as session ID, user
information, and action, are sent to the IBM Guardium collector for analysis and
reporting.

The following events are monitored:


v Session and user information
v HDFS operations – commands (cat, tail, chmod, chown, expunge, etc.), files,
permissions
v MapReduce jobs – job, operation, target, permissions

v Exceptions, such as authorization failures
v Hive/HBASE queries using the Thrift protocol (Cloudera Hadoop only) – alter,
count, create, drop, get, put, list, etc.
v Oozie jobs (IBM BigInsights only)

Limitations of using a proxy

Because Hadoop does not log exceptions to its logs, there is no way to send
exceptions to Guardium. If you require exception reporting, you must use an
S-TAP. There is no support for monitoring Hive queries, although you can see the
underlying MapReduce or HDFS messages from Hive. Additionally, if you require
the names of the HBase tables being created, you must use an S-TAP.

Follow these steps to configure BigInsights


1. Enable Guardium integration and define your Guardium server in
guardiumproxy.properties.
2. Set up the log4j.properties files and synch them across the BigInsights cluster.
3. Restart Hadoop (this starts the GuardiumProxy).

A single agent (GuardiumProxy) on a BigInsights cluster communicates (over
one connection) with a Guardium server outside the cluster. The GuardiumProxy
on the BigInsights cluster gathers audit log events from all nodes, prepares
protobuf messages, and sends them to the Guardium server on the fly, taking care
of the handshake, pings, and reconnects after a connection failure or timeout.

Logging events are sent over a socket connection. Port 16015 is used for this socket
connection (16016 is the default Guardium port).

Once the BigInsights logging events reside in a Guardium database, Guardium
reports can be created.

Configuration on BigInsights
1. Stop the services. The stop scripts are in $BIGINSIGHTS_HOME/bin.
Running stop-all.sh stops all BigInsights services. You can also run stop.sh hadoop oozie.
2. The following properties files need to be changed:
Open the properties file $BIGINSIGHTS_HOME/hdm/components/
guardiumproxy/conf/guardium-proxy.properties
And change the setup. The default is:
guardiumproxy.enable=no
guardiumproxy.host=<namenode>
guardiumproxy.port=16016
guardium.server=
In order to enable the proxy, change it to:
guardiumproxy.enable=yes
guardiumproxy.host=<namenode>
guardiumproxy.port=16015
guardium.server=<Guardium_server_IP>
3. Setting up the log4j.properties files:
HDFS, MapReduce, HRPC
$BIGINSIGHTS_HOME/hadoop-conf/log4j.properties
$BIGINSIGHTS_HOME/hadoop-conf-staging/log4j.properties

#GUARDIUM PROXY INTEGRATION - Setup for HDFS, MapReduce and Hadoop RPC
#Set up the following lines:
#Set RemoteHost to cluster node (main node, the one from which you installed
BigInsights)
#When changing the Port for cluster-intern communication with
GuardiumProxy, also change it in $BIGINSIGHTS_HOME/conf/
guardiumproxy.properties (main node)
log4j.appender.GuardiumProxyAppender=org.apache.log4j.net.SocketAppender
log4j.appender.GuardiumProxyAppender.RemoteHost=<namenode>
log4j.appender.GuardiumProxyAppender.Port=16015
log4j.appender.GuardiumProxyAppender.Threshold=INFO
#MapReduce audit log Guardium integration: Uncomment to enable.
log4j.logger.org.apache.hadoop.mapred.AuditLogger=INFO,
GuardiumProxyAppender
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
#Hadoop RPC audit log Guardium integration: Uncomment to enable.
log4j.logger.SecurityLogger=INFO, GuardiumProxyAppender
log4j.additivity.SecurityLogger=false
#GUARDIUM PROXY INTEGRATION - End of Setup
Oozie
$BIGINSINGHTS_HOME/oozie/conf/oozie-log4j.properties
$BIGINSIGHTS_HOME/hdm/components/oozie/conf/oozie-log4j.properties
#GUARDIUM PROXY INTEGRATION - Setup for Oozie
#Set up following lines
#Set RemoteHost to cluster node (main node, the one from which you installed
BI)
#Note: When changing the Port for cluster-intern communication with
GuardiumProxy, also change it in $BIGINSIGHTS_HOME/conf/
guardiumproxy.properties (main node)
log4j.appender.GuardiumProxyAppender=org.apache.log4j.net.SocketAppender
log4j.appender.GuardiumProxyAppender.RemoteHost=<namenode>
log4j.appender.GuardiumProxyAppender.Port=16015
log4j.appender.GuardiumProxyAppender.Threshold=INFO
#Oozie audit log Guardium integration: Switch (un)comment between lines to
enable GuardiumProxyAppender for Oozie
#log4j.logger.oozieaudit=INFO, oozieaudit (make sure this line is
COMMENTED OUT)
log4j.logger.oozieaudit=INFO, oozieaudit, GuardiumProxyAppender (this line
should be UNCOMMENTED)
#GUARDIUM PROXY INTEGRATION - End of Setup
4. Update the files on all the nodes.
In $BIGINSIGHTS_HOME/bin, run syncconf.sh.
5. Restart the services. The start scripts are in $BIGINSIGHTS_HOME/bin;
start.sh and start-all.sh will start the GuardiumProxy if it is enabled in the
properties file, and stop.sh and stop-all.sh will stop the GuardiumProxy.



Note: Restarting the Hadoop (for hadoop-conf/log4j.properties) and Oozie
(for oozie/conf/oozie-log4j.properties) components is required to make the
changes effective.
6. In order to test the RPC security, add this to the /opt/IBM/BigInsights/conf/
hadoop-conf/core-site.xml file
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
And watch for authorization messages.
In order to debug, change INFO to DEBUG in the following lines...
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,
GuardiumProxyAppender
Logs are in: /var/ibm/BigInsights/guardiumproxy/logs/guardiumproxy.log
Status file will be in: /var/ibm/BigInsights/guardiumproxy/status/
guardiumproxy.status

For further information, see the BigInsights user documentation at
http://www-01.ibm.com/software/data/infosphere/biginsights/

Guardium Support for Cloudera Hadoop


For Cloudera Hadoop, IBM Guardium uses an S-TAP and can monitor the network
traffic generated by applications that use HDFS, MapReduce, HBase, and Hive.
Guardium monitors NameNode and JobTracker servers. It also monitors other
HRPC servers like the HBase RegionServer.

For Cloudera Hadoop, in order to capture HBase inserts, S-TAPs need to be
installed at the HBase region servers.

Guardium supports Cloudera version cdh3u2 and cdh3u3.

Guardium-supported Hadoop subprojects and protocols

Guardium can monitor the network traffic generated by applications that use
Hadoop subprojects HDFS, MapReduce, HBase, and Hive.

Guardium monitors NameNode and JobTracker servers. It also monitors other
HRPC servers like the HBase RegionServer.

Guardium intercepts any HRPC-based Hadoop protocol messages. Guardium also
recognizes the Thrift protocol and MYSQL protocol (used by Hive). When the
Guardium collector receives Hadoop traffic, it parses and logs information into
Guardium tables in the internal TURBINE database.
Table 26. Guardium-supported Hadoop subprojects and protocols

HDFS
Communication protocol: HRPC. Interface examples: NAMENODEPROTOCOL, CLIENTPROTOCOL.

MapReduce
Communication protocol: HRPC, RPC. Interface examples: JOBSUBMISSIONPROTOCOL, TASKUMBILICALPROTOCOL.

HBase
Communication protocol: HRPC, Thrift.

Hive
Communication protocol: HRPC, Thrift, MYSQL.

Terms

HDFS is a distributed file system and the primary storage system used by Hadoop
applications.

Hadoop applications use the MapReduce framework to process vast amounts of
data in parallel on large clusters.

HBase is the Hadoop database and a type of NoSQL database.

Hive is a data warehouse system for Hadoop for ad-hoc queries and analysis of
larger datasets.

Hadoop Predefined Reports

Predefined reports for Hadoop are:


v Hadoop - Hue/Beeswax Report
v Hadoop - MapReduce Report
v Hadoop - HBase Report
v Hadoop - HDFS Report
v Hadoop - Unauthorized MapReduce Jobs
v Hadoop - BigInsights Report
v Hadoop - Exception Report
v Hadoop - Full Message Details report

Exclude Hadoop noise - using CLI


CLI> store gdm_analyzer_rule new
Enter rule description (optional): HDP
Enter rule type (required)

Method name is case sensitive

Possible application codes: 0 - HDFS; 1- HBASE; 2- HIPC; 3 - JOBT.

OPTIM to Guardium Interface


An OPTIM to Guardium interface, using Protobuf (Universal Feed Agent), sends
Optim activity logs to Guardium.

The objective of this interface is to use Guardium auditing capabilities for OPTIM
activities. The auditing capabilities include: Reporting tools (user-defined queries
and reports); Audit Processes (workflow automation that enables assigning a task
to a role/user/group, user-defined status-flow process, escalation, export...): and,
Thresholds Alerts.



The Optim-audit activity information includes the access details, session number,
activity type (verb), table (object), details (fields), execution time (response time)
and number of errors (records affected).

The data is mapped to the Guardium standard object model.

Enabling OPTIM auditing requires enabling via OPTIM and the steps required in
Guardium are: (1) link user to Optim Audit Role; (2) add the predefined reports to
the appropriate pane; (3) enable sniffer; and, (4) set policy action to Log Data With
Values.

This interface includes an optim-audit role, a default layout (psml file) for the
optim-audit role, and seven predefined reports.

These reports are:


v Optim - Failed Request Summary per Optim Server
v Optim - Request Execution per User per Optim Server
v Optim - Table Usage Details
v Optim - Request Log
v Optim - Table Usage Summary
v Optim - Request Summary

Note: When creating the optim-audit role and user, only one tab OPTIM Audit
will display. Similar to roles with custom layouts that customers can generate, this
is a role layout that is meant to be used alone (the optim-audit user has no interest
in the other user role tabs) but since the user role is required, layout merging has
been turned off when the user has the optim-audit role so that they get only the
items of optim interest. Other roles that work in this same way are "review-only"
and "inv".

Note: After creating and saving the optim-audit role, click the Generate Layout
selection within the User Browser menu and click Reset to get the layout
associated with the role. Do this again if changing roles within the User Browser.

Combining real-time alerts and correlation analysis with SIEM products
Distribute contextual knowledge of database activity patterns, structures, and
protocols directly to the third-party database of the SIEM system.

About this task

Guardium pre-processes large volumes of database traffic and distills important
information. Then, it provides the condensed summary to external SIEM (Security
Incident Event Manager) systems such as ArcSight, Envision, and QRadar. Thus,
SIEM products do not have to work as hard to process large traffic streams. Rather,
they can concentrate on correlating all activity, alerting on unauthorized or suspicious
behavior, and helping with the regulatory compliance requirements on event logs.

This Guardium SIEM (Security Incident Event Manager) integration can be done in
one of the following ways:
v Syslog forwarding (the most common method for alerts and events)

v Using the CLI command, store remotelog, to specify the Syslog forwarding to
facility/priority, and host (destination).
v Using Guardium templates for ArcSight, Envision, and QRadar
v SCP/FTP (CSV or CEF Files sent to an external repository and the SIEM system
must upload and parse from this external repository.)

Guardium distributes its contextual knowledge of database activity patterns,
structures, and protocols directly to the third-party database of the SIEM system.
(Guardium has credentials to the SIEM system. It can also write directly to the
SIEM database in the SIEM schema. Contact Guardium support, as Guardium's
entities must be mapped to the third-party schema.)

Note: The SIEM system must enable remote logging as well to know to listen for
the correct facility/priority which is defined within syslog.

By combining Guardium's real-time security alerts and correlation analysis with
SIEM and log management products, companies can enhance their ability to:
v Proactively identify and mitigate risks from external attacks, trusted insiders,
and compliance breaches;
v Implement automated controls from Sarbanes-Oxley (SOX), the Payment Card
Industry Data Security Standard (PCI-DSS), and data privacy regulations;
v Manage system and network events alongside critical logs and events from the
core of their data centers – enterprise databases and applications – for
enterprise-wide correlation, forensics, incident prioritization, and reporting.

Security Information and Event Management (SIEM) solutions, also referred to as
Security Event Management (SEM) solutions, are offered by companies such as
QRadar, ArcSight, CA, Cisco MARS, LogLogic, RSA enVision and SenSage. SIEM
products are complementary to Guardium's database activity monitoring solution.
They can also use Guardium's filtering and preprocessing of database events to
provide 100% visibility and database analytics for SOX, PCI-DSS, and data privacy.

SIEM technology provides real-time analysis of security alerts that are generated
by network hardware and applications. It helps companies to respond to network
attacks faster and to organize the massive amounts of log data that is generated
daily. SIEM solutions are log-based correlation engines.

SIEM solutions are primarily focused on detection and security, but not on
auditing. They assemble data from other logs and analyze it at a high level. They
correlate much more data such as IP addresses and routers but have little database
visibility. They do not have forensics-quality, digitally signed, audit monitoring
capabilities so they can be used for immediate information, but not historical proof.

Security information and event management (SIEM) users are faced with the
challenge of importing raw logs that are generated by internal DBMS utilities. The
performance of DBMS logging utilities, the unfiltered information that they
produce, and the lack of necessary granular information create challenges.

Through the Guardium user interface, Guardium can be configured easily to
integrate with various SIEM tools.



Note: With SIEM integration, the reports and policies do not change on the
Guardium system. Users can continue with their existing policies and reports,
trigger alerts, and send reports to the SIEM system.

For SIEM-Guardium integration, there are predefined templates for QRadar,
Envision, and ArcSight, so you do not need to define them. You can select the
appropriate message template within the rule action.

You can change the default message template, specify the parameters for syslog
forwarding, and create the CSV or CEF file to export.

Note: CEF is only used for ArcSight. The other SIEM products have a different
format and do not use CEF.
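
For orientation, a CEF record follows the general layout defined by the ArcSight
standard: CEF:Version|Device Vendor|Device Product|Device Version|Device Event
Class ID|Name|Severity|Extension. The following line is an illustration only; the
values are placeholders, and the actual header fields and extension keys that
Guardium emits are determined by the selected message template:

CEF:0|IBM|Guardium|10.0|PolicyViolation|Failed Login Alert|7|src=192.168.2.15 dst=192.168.2.18 duser=JOE

The key=value pairs after the final bar are the optional extensions described in the
CEF Mapping topic.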

In order for the SIEM product to recognize the information that is being sent, the
message template must be changed through the Global Profile. This formatting
agreement between the SIEM solution and Guardium allows SIEM products to
parse incoming messages and update their own databases with the new event/data.
1. To open the Global Profile, click Setup > Tools and Views > Global Profile.
2. Click Edit next to Named Template.

3. Select a template, or create a new template with the icon.


The Guardium appliance can be configured to send Syslog messages to remote
systems, using the store remotelog CLI command. Specific types of Syslog
messages can be sent to specific hosts. The Syslog message type is determined
from the facility-priority of the message.

Examples of facility are: all, auth, authpriv, cron, daemon, ftp, kern, local0, local1,
local2, local3, local4, local5, local6, local7, lpr, mail, mark, news, security, Syslog,
user, uucp. Examples of priority are: alert, all, crit, debug, emerg, err, info, notice,
warning.
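
As a sketch of how this might look (the facility.priority value and destination host
are placeholders, and the exact argument form of store remotelog should be
confirmed in the CLI reference for your release):

CLI> store remotelog add all.info 192.168.3.20

This example forwards syslog messages of every facility at priority info to the
remote host 192.168.3.20.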

Reports containing information that can be used by other applications, or reports
containing large amounts of data, can be exported to a CSV file format. Report,
Entity Audit Trail, and Privacy Set task output can be exported to CSV
(Delimiter-separated Value) files. Additionally, CSV file output can be written to
Syslog. If the remote Syslog capability is used, this action results in the immediate
forwarding of the output CSV file to the remote Syslog locations.

Each record in the CSV or CEF files represents a row on the report. Contact
Guardium Support for a tool that permits the reformatting of CSV files before
export.

To send Syslog messages and export reports to CSV files, complete the following
steps.

Note: Do not zip the file within the audit process definition so that the SIEM
vendor can parse it correctly.
1. To open the Audit Process Finder, click Comply > Tools and Views > Audit
Process Builder.
2. Click the icon to add a process, or select an existing process from the
drop-down list.

3. Click New Audit Task under Audit Tasks.


4. Enter a description and select Report.
5. Select a report from the drop-down list and enter the CSV/CEF File Label.



6. Select Export CSV file and Write to Syslog. Choose a named template from the
drop-down list.
7. Under Task Parameters, choose the Enter Period From >= and Enter Period To
<= by using the calendar icon.
8. Click Apply.

CSV/CEF files can also be exported on a schedule to the SIEM host. Modify or
add an audit task.
1. Click Comply > Tools and Views > Audit Process Builder to open the Audit
Process Finder and modify or add an audit task.
2. Choose Export CSV file or Export CEF file.

Note: ACCESS reports can be saved and forwarded in CEF or LEEF format but
other reports, such as Guardium Logins, Aggregation Activity Log, and CAS
events cannot be mapped to CEF or LEEF.
3. Uncheck the Write to Syslog check box. Otherwise, Syslog messages will be generated
instead of a file.
4. Open the CSV/CEF Export menu by clicking Manage > Data Management >
Results Export (Files).

5. Select either the SCP or FTP Protocol. Then, enter the Host, Directory,
Username, Port, and SCP/FTP password. Click Apply to save the changes
or Revert to clear the fields.
6. Click the Modify Schedule button to schedule the exports of CSVs regularly.
7. Select the Start Time, Restart frequency, Repeat frequency, Schedule by
Day/Week or Month, Schedule Start Time. Check the box to automatically run
dependent jobs. Then, click Save.

To have a policy alert that is routed to Syslog, exception rules, access rules, and
extrusion rules must be modified to trigger notifications to be sent to Syslog. This
action can be accomplished by going to the Policy Builder. Policy rules can be sent
as email or sent to Syslog and forwarded.
1. To open the Policy Builder, click Setup > Tools and Views > Policy Builder.
2. Select the policy and click Edit Rule.
3. Click Add Rule... > Add Exception Rule.
4. Enter the Description, Category, Classification, and select a Severity level
from the drop-down list.

For every policy rule violation logged during the reporting period, the Policy
Violations report provides the Timestamp from the Policy Rule Violation entity,
Access Rule Description, Client IP, Server IP, DB User Name, Full SQL String from
the Policy Rule Violation entity, Severity Description, and a count of violations for
that row. With this report, users can group violations and create incidents, set the
severity of each violation, and assign incidents to users.

How to transfer sensitive data


Take sensitive data information, identified and classified in IBM Security Guardium
and transfer that information to InfoSphere® Discovery.

Both IBM Guardium and InfoSphere Discovery have the capability to identify and
classify sensitive data, such as Social Security Numbers or credit card numbers.

A customer of the IBM Guardium product can use a bidirectional interface to
transfer identified sensitive data information from one product to another.

Note: In IBM Guardium, the Classification process is an ongoing process that runs
periodically. In InfoSphere Discovery, Classification is part of the Discovery process
that usually runs once.

Note: The data will be transferred via CSV files.

The summary of Export/Import procedures is as follows:


v Export from Guardium - Run the predefined report (Export Sensitive Data to
Discovery) and export as CSV file.
v Import to Guardium - Load to a custom table against CSV datasource; define
default report against this datasource.

Follow these steps:


1. Export from Guardium - Export Classification Data from IBM Guardium to
InfoSphere Discovery.
2. As an admin user in the Guardium application, go to Tools > Report Building
> Classifier Results Tracking > Select a Report > Export Sensitive Data to
Discovery.



Note: Add this report to the UI pane (it is not there by default).

3. Click on Customize icon on Report Result screen and specify the search
criteria to filter the classification results data to transfer to Discovery.
4. Run the report and click on Download All Records icon.
5. Save as CSV and import this file to Discovery according to the InfoSphere
Discovery instructions.
6. Import to Guardium - Import Classification Data from InfoSphere Discovery
to IBM Guardium
7. Export the classification data as CSV from InfoSphere Discovery based on
InfoSphere Discovery instructions.
8. As an admin user in the Guardium application, go to Tools > Report Building
> Custom Tables, select ClassificationDataImport, and click the Upload Data
button.

9. In the Upload Data screen, click Add Datasource, then click New and define
the CSV file that was imported from Discovery as a new datasource (Database
Type = Text).

Note: Alternatively, you can load the data directly from the Discovery database if
you know how to access the Discovery database and the Classification results
data.
10. After you define the CSV file as a datasource, click Add in the Datasource list
screen.
11. In the Upload Data screen, click Verify Datasource and then Apply.
12. Click Run Once Now to load the data from the CSV file.
13. Go to Report Builder, select the Classification Data Import report, click Add to
Pane to add it to your portal, and then navigate to the report.
14. Access the report, click Customize to set the From/To dates, and run the
report.



The report result has the classification data imported from InfoSphere Discovery.
Double click to invoke APIs assigned to this report. The data imported from
Discovery can be used for the following:
v Add new Datasource based on the result set.
v Add/Update Sensitive Data Group.
v Add policy rules based on datasource and sensitive data details.
v Add Privacy Set.
Table 27. CSV Interface signature

Interface Signature                          Example
Type                                         DB2
Host                                         9.148.99.99
Port                                         50001
dbName (Schema name for DB2 or Oracle,       cis_schema
  db name for others)
Datasource URL
TableName                                    MK_SCHED
ColumnName                                   ID_PIN
ClassificationName                           SSN
RuleDescription                              Out-of-box algorithm of InfoSphere Discovery
HitRate                                      70% - not available for export in Guardium Vers. 8.2
ThresholdUsed                                60% - not available for export in Guardium Vers. 8.2
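For illustration only, one data row of such a CSV exchange might look like the
following line. The values are taken from the Example column of Table 27; the
actual column order and header row are defined by the interface, so treat this as a
sketch rather than an exact file layout.

DB2,9.148.99.99,50001,cis_schema,,MK_SCHED,ID_PIN,SSN,Out-of-box algorithm of InfoSphere Discovery,70%,60%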

CEF Mapping
The CEF standard from ArcSight defines a set of required fields, and a set of
optional fields.

The latter are called extensions in the CEF standard. Data is mapped to these fields
from Guardium configuration information and reports. Note that not all Guardium
fields map to a CEF field, so there may not be a one-to-one relationship between
the rows of a printed report and the CEF file produced for that report. Also note
that this facility is intended to map data from data access domains (Data Access,
Exceptions, and Policy Violations, for example), and not from Guardium
self-monitoring domains (Aggregation/Archive, Audit Process, Guardium Logins,
etc.).

Note: Analyzed Client IP has a mapping to the CEF source. If the query used for the
CEF does not contain the Client IP but contains the Analyzed Client IP, the Analyzed
Client IP is used for the source. If both are included in the query, then Client IP
takes precedence.

The CEF fields in the following table are always present.


Table 28. Required CEF Fields Mapping

CEF Field        Guardium Mapping
Version          0 (zero); currently the only version for the CEF format
Device Vendor    Guardium
Device Product   Guardium
Device Version   Guardium software version number
Signature ID     ReportID
Name             Report Title
Severity         Numeric severity code in the range 0-10, with 10 being the most
                 important event. If not reset in the report, 0 (zero, which
                 translates to Info for Guardium).

The CEF extension fields are optional, and will be present only when the mapping
applies. For example, if the report does not contain an access rule description, the
act field (the first extension field) will not be present. For more detailed
information about the Guardium entities and attributes, see the appropriate entity
reference topic.
Table 29. CEF Mapping, Guardium Version 8.2

CEF Field   Entity                  Attribute
severity    Policy Rule Violation   Severity
act         Policy Rule Violation   Access Rule Description
app         Client/Server           DB Protocol
app         Exception               Database Protocol
dst         Client/Server           Server IP
dst         Exception               Destination Address
dhost       Client/Server           Server Host Name
dpt         Session                 Server Port
dpt         Exception               Destination Port
dproc       Client/Server           Source Program
duid        Client/Server           OS User
duser       Client/Server           DB User Name
duser       Exception               User Name
end         Exception               Exception Timestamp
end         Policy Rule Violation   Timestamp
end         Access Period           Period End
end         Session                 Session End
msg         Exception               Exception Description
msg         Message Text            Message Text
msg         Message Text            Message Subject
src         Client/Server           Client IP
src         Client/Server           Analyzed Client IP
src         Exception               Source Address
shost       Client/Server           Client Host Name
smac        Client/Server           Client MAC
spt         Session                 Client Port
spt         Exception               Source Port
start       Exception               Exception Timestamp
start       Policy Rule Violation   Timestamp
start       Access Period           Period Start
start       Session                 Session Start
proto       Client/Server           Network Protocol
request     FULL SQL                Full Sql
request     SQL                     Sql
cs1         Session                 Uid Chain
cs2         Session                 Uid Chain Compressed

Table 30. CEF Mapping, Guardium Version 9.0

CEF Field   Entity                  Attribute
severity    Policy Rule Violation   Severity
act         Policy Rule Violation   Access Rule Description
app         Client/Server           DB Protocol
app         Exception               Database Protocol
dst         Client/Server           Server IP
dst         Exception               Destination Address
dhost       Client/Server           Server Host Name
dpt         Session                 Server Port
dpt         Exception               Destination Port
dproc       Client/Server           Source Program
duid        Client/Server           OS User
duser       Client/Server           DB User Name
duser       Exception               User Name
end         Exception               Exception Timestamp
end         Policy Rule Violation   Timestamp
end         Access Period           Period End
end         Session                 Session End
msg         Exception               Exception Description
msg         Message Text            Message Text
msg         Message Text            Message Subject
src         Client/Server           Client IP
src         Client/Server           Analyzed Client IP
src         Exception               Source Address
shost       Client/Server           Client Host Name
smac        Client/Server           Client MAC
spt         Session                 Client Port
spt         Exception               Source Port
start       Exception               Exception Timestamp
start       Policy Rule Violation   Timestamp
start       Access Period           Period Start
start       Session                 Session Start
proto       Client/Server           Network Protocol
request     FULL SQL                Full Sql
request     SQL                     Sql
cs1         Session                 Uid Chain
cs2         Session                 Uid Chain Compressed
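For illustration only, a CEF record that is assembled from these mappings might look
like the following line. The header fields follow Table 28; the report ID and the
extension values are invented for this example and do not come from a real report.

CEF:0|Guardium|Guardium|10.0|2001|Policy Violations|7|act=Failed login attempt duser=APPUSER duid=oracle dproc=sqlplus src=192.0.2.15 spt=40123 dst=192.0.2.20 dpt=1521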

For more information about CEF, search the web for Common Event Format: Event
Interoperability Standard, or visit the ArcSight Website: www.arcsight.com.

LEEF Mapping
Log Event Extended Format (LEEF) from QRadar

The LEEF format consists of an optional syslog header, an LEEF header and a
collection of attributes describing the event.

Syslog_Header(optional) LEEF_Header|Event_Attributes

The LEEF header is pipe (‘|’) separated and attributes are tab separated

Example

Jan 18 11:07:53 host LEEF:Version|Vendor|Product|Version|EventID|Key1=Value1<tab>Key2=Value2<tab>Key3=Value3
Table 31. LEEF Parameters

Parameters        Description
LEEF: Version     Integer identifying the version of LEEF used for the log message
Vendor            String identifying the vendor of the device or application sending the
                  event log
Product           String identifying the product sending the event log. Note: The
                  combination of vendor and product must be unique.
Version           String identifying the version of the device or application sending the
                  event log
EventID           ID that uniquely identifies the event
Attributes 1..N   A set of key-value pair attributes for the event, separated by the tab
                  character. Order is not enforced.

                  A predefined set of keys is defined and should be used when possible.

                  The LEEF format is extensible and allows additional key-value pairs to
                  be added to the event log.

                  Keys must not contain spaces or equal signs.

                  Values must not contain tabs.

Example:
Jan 18 11:07:53 192.168.1.1 LEEF:1.0|QRadar|QRM|1.0|NEW_PORT_DISCOVERD|src=172.5.6.67 dst=172.50.123.

Character Encoding

UTF8

Predefined Attributes
Table 32. Predefined Attributes

Key Name         Data Type              Max Length  Description
Cat              string                             Event category
devTime          date                               Time the device or application emitted the event
devTimeFormat    string                             Defined by the java SimpleDateFormat. This is only
                                                    required if using a customized date format. See the
                                                    Date Formats section for further details.
proto            integer                            Transport protocol
sev              integer (1-10)                     Severity of this event
src              IPv4 or IPv6 address               Source address
dst              IPv4 or IPv6 address               Destination address
VSrc             IPv4 or IPv6 address               Virtual source address
srcPort          integer                            Source Port. The valid port numbers are between 0
                                                    and 65535.
dstPort          integer                            Destination Port. The valid port numbers are
                                                    between 0 and 65535.
srcPreNat        IPv4 or IPv6 address               Source address for the message before Network
                                                    Address Translation (NAT) occurred
dstPreNat        IPv4 or IPv6 address               Destination address for the message before Network
                                                    Address Translation (NAT) occurred
srcPostNat       IPv4 or IPv6 address               Source address for the message after Network
                                                    Address Translation (NAT) occurred
dstPostNat       IPv4 or IPv6 address               Destination address for the message after Network
                                                    Address Translation (NAT) occurred
usrName          string                 255         User name associated with the event
srcMAC           MAC address                        Six colon-separated hexadecimal numbers. Example:
                                                    1:2D:67:BF:1A:71
dstMAC           MAC address                        Six colon-separated hexadecimal numbers. Example:
                                                    11:2D:67:BF:1A:71
srcPreNATPort    integer                            Source Port. The valid port numbers are between 0
                                                    and 65535.
dstPreNATPort    integer                            Destination Port. The valid port numbers are
                                                    between 0 and 65535.
srcPostNATPort   integer                            Source Port. The valid port numbers are between 0
                                                    and 65535.
dstPostNATPort   integer                            Destination Port. The valid port numbers are
                                                    between 0 and 65535.
identSRC         IPv4 or IPv6 address
identHostName    string                 255         Host name associated with the event. Typically, this
                                                    parameter is only associated with identity events.
identNetBios     string                 255         NetBIOS name associated with the event. Typically,
                                                    this parameter is only associated with identity events.
identGrpName     string                 255         Group name associated with the event. Typically, this
                                                    parameter is only associated with identity events.

Custom Attributes

In some cases, custom attributes may be required to identify more information
about the event being generated. In these cases, vendors may define their own
custom attributes and include them in the event log. Custom attribute fields
should be used only when there is no acceptable mapping into a predefined field.

Custom attribute keys must be:

v A single word with no spaces
v Alphanumeric
v Clear and concise
v Not named the same as any predefined attribute key

Custom attributes may be used for viewing in the QRadar Event Viewer by
creating custom properties.

Custom attributes may be used by the QRadar reporting engine by creating
custom properties.

Custom attributes can NOT be used for event correlation.

Note: Add databaseName=%%DBname to the LEEF template in order to capture
the MS-SQL database name. Update the existing LEEF template or make a new
template by cloning.
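For illustration only, a Guardium-style LEEF record that includes the databaseName
attribute might look like the following line. The vendor, product, event ID, and
attribute values are invented for this example, and <tab> stands for the tab character.

Jan 18 11:07:53 guardium-collector LEEF:1.0|IBM|Guardium|10.0|PolicyViolation|src=192.0.2.15<tab>dst=192.0.2.20<tab>srcPort=40123<tab>dstPort=1433<tab>usrName=APPUSER<tab>databaseName=SALESDB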

Date Formats

You can use any of these predefined formats:


1. Milliseconds since January 1, 1970 (integer)
2. MMM dd yyyy HH:mm:ss, for example, Jun 06 2012 16:07:36
3. MMM dd yyyy HH:mm:ss.SSS, for example, Jun 06 2012 16:07:36.300
4. MMM dd yyyy HH:mm:ss.SSS zzz, for example, Jun 06 2012 02:07:36.300 GMT
If these formats are not suitable, you can define a custom date format in the devTime
field by specifying the date format with the devTimeFormat key.

For further information on specifying a date format, visit the SimpleDateFormat
page at: http://java.sun.com/javase/6/docs/api/java/text/SimpleDateFormat.html
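The following minimal Java sketch (not part of the Guardium product; the class and
variable names are invented) shows how one of the predefined patterns above is
expressed with SimpleDateFormat:

import java.text.SimpleDateFormat;
import java.util.Date;

public class DevTimeFormatExample {
    public static void main(String[] args) {
        // Format 4 from the list above: MMM dd yyyy HH:mm:ss.SSS zzz
        SimpleDateFormat fmt = new SimpleDateFormat("MMM dd yyyy HH:mm:ss.SSS zzz");
        // Prints the current time in the same form as: Jun 06 2012 02:07:36.300 GMT
        System.out.println(fmt.format(new Date()));
    }
}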

Chapter 6. Troubleshooting problems
To isolate and resolve problems with your IBM products, you can use the
troubleshooting and support information. This information contains instructions for
using the problem-determination resources that are provided with your IBM
products, including IBM Guardium.

Techniques for troubleshooting problems


Troubleshooting is a systematic approach to solving a problem. The goal of
troubleshooting is to determine why something does not work as expected and
how to resolve the problem. Certain common techniques can help with the task of
troubleshooting.

The first step in the troubleshooting process is to describe the problem completely.
Problem descriptions help you and the IBM technical-support representative know
where to start to find the cause of the problem. This step includes asking yourself
basic questions:
v What are the symptoms of the problem?
v Where does the problem occur?
v When does the problem occur?
v Under which conditions does the problem occur?
v Can the problem be reproduced?

The answers to these questions typically lead to a good description of the problem,
which can then lead you to a problem resolution.

What are the symptoms of the problem?

What is the problem? This question might seem straightforward, however, you can
break it down into several more-focused questions that create a more descriptive
picture of the problem. These questions can include:
v Who, or what, is reporting the problem?
v What are the error codes and messages?
v How does the system fail? For example, is it a loop, hang, crash, performance
degradation, or incorrect result?

Where does the problem occur?


Determining where the problem originates is not always easy, but it is one of the
most important steps in resolving a problem. Many layers of technology can exist
between the reporting and failing components. Networks, disks, and drivers are
only a few of the components to consider when you are investigating problems.

The following questions help you to focus on where the problem occurs to isolate
the problem layer:
v Is the problem specific to one platform or operating system, or is it common
across multiple platforms or operating systems?
v Is the current environment and configuration supported?
v Do all users have the problem?

v (For multi-site installations.) Do all sites have the problem?

If one layer reports the problem, the problem does not necessarily originate in that
layer. Part of identifying where a problem originates is understanding the
environment in which it exists. Take some time to completely describe the problem
environment, including the operating system and version, all corresponding
software and versions, and hardware information. Confirm that you are running
within an environment that is a supported configuration; many problems can be
traced back to incompatible levels of software that are not intended to run together
or have not been fully tested together.

When does the problem occur?

Develop a detailed timeline of events leading up to a failure, especially for those


cases that are one-time occurrences. You can most easily develop a timeline by
working backward: Start at the time an error was reported (as precisely as possible,
even down to the millisecond), and work backward through the available logs and
information. Typically, you need to look only as far as the first suspicious event
that you find in a diagnostic log.

To develop a detailed timeline of events, answer these questions:


v Does the problem happen only at a certain time of day or night?
v How often does the problem happen?
v What sequence of events leads up to the time that the problem is reported?
v Does the problem happen after an environment change, such as upgrading or
installing software or hardware?

Responding to these types of questions can give you a frame of reference in which
to investigate the problem.

Under which conditions does the problem occur?

Knowing which systems and applications are running at the time that a problem
occurs is an important part of troubleshooting. These questions about your
environment can help you to identify the root cause of the problem:
v Does the problem always occur when the same task is being performed?
v Does a certain sequence of events need to happen for the problem to occur?
v Do any other applications fail at the same time?

Answering these types of questions can help you explain the environment in
which the problem occurs and correlate any dependencies. Remember that just
because multiple problems might have occurred around the same time, the
problems are not necessarily related.

Can the problem be reproduced?


From a troubleshooting standpoint, the ideal problem is one that can be
reproduced. Typically, when a problem can be reproduced you have a larger set of
tools or procedures at your disposal to help you investigate. Consequently,
problems that you can reproduce are often easier to debug and solve.

However, problems that you can reproduce can have a disadvantage. If the
problem is of significant business impact, you do not want it to reoccur. If possible,

recreate the problem in a test or development environment, which typically offers
you more flexibility and control during your investigation.
v Can the problem be re-created on a test system?
v Are multiple users or applications encountering the same type of problem?
v Can the problem be re-created by running a single command, a set of
commands, or a particular application?

Searching knowledge bases


You can often find solutions to problems by searching IBM knowledge bases. You
can optimize your results by using available resources, support tools, and search
methods.

About this task


You can find useful information by searching the information center for Guardium.
However, sometimes you need to look beyond the information center to answer
your questions or resolve problems.

Procedure

To search knowledge bases for information that you need, use one or more of the
following approaches:
v Find the content that you need by using the IBM Support Portal.
The IBM Support Portal is a unified, centralized view of all technical support
tools and information for all IBM systems, software, and services. The IBM
Support Portal lets you access the IBM electronic support portfolio from one
place. You can tailor the pages to focus on the information and resources that
you need for problem prevention and faster problem resolution. Familiarize
yourself with the IBM Support Portal by viewing the demo videos
(https://www.ibm.com/blogs/SPNA/entry/the_ibm_support_portal_videos)
about this tool. These videos introduce you to the IBM Support Portal, explore
troubleshooting and other resources, and demonstrate how you can tailor the
page by moving, adding, and deleting portlets.
v Search for content about Guardium by using one of the following additional
technical resources:
– Guardium technotes and Authorized Program Analysis Reports (APARs -
problem reports)
– The IBM Security Guardium Support website
– IBM support communities (forums and newsgroups)
v Search for content by using the IBM masthead search. You can use the IBM
masthead search by typing your search string into the Search field.
v Search for content by using any external search engine, such as Google, Yahoo,
or Bing. If you use an external search engine, your results are more likely to
include information that is outside the ibm.com® domain. However, sometimes
you can find useful problem-solving information about IBM products in
newsgroups, forums, and blogs that are not on ibm.com.

Tip: Include “IBM” and the name of the product in your search if you are
looking for information about an IBM product.



Getting fixes from Fix Central
You can use Fix Central to find the fixes that are recommended by IBM Support
for a variety of products, including Guardium. With Fix Central, you can search,
select, order, and download fixes for your system with a choice of delivery options.
A product fix might be available to resolve your problem.

About this task


Procedure

To find and install fixes:


1. Obtain the tools that are required to get the fix. If it is not installed, obtain your
product update installer. You can download the installer from Fix Central. This
site provides download, installation, and configuration instructions for the
update installer.
2. Select Guardium as the product, and select one or more check boxes that are
relevant to the problem that you want to resolve.
3. Identify and select the fix that is required.
4. Download the fix.
a. Open the download document and follow the link in the Download
Package section.
b. When downloading the file, ensure that the name of the maintenance file is
not changed. This change might be intentional, or it might be an
inadvertent change that is caused by certain web browsers or download
utilities.
5. Apply the fix.
a. Follow the instructions in the Installation Instructions section of the
download document.
b. For more information, see the Installing fixes with the Update Installer topic
in the product documentation.
6. Optional: Subscribe to receive weekly email notifications about fixes and other
IBM Support updates.

Contacting IBM Support


IBM Support provides assistance with product defects, answers FAQs, and helps
users resolve problems with the product.

Before you begin


After trying to find your answer or solution by using other self-help options such
as technotes, you can contact IBM Support. Before contacting IBM Support, your
company or organization must have an active IBM maintenance contract name, and
you must be authorized to submit problems to IBM. For information about the
types of available support, see the Support portfolio topic in the “Software Support
Handbook”.

Procedure

To contact IBM Support about a problem:


1. Define the problem, gather background information, and determine the severity
of the problem. For more information, see the Getting IBM support topic in the
Software Support Handbook.

2. Gather diagnostic information.
3. Submit the problem to IBM Support in one of the following ways:
v Online through the IBM Support Portal: You can open, update, and view all
of your service requests from the Service Request portlet on the Service
Request page.
v By phone: For the phone number to call in your region, see the Directory of
worldwide contacts web page.

Results

If the problem that you submit is for a software defect or for missing or inaccurate
documentation, IBM Support creates an Authorized Program Analysis Report
(APAR). The APAR describes the problem in detail. Whenever possible, IBM
Support provides a workaround that you can implement until the APAR is
resolved and a fix is delivered. IBM publishes resolved APARs on the IBM Support
website daily, so that other users who experience the same problem can benefit
from the same resolution.

Basic information for IBM Support


Before you call IBM Support, collect basic information about IBM Guardium
(collector, aggregator, Central Manager; UNIX/Linux S-TAP; Windows S-TAP).

Use support must_gather commands, which can be run through the CLI to generate
specific information about the state of any Guardium system. This information can
also be collected through the Guardium GUI.

This information can be uploaded from the Guardium system and sent to IBM
Support whenever a Problem Management Report (PMR) is logged.

Gathering support information results

To gather support information, click Manage > Maintenance > Support


Information Results. Complete the following sections.
1. Describe the support information gathering session.
2. Complete the PMR number.
3. To send the results to an email address, specify email: and complete the email
address.
4. Schedule a start time by clicking the calendar icon.
5. Check off gather log information that is related to the following categories:
v Alert
v Audit
v DB User
v Scheduler
v Patch Install
v Application Masking
v User Interface
v Backup
v Purge
v System DB
v Network
v Aggregation
v Central Manager
v Sniffer
6. Input a value to gather information for a certain amount of time in minutes.
The default value is 10 minutes. This value is the time period the logs will be
gathered for. If you specify an email, the logs are gathered for 10 minutes from
the time you start the process and an email is sent afterwards. You must
reproduce the problem and generate the log information during the specified
time period so that the logs can contain the debug information that is needed
to troubleshoot problems.
7. Input the maximum number of rows that appears in the result log file.
8. When you are finished with the configuration, click Start.
9. Go to Support Information Results to view the results. You can open or save
the .tgz file.

Must Gather for Guardium Appliance with CLI

IBM Guardium Collector, Aggregator, or Central Manager

The must_gather commands can be run at any time by the user through the CLI.
Complete the following steps.
1. Open a putty session (or similar) to the appropriate collector, aggregator, or
Central Manager.
2. Log in as user cli.
3. Depending on the type of issue, paste the relevant must_gather commands into
the CLI prompt. More than one must_gather command might be needed to
diagnose the problem. The commands are listed and described in the following
list.
v support must_gather agg_issues (aggregation process)
v support must_gather alert_issues (alerts)
v support must_gather app_issues (application)
v support must_gather app_masking_issues (application masking)
v support must_gather audit_issues (audit process)
v support must_gather backup_issues (backup process)
v support must_gather cm_issues (Central Manager)
v support must_gather datamining_issues (data mining)
v support must_gather miss_dbuser_prog_issues (system database user)
v support must_gather network_issues (network architecture)

v support must_gather ocr_issues
v support must_gather patch_install_issues (patch installation and
upgrades)
v support must_gather purge_issues (purge process)
v support must_gather scheduler_issues (scheduler function)
v support must_gather sniffer_issues (sniffer function)
v support must_gather system_db_info (Guardium system database or
operating space performance)
v support must_gather user_interface_issues (user interface)
The output is written to the must_gather directory with a file name such as the
following example:
must_gather/system_logs/.tgz
4. Send the resulting output to IBM Support.
By using fileserver <ip address>, you can upload the .tgz files and send to
IBM Support.
Send the file through email or upload to ECUREP by using the standard data
upload. Specify the PMR number and file to upload.

Must Gather for UNIX/Linux S-TAP

The guard_diag script produces statistics on the server that helps Guardium with
diagnostics.

Explanation of guard_diag:

Diagnostic Script (guard_diag)

General Overview:

There is now a diagnostics script (guard_diag) that runs out of
/usr/local/guardium/guard_stap/guard_diag when S-TAP logging is set to level 7
from the GUI. It is also possible to transfer this script to a machine that is running
S-TAP.

Usage: ./guard_diag output_dir

The script prompts for the location if the script cannot automatically determine
where S-TAP is installed. The run time is about 1.5 minutes and if no output
directory is specified, the script places the generated .tar file in /tmp. When the
script runs and enables logging from the GUI, the .tar file is placed in /var/tmp.
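For example, to place the generated archive in /var/tmp (an illustrative output
directory), you might run:

./guard_diag /var/tmp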

General System Data Collected:


v Uname -a
v List of kernel modules installed
v Output for one cycle
v Uptime
v Processor number and type
v Dump of most recent syslog
v Netstat output
v IPC list
v Disk free statistics



v copy of /etc/services
v Directory listing of /etc
v Various platform-specific information
v Contents of /etc/inittab

S-TAP Data Collected:


v S-TAP version
v Contents of guard_tap.ini
v Ls -l on the K-TAP device nodes
v 30s trace of S-TAP
v K-TAP statistics
v List of all the files in the installation directory
v K-TAP khash
v Verbose debug log for K-TAP (2) and S-TAP(4)

Known Issues:
v Tusc is not installed on all HP-UX operating systems, so tracing the S-TAP PID
does not work.
v gzip isn't always installed on the system. The fall back is to compress (final
extension of .tar.Z) and failing that, the .tar file is placed in the output
directory.
v Topas output on AIX is best interpreted by the terminal since it contains control
codes that makes it mostly unintelligible when it is opened in an editor.
v The non-root S-TAP has a number of issues concerning the diagnostics script.
v In Linux, /var/log/messages is readable only by root.
v Some Solaris operating systems might not be configured correctly, which causes
netstat to print an error.
v The path for the non-root user is rather basic, and as a result, some commands
might not run at all. Notably, this known issue happens on HP-UX with gzip.

Platforms Supported:
v Linux
v HP-UX
v AIX
v Solaris

Requirements for S-TAP: None

Requirements for Linux: None

Requirements for AIX: topas

Requirements for Solaris: top, prtdiag, psrinfo

Requirements for HP-UX: tusc

Must Gather for Windows S-TAP


Running this script generates the following text files in the current directory:
v stap.txt

v tasks.txt
v system.txt
v evtlog.txt or evtlog2008.txt
v reg.txt

Notes:
1. This diag script can be run with any S-TAP version.
2. Rename the diag script to diag.bat and place it under the directory where S-TAP
is installed. Then, you can run it manually. It generates text files with
diagnostic information.
3. Submit the results to Guardium L3 Support or Research & Development.

The script collects the following data:


v Content of %system%guard_tap.ini.
v The Guardium S-TAP installation log
v All running tasks
v List of all installed kernel drivers
v OS information that is collected from the system information utility
v ipconfig /all
v netstat -nao
v Ping and trace results from the database server to the Guardium system
v CPU usage for guardium_stapr
v Overall system CPU usage
v Guardium_stapr process handle count and memory usage
v Event log messages that are generated by S-TAP
v System event log messages
v The following registry entries:
– HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
– HKLM\SYSTEM\CurrentControlSet\Services
– HKLM\SYSTEM\CurrentControlSet\Control\GroupOrderList
– HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer

Encrypt Must Gather


Encrypt Must Gather was added to the Global Profile screen. To go to the Global
Profile screen, click Setup > Global Profile. The default value is cleared (do not
encrypt). If it is cleared, must gather output is compressed and not encrypted (the
current function). When the check box is checked, all future must gather output is
encrypted. Encryption can also be set by using the CLI command store
encrypt_must_gather on and cleared by using the command store encrypt_must_gather off.

GuardAPI Must Gather


Use the GuardAPI command to run the GuardAPI Must Gather collection of
information from a script.

grdapi must_gather --help=true.

The following function parameters are listed.



ID=0
function parameters :
commandsList - String -required - Constant values list
description - String
email - String
maxLogLength - Integer - Constant values list
pmrNumber - String
runDuration - Integer - Constant values list
startRun - Date
To get a Constant values list for a parameter, call the function with --get_param_values=<param-name>

The --commandsList requires a string. The --description is also a required string.
The --runDuration indicates how long the must_gather runs. Type in an email
address to send the must_gather report. The --maxLogLength parameter is a
required integer that sets the maximum length of the log report. The --pmrNumber
is the problem management report number that is used by IBM Support to track
and resolve customer reports. The --startRun is a required date, such as now. You
can get a list of values for each parameter by calling the function grdapi
must_gather --get_param_values=<param-name>.
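For illustration only, the following hypothetical calls use the parameter names from
the listing above; the values are invented, and the exact required parameters and
syntax should be confirmed with --help=true on your own system.

grdapi must_gather --get_param_values=commandsList
grdapi must_gather --commandsList=sniffer_issues --description="Sniffer investigation" --runDuration=10 --maxLogLength=5000 --startRun=now --pmrNumber="12345,678,000" --email=dba@example.com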

Exchanging information with IBM


To diagnose or identify a problem, you might need to provide IBM Support with
data and information from your system. In other cases, IBM Support might
provide you with tools or utilities to use for problem determination.

Sending information to IBM Support


To reduce the time that is required to resolve your problem, you can send trace
and diagnostic information to IBM Support.

Procedure

To submit diagnostic information to IBM Support:


1. Open a problem management record (PMR).
2. Collect the diagnostic data that you need. Diagnostic data helps reduce the
time that it takes to resolve your PMR. You can collect the diagnostic data
manually or automatically:
v Collect the data manually.
v Collect the data automatically.
3. Compress the files by using the .zip or .tar file format.
4. Transfer the files to IBM. You can use one of the following methods to transfer
the files to IBM:
v The Service Request tool
v Standard data upload methods: FTP, HTTP
v Secure data upload methods: FTPS, SFTP, HTTPS
v Email
All of these data exchange methods are explained on the IBM Support website.

Receiving information from IBM Support


Occasionally an IBM technical-support representative might ask you to download
diagnostic tools or other files. You can use FTP to download these files.

Before you begin

Ensure that your IBM technical-support representative provided you with the
preferred server to use for downloading the files and the exact directory and file
names to access.

Procedure

To download files from IBM Support:


1. Use FTP to connect to the site that your IBM technical-support representative
provided and log in as anonymous. Use your email address as the password.
2. Change to the appropriate directory:
a. Change to the /fromibm directory.
cd fromibm
b. Change to the directory that your IBM technical-support representative
provided.
cd nameofdirectory
3. Enable binary mode for your session.
binary
4. Use the get command to download the file that your IBM technical-support
representative specified.
get filename.extension
5. End your FTP session.
quit
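Put together, a download session might look like the following transcript. The host,
directory, and file names are placeholders; your IBM technical-support representative
supplies the actual values.

ftp delivery.example.ibm.com
Name: anonymous
Password: your.email@example.com
ftp> cd fromibm
ftp> cd nameofdirectory
ftp> binary
ftp> get filename.extension
ftp> quit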

Subscribing to Support updates


To stay informed of important information about the IBM products that you use,
you can subscribe to updates.

About this task

By subscribing to receive updates about Guardium, you can receive important


technical information and updates for specific IBM Support tools and resources.
You can subscribe to updates by using one of two approaches:
RSS feeds and social media subscriptions
The following RSS feeds and social media subscriptions are available for
Guardium:
v RSS feed 1
v RSS feed 2
v RSS feed 3
For general information about RSS, including steps for getting started and
a list of RSS-enabled IBM web pages, visit the IBM Software Support RSS
feeds site.
My Notifications
With My Notifications, you can subscribe to Support updates for any IBM
product. (My Notifications replaces My Support, which is a similar tool
that you might have used in the past.) With My Notifications, you can
specify that you want to receive daily or weekly email announcements.
You can specify what type of information you want to receive (such as
publications, hints and tips, product flashes (also known as alerts),
downloads, and drivers). My Notifications enables you to customize and



categorize the products about which you want to be informed and the
delivery methods that best suit your needs.

Procedure
To subscribe to Support updates:
1. Subscribe to the Guardium RSS feeds.
2. Subscribe to My Notifications by going to the IBM Support Portal and click My
Notifications in the Notifications portlet.
3. Sign in using your IBM ID and password, and click Submit.
4. Identify what and how you want to receive updates.
a. Click the Subscribe tab.
b. Select the appropriate software brand or type of hardware.
c. Select one or more products by name and click Continue.
d. Select your preferences for how to receive updates, whether by email, online
in a designated folder, or as an RSS or Atom feed.
e. Select the types of documentation updates that you want to receive, for
example, new information about product downloads and discussion group
comments.
f. Click Submit.

Results

Until you modify your RSS feeds and My Notifications preferences, you receive
notifications of updates that you have requested. You can modify your preferences
when needed (for example, if you stop using one product and begin using another
product).
Related Information

IBM Software Support RSS feeds

Subscribe to My Notifications support content updates

My Notifications for IBM technical support

My Notifications for IBM technical support overview

Problems and solutions


Search here for solutions to problems that you encounter.

User Interface
Cannot view SVG graphics in Internet Explorer 9
If you cannot view SVG graphics in IE9, switch to Standard mode.

Symptoms

When you open the IBM Security Guardium GUI with Internet Explorer 9 (IE9),
the SVG graphics do not display. The IE9 status window displays the following
message:
alt="SVG Plugin Required”

Causes

With the SVG Viewer, you can view items like the Access Maps and Current Status
Monitor. However, IE9 is in Document mode and not in Standard mode. In
Document mode, the SVG viewer is not automatically loaded by the browser.

Environment

All Guardium configurations (collector, aggregator, central manager) are affected.

Resolving the problem

Switch IE9 to Standard mode with the following steps:


1. Open IE9.
2. Press F12.
3. Select Standard mode.

Note: If Standard mode is not available as a choice, then IE9 is already in the
Standard mode. In such an event, contact Guardium Technical Support.

Changes are not saved when you add an inspection engine


If your changes are not saved when you add an inspection engine, check that the
parameters are valid.

Symptoms
When you add an inspection engine, the new settings remain for a few minutes
and then disappear.

Causes
There is an error in one or more parameter values with either the new inspection
engine or a different inspection engine in the S-TAP configuration file
guard_tap.ini.

Environment
The Guardium collector user interface is affected.

Resolving the problem


Check that every parameter that must be set for the inspection engine is set to a
valid value. For example, some database types require that you set db_install_dir
to the path of the installation directory on the server. However, for other database
types, this parameter must not be set or must be set to NULL. Check the specific
requirements for your database type in the S-TAP Help Book and make sure that
everything is correctly set.

HTTP error 403


If you receive a HTTP error 403, you can disable the Cross-Site Request Forgery
(CSRF) protection feature to prevent the error.

Symptoms

When you refresh the IBM Security Guardium GUI from the system main page,
you receive the following error:
HTTP Status 403-
type Status report
message
description Access to the specified resource () has been forbidden



Causes

The cause is a feature in Guardium designed to prevent Cross-Site Request Forgery


(CSRF). CSRF protection is enabled by default.

Environment

All Guardium configurations (collector, aggregator, central manager) are affected.

Resolving the problem

You can disable this feature by using the following CLI command: store gui
csrf_status off

Note: If you turn off CSRF protection, the security level of the Guardium system is
reduced.

The following command enables protection against Cross-Site Request Forgery. It is
enabled by default: store gui csrf_status on

You can check the status by running this CLI command: show gui csrf_status

Java.lang.IllegalStateException
If you receive a java.lang.IllegalStateException error, clean up the Java servlets.

Symptoms

You receive the following error message.


There has been an Error. Please Contact your System Administrator
(java.lang.IllegalStateException)

Causes

The error is raised when a method is invoked and the Java VM is in a state that is
inconsistent with the method. There might also be corrupted Java servlets that are
caused by deadlocks.

Environment

The Guardium system is affected.

Resolving the problem

Wait a few minutes and retry. If the error persists, restart the GUI by logging in as
user cli and executing the command restart GUI.

To clean up the Java servlets, run the command support clean servlets.

If the problem is not resolved, collect the following Tomcat logs and contact
IBM Security Guardium Technical Support.
tomcat_log/localhost.<date_stamp>.log
tomcat_log/catalina.<date_stamp>.log

Pages are not loading correctly
If pages do not load correctly, restart the GUI or use a different browser.

Symptoms
You might see a blank screen or other errors. The problem appears to happen with
certain browsers on specific systems but not with others.

Causes
The cause might be restricted to a localized browser or there is a Java virtual
machine issue.

Environment
The collector, aggregator, and central manager are affected.

Resolving the problem


To resolve the problem, run restart GUI from the CLI prompt on the Guardium
system. If that does not help, try the following actions.
v Restart the system.
v Uninstall and reinstall the Java virtual machine.
v Uninstall and reinstall the browser.
v Use a different browser.

Policies
Query does not appear in the correlation alert definition
If the query does not appear in the correlation alert definition, check the count
field and sort by time stamp.

Symptoms
You created an access query for creating a correlation alert. However, in the
correlation alert definition, this query does not appear in the drop-down list.

Causes
The correlation alert search in the report is based on the time stamp.

Environment
The collector and aggregator are affected.

Resolving the problem


Mark the Add Count check box and sort by time stamp.

Rule does not trigger


If a rule with a value in the policy command field does not trigger as expected,
reconfigure the rule.

Symptoms
Rules with a value in the policy Command field do not trigger as expected.

Causes
The cause is a misconfiguration in the command field. The Guardium parser does
not consider the command modifiers to be a part of a command.

Environment
Guardium Collectors. The command field in the policy rule is also affected when it
is used with wildcard (%).



Resolving the problem
The value in the Command field of the rule must match a value exactly that is
shown in SQL Verb, plus a wildcard (%) as needed. This example is correct.
GRANT
GRANT%

This example is incorrect.


GRANT% TO PUBLIC
%GRANT% ADMIN OPTION%

ADMIN OPTION and TO PUBLIC do not match and cannot trigger a rule because the
Guardium parser does not recognize them as a part of a command. Generally, the
parser does not consider command modifiers to be part of a command. Instead,
create a report to inspect the traffic that the policy monitors and include the SQL
Verb field from the Command entity in that report. Anything that is listed in the
SQL Verb field is recognized by the parser and can be used in the Command field
of a policy rule. Several commands can be added to a group and the group can be
used in the rule instead of a single command. In this case, each group member
must match an entry in SQL Verb. Guardium includes several such command
groups that you can use or clone.

Redact function causes overly masked result


If the redact function causes an overly masked result, use the regular expression
[\x0c]{1}[0-9]{8}([0-9]{4}).

Symptoms
The redact function causes an overly masked result or an ORA-03106 error in
Oracle traffic.

Causes
The redact function in the Guardium policy rule is doing a pattern match with the
result set. It has a feature to replace the matched string with the user specified
character.

Environment
Guardium collectors are affected.

Resolving the problem


Use the regular expression [\x0c]{1}[0-9]{8}([0-9]{4}). This regular expression
ensures that it starts with the length of the column followed by 12 digits and
replaces the last 4 digits.
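A minimal sketch, using standard Java regular expressions, of what this pattern
matches; the sample string and class name are invented, and the actual masking is
performed by the Guardium redact engine, not by this code.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RedactPatternSketch {
    public static void main(String[] args) {
        // \x0c is the length byte (12) that precedes the 12-digit value in the result set
        Pattern p = Pattern.compile("[\\x0c]{1}[0-9]{8}([0-9]{4})");
        String sample = "\u000C123456789012"; // hypothetical column value
        Matcher m = p.matcher(sample);
        if (m.find()) {
            // Group 1 is the trailing 4 digits that the rule replaces
            System.out.println("Digits to mask: " + m.group(1));
        }
    }
}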

SSH sessions and automated CRON jobs that log in to your Oracle database are shown as failed logins
If SSH sessions and automated CRON jobs that log in to your Oracle database are
shown as failed logins, amend the policy.

Symptoms
SSH sessions and automated CRON jobs that log in to your Oracle database
through SQLPLUS and RMAN with /as sysdba show as failed logins.

Causes
Oracle responds to these logins with the following error on such attempts, even if
it is not shown on the screen.
ORA-01017: invalid username/password; logon denied.

This error triggers the failed login alert. For example, if the database user
WRONGLOGIN is a member of the DBA group and logs in with sqlplus WRONGLOGIN
as sysdba, the database authentication of WRONGLOGIN fails. This failure causes
the ORA-01017 error alert to trigger and is reflected in the Guardium log. However,
users with sysdba privileges can connect to the database without database
authentication, so the session is allowed to continue. Both events are captured and
recorded.

Environment
Guardium collectors are affected.

Resolving the problem


You can amend the policy to include an allow action before the rule that alerts
about failed logins. Create an exception rule in the policy with the following
conditions.
Client IP=<Server IP>
Source program = SQLPLUS
DB user in trusted group
OS user in group of Oracle DBAs
Net protocol = BEQUEATH (if local BEQUEATH, not TCP)

This rule skips the failed login alerts that are caused by the ORA-01017 error, but
the events are still logged. To filter the failed login alerts out of the reports, add
these conditions to the end of the conditions list:
AND
(
client IP<>server IP OR
src prg <> SQLPLUS OR
db user NOT IN group of trusted OR
os user NOT IN group of oracle DBAs OR
net protocol <>BEQUEATH (if this is local BEQUEATH, not TCP )
)

The Guardium internal database is filling up


If the Guardium internal database is filling up, you can purge the data manually
or as part of the regular purge strategy.

Symptoms
The Guardium internal database is filling up and most of the data is in the
GDM_POLICY_VIOLATIONS_LOG table.

Causes
A change to the policy can cause a policy violation rule to be triggered frequently.
You might find that most of the data is stored in the
GDM_POLICY_VIOLATIONS_LOG table.

Environment
The Guardium collector is affected.

Diagnosing the problem


Run the CLI command support show db-top-tables all.

Resolving the problem


Check the Policy Violations / Incident Management report to identify which
policy rule is getting triggered constantly. Then, adjust the policy rule to prevent it
from getting triggered as often.



The excess data in the GDM_POLICY_VIOLATIONS_LOG table is purged as part
of the regular purge strategy. However, if you would like to manually clean data
from the GDM_POLICY_VIOLATIONS_LOG table, you can use the command support
clean DAM_data policy_violations <start_date> <end_date>.
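For example, an illustrative invocation for a one-month window might look like the
following; the dates are invented, and the expected date format should be confirmed
in the CLI reference for your release.

support clean DAM_data policy_violations 2015-09-01 2015-09-30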

Reports
Cannot modify the receiver table for an Audit Process after it has
been executed at least once
If you cannot modify the receiver table for an audit process, clone the audit
process and replace the original.

Symptoms
After an audit process runs at least once, you can neither remove nor add a
receiver. You also cannot modify the following properties for a receiver.
v Action Req.
v Cont.
v Appv. if Empty

Causes
After an Audit Process runs at least once, the receiver table is locked and you
cannot modify most of the properties.

Environment
All Guardium configurations (collector, aggregator, central manager) are affected.

Resolving the problem


The following steps enable you to modify the receiver table.
1. Clone the audit process.
2. Make changes to the cloned audit process.
3. Delete the original audit process. However, if you do not want to lose the audit
process history, you can rename the audit process.
4. Rename the cloned audit process to the name of the original one.

Cannot see multi-byte characters


If you export a Guardium report to PDF and the characters are not correct, switch
the PDF font configuration.

Symptoms
You can view reports in the GUI. However, when you export the report to PDF, the
characters are not correct or missing. The characters appear as question marks or
other symbols in the PDF report.

Causes
The default font in Guardium PDF exports does not show multi-byte characters
correctly. For example, Greek, Cyrillic, and Chinese characters do not display
correctly.

Environment
The collector, aggregator, and central manager are affected.

Resolving the problem


In version 9 and later, switch the PDF font configuration to resolve the problem.
1. Log in as user cli.
2. Run the command store pdf-config multilanguage_support.
3. Select 2 Multi-language.

File system is almost full


If the Guardium file system is almost full, change the log rotation strategy.

Symptoms
The file system is filling up and approaching 100%.

Causes
Alerts and reports are sent to the syslog and can fill up the file system.

Environment
The collector or aggregator might be affected.

Resolving the problem


By default, the log files rotate weekly and keep five files. However, you can change
the log rotation strategy for the log files. Use the following command to keep
fewer messages in the system.
store logrotate [agg|message] [daily|weekly|monthly] [# of rotations]
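For example, assuming the syntax shown above, the following illustrative command
rotates the message logs daily and keeps three rotations:

store logrotate message daily 3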

Guardium audit reports viewed in Microsoft Excel have rows with unexpected characters
If you view an Audit report in .csv and see rows with unexpected characters, use
another .csv viewer or view it as a .pdf file.

Symptoms
When you view an Audit report (in .csv format) in Microsoft Excel, you notice that
certain rows are filled with unexpected characters. The characters might look
similar to what you find in the full SQL column. The problem is not seen in .pdf
reports or in GUI reports.

Causes
Microsoft Excel limits the contents of a cell to 32,767 characters. If your
captured SQL is longer than this limit, it spills over onto the next row.

Environment
The Collector, Aggregator, and Central Manager are affected.

Resolving the problem


Use another .csv viewer that has a larger limit on characters per cell or view the
audit report as a .pdf file instead.

Reports show IP address as 0.0.0.0


Symptoms
The IP address shows as 0.0.0.0 in Guardium.

Causes
While Guardium is decrypting the traffic, the IP address is initially recorded as
0.0.0.0 because the sniffer does not know what the actual IP address is. After the
decryption is completed, a separate thread repopulates the session tables with the
correct IP address.

Environment
Any database that encrypts the database traffic is affected.



Resolving the problem
Run the same report after a few minutes. To view the correct client IP for newer
traffic, add the field Analyzed Client IP from the client/server domain to the
report. It is possible that for some rows, the Analyzed Client IP is blank. If it is
blank, the decryption for that piece of traffic is not completed.

Request was interrupted or quota exceeded error message


If you receive an error message that states the request was interrupted or the quota
was exceeded when you run a report, divide the report into pieces of shorter
reporting interval.

Symptoms
When you run a report in Guardium, you receive the following error
message: Request was interrupted or quota exceeded.

Causes
The error message Request was interrupted or quota exceeded appears when an
interactive report does not complete within the 3-minute time limit. The
underlying cause is generally the size of the report.

Environment
The collector and aggregator are affected.

Resolving the problem


To resolve the problem, complete one of the following options.
v Divide the report into pieces of a shorter reporting interval. This action is the
most recommended method. If a report exceeds 4 GB, it causes a MYSQL table
data pointer size exhaustion.
v Increase the query timeout value. Click Manage > Activity Monitoring >
Running Query Monitor to open the Running Query Monitor, type a number of
seconds in the Report/Monitor Query Timeout box, and click Update.
v Run the report in the background. Reports that run in the background are not
subject to the query timeout.
v Run the report as an audit process.


Scheduled Job Exceptions every 5 minutes


If you receive a Scheduled Job exception every 5 minutes, deactivate the alert from
the Anomaly Detection page.

Symptoms
You receive the same message in the Scheduled Jobs Exceptions report at regular
short intervals, typically every 5 minutes. This interval is the same as the polling
interval that anomaly detection runs on.

An example of the Scheduled Jobs Exceptions report might look like the following.
Timestamp              Exception Description                           Count of Exceptions
2013-12-05 15:51:22.0  java.lang.NumberFormatException: empty String   1

The same exception appears every 5 minutes.

Causes
One of the active alerts is causing the error.

Environment
Guardium collectors and the Aggregator are affected.

Diagnosing the problem


You can check the polling interval and active alerts in the Anomaly Detection page.
Click Protect > Database Intrusion Detection > Anomaly Detection to open the
Anomaly Detection page.

Resolving the problem


Identify the exact alert that is causing the problem and deactivate it.
1. Deactivate one alert from the Anomaly Detection page.
2. Wait for the length of the polling interval to elapse.
3. Check to see whether the errors stop with that alert deactivated.
4. If not, reactivate the alert and deactivate the next one.
5. Repeat steps 2 - 4 until you have tried all of the alerts.

If you find the alert that is causing the problem and need assistance to understand
or stop the error, contact IBM Guardium Technical Support and provide the
following items:



1. The exact error text and screen capture.
2. Output of the following CLI commands. If requested, specify the length of one
polling interval.
support must_gather app_issues
support must_gather alert_issues

Scheduled jobs exception: merge required, delay executing process


If you receive an error message that states merge required, delay executing
process, reschedule the audit process.

Symptoms
You receive the following message. Merge required, delay executing Process.
You might receive several of these messages over a short period.

Causes
The audit process requires the merge process to finish before it can run.

Environment
The aggregator is affected.

Diagnosing the problem


Click Reports > Guardium Operational Reports > Aggregation/Archive Log to
open the Aggregation/Archive Log. You can also diagnose the problem in
agg_progress.log.

Resolving the problem


Reschedule the audit process to run at least 10 minutes after the merge process.

The database user is not shown correctly in Guardium reports when you monitor Teradata


If Guardium reports do not show the database user correctly when you monitor
Teradata, configure the Teradata Database.

Symptoms
When you view records from the monitored Teradata Database in Guardium
reports, the database user name field does not show up as expected. The user
name is truncated or missing.

Causes
The Teradata Database is not enabled to return the full user name.

Environment
Any Guardium collector that captures data from the Teradata database is affected.

Resolving the problem


Use the following command to enable the Teradata Database to return the full user
name, in the correct character set, to the monitoring application. Other applications
are not affected.
gtwcontrol -u yes -d

The -d option displays the updated GDO settings.

Note: This setup returns the user name in unencrypted form. If encryption is
enabled, the system returns an error message.

Unexpected results in Guardium reports with embedded
commands
If you receive unexpected results in Guardium reports, configure your policy rules
to handle depth by using tuples.

Symptoms
You see results in your reports that you do not expect or that you believe should
be filtered out by the policy. Conversely, you do not capture statements that you
expect to capture.

Causes
The SQL usually has several objects and commands that are embedded in the
statement. The policy or report definition is not configured to deal with objects or
commands at different depths.

Environment
Guardium collectors are affected.

Resolving the problem


Verify that your conditions match the correct object name. Use the correct main
entity to show objects or SQL verbs at different depths. If you still see unexpected
behavior, use the group builder to define a group of tuples to use in the policy. A
tuple allows multiple attributes to be combined to form a single group member.

Note: Tuple supports the use of one slash and a wildcard character (%). It does not
support the use of a double slash.

Assess and Harden


CAS is not working with Java 1.7 on Windows
If the Guardium change audit system (CAS) is not working with Java version 1.7
on Windows, copy msvcr100.dll to your CAS bin folder.

Symptoms
Guardium CAS works with older Java versions but not with Java 1.7.

Causes
msvcr100.dll is missing from <GUARDIUM STAP directory>\cas\bin\

Environment
Guardium CAS on Windows is affected.

Resolving the problem


To resolve the problem, complete the following steps.
1. Find the path where Java 1.7 is installed on your system such as C:\Program
Files (x86)\Java\jre7\bin
2. Find the location of the library jvm.dll within the Java path found in the
previous step.
3. Edit the cas.cfg file in the <CAS directory>\conf directory. For example,
C:\Program Files (x86)\GUARDIUM_STAP\cas\conf\cas.cfg is a typical file path.
4. Find the line corresponding to the JVM such as ;JVM=c:\program
files\java\jre1_2_3\bin\client\jvm.dll.



5. Remove the semicolon from the beginning of the line. Then, set the JVM to the
path of the library jvm.dll in step 2. JVM=C:\Program Files
(x86)\Java\jre7\bin\server\jvm.dll.
6. Copy msvcr100.dll from the bin folder in your Java 7 installation directory to
your <CAS directory>\bin folder. For example, copy C:\Program Files
(x86)\Java\jre7\bin\msvcr100.dll to C:\Program Files (x86)\Guardium\
GUARDIUM_STAP\cas\bin\msvcr100.dll.
7. Restart the change audit system.

Note: This is only needed for Java version 1.7. For older versions of Java, this step
is not needed.
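
As an illustration of steps 4 and 5 in the preceding procedure, the relevant cas.cfg
line before and after the edit might look like the following. The Java 7 path is the
example path from the earlier steps; adjust it to your installation.

Before the edit, the line is commented out with a semicolon:
;JVM=c:\program files\java\jre1_2_3\bin\client\jvm.dll

After the edit, the semicolon is removed and the path points at the Java 7 server JVM:
JVM=C:\Program Files (x86)\Java\jre7\bin\server\jvm.dll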

Vulnerability Assessment exception group members appear in failed test


If members of a test exception group appear in a failed vulnerability assessment
test, use an escape sequence for the backslash character.

Symptoms
Some members of a test exception group appear in the details field when you run
a vulnerability assessment. The group contains members with a backslash character
and a REGEX tag such as (R)US\John Doe.

Causes
Special characters can trigger errors when Guardium parses the exception group.

Environment
Guardium collectors are affected.

Resolving the problem


Use an escape sequence for the backslash character or do not use the REGEX tag
(use an exact match). Either of these examples work.
US\John Doe
(R)US\\John Doe

The REGEX tag (R) is used to trigger a regular expression search of the details
field to remove any string that matches the regular expression. A backslash or any
other character that has a meaning in a regular expression needs a backslash
escape sequence to avoid parsing errors. If you do not use the (R) tag, the group
member must exactly match the entire line in the details field for Guardium to
make a match. To pass the vulnerability test, the details field of the test must be
empty.

Configuring your Guardium system


Cannot configure S-TAP after upgrade
Configure S-TAP in Guardium after you upgrade S-TAP.

Symptoms
After you upgrade S-TAP using the Guardium Installation Manager (GIM), you
cannot configure the database path parameters in the Inspection Engine in
Guardium even though the installation results for the module show as successful.

Causes

K-TAP is not properly upgraded if the new S-TAP is installed as a fresh module.
Because the old K-TAP module is not removed, there is a protocol mismatch
between the old K-TAP module and the new S-TAP.

Environment
S-TAP installed on UNIX and Linux systems such as AIX, HP-UX, Linux, and Solaris is affected.

Diagnosing the problem


To diagnose the problem, run the guard_diag utility to collect must-gather data
for the Guardium S-TAP.

The following lines are seen in the syslog file.


STAP and KTAP Protocol Version Mismatch,
Exit!!!!!: No such file or directory
Tap_controller::init failed
GUARD-01: Error Initializing STap

The modules log file lists the old K-TAP. For example: ktap_24276 338760 0

Resolving the problem


To resolve the problem, follow these steps in the GIM modules installation pane.
1. Set K-TAP Live Update to Y.
2. Set KTAP_ENABLED to Y and reinstall the new S-TAP.
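
In the GIM module parameters, these settings correspond to the following values.
This is a minimal sketch; the parameter names are the ones used for GIM-based
S-TAP installations elsewhere in this guide.
KTAP_LIVE_UPDATE=Y
KTAP_ENABLED=Y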

Guardium fails to recognize the network device VMXNET x


If Guardium fails to recognize the network device VMXNET x, install Guardium
on a virtual machine and add the network adapter.

Symptoms
Guardium fails to recognize the network device VMXNET x during the installation
on VMware. You receive the error eth0: unknown interface: No such device
when you install Guardium on VMware as a guest. The error message appears
after you restart the system.

Causes
VMXNET x virtual network adapter requires a specific driver that is only
contained in VMware tools and no operating system has the driver. Guardium is
running on Linux and the installer does not have a driver for VMXNET x.

Environment
The Guardium system is affected.

Resolving the problem


Resolve the problem by completing the following steps.
1. Create a virtual machine on VMware by using a default network adapter such
as E1000 or Flexible.
2. Install Guardium on the virtual machine.
3. Install the current GPU cumulative patch for Guardium.
4. After the installation, log on to the CLI console and run the command setup
vmware_tools install to install VMware tools.
5. Shut down the Guardium system from the CLI console with the command stop
system.
6. Edit the virtual machine settings with a VMware client tool such as VMware
Infrastructure Client. Select the current network adapter and remove it.



7. Add the network adapter called VMXNET.
8. Restart the Guardium system.

Guardium network interface error after system board replacement


If you receive an error message after a hardware repair, reset the network
parameters.

Symptoms
After a hardware repair such as replacing the system board on the Guardium
appliance, the network connectivity is lost. The following error message occurs for
each network interface when the appliance is rebooted.
rtnetlink answers: no such device

Causes
After you replace the system board, the MAC address will change. This change
causes a disparity between the actual MAC address and what is stored in the
interface configuration files.

Environment
Any Guardium appliance (collector, aggregator, or central manager) on which the
system board has been replaced and all Guardium versions are impacted.

Resolving the problem


Log in to the appliance from the console as user CLI and reset the network
parameters by running the following commands.
store network interface inventory
restart network
store network interface ip <IP_address>
store network interface mask <netmask>
store network routes defaultroute <gateway_address>
restart network
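
For example, on an appliance whose interface should use the hypothetical address
192.0.2.15 with a 255.255.255.0 netmask and the gateway 192.0.2.1, the sequence
might look like the following.
store network interface inventory
restart network
store network interface ip 192.0.2.15
store network interface mask 255.255.255.0
store network routes defaultroute 192.0.2.1
restart network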

If the problem is still not resolved, contact Guardium Support for manual
intervention.

Guardium virtual machine is not accessible from the network


If the Guardium virtual machine is not accessible from the network, run the
command store network interface inventory and restart the system.

Symptoms
You implemented a new Guardium system as a virtual machine and performed all
the required initial network configuration. However, you cannot ping the system
using the IP address and the system is not accessible in the network.

Causes
The MAC address assigned to the virtual machine by the virtual environment does
not match the MAC address in Guardium.

Environment
The collector, aggregator, and central manager are affected.

Diagnosing the problem


To diagnose the problem, ping the IP address on a network. Use the command
ping <appliance's IP address>. If it fails, show the MAC address for the system.
1. Log in as user “cli”.

2. Run the command show network macs to show the MAC address stored in the
Guardium configuration.
3. From the administration utility for your virtual environment, check the MAC
address for the virtual machine.
a. Open the VMWare Workstation.
b. Right-click the virtual machine and select Settings or Properties to open the
Virtual Machine Settings.
c. Select Network Adapter under Hardware.
d. Click Advanced to open the Network Adapter Advanced Settings.
e. Compare the MAC address from steps 2 and 3.

Resolving the problem


To resolve the problem, complete the following steps.
1. Log in to the Guardium system as user “cli”.
2. Run the command store network interface inventory.
3. Enter y to reset the NICs.
4. Restart the system with the command restart system.

SSLv3 is enabled
If you receive a warning that SSLv3 is enabled, disable SSLv3 to prevent the
POODLE exploit.

Symptoms
You receive the following warning: SSLv3 is enabled.

Causes
SSLv3 contains a protocol vulnerability known as Padding Oracle On Downgraded
Legacy Encryption (POODLE). If SSLv3 is enabled on your system, this
vulnerability allows attackers to force an SSL/TLS fallback to SSLv3, break the
encryption, and intercept network traffic in plaintext. The vulnerability is detailed
in the National Vulnerability Database as CVE-2014-3566.

Guardium recommends disabling SSLv3 on all systems to prevent the POODLE
exploit, and SSLv3 is disabled by default on new Guardium systems. However,
older systems and some upgrade scenarios may leave SSLv3 enabled.

This topic describes how to check the status of SSLv3 and disable it if necessary.

Attention: Disabling SSLv3 can disrupt connectivity between a Guardium v10
Central Manager and some managed units running Guardium v9 before GPU 500.
If you have a mixed environment with managed units running Guardium v9
before GPU 500, either upgrade the managed units to GPU 500 or apply patch 9501
before disabling SSLv3.

Resolving the problem


1. Verify the status of SSLv3 using the following CLI command: show sslv3.
v If the output indicates SSL setting is disabled, SSLv3 is disabled. No
additional steps are required to disable SSLv3.
v If the output indicates SSL setting is enabled, SSLv3 is enabled. Continue
with this procedure to disable SSLv3.
2. Disable SSLv3 using the following CLI command: store sslv3 off. The
command output should be similar to the following:



Current SSL setting is enabled. Will change to disabled.
Restarting gui
Changing to port 8443
From port 8443
Stopping.......
ok
3. Verify that SSLv3 is now disabled: show sslv3. The output should now indicate
SSL setting is disabled.

Access Management
Cannot log in to Guardium except as admin or accessmgr
If you cannot log in to the Guardium GUI as any user except admin or accessmgr,
check the authentication configuration settings.

Symptoms
You are unable to log in to Guardium with any user except admin or accessmgr.
You see an invalid user name or password error despite using the correct user and
password as defined by accessmgr. You receive the following error message.
Invalid user name and/or password. Please reenter your credentials.

Causes
The authentication setting is not configured as local.

Environment
The collector, aggregator, and central manager are affected.

Resolving the problem


To solve the problem, change the authentication setting to local. This action enables
you to log in as any user defined in the accessmgr.

Guardium accessmgr password reset


If you lose the accessmgr password and cannot log in, contact Guardium support.

Symptoms
You lost the Guardium accessmgr password and cannot log in to the GUI. The
account is also locked after successive failed attempts.

Causes
Guardium prohibits multiple failed login attempts.

Environment
The collector, aggregator, and central manager are affected.

Resolving the problem


Log in to the CLI and run the following command: support reset-password
accessmgr <N>|random.

You can use <N> or random where <N> is a number in the range of 10000000 -
99999999. Random automatically generates a number in the range of 10000000 -
99999999. Open a PMR with IBM Guardium support and send the following
output.
G10.ibm.com> support reset-password accessmgr random
Password for accessmgr account have been successfully reset using keyword:<passkey>
Please provide these number to Guardium Customer Service to receive actual account password.
ok

After you receive the new password, unlock the account.
1. Use the following command to unlock the account. unlock accessmgr.
2. Log in as accessmgr and edit the accessmgr details to enter a temporary
password.
3. Log in again with the temporary password.
4. When you are prompted, enter a new password.

Aggregation
Cannot convert Guardium collector to aggregator
If you cannot convert a Guardium collector to a Central Manager aggregator,
reinstall Guardium and select aggregator during installation.

Symptoms
You try to convert a Guardium collector to an aggregator with the command store
unit type manager aggregator.

However, the following command shows that the unit type is still listed as
manager.
> show unit type
Manager

Causes
A collector cannot be converted to an aggregator with a CLI command.

Environment
Guardium collectors are affected.

Resolving the problem


To convert a collector to an aggregator, reinstall the Guardium product and select
aggregator as the unit type during installation. After you install the aggregator,
you can convert it to a central manager aggregator with the command store unit
type manager.

Data Export configuration change from a Guardium managed system's GUI fails with error


If a Data Export configuration change fails, make sure that the shared secret key is
the same on the collector and aggregator.

Symptoms
You attempt to save new settings for the data export and get the following error
when you click Apply to save the configuration:
Please correct the following errors and try again:
A test data file could not be sent to this host with the parameters given. Please confirm the host

Causes
Guardium attempts to log in with scp to the target host with the user and
password that are specified in the Data Export configuration. Then, Guardium
attempts to copy a test file to the target directory. The shared secret on this system
does not match the Shared Secret on the aggregator you are trying to set this
system to export to.

Environment
The Guardium configurations: collector and aggregator are affected.



Resolving the problem
Make sure that the shared secret key is the same on the collector and aggregator.
You can use one of the following methods:
1. If you know the shared secret on the aggregator, set the shared secret on the
collector to the same value. You can use one of these methods:
v From CLI: use command store system shared secret to set the Shared
secret key
v From GUI, set the shared secret key under Administration Console > Config
& Control > System.
2. Back up the current shared secret on the aggregator and restore it to the
collector, as shown in the example after this list.
v On the aggregator, run the CLI command.
aggregator backup keys file <user@host:/path/filename>
Parameters
user@host:/path/filename

For the file transfer operation, specify a user, host, and full path name for the
backup keys file. The user that you specify must have the authority to write
to the specified directory.
v On the collector, run this command to restore the shared secret key:
aggregator restore keys file <user@host:/path/filename>
3. Reset the shared secret for both appliances to be the same.

Note: If you change the shared secret for the aggregator, you need to reset the
shared secret for all other Guardium systems that export to it.
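
For example, with a hypothetical transfer account, host, and path, the backup and
restore commands from option 2 might look like the following. Run the first
command on the aggregator and the second on the collector.
aggregator backup keys file backupuser@192.0.2.50:/var/backups/agg_keys.bak
aggregator restore keys file backupuser@192.0.2.50:/var/backups/agg_keys.bak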

Difference between audit process results and report


If there is a difference between your audit process results and the report, check
that all appliances are set to the same timezone.

Symptoms
You set a report to run on the aggregator as part of an audit process with time
parameters, for example, Start of Last Day and End of Last Day. When you look
at the results of that report, the first time stamps are always a set time after 00.00,
for example, 02.00. Additionally, the last time stamps are always a set time
before 23.59, for example, 21.59. However, when you run the report interactively,
the time stamps are shown as expected.

Causes
The collector and aggregator time zones might not be set the same.

Environment
The aggregator is affected.

Diagnosing the problem


Check that all appliances are set to the same timezone. Use the following
command. show system clock timezone.

Resolving the problem


If the collector and aggregator are not set in the same timezone, configure the
timezone of the appliances with the CLI.
store system clock timezone list
store system clock timezone <timezone>
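
For example, to list the available zone names and then set an appliance to the US
Eastern zone, run the following commands. America/New_York is an illustrative
value; use a zone name that is returned by the list command.
store system clock timezone list
store system clock timezone America/New_York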

Verify that the time is correct on the appliance with the following commands.
show system clock datetime
store system clock datetime

The datetime can also be synchronized by using an NTP server with the following
commands.
show system ntp all
store system ntp state
store system ntp server

HY000 errors after restoring the configuration in an aggregator


If you receive HY000 errors after you restore the configuration in an aggregator,
run a dummy import.

Symptoms
When you restore the configuration of an aggregator or the Central Manager, you
receive one or both of these messages.
ERROR 1031 (HY000) at line 1: Table storage engine for ’GUARD_USER_ACTIVITY_AUDIT’ doesn’t have this option
ERROR 1031 (HY000) at line 1: Table storage engine for ’AGGREGATOR_ACTIVITY_LOG’ doesn’t have this option

Causes
This error condition can occur if there is a temporary mismatch in the internal
databases.

Environment
The collector and aggregator are affected.

Resolving the problem


To resolve the problem, run a dummy import.

Central Management
A user is disabled in a Guardium managed unit, but shows as
enabled on Central Manager
If a user is disabled in a Guardium managed unit but shows as enabled on Central
Manager, run the Portal User Sync.

Symptoms
A user is disabled in the managed unit. The user's account is then re-enabled in the
Central Manager, but the user still shows as disabled in the managed unit even
though the account shows as enabled in the Central Manager.

Causes
The user's account in the Central Manager is not synchronized with the managed
unit.

Environment
A combination of the Central Manager, collector, or aggregator might be affected.

Resolving the problem


To synchronize the current user status between the Central Manager and the
managed unit, run a Portal user sync.
1. Log in to the Central Manager as an admin user.
2. Click Manage > Central Management > Portal User Sync to open the Portal
User Synchronization.



3. Click Run Once Now.

If the user's account between the managed unit and the Central Manager is still
not synchronized, contact the IBM Guardium Technical Support for assistance.

Central Manager does not recognize the new version of upgraded units


If the Central Manager does not recognize the new version of upgraded units,
select the upgraded units and refresh the page.

Symptoms
The Central Manager might not immediately recognize the new version of an
upgraded aggregator or collector it manages. Pushing a patch from the Central
Manager, which requires the new version, can result in an error that shows the
unit is still at the previous version.

The managed unit's old version is still displayed in the Central Management view
of the GUI. The unit's ping time is current in that view, which implies good
communication between the Central Manager and the managed units.

Causes
The GUI needs to be refreshed to pull the new version information.

Environment
The Guardium Central Manager is affected.

Resolving the problem


In the Central Management view of the GUI, select the upgraded units and click
Refresh. This action pulls the new version information from the units.

Scheduled tasks do not fire at the scheduled time


If scheduled tasks do not fire at the scheduled time, schedule the import time to
run after the portal user sync.

Symptoms
Import fails and you receive the following message in agg_progress.log.
* 05/20 04:00:01 --- Import cannot start
(guard_agg|turbine_backup.sh|restore_from_file.pl already running)
* 05/20 20:00:46 --- Merge cannot start - aggregation still active

Causes
There is a conflict with the Central Manager portal user sync.

Environment
The aggregator is affected.

Diagnosing the problem


Find out which task is running in the background. Click Reports > Guardium
Operational Reports > Aggregation/Archive Log to open the Aggregation/Archive
Log.

Resolving the problem


To resolve the problem, schedule the import to run after the portal user sync. For
example, run the portal user sync every hour and schedule the import to start 30
minutes later.

Scheduled policy installation fails on managed units
If the scheduled policy installation fails on managed units, adjust the policy
installation schedules for managed units.

Symptoms
The scheduled policy installation fails on managed units. For example, on Monday,
collectors 1, 3, and 4 fail to install the policy. On Tuesday, collectors 1, 5, and 6 fail
to install the policy. The scheduled jobs exceptions report indicates an error, but
the managed units fail intermittently.

Causes
The managed units cannot all be scheduled to install a policy at the same time.

Environment
Managed units at versions 8.2, 9, and 10 are affected.

Resolving the problem


Adjust the schedule of policy installation for the managed units. For example,
stagger the policy installations by 5 or 10 minutes.

Torque exception in Central Management view of GUI


If there is a torque exception in Central Management, delete the custom group and
create a new group.

Symptoms
Selecting a certain custom group in the Central Management view of the
Guardium GUI displays an error instead of the managed units in the group.
org.apache.torque.TorqueException: Failed to select one and only one row.

After the exception appears, it shows for any group or view under the Central
Management tab. The exception even appears for groups that were previously
working until you log out of the GUI and log back in.

Causes
This torque exception might occur if one of the managed units in the group was
unregistered from the managed unit instead of the Central Manager.

Environment
Guardium Central Manager is affected.

Resolving the problem


Delete the custom group and create a new group that contains the same members.

S-TAPs and other agents


AIX 6.1 fails when you install or upgrade IBM Security Guardium
S-TAP
If the operating system fails when you install or upgrade Guardium S-TAP on AIX
6.1, apply the appropriate AIX 6.1 fix pack.

Symptoms

The operating system fails when you install or upgrade Guardium S-TAP on AIX
6.1. The AIX crash memory dump shows the following stack trace.



Error ID: DD11B4AF Resource Name: SYSPROC
Detail Data: 00007FFFFFFFD080 0000000000473260
0000000000020000 8000000000029032

Symptom Information:
Crash Location: [0000000000473260] execvex_common+1880
Component: COMP Exception Type: 131

Stack Trace:
[0000000000473260] execvex_common+1880
[000000000047744C] execve+A8
[F1000000C083E84C] my_execve+424

Causes

This crash is a known issue in AIX version 6.1 that is caused by a defect in the
execvex_common code path.

Environment

Any S-TAP that is installed on the AIX 6.1 operating system is affected.

Resolving the problem

To resolve the problem, apply AIX 6.1 fix pack 6100-08-04. For details, see
http://www-01.ibm.com/support/docview.wss?uid=isg1IV50179

Error opening shared memory area when you configure Guardium COMM_EXIT_LIST for DB2


If you receive an error message when you configure Guardium COMM_EXIT_LIST,
authorize the DB2 instance owner with the guardctl command.

Symptoms
After you configure DB2 COMM_EXIT_LIST to use Guardium libguard and restart the
DB2 server, you get the following error in the DB2 diag log.
2013-06-28-11.41.12.306169-300 E870950E486 LEVEL: Severe
PID : 15764 TID : 139905833363200 PROC : db2sysc 0
INSTANCE: db2001 NODE : 000
APPHDL : 0-16
HOSTNAME: dbhost1
EDUID : 54 EDUNAME: db2agent () 0
FUNCTION: DB2 UDB, DRDA Communication Manager, sqljcCommexitLogMessage,
probe:234
DATA #1 : String with size, 91 bytes
WARNING: Shmem_access /.guard_writer0 failed Error opening shared memory area errno=2 err=8

Causes
The following message indicates that the Guardium library was unable to create
the shared memory device that it requires.
Shmem_access /.guard_writer0 failed
Error opening shared memory area
errno=2
err=8

The DB2 instance owner must be added as an authorized user using the guardctl
command.

Environment
Guardium collectors that use DB2 Exit (Version 10) Integration with S-TAP are
affected.

Resolving the problem
The DB2 instance owner must be added as an authorized user by using the
guardctl command.
1. Stop the DB2 instance.
2. Authorize the DB2 instance owner.
3. Start the DB2 instance.

If the Guardium Installation Manager (GIM) is not installed, authorize the DB2
instance owner with the following command.
<guardium_installdir>/bin/guardctl authorize-user <db2 instance owner>

If the Guardium Installation Manager (GIM) is installed, authorize the DB2
instance owner with the following command.
<guardium_installdir>/modules/ATAP/current/files/bin/guardctl authorize-user <db2 instance owner>

For example, if the DB2 instance owner is db2001 and GIM is installed in
/usr/local/gim, the command is
/usr/local/gim/modules/ATAP/current/files/bin/guardctl authorize-user db2001.
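
A minimal sketch of the full sequence from the previous steps, assuming a non-GIM
installation under /usr/local/guardium and the db2001 instance owner from the
example. Run the guardctl command as the root user.
su - db2001 -c "db2stop"
/usr/local/guardium/bin/guardctl authorize-user db2001
su - db2001 -c "db2start"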

Guardium fails to collect shared memory traffic from Informix


If Guardium fails to collect shared memory traffic from Informix, check the
inspection engine configuration.

Symptoms
Guardium S-TAP does not collect shared memory traffic from Informix.

Causes
The inspection engine is not correctly configured.

Environment
Any S-TAP collection from any Informix system can be affected.

Resolving the problem


Check the inspection engine configuration under Manage > Activity Monitoring >
S-TAP Control. Ensure that the value in the Process Name field matches the result
of the following command on the database server.
ls -lrt /INFORMIXTMP/.inf.*

For all Informix platforms except Linux, the process name is
/INFORMIXTMP/.inf.sqlexec. For Informix on Linux, the process name is the oninit
executable, for example, /home/informix11/bin/oninit.

Informix must be running for this command to return a value.

For Linux servers, A-TAP must be configured to collect any shared memory traffic.
Set the value to the same value as the --db-info parameter in the A-TAP
configuration before you activate A-TAP.

High CPU and I/O use in the Guardium S-TAP host


If you observe a high CPU or I/O usage, review the configuration for all of the
inspection engines.

Symptoms
You observe a high CPU or I/O usage by the Guardium S-TAP process.

Causes



The following items are common causes.
1. An error in the configuration of one of the inspection engines. If there are
errors in an inspection engine, the S-TAP process restarts frequently or tries to
reconnect to the inspection engine repeatedly.
2. The K-TAP portion of the S-TAP is sending connection information along with
a confirmation request to the S-TAP. This step is causing delays.
3. ORACLE RAC is used, but the unix_domain_socket_marker parameter is not set
in the S-TAP configuration file to avoid monitoring potentially large amounts of
Oracle RAC traffic.
4. The User ID Chain (UID chain) feature is enabled, for example, parameter
hunter_trace=1 in the S-TAP configuration file. Hunter trace is used for UID
chain and can be quite CPU intensive for S-TAP.
5. The firewall is enabled (firewall_installed=1). This setting forces S-TAP to
request a verdict for each new session that is observed, which can hurt S-TAP
performance.

Environment
S-TAP installed on AIX is affected.

Resolving the problem


Based on the cause, take the corresponding actions.
1. Review the configuration for all of the inspection engines and make sure that
there are no errors in any of the parameters. For example, make sure the
database installation directory, executable, ports, and any other parameters
applicable to your inspection engine are correctly set with no misspellings or
wrong values.
2. Set S-TAP configuration parameter ktap_fast_tcp_verdict to 1
(ktap_fast_tcp_verdict = 1 in the guard_tap.ini configuration file) and restart
the S-TAP. Here are the possible settings.
ktap_fast_tcp_verdict=0: K-TAP confirms that the session is a database
connection that an inspection engine configured by checking the ports and IPs.
ktap_fast_tcp_verdict=1: K-TAP does not send the confirmation request to S-TAP
when the session's ports are in the configured range.
3. Disable the UID Chain feature if not needed by setting hunter_trace=0 and
restarting the S-TAP.
4. Set firewall_installed=0 if SGATE is not needed and restart the S-TAP.
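
A minimal guard_tap.ini sketch of the settings that actions 2 - 4 describe. Apply
only the changes that are appropriate for your environment, and then restart the
S-TAP.
[TAP]
ktap_fast_tcp_verdict=1
hunter_trace=0
firewall_installed=0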

Missing information from the login packet


If you are missing information from the login packet, collect the S-TAP debug trace
and slon trace.

Symptoms
You encounter issues in Guardium relating to missing information from the login
packet such as database user name, source program, or database name.

Causes
Login packets might miss information when the session is too short.

Environment
The Guardium collector is affected.

Resolving the problem

Collect the S-TAP debug trace on the database server where the Guardium S-TAP
is installed and the slon trace on the collector.

Refer to the Technotes in the Related URL section for details on collecting each of
these traces.
1. Run both traces at the same time.
2. Generate a new database session that re-creates the issue while both traces are
running. Login packets are only sent when the database connection is open.
3. Add session start, client port, and server port to your existing report. Refresh
the report after you re-create the issue with the new connection.
4. Confirm that the traces are running during the session by checking the session
start.
5. Leave the session open for at least 5 minutes to allow the sniffer to analyze the
login packets.
6. Send the session with the missing fields. State the application name you used
to generate the session, database name, DB user you connected as, type of
connection, SQL statement, and any other pertinent details.
7. Collect the S-TAP debug trace file on the database server, the slon trace on the
Guardium collector, and the current sniffer must gather.

Nanny process is killing sniffer


If the nanny process is killing the sniffer, you might have too much traffic coming
in.

Symptoms
A message similar to the following is reported one or more times in Guardium
system log (messages) or Alerts:
Nanny process error condition. The nanny process killed the sniffer. VmData was number and was ove

Causes
The sniffer memory usage reached over 90% of the available memory and the
nanny process has restarted it, which is expected behavior of the product.

Environment
Guardium collector

Resolving the problem


If you observe this message frequently, too much traffic is coming to the
Guardium system. Reduce traffic to this Guardium system to resolve the message.
For example, you can move some S-TAPs to a collector with less load, ignore some
traffic in your policy, or implement load balancing to spread the traffic among
more than one collector.

If the message appears on only a few occasions, it is most likely a momentary
spike in traffic. To resolve the message, identify the reason for the spike and avoid
the trigger. For example, review which processes were running at that time and
identify the ones that generate the most traffic. If this message always coincides
with a particular process or processes, reduce the concurrent traffic at that time.
For example, you can move the heaviest process to run at a different time, or
ignore some of this traffic through a policy.

Sniffer cannot connect to UNIX S-TAP


Symptoms



When you specify a different number of threads, such as 20, by using the
command snif -t 20, the sniffer cannot connect to the UNIX S-TAP. In the GUI
console, the status of the S-TAP is inactive.

Causes
The sniffer starts with six threads by default. When the number of threads exceeds
the limitation, the sniffer cannot connect to the UNIX S-TAP because of undefined
behavior.

Environment
UNIX S-TAP is affected.

Resolving the problem


Reduce the number of threads to make sure that the connection can be established
successfully.

S-TAP cannot start


If an S-TAP cannot start, its buffer size might be too large.

Symptoms
The S-TAP cannot start and issues the following messages:
mmap: Not enough space
Can’t initialize: Can’t mmap buffer file /tmp/stapbuf/192.168.100.107.0.buf
Error Initializing: Stap cannot initialize SQLGuard queue

Causes
The S-TAP is unable to allocate enough memory to match the buffer file.

Resolving the problem


Reduce the buffer file size for the S-TAP. The size is specified in the
buffer_file_size parameter in the guard_tap.ini file.

S-TAP does not start automatically on Linux


If the S-TAP agent does not start automatically on Linux, check for the
/etc/event.d/ directory.

Symptoms
The S-TAP process does not automatically start on Linux even though the
/etc/inittab file shows a correct U-TAP entry.

Causes
Various Linux distributions such as RedHat 6 deprecated the use of the traditional
init daemon that uses the etc/inittab file. They replaced it with an init process
called Upstart. Upstart uses the /etc/event.d and /etc/init directories for the
automated start, stop, and respawn of processes such as U-TAP.

The S-TAP installer now checks for the existence of the /etc/event.d directory. If it
exists, then entries in /etc/init are created for use by Upstart. If it does not exist,
then entries in /etc/inittab are created for use by the traditional init daemon.

If /etc/event.d is missing for any reason on a system with Upstart, the inittab file
is populated instead. The S-TAP process does not start or respawn when needed.

Environment
S-TAP running on Linux is affected.

Resolving the problem
Check for the existence of the /etc/event.d/ directory.

If the /etc/event.d/ directory does not exist, complete the following steps to
resolve the situation.
1. Uninstall the existing S-TAP installation.
2. Create the /etc/event.d directory as the root user (mkdir /etc/event.d).
3. Install the S-TAP.

S-TAP returns not FIPS 140-2 compliant


If you receive an error about FIPS 140-2 compliance, change the configuration
through the S-TAP Control page.

Symptoms
You see the following message in the S-TAP event log. LOG_ERR: Not FIPS 140-2
compliant - use_tls=0 failover_tls=1.

Causes
FIPS 140-2 is a U.S. government security standard for cryptographic modules. If
you see this message, it indicates that the S-TAP configuration does not meet
government requirements.

Note: This message does not indicate that there is an error with the S-TAP.

Environment
Guardium S-TAP is affected.

Resolving the problem


To enable FIPS compliance, the guard_tap.ini file must have the following
settings.
use_tls=1
failover_tls=0

Any other combination turns off FIPS mode and results in an error message.

You can change the configuration by using one of the following methods.
1. Click Manage > Activity Monitoring > S-TAP Control.
2. Modify the details section for the relevant S-TAP and use the TLS check boxes.
3. Restart the S-TAP.

You can also edit the guard_tap.ini file on the DB server directly and restart the
S-TAP.

The K-TAP kernel module is still present after the uninstallation of S-TAP


If the K-TAP kernel module is still present after the uninstallation of S-TAP,
manually remove it.

Symptoms
The K-TAP kernel module is still present after the uninstallation of S-TAP on a
Solaris server.

Causes
The server did not restart properly to remove the K-TAP kernel module on Solaris
servers.



Environment
The Solaris server after the uninstallation of S-TAP is affected.

Diagnosing the problem


Check on the Solaris server by running both modinfo | grep ktap and ls -al
/dev/*tap*.

Resolving the problem


Manually remove the K-TAP kernel with the following steps.
1. Check that /etc/init.d/upguard is removed.
2. Remove /kernel/drv/sparcv9/ktap* and /kernel/drv/ktap*.
3. Run modinfo | grep ktap to get the name of the loaded driver.
4. Then, run rem_drv <loaded driver>. For example: rem_drv ktap_36821.
5. Remove /dev/ktap* and /dev/guard_ktap.
6. Restart the server.
7. Run modinfo | grep ktap to make sure that the driver is no longer loaded.
8. Remove GIM and gsvr entries from /etc/inittab (if you are using GIM only).
9. Manually clean up remaining files in /usr/local/guardium.

UNIX S-TAP cannot read more than 16 inspection engines


If UNIX S-TAP cannot read more than 16 inspection engines, change listening port
parameters or use PCAP.

Symptoms
UNIX S-TAP reads only the first 16 port_range definitions in the inspection engine
settings.

Causes
By design K-TAP can read only 16 port_range definitions.

Environment
UNIX S-TAP that uses K-TAP and defines more than 16 inspection engines is
affected.

Resolving the problem


Use port_range_start and port_range_end parameters to include all of the
required ports in the first inspection engine definition. This action intercepts all of
the traffic from the specified port range. If you need to ignore some ports in the
range, you can define a policy to ignore the unnecessary server ports.

The following example defines listening ports 50000 - 50020 as target ports to be
monitored.
[DB_0]
port_range_end=50020
port_range_start=50000

Otherwise, use PCAP for TCP connections by setting ktap_local_tcp=1 and
devices=<device_name>.
[TAP]
ktap_local_tcp=1
devices=<Network Device Name>

Windows S-TAP service crashes on startup with error ID 1000
If the S-TAP crashes with error ID 1000, check the SOFTWARE_TAP_IP parameter
in the guard_tap.ini configuration file.

Symptoms

The S-TAP on a Windows server does not start. The Windows event log shows
errors from Guardium S-TAP with event ID 1000.
Log Name: Application
Source: Application Error
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
Description:
Faulting application name: guardium_stapr.exe, version: 9.0.0.0
Exception code: 0x40000015

Causes

S-TAP cannot connect to the Windows system because the wrong
SOFTWARE_TAP_IP is specified in the guard_tap.ini file.

Environment

Any Guardium S-TAP for Windows is affected.

Resolving the problem

Ensure the SOFTWARE_TAP_IP parameter in the guard_tap.ini configuration file
matches the correct IP address of the Windows server. This parameter is passed on
the installation CLI or in the IBM Guardium Installation Manager (GIM)
parameters.

z/OS S-TAP fails to show as active on the Guardium system


If the z/OS S-TAP fails to show as active on the Guardium system, restart the
inspection-core.

Symptoms
z/OS S-TAP fails to show as active on the Guardium system after you start it for the
first time. The policy is correctly configured with a DB2 or IMS Collection Profile
and installed. The z/OS S-TAP is properly configured to use port 16022. All
messages on the mainframe indicate connectivity.

Causes
If the collector has not been actively used as a collector since being built and
configured, the sniffer appears to time out port 16022.

Environment
z/OS is affected.

Resolving the problem


Restart the inspection-core by using the CLI command restart inspection-core.



GIM
Error installing the Guardium Installation Manager (GIM)
If GIM does not install properly, create the directory manually.

Symptoms
When you attempt to install the Guardium Installation Manager (GIM) on RHEL6,
you see the following error message. cp: cannot stat `/usr/local/GIM/modules/
central_logger.log’: No such file or directory Installation failed

Causes
Various Linux distributions such as RedHat 6 deprecated the use of the traditional
init daemon that uses the etc/inittab file. They replaced it with an init process
called Upstart. Upstart uses the /etc/event.d and /etc/init directories for the
automated start, stop, and respawn of processes.

Environment
The Guardium Installation Manager (GIM) is affected.

Resolving the problem


To fix the issue, complete the following steps.
v Remove the partial GIM installation.
v Create the /etc/event.d directory manually with the command mkdir
/etc/event.d
v Run the GIM installer.

Guardium Installation Manager (GIM) service does not start in Windows


If the Guardium Installation Manager (GIM) service does not start in Windows,
reinstall GIM in a folder that is reserved for 32-bit applications.

Symptoms
After you successfully installed the Guardium Installation Manager (GIM) on
Windows, you notice that the service is not running.

Causes
GIM is a 32-bit application. If you are using 64-bit Windows, GIM might be
installed in Program Files instead of Program Files (x86).

Environment
GIM is affected.

Resolving the problem


Install GIM in Program Files(x86) because it is a Windows folder that is reserved
for 32-bit applications.

Installing Your Guardium System


Checksum error during S-TAP installation
If you receive a checksum error, set the transfer mode to binary on the FTP client.

Symptoms
You receive an error similar to the following when you run the S-TAP installer to
install Guardium S-TAP on UNIX or Linux.

./guard-stap-v81_r26808_1-aix-6.1-aix-powerpc.sh
Verifying archive integrity...Error in checksums: 2082112805 is
different from 3728267449

Causes
The installer file is corrupted. The file became corrupted when the file was
transferred to the database server or when the product was downloaded.

Environment
S-TAP on UNIX or Linux is affected.

Resolving the problem


To resolve the problem, make sure that the transfer mode is set to binary on the
FTP client. Then, try the transfer to the database server again. If the process fails,
download the product again.
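
For example, a minimal FTP transfer that sets binary mode before sending the
installer might look like the following. The host name is hypothetical; the file name
is the one from the error message above.
ftp dbserver.example.com
ftp> binary
ftp> put guard-stap-v81_r26808_1-aix-6.1-aix-powerpc.sh
ftp> bye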

Guardium S-TAP returns a cp: illegal option -- f error message


If the S-TAP installation fails with cp: illegal option -- f, run the command which
cp and adjust the PATH environment variable.

Symptoms
The S-TAP installation fails with the following error message.
A directory called ’guardium’ containing Guardium software needs to be created under a path provid
Enter the path prefix [/usr/local]? /opt/guardium
Directory /opt/guardium/guardium/guard_stap does not exist, would you like to create it [Y/n]? Y
Run STAP as root, or as user ’guardium’ [R/u]? R
Please be patient... This might take more than a minute.
Copying installation files...
cp: illegal option -- f
UX:vxfs cp: INFO: V-3-21462: Usage: cp [-i] [-p] f1 f2
cp [-i] [-p] f1 ... fn d1
cp [-i] [-p] [-r|-R] [-e { force | ignore | warn}] d1 d2

Causes
The cp command that is found first in the PATH is not /usr/bin/cp, which the installer expects.

Environment
The UNIX/Linux database server is affected.

Resolving the problem


Run the command which cp.

If which cp returns a value other than /usr/bin/cp, run the command export
PATH=/usr/sbin:/usr/bin:$PATH.

Rerun the command which cp to confirm that the path is /usr/bin/cp.

Installing a new Guardium patch does not complete


If you cannot complete the installation of a new Guardium patch, stop the
interfering process and reinstall the patch.

Symptoms
When you install a new patch it does not complete. The status column in the CLI
command show system patch installed shows one of the following messages.
STEP: Setting “java” off
STEP: Setting “amei” off
STEP: Setting “sqlw” off



Causes
Tomcat, the inspection core, or another process on the machine interfered with the
patch installation.

Environment
The Collector, Aggregator, and Central Manager are affected.

Resolving the problem


To install the new Guardium patch, stop any processes from interfering with the
installation.
1. Delete the patch that is stuck by using the command delete scheduled-patch.
2. Restart the system by using the command restart system.
3. After the system restarts, stop the GUI and inspection core by using the
commands stop gui and stop inspection-core.
4. Reinstall the patch and restart the GUI and inspection core by using the
commands restart gui and start inspection-core.

Missing file or directory after new Guardium S-TAP installation


Symptoms

When you attempt to install S-TAP, you receive the following error message.
Tap_controller::init failed Opening pseudo device /dev/guard_ktap No such file or directory

In addition, /dev/*ktap* does not exist.

Causes

There are many possible reasons why the K-TAP device creation can fail. The
following are the most common causes.
v You did not use the module files, including the K-TAP module for the Linux
kernel.
v You did not specify the Flex Loading option to load the K-TAP module from the
module files.
v A previous K-TAP module from an old installation is still running or installed.

Environment

All Linux and UNIX operating systems in which the IBM Guardium S-TAP
product can be installed are affected.

Resolving the problem

To resolve the problem, take the following steps.


1. Run these commands as root.
<STAP directory>/KTAP/guard_ktap_loader stop
<STAP directory>/KTAP/guard_ktap_loader uninstall
<STAP directory>/KTAP/guard_ktap_loader install
<STAP directory>/KTAP/guard_ktap_loader start
2. Check whether the K-TAP device is now created with the command ls
/dev/*ktap*. If it was created, the issue is resolved. If not, continue to the next step.
3. Stop the S-TAP process guard_stap if it is running. You can check whether it is
running with command ps -ef | grep guard_stap.

4. Verify that the S-TAP process is not running with the command ps -ef | grep
guard_stap.
5. Uninstall the S-TAP.
6. Confirm that the S-TAP directory is gone.
7. Check whether a K-TAP module is still running from an old installation. Use
the appropriate command for your operating system.
Linux : lsmod | grep ktap
Solaris : modinfo | grep tap
HP-UX : lsdev | grep tap
AIX : genkex | grep tap

If a device such as ktap_<release> is listed, then a K-TAP module is running.


8. If you found a running K-TAP module in the previous step, run the following
commands to stop and uninstall the K-TAP module, and then restart the server.
<STAP directory>/KTAP/guard_ktap_loader stop
<STAP directory>/KTAP/guard_ktap_loader uninstall


9. If you are using the Guardium Installation Manager (GIM), click Reset
Clients on the appliance's GUI GIM interface (Administration Console /
Module Installation). Wait for the server to show up in the client list in the
GIM GUI again, which usually takes a few minutes.
10. Reinstall the S-TAP. If you are using GIM to install the S-TAP, reinstall the
S-TAP bundle with GIM and the following commands.
KTAP-ALLOW_COMBOS=Y
KTAP_LIVE_UPDATE=Y
KTAP_ENABLED=Y

Partition error installing Guardium


If you receive a partition error, select Custom installation and specify the disk
location and size explicitly.

Symptoms
When you install the Guardium appliance in VMWare, you receive the following
error:
Error Partitioning
Could not allocate requested partitions:
Partitioning failed: Could not allocate partitions as primary partitions.
Not enough space left to create partition for /boot.

Causes
When you install the Guardium system with VMWare, if you select Typical,
VMWare uses configuration parameters that are predefined for the OS type in
VMWare. These configuration parameters might not be suitable for this installation.

Environment
All Guardium configurations (collector, aggregator, central manager) are affected.

Resolving the problem


Select Custom installation and specify the disk location and size explicitly. Specify
a disk size that is large enough for your monitoring and audit needs. After it is
configured, Guardium does not support adding disk space to the system.



Patch installation fails: No such file or directory
If the patch installation fails, check that the file matches the MD5SUM of the
downloaded patch.

Symptoms
Patch installation in Guardium fails with the error patch.reg: No such file or
directory.

Causes
The following cases can cause the patch installation to fail.
v The patch was not downloaded in binary mode and corrupted the file.
v The compressed file itself was uploaded to the Guardium system.
v The patch was received from Guardium support and has the PMR number
prefixed to the file name.
v The patch was uploaded to the Guardium system from a Windows FTP server.

Environment
The collector, aggregator, and central manager are affected.

Resolving the problem


Verify that the contents of the file match the MD5SUM of the downloaded patch. If
the compressed file cannot be extracted or the MD5SUM does not match,
download the file in binary mode.
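
For example, on a Linux workstation you can compute the checksum of the file that
you plan to upload and compare it to the published MD5SUM. The file name here is
hypothetical.
md5sum SqlGuard-10.0p100_Bundle.tgz.enc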

If the compressed file itself was uploaded to the Guardium system, extract the
compressed file and upload only the patch.

If there is a PMR number prefixed to the file name, remove the number and then
upload the patch to the Guardium system.

If the patch is uploaded from a Windows FTP server, specify the exact file name
with the correct case.

IBM

S-TAP and other agents


Contents

Chapter 1. S-TAPs and other agents  1
Installing S-TAPs  1
Installing an S-TAP on Windows  4
Installing an S-TAP on a UNIX server  10
Installing an S-TAP with a native installer  22
Building a K-TAP on Linux  25
Copying a new K-TAP module to other systems  25
Getting Java or Perl information  26
Install and Configure SharePoint Agent  28
When to restart, When to reboot  31
Enterprise load balancing  33
Using enterprise load balancing  35
Enterprise load balancing configuration parameters  38
S-TAP administration guide  42
Configuring the Guardium system to manage S-TAPs  47
S-TAP Certification  47
How to Set Up S-TAP Authentication with SSL Certificates  47
Increasing S-TAP throughput  50
UNIX S-TAP  50
S-TAP Discovery  63
A-TAP  64
Configuring the A-TAP  65
Activate and deactivate your A-TAPs  73
Tee  77
Windows S-TAP  83
Configure S-TAP from the GUI  90
Editing the S-TAP configuration file  109
Windows S-TAP parameters  109
UNIX S-TAP parameters  121
Delayed cluster disk mounting  135
S-TAP Status Monitor  135
Viewing S-TAP verification results  136
Configuring the S-TAP verification schedule  137
Troubleshooting S-TAP problems  138
Monitoring S-TAP behavior  139
S-TAP events panel  140
S-TAP reports  140
S-TAP error messages  141
S-TAP appendix  142
IMS Definitions  143
DB2 for i S-TAP  143
Monitoring strategy  144
Installing the S-TAP for IBM i  146
Defining the S-TAP for IBM i  147
IBM Security Guardium S-TAP for z/OS  148

Chapter 2. Guardium Installation Manager  165
GIM Server Allocation  166
Installing the GIM client on a Windows server  169
Installing the GIM client by using silent installation  169
Uninstalling the GIM client  169
Installing the GIM client on a UNIX server  170
Uninstalling the GIM client  171
Upgrading the GIM client  171
Using groups with GIM  171
GIM - GUI  172
GIM - CLI  179
Copying a K-TAP module by using GIM  182
GIM dynamic updating  183
When you upgrade your database server operating system  185
Distributing GIM bundles to managed units  186
Removing unused GIM bundles  187
Running GIM diagnostics  188
Debugging GIM operations  188
Enabling GIM client debugging  189
Restarting the supervisor for Solaris with SMF support  189

Index  191

Chapter 1. S-TAPs and other agents
The Guardium S-TAP is a lightweight software agent that is installed on a
database server or file server system. The S-TAP monitors database or file traffic
and forwards information about that traffic to a Guardium system. Other agents,
including the K-TAP and A-TAP, perform complementary functions.

Installing S-TAPs
You can install S-TAPs on servers with databases or file systems that you want to
monitor. There are several options for installing an S-TAP.

S-TAP® Installation Overview

When you install S-TAP on a database server, you must provide the IP address or
fully qualified host name of the Guardium system that will receive data from the
S-TAP. After the S-TAP has connected to the Guardium system, all of the remaining
S-TAP configuration parameters can be set from the Administration Console on the
Guardium system.

Note: During the installation, the S-TAP installer checks whether the K-TAP is available
for the kernel version. If the K-TAP cannot be installed or does not start up, you
are asked whether to continue the installation.

The installation directory for the S-TAP must be empty or not exist. You cannot
install an S-TAP into a directory that already contains any files.

Before installing an S-TAP, check the System Requirements for IBM® Security
Guardium version 10.0 to make sure that your database and operating system
versions are supported.

There are two major tasks you need to perform to install and start using an S-TAP:
1. Install the S-TAP on a database server.
2. Configure the S-TAP to monitor the appropriate traffic.

These tasks are explained in this and subsequent sections.

Installation methods

The recommended method for installing S-TAP and other agents on your database
servers is the Guardium Installation Manager (GIM). GIM enables you to install,
upgrade, and manage agents on individual servers or groups of servers. GIM also
monitors processes that were installed under its control. You can use GIM to
modify parameters and perform other management tasks. See the Guardium
Installation Manager section for details about GIM.

In some cases, you might prefer to install the S-TAP locally. You can do so by
using an interactive installer, or by using the command line. These methods are
described in this section.

When you install an S-TAP on a UNIX server, the installation program checks
whether the guardium group exists. If the group does not exist, the installation
program creates it. If you use certain components or features, such as A-TAP or
DB2 Exit, you must add users to this group to ensure proper functioning. These
requirements are described in the relevant sections of this information.

S-TAP Installation Prerequisites

The following tables list database components that must be installed, at a certain
release or patch level, or configured to support S-TAP.
Table 1. Database components

CAS under HP-UX: Java™ 1.5 or higher.
CAS under any other UNIX: Java 1.4.2 or higher.
CAS under Windows: If CAS will monitor the MS SQL Server event log, the dumpel.exe program from the Microsoft Windows Resource Kit must be installed on the database server. Check if this program exists in the c:\Program Files\Resource Kit\ directory. If not, you can download it from Microsoft.
S-TAP, all UNIX: If the Tee monitoring method and the Hunter component are used, Perl 5.8.0 or later.
S-TAP on Red Hat Linux V4: Make version 3.81 or later. To view your version of the make utility, issue this command: make -v
Oracle ASO, SSL (all AIX®): For AIX 5.3, technology level 5 or later is required to support LDR_PRELOAD. AIX 6 and 7 include this support.
Oracle ASO, HP-UX 11.11: LD_PRELOAD must be installed. It is installed by patch PHSS_28436 or later.
TLS: For S-TAP on a UNIX server, either /dev/random or /dev/urandom must be present on the server. For both UNIX and Windows servers, see Guardium® Port Requirements, and check the TLS port requirements.

Note: During installation or upgrade with Java 1.6.0, the JVM may generate an
error indicating that it is unable to locate a DLL: "The dynamic link library
MSVCR71.dll could not be found in the specified path." This error can be remedied
by implementing one of two workarounds: 1) use a different (another release)
JVM, if one is available on the system, or 2) download the DLL from Microsoft
and place it in the Windows system directory.
Table 2. Requirement Type per platform

All of the following are "file exists" requirements:
v /bin/sh (HP-UX, Solaris, AIX, Linux)
v /bin/sed or /usr/bin/sed (HP-UX, Solaris, AIX, Linux)
v tar, awk, grep, tr (HP-UX, Solaris, AIX, Linux)
v prealloc (HP-UX); dd and /dev/zero (Solaris, AIX, Linux)
v uudecode in /usr/bin or /tmp, or perl exists (HP-UX, Solaris, AIX, Linux)
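
Where you need to verify these prerequisites, a quick spot-check can be run from a shell on the database server. The following is only a sketch for a Linux host; adjust the file list for the other platforms (for example, prealloc instead of dd and /dev/zero on HP-UX):

for f in /bin/sh /bin/sed /usr/bin/sed /dev/zero; do
  [ -e "$f" ] && echo "found: $f" || echo "missing: $f"
done
for c in tar awk grep tr dd uudecode perl; do
  command -v "$c" >/dev/null 2>&1 || echo "missing command: $c"
done
make -v | head -1    # Red Hat Linux V4: confirm make 3.81 or later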

S-TAP and CAS - Disk Space Requirements

Table 3. Disk Space Requirements

S-TAP program files: AIX: 90 MB; HP-UX: 360 MB; Linux: 176 MB; Solaris: 243 MB; Windows: 138 MB.
CAS program files (including Java): AIX: 309 MB; HP-UX: 630 MB; Linux: 405 MB; Solaris: 390 MB; Windows: 277 MB.
Buffer file: By default, the S-TAP uses anonymous memory to stage data for transmission to the Guardium system. If you configure the S-TAP to use a buffer file, the size defaults to 100 MB. The size is controlled by the buffer_file_size configuration file parameter.
Java: If CAS is used, Java is required. On a UNIX server, you must obtain and install Java yourself (due to licensing constraints). Installing Java will require a certain amount of disk space.
Perl: UNIX only. If the Tee data collection mechanism and its optional Hunter component are used, Perl 5.8.0 is required. If it has not been installed previously, you must obtain and install it yourself. For space requirements or to download Perl, see perl.org.
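
For example, to raise the buffer file size to 200 MB you would set the buffer_file_size parameter in the [TAP] section of guard_tap.ini. This is only a sketch; it assumes that the parameter value is expressed in megabytes, so verify the units for your S-TAP release before changing it:

[TAP]
buffer_file_size=200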

The installation process for each component creates a log file. Locations include
/var/tmp and the component installation directory.

The installation process updates inittab, upstart, and rc scripts.



Guardium Port Requirements
If there is a firewall between Guardium components (for example, between a
Guardium system and an S-TAP or CAS agent on a database server), you must
verify that the ports used for connections between those components are not being
blocked. Refer to Table 4, and use your firewall management utility to check (and
possibly open) the appropriate ports.
Table 4. Port Requirements for UNIX servers

Port         Protocol                    Guardium system connection to ...
16016        TCP Clear (open the port)   UNIX S-TAP
16017        TCP Clear (open the port)   UNIX CAS
16018        TLS Encrypted               UNIX S-TAP
16019        TLS Encrypted               UNIX CAS
16020-16021                              Used for pooled connections
16022                                    Feed protocol
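
For example, on a Linux database server that uses firewalld, opening the UNIX S-TAP TLS port might look like the following sketch; adapt it to whatever firewall product sits between the components:

firewall-cmd --permanent --add-port=16018/tcp
firewall-cmd --reload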

Table 5. Port Requirements for Windows servers

Port    Protocol                    Guardium system connection to ...
8075    UDP                         Windows S-TAP heartbeat signal
                                    Note: The UNIX S-TAP agent does not use UDP for heartbeat signals, so there is no corresponding UNIX port for this function.
9500    TCP Clear (open the port)   Windows S-TAP
9501    TLS Encrypted               Windows S-TAP
16017   TCP Clear (open the port)   Windows CAS
16019   TLS Encrypted               Windows CAS

When installing an S-TAP or CAS agent on a database server, it is useful to verify
that there is connectivity between that server and the Guardium system. On a
UNIX system, you can use the nmap command to check for connectivity, using the
following options:
nmap -p <port> <ip_address>

For example, to check that port 16018 (the port Guardium uses for TLS) is
reachable at IP address 192.168.3.104, you would enter the following command:
nmap -p 16018 192.168.3.104
The output looks similar to this:
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port State Service
16018/tcp open unknown

Installing an S-TAP on Windows


Use this section for Windows S-TAP configuration and installation information.



Before You Start
v Obtain the IP address of the database server or domain controller on which you
are installing the S-TAP. If virtual IP addresses are used, note all of those
addresses as well.
v Obtain the IP address of the Guardium system that will control this S-TAP.
v If there is a firewall between the Guardium system and the database server,
verify that the ports used for connections between those components are not
being blocked. See Guardium Port Requirements.
v Check the S-TAP Prerequisites topic in the S-TAP Installation Overview to see
whether any additional software components must be installed or configured in
a particular way for this S-TAP.
v We strongly recommend that you take the defaults for all other options
suggested by the wizard. All of those items can be configured more easily later,
from the Guardium administrator portal.
v For Windows 2008 and up, the computer browser service should be started for
the S-TAP installer to be able to check the username/domain of the account that
is performing the installation.
v When you install an S-TAP or other agent on a Windows system, be aware of
the Microsoft best practices for security. For example, when you install on
Windows 2012 or later, right-click and choose Run as Administrator to start
each process.

Note: If using Windows 2012 R2, make sure that Microsoft .NET 3.5 (which includes 2.0)
is available on the server. Microsoft .NET 3.5 is not loaded by default, and it must
be available. For further information on Microsoft .NET 3.5, search for key
words, "Microsoft .NET Framework 3.5 Deployment Considerations" or click on
this link, https://technet.microsoft.com/en-us/library/dn482066.aspx

Note: The log file for Windows S-TAP installation is viewable from this location,
c:\IBM Windows S-TAP.ctl

Note: If using Load Balancer options, review the Windows S-TAP installation
information in “SQLGuard parameters” on page 110.

Note: A Windows S-TAP will not stay connected to more than three Guardium
systems that participate in mirroring traffic.

Note: Windows S-TAP has limited support for IPV6 tunneled over IPV4. The IPV6
traffic is generated by LHMON using the IPV4 addresses of the ISATAP tunnel.

Automatic discovery of databases


When you install an S-TAP on a Windows system, it automatically checks for
Couch DB, DB2, Informix, MongoDB, MSSQL, and Oracle databases, and creates
an inspection engine for each one that it discovers. This process occurs only when
you first install the S-TAP. Autodiscovery is also triggered during an upgrade:
any autodiscovery-generated inspection engine that does not yet exist is created,
and inspection engines that already exist are readjusted. This means that if an
inspection engine for a database is missing, or uses a port that does not work,
that inspection engine is added or adjusted, provided that the database type is
supported by autodiscovery.

You might not want the S-TAP to perform this automatic discovery. If you want to
prevent it, you must configure the installation before you begin. The procedure
depends on the method that you choose for installation. Each installation
procedure describes how to prevent automatic discovery.
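
For example, when installing from the command line (the setup parameters are described later in this section), a sketch of an installation that suppresses automatic discovery might look like this, where the IP addresses are placeholders:

setup.exe -UNATTENDED -APPLIANCE <guardium_ip> -TAPHOST <db_server_ip> -NOAUTODISCOVERY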

Maintain Windows S-TAP with GIM


The automatic and easy-to-use installation capabilities of the Guardium Installation
Manager (GIM) make it the primary installation method for Guardium modules
such as S-TAP and CAS in a Windows environment. After a simple wizard-driven
installation of a GIM Client on the database server, installation of modules can
easily be scheduled from the Guardium system, which acts as the GIM Server.

See Installing GIM on the Database Server (Windows) and Guardium Installation
Manager (GIM) - GUI for additional information on installing and using GIM to
install Guardium components in a Windows environment.

Maintain Windows S-TAP without GIM


While GIM is provided for ease of installation and management of Guardium
components, there are still environments that may benefit from a more manual
approach. The following section is provided for those environments.
Install Windows S-TAP
1. Log on to the database server system using a system administrator
account.
2. Insert the S-TAP installation disk in the DVD drive and follow the
installation instructions provided by the wizard. There will be a series
of screens that need a click on Next.
3. If you do not want the S-TAP to automatically discover databases and
create inspection engines for them, uncheck the box marked Start
S-TAP Service on the Network Addresses screen. You can start the
S-TAP manually after the installation has completed. Automatic
discovery is not performed when the S-TAP is started manually.
4. Complete the S-TAP configuration from the administrator portal. See
Configure S-TAPs from the GUI.

Note: It is advisable to configure the Windows S-TAP service to use
Windows service recovery.
Install Windows S-TAP from the Command Line
This feature is intended for users who are familiar with S-TAP.
1. Log on to a Windows system from which the database server can be
accessed.
2. Change to the directory that contains the S-TAP setup program.
3. Run the setup program.
4. Verify that the S-TAP is online, and complete the S-TAP configuration
from the administrator portal of the Guardium system to which this
S-TAP reports. See Configure S-TAPs from the GUI.

Note: It is advisable to configure the Windows S-TAP service to use
Windows service recovery.
Windows Setup Program
The command line structure is simple.
Setup.exe -PARAMETER value

6 S-TAP and other agents


Do not use “=” signs to assign values to the parameters. The only time “=”
is used is when you wish to add a parameter to the TAP section of the
guard_tap.ini file directly, as it is typed on the command line.

Note: <install_table_file> format is no longer used.

Note: The log file for Windows S-TAP installation is viewable from
location, c:\IBM Windows S-TAP.ctl
Table 6. Parameters applicable to all .NET installers
Parameter Description
-UNATTENDED Install silently. (Does not require value)
-INSTALLPATH This is the install directory. Default install path is C:\Program
Files\IBM\Windows S-TAP
-UNINSTALL Uninstall
-CUSTOMER To change customer name
-COMPANY To change company name
-SERVICEUSER To specify a user to run the service under
-SERVICEPASSWORD The password for the user

Table 7. S-TAP Parameters with Applicable Value “ON”


Parameter Description
-TCP TCP_DRIVER_INSTALLED=1
-NMP NAMED_PIPE_DRIVER_INSTALLED=1
-DB2SHMEM DB2_TAP_INSTALLED=1
-DB2EXIT DB2_EXIT_DRIVER_INSTALLED=1
-FAM FSM_DRIVER_INSTALLED=1
-ORACLEPLUGIN ORA_DRIVER_INSTALLED=1
-MSPLUGIN KRB_MSSQL_DRIVER_INSTALLED=2 (Only if originally set to 0 or
for a clean install)

These parameters are mostly off by default, except for the DB2-related ones;
setting a parameter to “ON” turns it on. DB2SHMEM will not turn off if
DB2 is installed on the database server, and this may also be the behavior
for other databases in this list. If the value is not “ON”, it defaults to “OFF”.
Table 8. Other S-TAP Parameters
Parameter Description
-TAPHOST This is the local/client IP
-APPLIANCE This is the SQLGUARD IP. You can set up multiple appliances by
simply specifying this parameter with a new value multiple times.

This is a required parameter during any installation.


-LOAD-BALANCER-IP This is the IP for the CM, which allows the Load Balancer to be enabled
-LB-APP-GROUP To specify an appliance group name
-LB-MU-GROUP To specify MU group name
-LB-NUM-MUS To specify the number of MUs you wish allocated

-START Used to start the service upon install. This parameter is on by default
but can be turned off by setting it equal to 0
-NOAUTODISCOVERY Prevents Auto-Discovery from running upon install

Adding Additional Parameters


For additional parameters not specified here but required in the .INI file,
you can append the [TAP] section by specifying the parameter and value
with an = sign.
For example:
debuglevel=4 debug_file_name=C:\stap.txt
PARTICIPATE_IN_LOAD_BALANCING=1
Example:
setup.exe -UNATTENDED -INSTALLPATH "c:/program files/ibm/windows S-TAP" -APPLIANCE 9.70.148.160 -TAPHOST
9.70.146.160 QRW_INSTALLED=0 QRW_DEFAULT_STATE=0
DB2® Shared Memory User Mode Implementation
There are two ways to capture DB2 shared memory traffic using the
Windows S-TAP.
When using Guardium Versions 8.x, the following method is deprecated
(uncheck it): on the Guardium system, uncheck DB2 at S-TAP Control >
Shared Memory Monitor > DB2.
Use the TAP method instead (check it): in the S-TAP details (Guardium
S-TAP Control) > Shared Memory Monitor, TAP should be checked.
(If both methods are checked, the DB2 TAP method is used.)
In the S-TAP inspection engine, the instance name for DB2 is required.
The instance name can be found through Control Panel > Services: the
instance is the last part of the DB2 service name. For example, if the
service name for DB2 is DB2-DB2COPY1-DB2-0, the instance name is
DB2-0. There is also a db2tap.exe utility in the Guardium directory; run
db2tap.exe list in a command window to display the instance name.
Example of inspection engine:
[DB_DB21]
PORT_RANGE_START=50001
PORT_RANGE_END=50001
DB2_FIX_PACK_ADJUSTMENT=80
INSTANCE_NAME=DB2_01-0
DB_TYPE=DB2
NETWORKS=1.1.1.1/0.0.0.0

Restart DB2 (only need to do this once after installing S-TAP).


Upgrade Windows S-TAP from the Command Line
If a prior version of the Windows S-TAP has been installed, an upgrade
can be performed using the setup program, as follows:



1. Log on to the database server system using a system administrator
account.
2. Change to the directory containing the S-TAP setup program.
3. Run the setup program with the following options: setup -UNATTENDED
4. A full machine restart is required if the upgrade updates driver files.

Note: Certain files from previous releases will not be fully removed until
the next scheduled reboot.
Remove Previous Windows S-TAP
This procedure will remove the installed S-TAP while making sure the
configuration file is saved for future use. If you simply want to uninstall
the product, start with Step 3.
1. Log on to the database server system using a system administrator
account.
2. Copy the current S-TAP configuration file to a safe location (a
non-Guardium directory). Look for this file in "C:\Program Files
(x86)\IBM\Windows S-TAP\Bin\guard_tap.ini".
3. From the Add/Remove Programs control panel, remove
GUARDIUM_STAP.

Note: The removal of IBM Security Guardium Windows S-TAP can also
be done through a silent uninstall option of the setup program: setup -
UNINSTALL

Note: Certain files from previous releases will not be fully removed
until the next scheduled reboot.

DB2 shared memory parameters


DB2 Shared Memory Adjustment / DB2 Shared Memory Client Position

Values for DB2 shared memory parameters on Windows:


DB2_FIX_PACK_ADJUSTMENT
The default for this parameter is 80 decimal. This should be correct for
DB2 versions 8.2 and later. For earlier versions try 20 decimal.
DB2_CLIENT_OFFSET
The default for this parameter is 61440 decimal. This parameter is
calculated by taking the DB2 database configuration value of ASLHEAPSZ
and multiplying by 4096.
To get the value for ASLHEAPSZ, run the following DB2 command: db2
get dbm cfg and look for the value of ASLHEAPSZ. This value is typically
15 which yields the 61440 default. If it is not 15, multiply the value by 4096
to get the appropriate client offset.
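
As a worked example (a sketch only; the output format varies by DB2 release), if ASLHEAPSZ were 20 instead of 15, you could confirm it and compute the offset as follows:

db2 get dbm cfg | findstr /i ASLHEAPSZ
  Application support layer heap size (4KB)    (ASLHEAPSZ) = 20

DB2_CLIENT_OFFSET = 20 x 4096 = 81920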

DB2 Exit (Version 10) Integration with Windows S-TAP


When using DB2 releases 10.1 and up, the S-TAP captures all DB2 traffic directly
from the DB2 engine. When using this method, firewall and scrub/redact
functionality are not supported. Also, stored procedures are not captured. All DB2
traffic is captured, regardless of encryption/network protocol. This solution
simplifies the S-TAP configuration for customers that deploy this version of DB2,
and gives them native DB2 support. Follow these steps to configure the S-TAP for
this type of database:
1. Create a new folder within the DB2 SQLLIB folder for each instance:
$DB2PATH\security\plugin\commexit\instance_name. For example:
C:\Program Files\IBM\SQLLIB\security\plugin\commexit\DB2_01
2. Copy the corresponding DLLs from the S-TAP installation directory into the
created directories:
For 32-bit DB2:
v db2fexitx86.dll
v db2exitx86.dll
For 64-bit DB2:
v db2exitx64.dll
v db2fexitx64.dll
3. Stop the DB2 instance(s), and issue the following command:
UPDATE DBM CFG USING COMM_EXIT_LIST db2fexitx86 (for 32-bit)
UPDATE DBM CFG USING COMM_EXIT_LIST db2fexitx64 (for 64-bit)
4. Start the DB2 instances.
5. Add an inspection engine for DB2 Exit
In the TAP section:
DB2_EXIT_DRIVER_INSTALLED=1
The DB section reflects the new type of inspection engine:
[DB_DB2_EXIT1]
DB_TYPE=DB2_EXIT
INSTANCE_NAME=Service_name
The service name is not the instance name. You can determine the service name
by using the db2tap utility in the S-TAP installation folder, or from the control
panel. Set the instance name to the portion of the service name that follows the
second dash ( - ) delimiter. For example, if the service name in the control
panel is DB2 - DB2COPY1 - DB2-01-0, set INSTANCE_NAME to DB2-01-0.
6. To stop using the feature and stop DB2, issue the following command and start
DB2 again: db2 UPDATE DBM CFG USING COMM_EXIT_LIST NULL
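
Putting steps 3 and 4 together for a 64-bit DB2 copy, the sequence issued from a DB2 command window might look like this sketch (instance names and paths vary by environment):

db2stop
db2 UPDATE DBM CFG USING COMM_EXIT_LIST db2fexitx64
db2start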

Installing an S-TAP on a UNIX server


Use this section for UNIX S-TAP configuration and installation information.

Before Installing S-TAP on a UNIX Host


Note: During the installation, the S-TAP installer checks whether the K-TAP is
available for the kernel version. If the K-TAP cannot be installed or does not start up,
you are asked whether to continue the installation.
v If you are installing on a Guardium system, it will install into
/usr/local/guardtap instead of the normal default of /usr/local/guardium on
database servers. This should be taken into consideration when reading this, or
other, documentation that refers to the default location for database servers. As
an example, for a Guardium system, the default configuration file and uninstall
script would be at /usr/local/guardtap/guard_stap/guard_tap.ini, and
/usr/local/guardtap/guard_stap/uninstall respectively.



v If you are upgrading S-TAP, you can remove the previous version first (see
Remove Previous Unix S-TAP), or you can use the new UNIX upgrade
procedure. Also refer to Upgrade Procedure Utility.
v When installing S-TAP in a Solaris zones configuration, regardless of the zone in
which the database runs, S-TAP must be installed on the master zone (global
zone), since the local zones share information from the master zone. Also, both
the DB Install Dir path and the Process Name in the inspection engine must be
taken from the global zone. (From the global zone, S-TAP monitors access to
databases in all zones.)

Note: At the end of the installation:


– K-TAP will not be loaded on the local zone as it is only loaded on the global
but is visible on the local zones
– S-TAP will not be running on the local zones
v Obtain the IP address of the database server on which you are installing S-TAP.
If virtual IPs are used, note those as well (you will need to configure those later,
when completing the configuration from the administrator portal).
v Obtain the IP address of the Guardium system that will control this S-TAP, and
to which this S-TAP will report.
v If there is a firewall between the Guardium system and the database server,
verify that the ports used for connections between those components are not
being blocked. See Guardium Port Requirements in the S-TAP Installation
Overview.
v Choose the monitoring method to be used by S-TAP:
– K-TAP is a kernel module, and it supports all protocols and connection
methods (TCP, TLI, SHM, Named Pipes, etc.)
– TEE is not a kernel module, and supports only TCP connections. In this
configuration there is an option to alert on rogue connections. See the
description of the S-TAP monitoring mechanisms, in the Overview topic. This
option requires Perl. See Get Perl Information to verify that you have the
correct version of Perl.
v Decide if you want to install the CAS agent. If so, see Get Java Information to
verify that you have the correct distribution and version of Java, and to obtain
the JAVA_HOME directory location.
v Decide if you need to install A-Tap, which is an add-on product. If so, see
Configure A-Tap.
v Check the S-TAP Prerequisites topic in the S-TAP Overview to see if any
additional software components must be installed or configured in a particular
way for S-TAP.
v For Oracle, TCP redirect is not captured unless tcp_redirect is used for the
intercept_types parameter in the guard_tap.ini file.
v You must decide whether to install and run the S-TAP as the root user, or to
create the guardium user and run the S-TAP as the guardium user. If the S-TAP
runs as the guardium user, some databases or protocols might stop working
because of permission levels. To resolve such issues, either run the S-TAP as the
root user or make sure that the database path or executable file has permissions
that allow the guardium user to read it. Depending on your environment, you
might encounter the following limitations:
– wait_for_db_exec might not work. For clusters, check that the database path
or executable file has permissions that allow the guardium user to read it.
– Databases on AIX WPARs and Solaris Zones might not work; check that the
guardium user has permission to access the install path or executable file.
– For Oracle BEQ, the S-TAP should be restarted after the database is started or
restarted.
– For Informix® shared memory, the S-TAP should be restarted after the
database is started or restarted.
– For DB2 shared memory, if shmctl fails because of a permission issue, in most
cases the S-TAP should be changed to run as root:
- If the shared memory segment has group read permission, make sure that
the DB2 instance user has been added to the guardium group. Even so,
only one DB2 configuration can be supported on each server.
- If the shared memory segment has read permission for the db2 user only,
the S-TAP must run as root. (Open a DB2 shared memory session, run the
command ipcs -ma, and check the MODE column in the output.)

Install UNIX S-TAP

To install UNIX S-TAP, run the appropriate installation script, as detailed in the
following steps. If any stage of the installation fails, undo all of the steps up to that
point. Do not leave S-TAP partially installed.
1. Log on to the database server system using the root account.
2. Some companies require the use of native installers to register packages on the
system, or to perform other house-keeping functions. If this is a requirement
for you, see Installing an S-TAP with a native installers before continuing with
the next step.
3. Copy the appropriate S-TAP installer script from the Guardium Installation
DVD (or network), to the local disk on the database server. The installer script
name identifies the database server operating system. See full list of UNIX
Installer Files to select the correct file.
4. For Linux only: the S-TAP installer includes all possible modules specific to
the different Linux kernels. If the particular module is not included in the
modules built with the S-TAP installer, copy the module file to the system via
FTP/SSH and then use the --modules option to specify the K-TAP module,
including its full path.
As an example, assuming modules will be in the /tmp directory:
./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh --
--modules /tmp/modules-guard-8.0.xx_r20992_1.tgz"
or for non-interactive installation:
./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh --
--modules /tmp/modules-guard-8.0.xx_r20992_1.tgz --ni --tls 1 -k
--dir /usr/local --tapip 19.12.144.102 --sqlguardip 19.12.148.109"
5. For any modules needed that are not supported in the current distribution,
obtain via FTP the modules-<stap version>.tgz file and copy it to the /tmp/
folder on the destination server.
6. Decide how you will run the installer:
v Non-interactive mode is recommended for larger S-TAP deployments (10 or
more servers). To use this mode, skip the remainder of this procedure and
go to Install UNIX S-TAP from the command line.
v Interactive mode is recommended for smaller deployments (fewer than 10
servers). Continue with this procedure for interactive mode.
7. Run the installer and respond to the legal notification and other prompts, as
directed by the installer. We suggest that you accept all of the supplied
defaults.



The installer asks for the IP address of the Guardium system. Based on your
input, it sets the values for the IP addresses of the managed server and the
primary Guardium system. Setting these properties enables you to start the
S-TAP and connect to the Guardium system.
The installer asks whether you want to run the S-TAP as the root user, or as
the user Guardium. The default is to run as root, but if it is your policy not to
allow processes to run as root, you can choose to have the S-TAP run as
Guardium.
Then the installer asks whether you want to edit the parameters file. It is
usually not necessary to edit the file, because you can set the values of other
parameters from the Guardium user interface.
8. If you chose to edit the configuration file, use the wq command to save the file
and quit. The install program checks the parameter values you have set. If they
are OK, it continues to the next prompts. Otherwise, you must correct any
erroneous parameters and save the file again.
9. Respond to the CAS installation prompt, and if installing CAS enter the name
of the JAVA_HOME directory (see Get Java Information).
10. For AIX only: Restart the database service and the listener. All others can skip
this step.
11. Complete the S-TAP configuration from the administrator portal. See
Configure S-TAPs from the GUI.
12. If you are using the Tee to monitor local connections, perform the appropriate
procedure:
v Prepare for Local Unix DB2 Clients to Use the Tee
v Prepare for Local Unix Informix Clients to Use the Tee
v Prepare for Local Unix Oracle Clients to Use the Tee
v Prepare for Local Unix Sybase Clients to Use the Tee
13. Once S-TAP is installed, for database instances that need to be monitored by
A-TAP, add the database user to the guardium group. This group is created by
the S-TAP installer, and users can be added by the system administrator using
the usermod utility.
As an example, where oracle is the user ID of the OS user for the Oracle
database and sybase15 is the user ID of the OS user for the Sybase 15 database:
usermod -a -G guardium oracle
usermod -a -G guardium sybase15

Installing an S-TAP from the Command Line


You can supply all of the parameters needed to install an S-TAP from the
command line. In fact, if you are installing the same operating-system version of
S-TAP on multiple database servers, you can perform the task by running a single
command, using the -tapfile parameter.

Installer Script Command Line Syntax

The following describes each component. Variables are shown enclosed in angled
brackets: < >.
guard-stap-setup -- [--modules <linux modules files>] [--ni]
[--tls <0|1>] [-k|-t] [--dir <dir>] [--tapip <tapip>] [--sqlguardip <sqlguardip>]
[--tapfile <file>] [--ktap_allow_module_combos]
[--presets <presets-file> | <preset-option-list>...]



--modules
Identifies the tgz file, with all the compiled kernel modules, include full
path to tgz file
--ni Indicates that the shell is being run in non-interactive mode.
--tls
Specifies whether S-TAP-to-collector communication uses the TLS protocol,
with failover 0 or 1:
0 - do not fail over; if the connection to the collector fails, keep trying using TLS.
1 - fail over to the non-TLS protocol; if the connection to the collector fails,
fall back to the non-secure protocol.
v --tls 0 sets use_tls to 1 and failover_tls to 0.
v --tls 1 sets use_tls to 1 and failover_tls to 1.
v --tls force sets use_tls to 1 and failover_tls to 0.
v --tls failover sets use_tls to 1 and failover_tls to 1.
v --tls none sets use_tls to 0 (this is the default if --tls is not specified).
-k Indicates that K-TAP should be installed.
-t Indicates that the Tee should be installed.
--ktap_allow_module_combos
enables loader flexibility during a non-interactive mode. See Loader
Flexibility for additional information and the use of loader flexibility
interactively.
--dir <s-tap_dir>
Identifies the S-TAP installation directory. The installation directory for the
S-TAP must be empty or not exist. You cannot install an S-TAP into a
directory that already contains any files.
--tapip <ip_address>
Specifies the IP address of the database server.

Note: Omit if --tapfile is used.


--sqlguardip<guardium_ip>
Specifies the IP address of the Guardium system.

Note: Omit if --tapfile is used.


--tapfile <file>
Identifies an old guard_tap.ini file, from which information is to be
extracted.
--ipfile <file>
Identifies a text file containing a list of host names on which to install the
S-TAP. Use this parameter to install on multiple hosts (see the example after this option list). Each entry in the
file has this format: hostname hostip sqlguardip, where hostname and
hostip are the host name and IP address of the server on which to install
the S-TAP, and sqlguardip is the IP address of the Guardium system. For
example:
loki 9.70.144.116 9.70.148.105
waxwing 9.70.144.162 9.70.148.105

If there are multiple lines containing the same hostname, the first one is
used.



--presets
Identifies a file that contains a subset of global guard_tap.ini options, or
an option list; keep in mind that:
v A list of presets can be given on the command line, provided the
--presets argument and option list are at the end of the command line
and the options do not contain any spaces.
v When a file name is provided, it should be a full path and should be in
the guard_tap.ini format.
v Only parameters in the global TAP section will be used and updated.
v Only parameters that are already included in the original (current)
guard_tap.ini are updated; parameters that are not present in the
current guard_tap.ini are silently ignored.
(See the example presets file after this option list.)
--userinst
Tells the installer to create the guardium user. S-TAP files will be owned by
the guardium user. This is the default option even if no install flag is specified.
The guardium group is always created.

As an example, assuming modules are located in the /tmp directory:


./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh -- --modules /tmp/modules-guard-8.0.xx_r

or for non-interactive installation:


./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh -- --modules /tmp/modules-guard-8.0.xx_r
-rootinst
Tells the installer to not create the guardium user. S-TAP files are owned by
root. The guardium group is always created.
-user Tells the installer to run S-TAP as the guardium user. Can only be used with
the -userinst option.
-root Tells the installer to run S-TAP as the root user. Can be used with either
the -userinst or -rootinst options.
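
As a sketch of the two list-driven options described in this list (the file names /tmp/stap_hosts.txt and /tmp/stap_presets.ini are hypothetical), a multi-host, non-interactive run might look like this:

./guard-stap-guard-8.0.xx_r20992_1-rhel-5-linux-x86_64.sh -- --ni -k --tls 1 --dir /usr/local --ipfile /tmp/stap_hosts.txt

A presets file passed with --presets /tmp/stap_presets.ini would be in guard_tap.ini format and carry only global [TAP] parameters that already exist in the current guard_tap.ini, for example:

[TAP]
buffer_file_size=100
ktap_installed=1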

Loader Flexibility

Loader Flexibility aids in the installation of currently built modules when an exact
match between module and kernel version does not exist.
v Loader flexibility is only enabled if explicitly requested at installation time
v Pass the option of --ktap_allow_module_combos when using the non-interactive
installer
v If installing interactively, answer y to the question posed after editing the
guard_tap.ini file (and setting ktap_installed=1). Loader flexibility is disabled
by default, which means that the K-TAP is disabled if the booted kernel is not
directly supported or tested as working with another module.
v If you wish to switch from not allowing the loader to try module combinations,
you need root access on the database machine; perform the instructions
printed in /var/log/messages when it is detected that no module is available
for the running kernel.
v When performing a K-TAP live update, whatever was specified (implicitly or
explicitly) in answer to the question of whether to try module combinations is
applied to the K-TAP installed as part of the update. The same procedure
applies for switching from not allowing module combinations, as printed in
/var/log/messages.



v When non-exact match combos are found, a warning message is printed in the
ktap-install.log in the guard_stap/ktap directory and in the
/var/log/messages noting the current kernel and module extracted being loaded

Install CAS from the Command Line

You can supply all of the parameters needed to install Unix CAS from the
command line.

Installer Script Command Line Syntax

The following describes each component. Variables are shown enclosed in angled
brackets: < >.
usage: guard-cas-setup -- install --java-home <JAVA_HOME> --install-path <INSTALL_PATH> --stap-conf <FULL_PATH_TO_GUARD_TAP_INI>

usage: guard-cas-setup -- uninstall

<guard-cas-setup> is the name of the script file
-- install indicates an install of CAS
-- uninstall indicates an uninstall of CAS
--java-home <JAVA_HOME> identifies the JAVA_HOME directory. See Get Java Information.
--install-path identifies the installation path
--stap-conf <FULL_PATH_TO_GUARD_TAP_INI> identifies where the guard_tap.ini file is located after an S-TAP installation.

Starting and Stopping CAS

Depending on the install / uninstall scenario, you may need to start and stop CAS
from the command line. For example, because --stap-conf is an optional
parameter, if you do not supply the path to the guard_tap.ini file, CAS does
not start. Use the following methods when you need to start or stop CAS:
1. Log on to the database server system using the root account.
2. For Red Hat Enterprise Linux 6
a. Stop / Start CAS using the stop cas or start cas commands.
3. All others:
a. Comment out (if stopping CAS) or remove comment (if starting CAS) the
cas agent entry in the /etc/inittab file. In a default installation, this
statement should look like this:
cas:<nnnn>::respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/
b. Save the /etc/inittab file.
c. Run the init q command
4. You may validate if CAS is running or not by issuing the ps -fe | grep cas
command.

Upgrade S-TAP/K-TAP without reboot


S-TAP/K-TAP may be upgraded without a reboot by using the guard-stap-update
utility. This utility performs a live update of S-TAP/K-TAP from 8.x.x to any later
version. The utility must be downloaded along with a newer S-TAP installer and
copied to a directory on the database server, where it will be executed.

Usage:
./guard-stap-update <full_path_Guard-Installer.sh> <existing Guard-Install-Dir> [<Linux-Kernel-Module

For example, place the latest installer along with the guard-stap-update utility in
the database server folder /var/tmp, and specify the install directory of the
existing S-TAP (/usr/local/guardium):
./guard-stap-update /var/tmp/Stap_installer_name /usr/local

Command Line Update for K-TAP (Manual)


A command line update is available for K-TAP. When it is used, save the output
from the script.
guard-ktap-update-doberman_r19987_1-sunos-5.9-solaris-sparc.sh <guard_stap path> <current version of ktap> <updating version of ktap>

As an example:
guard-ktap-update-doberman_r19987_1-sunos-5.9-solaris-sparc.sh 2>&1 | tee /tmp/output.save.txt

Remove Previous Unix S-TAP (Manual)

Perform this procedure before installing a new version of S-TAP if you want to
save the old configuration file. For an upgrade, we recommend that you use the
Upgrade Procedure Utility, as previously described.

If S-TAP was previously installed, there will be a directory named:


/usr/local/guardium/guard_stap.

Note: If you have installed A-TAP, you must deactivate it before attempting any
upgrade/install operations; see the description of the A-TAP deactivation
command, in the Configure A-TAP topic.

If you are removing a previous version of S-TAP that used K-TAP, you will need to
reboot the database server. If K-TAP has been installed, you will have a device file
named: /dev/guard_ktap.
1. Log on to the database server system using the root account.
2. If un-installing version 6.0 or later of S-TAP
a. For Red Hat Enterprise Linux 6
1) Stop S-TAP using the stop utap command.
b. All others:
1) Remove the utap agent entry in the /etc/inittab file (regardless of
whether or not it has been commented). In a default installation, this
statement should look like this: utap:<nnnn>:respawn:/usr/local/
guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/
guard_tap.ini
2) Save the /etc/inittab file.
3) Run the init q command
c. You can then run ps -ef | grep stap to verify that S-TAP is no longer
running.
3. Copy the S-TAP configuration file to a safe location (a non-Guardium
directory). By default, the full path name is: /usr/local/guardium/guard_stap/
guard_tap.ini.
You can use this file later if you have to re-install this version of the software,
or you can refer to it when configuring an updated version of S-TAP. Do not
ever use an older configuration file directly with a newer version of the
software - newer properties may be missing, and the defaults taken may result
in unexpected behavior when you start S-TAP.
4. Run the uninstall script. For example, if the default directory has been used:
[root@yourserver ~]# /usr/local/guardium/guard_stap/uninstall



Note: Do not run the uninstall program with S-TAP running. Be sure that you
have stopped S-TAP.
5. If your previous version of S-TAP included K-TAP, reboot the database server
now.
6. This step applies to HP-UX servers only (skip for all others). If you are
uninstalling a previous version of S-TAP that included K-TAP:
a. Run the uninstall script again
7. This step applies to AIX WPARs only (skip for all others). If you are
uninstalling a previous version of S-TAP that included K-TAP, issue the
following commands from the master node: rm -f /wpars/<server>/dev/ktap*
and rm -f /wpars/<server>/dev/guard_ktap*, where /wpars/<server> is the
path from the master node to the Wpar.
8. If upgrading, upgrade the Guardium system that serves as the S-TAP host,
before upgrading S-TAP.
9. Return to the installation procedure: Install UNIX S-TAP.

A-TAP Support Matrix


OS       Oracle     Informix      DB2                             Sybase
Linux    9, 10, 11  8, 9, 10, 11  8.1, 8.2, 9.1, 9.5, 9.7, 10.1   15 (32-bit only)
Solaris  9, 10, 11                                                15 (SPARC only)
HP-UX    9, 10, 11                                                15
AIX      9, 10, 11                                                15 on AIX 5.3, 6.1, 7.1

Note: A-TAP needs to be deactivated before an upgrade of a database service and
then re-activated after the upgrade.

Note: A-TAP is installed as an added product and requires separate configuration
on the database machine itself.

A-TAP Installation
A-TAP installs as a part of S-TAP installation.

Note: A-TAP depends on K-TAP, so make sure that K-TAP is installed as well. In
particular, the ktap_installed parameter must be set to 1 in the guard_tap.ini file.

Note: For the installation procedure on Solaris Zones see Procedure to Make
A-TAP Work Under Solaris Zones. The guardctl utility DOES NOT automatically
add db-user to group Guardium. That behavior matches the behavior of
guard_tap.ini encryption=1 ATAP based activation, i.e. database OS user is never
added automatically to group guardium.

Note: If the software is installed with GIM, A-TAP expects GIM_ROOT_DIR to be
defined as an absolute path to the modules, for example /usr/local/guardium/
modules. Otherwise, when activating A-TAP through the guard_tap.ini file,
encryption=1 will silently fail, and the guard_stap log will show that
guard-atap-ctl failed. This is especially important when running guard_stap
manually - be sure you have defined this environment variable when running
guard_stap.



v If A-TAP is not a member of the guardium group, it will not be able to open the
K-TAP device, and you will see the following syslog message indicating
permission denied: ATAP [UID= GID= EUID= EGID=] Opening ktap ''
[OWNER UID= GID= PERMS=]: Permission denied.
v If the UID or EUID is not a member of the OWNER group GID, the reason for
the permission denial is that the user matching the UID or EUID does not
belong to the group matching the OWNER GID.
v To make it easier (so that you do not have to handle the different OS syntaxes
for adding users and groups while the automatic addition to the guardium
group is disabled), two commands are available within guardctl which can be
used irrespective of the method you use to activate A-TAP (guardctl or
guard_tap.ini):
– #/path/to/guardium/bin/guardctl is-user-authorized
– #/path/to/guardium/bin/guardctl authorize-user ...
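
For example, assuming a default installation under /usr/local/guardium and an Oracle instance that runs as the OS user oracle (a sketch only; the exact argument form of is-user-authorized may vary by release):

/usr/local/guardium/bin/guardctl authorize-user oracle
/usr/local/guardium/bin/guardctl is-user-authorized oracle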

Note: The database must be stopped when either a user is being added to the
guardium group or when activating A-TAP using the guardctl utility.

Note: The database must be restarted after performing an upgrade for modules
that include ATAP.

Note: Group Guardium can be removed on most OS's with groupdel guardium.
However, after removal, only the guard_ktap_loader parameter can correctly
re-create it and change the K-TAP device permissions.

A-TAP Uninstallation/Upgrade

A-TAP is uninstalled/upgraded by standard S-TAP uninstall/upgrade tools.


However, in order to uninstall, A-TAP has to be deactivated first on all DB
instances.

The list-active command lists all currently active instances.

Syntax: <guardium_base>/xxx/bin/guardctl list-active

The is-active command returns true if there is at least one active instance and
false otherwise.

Syntax: <guardium_base>/xxx/bin/guardctl db_instance=<instance_name> is-active

Check Network Address and Port (Unix)


When installing an S-TAP or CAS agent on a database server system, it is useful to
verify that there is connectivity between the two systems. On a Unix system, you
can use the nmap command to check for connectivity, using the following options:
nmap -p <port> <ip_address>

To check that port 16018 (the port Guardium uses for TLS) is reachable at IP
address 192.168.3.104, you would enter the following command:
nmap -p 16018 192.168.3.104
The output looks similar to this:
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port State Service
16018/tcp open unknown

DB2 Exit (Version 10) Integration with UNIX S-TAP

Initial setup



Two versions of DB2 EXIT (Version 10) library are available with the Guardium
installer - 32- and 64-bit. The one to use depends on the installed DB2.

Both versions will be in the Guardium installation directory in the lib directory. On
Linux servers, the 64-bit version will be found in lib64.

Library names

Linux/Solaris/HP-UX

libguard_db2_exit_32.so

libguard_db2_exit_64.so

AIX

libguard_db2_exit_32.a

libguard_db2_exit_64.a

To determine whether DB2 is 32-bit or 64-bit

As the db2 user, run db2level; the output will be similar to this:

DB21085I Instance db2inst1 uses 64 bits and DB2 code release SQL09070.

with level identifier 08010107.

Copy the library:

After installing S-TAP as root

mkdir $DB2_HOME/sqllib/security/plugin/commexit

OR

mkdir $DB2_HOME/sqllib/security64/plugin/commexit.

(This is done only the first time the library is installed, as the directory does not
exist)

Copy the appropriate library to $DB2_HOME/sqllib/security/plugin/commexit or
$DB2_HOME/sqllib/security64/plugin/commexit.

$DB2_HOME is the db2 installation directory.

NOTE: If the copy fails with an error ending in "Text file busy", remove the
existing file first and then copy again.

Change owner to the commexit directory and lib files in the directory

For example:
root@buzzard:~# su - db2inst2
Oracle Corporation SunOS 5.11 snv_151a November 2010
db2inst2@buzzard:/export/home/db2inst2$ id
uid=109(db2inst2) gid=102(db2iadm1) groups=102(db2iadm1),101(dasadm1)



db2inst2@buzzard:/export/home/db2inst2$ exit
root@buzzard:~#
chown db2inst2:db2iadm1 /export/home/db2inst2/sqllib/security64/plugin/commexit /export/home/db2in

When the S-TAP is installed, it creates the Guardium group. You must add the DB2
user to this group before starting the database with the exit library loaded. This
requirement increases the security of shared memory regions that are created by
the S-TAP. You can use the guardctl command to add the user. For example, if the
DB2 user is named db2inst2: guardctl authorize-user db2inst2

Set up DB2

(a) enable libguard

As the db2 user:

db2 UPDATE DBM CFG USING COMM_EXIT_LIST libguard_db2_exit

OR

db2 UPDATE DBM CFG USING COMM_EXIT_LIST libguard_db2_exit_64 (for 64-bit installations)

(b) disable libguard

db2 UPDATE DBM CFG USING COMM_EXIT_LIST NULL

(c) check if libguard lib is used

db2 get database manager configuration

then check the line Communication buffer exit library list.

Restart DB2

Make sure starting the DB2 results in

The DB2START command completed successfully

If not - check the log file for clues

~/sqllib/db2dump/db2diag.log

To stop using the library:

db2 UPDATE DBM CFG USING COMM_EXIT_LIST NULL
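
A condensed sketch of the preceding steps, for a 64-bit DB2 instance owned by db2inst2 on Linux. The library location under the Guardium installation directory and the group name db2iadm1 are examples; substitute the values from your own environment:

# as root, after the S-TAP is installed
mkdir -p $DB2_HOME/sqllib/security64/plugin/commexit
cp /usr/local/guardium/lib64/libguard_db2_exit_64.so $DB2_HOME/sqllib/security64/plugin/commexit/
chown -R db2inst2:db2iadm1 $DB2_HOME/sqllib/security64/plugin/commexit
/usr/local/guardium/bin/guardctl authorize-user db2inst2

# as the db2inst2 user
db2 UPDATE DBM CFG USING COMM_EXIT_LIST libguard_db2_exit_64
db2stop
db2start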

Setup Zones/WPARs

Copy the S-TAP to zones/wpars:


1. On the master/global Zone/WPAR: (assuming Guardium software is installed
on the master Zone/WPAR under /usr/local/guardium, and there exists a
writable directory /usr/local with enough free space on the
sub-Zone/sub-WPAR):
cd /usr/local
tar -cvf - guardium | ssh root@subzonehost ’cd /usr/local && tar -xvf -’



2. On Zone/Wpars: Add DB2_EXIT IE in the guard_tap.ini

Note: ktap_installed should be set to 0, tap_run_as_root should be set to 1,
and tap_ip should be the zone's/WPAR's local IP address. No other IEs should
be specified in order to start the S-TAP on the zone.

Create /var/guard directory

Start the S-TAP. On WPARs, you need to manually copy/add the utap server entry
in the inittab file. On Solaris zones, follow the information in this link:
https://scm.guard.swg.usma.ibm.com/wiki/index.php?page=Use_Solaris_services

Then follow the instructions in the initial setup.

Log Level

When S-TAP log level = 10, debug info will be logged into both S-TAP log and
db2_exit log (db2diag.log)

When S-TAP log level = 11, debug info will only be logged into db2_exit log
(db2diag.log)

Informix EXIT with UNIX S-TAP

Informix EXIT is similar to DB2 EXIT. It supports firewall and UID chain.

Instructions for configuring (a consolidated example follows these steps):


1. Initial Setup - add db user to guardium group, for example,
/usr/local/guardium/bin/guardctl authorize-user informix
2. Set up Informix - As the Informix user: copy correct informix exit library from
the guard_stap directory to the informix user's lib directory, for example, cp
/usr/local/guardium/guard_stap/libguard_informix_exit_64.so ~/lib
3. Bring up ifxguard (running as informix) - If $INFORMIXDIR/etc/
ifxguard.$INFORMIXSERVER file exists and LIBPATH is set correctly, then just
run ifxguard. If not, (for example, starting ifxguard with a 64bit library)
ifxguard -p $HOME/lib/libguard_informix_exit_64.so -l /tmp/logfile.txt
4. Add INFX_EXIT IE in the guard_tap.ini
5. To disable libguard, ifxguard -kill $INFORMIXSERVER
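
Putting the steps together for a 64-bit installation, using the example paths from the steps above (the log file location is arbitrary):

(as root)      /usr/local/guardium/bin/guardctl authorize-user informix
(as informix)  cp /usr/local/guardium/guard_stap/libguard_informix_exit_64.so ~/lib
(as informix)  ifxguard -p $HOME/lib/libguard_informix_exit_64.so -l /tmp/logfile.txt

Then add the INFX_EXIT IE in guard_tap.ini as described in step 4.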

Starting with Informix version 12.10, Informix provides ifxguard, which is
integrated with the Guardium application (Informix_Exit) to support all Informix
protocols. It also supports all Guardium features (S-GATE, UID chain, redaction,
query rewrite, and so on). Use Informix_Exit on Informix 12.10 and later versions.

Installing an S-TAP with a native installer


If your company’s policy requires that you use a native installer to install your
S-TAPs, find your operating system in this list.

A native installer ensures that S-TAP is registered in the operating system asset
repository. This registration is not required by Guardium for the installation of the
S-TAP, but it might be a requirement at your company. There is a separate native
installer for each supported variation of UNIX.

You can use a native installer on any of these operating systems:


v AIX
v HP-UX
v Linux
v Solaris

Note: There are no native installers for Tru64 systems.

AIX S-TAP Native Installer

Use the following command to generate the AIX native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate native installer file (.bff file) from the Guardium S-TAP
Installation DVD, for your version of AIX.
2. Enter the following command on a clean server (no previous S-TAP installation)
to extract the shell installer for AIX, substituting the appropriate file name with
the appropriate .bff file:
installp -aX -d/var/tmp/<filename> SqlGuardInstaller
Example:
installp -aX -d/var/tmp/guard-stap-guard-8.0.00rc1_r20934_1-aix-5.2-aix-powerpc.bff SqlGuardIns
The shell installer that is extracted, named guardium, is under /usr/local.
3. Continue with Step 3 of the installation procedure, running the generated
installation script rather than the default installation script for the operating
system version.
4. Return to Step 5 of the UNIX Installation procedure.

Remove AIX S-TAP Using Native Installer

Use the following command to remove AIX S-TAP using the native installer:
/usr/lib/instl/sm_inst installp_cmd -u -f 'filename'

Example
/usr/lib/instl/sm_inst installp_cmd -u -f 'SqlGuardInstaller'

HPUX S-TAP Native Installer


Use the following command to generate the HP-UX native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate native installer file (.depot.gz file) on the Guardium
S-TAP Installation DVD, for your version of HPUX.
2. Extract file with:
gzip -d <filename>.depot.gz
3. Enter the swinstall command as follows, supplying the selected file name (the
appropriate native installer file) and your database server host name. This
command starts an interactive program. Follow the prompts and use the
appropriate controls to install the appropriate S-TAP installation program (.sh
file), which is located in /var/spool/sw/var/tmp.

swinstall -s /var/tmp/<filename>.depot @ <hostname>:/var/spool/sw
4. Continue with Step 3 of the installation procedure, running the generated
installation script rather than the default installation script for the operating
system version.
5. Return to Step 5 of the UNIX Installation procedure.

Remove HPUX S-TAP Using Native Installer

To remove HPUX S-TAP using the native installer, use the following command:
swremove @<hostname>:/var/spool/sw

Linux S-TAP Native Installer

Use the following command to generate the Linux native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate Linux native installer file (.rpm file) on the Guardium
S-TAP Installation DVD, for your version of Linux.
2. Enter the rpm command, supplying the selected file name where filename is the
native installer file:
rpm -ivh <filename> --ignorearch
The shell installer is extracted under /usr/local/guardium
3. Continue with Step 3 of the installation procedure, running the extracted shell
installer script rather than the default installation script for the operating
system version.
4. Return to Step 5 of the UNIX Installation procedure.

Remove Linux S-TAP Using Native Installer

To remove S-TAP using the native installer, use the following command (selecting
filename):
rpm -e <filename>

Solaris S-TAP Native Installer

Use the following command to generate the Solaris native installer script, and then
continue with Step 3 of the installation procedure, running the generated script
rather than the default installation script for the operating system version.
1. Locate the appropriate native installer file (.pkg file) on the Guardium S-TAP
Installation DVD, for your version of Solaris:
2. Enter the pkgadd command to run the installer using the selected file:
pkgadd -d <filename>.pkg
The shell installer is extracted under /usr/local/guardium
3. Continue with Step 3 of the installation procedure, running the extracted shell
installer script rather than the default installation script for the operating
system version.
4. Return to Step 5 of the UNIX Installation procedure.

Get information on package


pkginfo | grep GrdTapIns

Remove Solaris S-TAP Using Native Installer

To remove Solaris S-TAP using the native installer, use the following command:
pkgrm GrdTapIns

Building a K-TAP on Linux


There are hundreds of Linux distributions available, and the list is growing. This
means that there might not be a K-TAP already available for your Linux
distribution. If the correct K-TAP is not available, the S-TAP installation process
can build it for you.

When you install an S-TAP on a Linux system, the installation process checks the
Linux kernel to determine whether a K-TAP has been created to work with that
kernel. If the installation process does not find a matching K-TAP, it will attempt to
build one to match your Linux kernel.

Most of the K-TAP code is independent of the kernel. The installer for version 9.1
provides a new layer of code, which enables the kernel-independent code to
interact with your kernel. This new layer is delivered as proprietary source code.
The installer builds the complete K-TAP by compiling this proprietary source code
against your Linux kernel. This produces a K-TAP specific to your Linux
distribution.

This process requires that the standard kernel development utilities, provided with
the Linux distribution, are present on the database server where the K-TAP is to be
built. The development package must be an exact match for the kernel. The gcc
compiler and version 3.81 (or newer) of the make utility are also required.
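
For example, on many distributions you can confirm these prerequisites with commands such as the following (the package names are assumptions and vary by distribution):

uname -r
rpm -q kernel-devel-$(uname -r)       (RPM-based distributions)
dpkg -l linux-headers-$(uname -r)     (Debian-based distributions)
gcc --version
make --version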

If you have several systems running the same Linux distribution, you can build a
K-TAP on one system and copy it to the others. For example, you might build a
K-TAP on a test system and then copy it to one or more production database
servers after testing. If you use the Guardium Installation Manager (GIM) to install
the S-TAP, GIM can automatically copy the bundle containing the new K-TAP to a
Guardium system from which you can distribute it to other database servers.

When the installer attempts to build a K-TAP module, you see messages issued by
guard-ktap-loader. These messages can include:
v It is attempting to build
v The build has completed
v The K-TAP has been loaded
v The build cannot be attempted, because the kernel development package is not
found
“Copying a new K-TAP module to other systems”
When you build a new K-TAP module for a Linux database server, you can
copy that module to other database servers that run the same Linux
distribution.
“Copying a K-TAP module by using GIM” on page 182
If you build a custom K-TAP module for a Linux database server, you can use
GIM to copy that module to other Linux database servers.

Copying a new K-TAP module to other systems


When you build a new K-TAP module for a Linux database server, you can copy
that module to other database servers that run the same Linux distribution.

Before you begin
Use this procedure after you have built and tested a K-TAP module on a Linux
database server. A consolidated example follows the procedure steps.

About this task

If you use the Guardium Installation Manager (GIM) to manage agents on your
database servers, see the GIM section of this information for the procedure to use.

Procedure
1. Change directory to /usr/local/guardium/guard_stap/ktap/current/ and run
./guard_ktap_append_modules to add the locally built modules to modules.tgz.
2. Copy the updated modules.tgz file to the target server.
3. Log in to the target server and change directory to /usr/local/guardium/
guard_stap/ktap/current/.
4. Run the K-TAP loader with the retry parameter and the full path to the
updated modules.tgz file. For example:
guard_ktap_loader retry /tmp/modules-9.0.0_r55927_v90_1.tgz
5. Restart the S-TAP to connect it to the new K-TAP module.
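
A possible end-to-end sequence, assuming scp is used for the copy and the default installation path (the target host name is a placeholder):

cd /usr/local/guardium/guard_stap/ktap/current/
./guard_ktap_append_modules
scp modules.tgz root@target-server:/tmp/
ssh root@target-server
cd /usr/local/guardium/guard_stap/ktap/current/
guard_ktap_loader retry /tmp/modules.tgz
(restart the S-TAP on the target server)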

Results

The custom K-TAP module is ready to use on the target system. Repeat this
procedure for each matching Linux system to which you want to deploy the
K-TAP module.
“Building a K-TAP on Linux” on page 25
There are hundreds of Linux distributions available, and the list is growing.
This means that there might not be a K-TAP already available for your Linux
distribution. If the correct K-TAP is not available, the S-TAP installation process
can build it for you.
“Copying a K-TAP module by using GIM” on page 182
If you build a custom K-TAP module for a Linux database server, you can use
GIM to copy that module to other Linux database servers.

Getting Java or Perl information


You might need to check the version of Java or Perl being used on your data
server.

Get Java Information


When installing CAS (the Configuration Auditing System) on a UNIX system, there
are two requirements:
v You must locate the JAVA_HOME directory. You will be prompted for its
location during the CAS installation.
v You must verify that a supported version of Java is installed (see the following
table). If a supported version is not installed, you must install it before installing
CAS.
CAS Java Version Requirements

OS Type        Java Version
HP-UX          1.5 or higher
All others     1.4.2 or higher

Note: To use CAS over SSL in a FIPS-compliant environment, you must install
IBM Java on the server where the CAS agent runs.

Note: CAS is supported only on 32-bit Java.


Locate the JAVA_HOME Directory
Perform this procedure first, and after locating the JAVA_HOME directory,
perform the following procedure to check the Java version.

The JAVA_HOME directory contains the Java command. For example:


v If the java command is /usr/local/j2sdk1.4.2_03/bin/java
v the JAVA_HOME directory is /usr/local/j2sdk1.4.2_03

Use one of the following techniques to locate the java command directory.
1. Enter the which java command. For example:
[root@yourserver ~]# which java
/usr/local/j2sdk1.4.2_03/bin/java
2. If the which java command returns a symbolic link, use the ls -ld
<symbolic_link> command to determine the real Java directory name.
3. If the which java command returns the message command not found, Java may
be installed, but it has not been included in the PATH variable. In this case, use
the find command to locate the Java directory; for example:
[root@yourserver ~]# find / -name java

/usr/bin/java

Determine the Java Version


1. From the java directory, run the java -version command to check the version
number. For example:
[root@yourserver ~]# /usr/local/j2sdk1.4.2_03/bin/java -version
java version "1.4.2_03"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_03-b02)
Java HotSpot(TM) Client VM (build 1.4.2_03-b02, mixed mode)
2. Note the Java version that is returned. You will not be prompted for this
information, but in the event that an issue arises later, you will be able to
eliminate the possibility of an unsupported Java version.

Get Perl Information


This Perl requirement applies only to the following combination:
v Unix S-TAP
v The TEE mechanism selected to monitor local traffic
v The Hunter process is used to detect and optionally kill processes that bypass
the Tee listening port

In this situation, you must have:


v Version 5.8.0 or later of Perl
v Perl must be installed in the /usr/bin directory

To verify the installed Perl version, use the following command:
/usr/bin/perl -v

If Perl is not installed, is installed in a different directory, or an older version is
installed, you must install version 5.8.0 or later of Perl in the /usr/bin directory
before installing S-TAP.
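
For example, to print just the version number (assuming Perl is installed at /usr/bin/perl):

/usr/bin/perl -e 'print "$]\n"'

The output is a numeric version such as 5.008008, which corresponds to Perl 5.8.8.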

Install and Configure SharePoint Agent


You can use IBM Guardium to monitor the activity of SharePoint, Microsoft’s
Documents Management product.

SharePoint repositories often contain sensitive information such as corporate


financial results and valuable intellectual property such as product design data, but
they do not have the necessary controls to prevent misuse by insiders. Now, with
Guardium, you can have continuous real-time monitoring controls, making it
easier to detect unauthorized access to SharePoint repositories.

High-level view of how SharePoint Agent works

Guardium stored procedures are placed in the SharePoint database on the host
system. Guardium software code runs as an extra thread in the SQL Server
process, on the host system. This code is the GuardSp TAP agent (SharePoint
Agent).

The only dependency for GuardSp TAP is a SharePoint SQL Server database on the
host and the Guardium system.

SharePoint logging/monitoring/auditing

SharePoint native auditing must be set up and activated. The way to enable
Sharepoint native auditing is to navigate, via a web browser, to the Central
Administration pane of SharePoint 2007 or SharePoint 2010. Choose Site Actions
and specify the events to be audited/logged.

Guardium does not monitor everything with a SharePoint TAP, only what is
logged in SharePoint itself.

SharePoint's own logging/monitoring/auditing must be enabled for Guardium


SharePoint to function.

If nothing is set up to be logged by SharePoint, no traffic will be seen in the


Guardium SharePoint reports.

Look for errors and messages in the SharePoint application event log to determine
whether SharePoint is configured correctly.

MS-SQL Server Cluster Installation


If you are installing a GuardSpTap to monitor an instance of SharePoint that is
installed on a cluster, read this section before installing. If SharePoint is not
installed on a cluster, ignore this section.
v Install GuardSpTap on the current active MS-SQL node in the cluster. Make the
install directory on the same shared media. Do not use the default installation
folder which is on the C drive. For example, if MS-SQL is installed on
cluster-drive G, then install GuardSpTap to G:/GuardSpTap.

v When you provide the hostname/login credentials for the database, use the
cluster name, not the individual node name.
v You must browse and choose a new place to install at the beginning of the
install process. If you try to install anywhere but the drive MS-SQL is on, you
will get permission errors.
v If you ever need to uninstall for any reason, you can uninstall from whichever
node is currently active. It does not matter which node was active when it was
originally installed.
v Run the installer as an Administrator with elevated privileges. The installer
must be run with elevated privileges as a user that has access to C:\Windows.

Known Limitations
v Server-IP and Client-IP will be listed as the virtual IP of the cluster.
v Server-hostname will be the virtual hostname of the cluster, not those of the
individual nodes.

Install and Configure SharePoint Agent


1. To install and configure the SharePoint agent, go to the Guardium installation
media, select the GuardSpTap folder, and launch the setup.exe file in the
GuardSpTap folder. Click through two screens and the configuration screen
appears.

Note: The SQL server user used to perform the installation must have
sysadmin privilege AND must have been granted UNSAFE ASSEMBLY
permission by another sysadmin user. A SQL user cannot grant UNSAFE
ASSEMBLY permission to itself, even if it is a sysadmin user.
2. Enter the Name listed on the certificate of the Guardium system. (The default
is gmachine if a custom certificate is not used on the system.)
3. Specify the IP address of the Guardium system.
4. Use the default value of the Buffer Size.
5. Specify the SSL Port and the non-SSL port of the Guardium system. (Leave at
default unless using custom ports.)
6. Check Use Secure Socket Layer if using a SSL port.
7. Choose SQL Table Logging if Microsoft SharePoint itself is monitoring the
activity. Guardium is not collecting information with this choice.
8. Choose SQL Guard Logging if Guardium is monitoring the SharePoint
activity. This choice does not place records in Microsoft SharePoint audit logs.
9. Choose Both for monitoring to be visible on both the Guardium system and
the Microsoft SharePoint audit logs.
10. Click Next to continue with configuration.
This screen is where you log on to the database, set the authentication and
specify the SharePoint version.
11. Choose your authentication method, supply username and password if
necessary, then click Login.
12. Choose Microsoft SharePoint installation from Sharepoint database.
13. Choose Microsoft SharePoint version from Sharepoint version.
14. Click Finish.

Reconfiguring the Installation
To re-configure the installation, click the Windows Start button. Go to All
Programs and select the menu item entitled IBM Guardium Agent GuardSpTap. In
that menu is an item called Configure. Selecting that item brings you back to step 2
in the previous topic, Install and Configure SharePoint Agent.

Where to See SharePoint activity within Guardium

Audit Events for SharePoint are now logged into the Guardium database as SQL
statements.

Access S-TAP reports (Administrator portal, TAP monitor tab) to see the SharePoint
events. See S-TAP Reports for more information.

Background service that installs GuardSpTap into newly created content databases

Sharepoint TAP has been modified so it fixes the permission problems of different
application pools in different content databases.

New component: Sharepoint TAP service. The short name for this service is
SpTapService, and this name is used in the application eventlog.

This service monitors Sharepoint content databases, and when a new database
shows up on the server, it automatically installs Sharepoint TAP into that content
database within a minute of the database's creation.

The Installation application (Setup.exe) as well as the Configuration application


(Configure.exe) allows the installer to configure the background service to run as
LocalSystem, LocalService, or some other user. Since the service needs to login to
the database, you must provide database login credentials. The database user
provided must have sysadmin privileges.

Note: This background service must use Windows Authentication, as SQL Server
Authentication is inherently less secure.

Questions on use of SharePoint Agent


Q1. Are separate reports needed for SharePoint reporting? Or will the common
detailed sessions list show SharePoint transactions/activities?

A: No separate reporting is necessary. The traffic comes in as any other straight


SQL traffic (like from a S-TAP or net inspection engine) and ends up in the same
tables. If there is a need for a report narrowed down to the specific server
displaying app_user as well as SQL, etc, it can be created.

Q2. What use is made of SqlSetup2007.sql and SqlSetup2010.sql? This topic


does not describe the usage of these SQL files.

A: Think of them as DLLs and leave them alone. They are used by the installer to
set the modified stored-procedures within SharePoint. (2007 going to SharePoint
2007, 2010 to 2010)

Q3. What is the CPU and memory usage of this SharePoint agent?

A: There is no separate agent. Guardium uses SharePoint's own reporting to send
a data-stream to the Guardium system. Guardium tests show negligible
performance impact.

Q4. What are the pre-requisites for SharePoint monitoring? For example, ports
required to be opened, SharePoint agent to be installed, etc.

A: Have SharePoint 2007 or 2010 installed. Have the Guardium SharePoint TAP
agent installed. Allow TCP traffic from the database system to port 16016 (or 16018 if
using SSL) on the Guardium system. No Windows S-TAP is needed.

Q5. Do we need a downtime for SharePoint for this installation?

A: There is no rebooting or restarting of the application required. In Guardium


testing, no running processes were disrupted.

Q6: How do we verify that SharePoint traffic is reaching the Guardium collector?

A: The easiest way to see that SharePoint traffic is reaching the collector is to use
an SQL or deeper report and to see what SQLs come through from the SharePoint
database server.

Q7: The report Detailed sessions list should show SharePoint traffic, but it does not.
What might cause this problem with the SharePoint TAP?

A: For the Detailed session list report, as long as you have set the server IP, the
port, and permissions correctly, you should see the SharePoint sessions.

Q8: Will the SharePoint traffic pass through Policy rules?

A: SharePoint traffic is subject to policy rules, but the only actions that can be
taken on the SharePoint traffic are logging actions, alerting actions, and
Ignore-session (at the Guardium system level) actions. Ignore actions are not
recommended for use with SharePoint traffic.

Q9: Why did the Sharepoint TAP installation not work because the SQL user did
not have UNSAFE ASSEMBLY permission?

A: The SQL server user used to perform the Sharepoint TAP installation must have
sysadmin privilege AND must have been granted UNSAFE ASSEMBLY permission
by another sysadmin user. An SQL user cannot grant UNSAFE ASSEMBLY
permission to itself, even if it is a sysadmin user.

When to restart, When to reboot


This topic details the instances, after S-TAP installation, of when to restart and
when to reboot the database server or database instance. Both Windows S-TAP and
UNIX/Linux S-TAPs are covered. Restart/reboot requirements are the same for
GIM and non-GIM implementations.
What must be restarted after a fresh installation of UNIX/Linux S-TAP
To see full traffic, certain databases must be restarted after an S-TAP
installation.

Table 9. Database restart after S-TAP installation

             Oracle        DB2           Sybase        MS-SQL        Informix
OS           TCP/IPC SHM   TCP/IPC SHM   TCP/IPC SHM   TCP/IPC SHM   TCP/IPC SHM
RedHat       NR      NR    NR      NR    NR      NR    NR      NR    NR      NR
SuSE         NR      NR    NR      NR    NR      NR    NR      NR    NR      NR
AIX          REQ*    NR    REQ     NR    REQ     NR    NA      NR    REQ     NR
Solaris      NR      NR    NR      NR    NR      NR    NR      NR    NR      NR
HP-UX        NR      NR    NR      NR    NR      NR    NR      NR    NR      NR
Windows      REQ-W         REQ-W         REQ-W         REQ-W         REQ-W

SHM - Shared memory


NR = No restart/reboot required (based on utilizing live update
mechanism and referencing live update link if you have one)
REQ = Restart required
REQ-W = Database instance only restart required for fresh installation
(database server does not require a restart), no restart required of database
server or database instance for live update
REQ * = restart required for database and listener
NA = not applicable
What must be restarted after a live upgrade of UNIX/Linux S-TAP
No restarts are necessary at all for live upgrades that do not include ATAP.

Note: When a customer live updates, the files of the older version will
remain on the system until the next reboot. In fact, some of the files of the
older version will remain even after the reboot of the server.
Windows S-TAP
There is a need to distinguish between a fresh Windows installation and a
live upgrade.
All database instances must be restarted (not rebooted) when installing
from scratch (fresh installation). Database instance restarts are not required
after live upgrade.
The Windows server does not need to be rebooted, unless upgrading from
V7.0. The reason for this is that upgrading from V7.0 requires a full
uninstall of the S-TAP software.
However, if proxy driver files are updated, a system reboot is required.
Examples of proxy driver files: LhmonProxy.sys/NpProxy.sys. For each
release, check the release document to see if proxy drivers are updated.
A reboot is required if upgrading from Guardium V7.0. This is true for all
installation/upgrade methods - GIM, Interactive, or Batch.
UNIX/Linux S-TAP
No reboot
S-TAP/KTAP may be upgraded without a reboot when using the
guard-stap-update utility. This utility can be used from V8.0 versions and
up.
If the system is being "upgraded" from a non-GIM version to the same
GIM version, the system doesn't need to be rebooted.

If upgrading a non-GIM S-TAP with GIM BUNDLE-STAP, with the same
revision number that is currently running, no reboot is required.
Bundle GIM upgrade (in addition to Bundle S-TAP upgrade) is required
ONLY in the following upgrade paths:
v If you are going from V8 to V9.
v If you have a V9 bundle S-TAP installed with a patch less than V9.0
patch 100 or build number less than 9.0.0_r57263.
v All the other upgrade paths do NOT require bundle GIM upgrade.
Reboot required
Upgrade S-TAP at next reboot with the Guardium Upgrader utility. Use of
this utility requires a reboot.
If you are removing a previous version of S-TAP that used KTAP, you will
need to reboot the database server.
When upgrading S-TAP through GIM:
v If specifying KTAP_LIVE_UPDATE=Y, no reboot required.
v If specifying KTAP_LIVE_UPDATE=N, a reboot is required.
If upgrading a non-GIM S-TAP with GIM BUNDLE-STAP, with a different
revision number than currently running, a reboot is required.
To re-install S-TAP with KTAP using the same revision number requires an
un-install and a reboot.
After installation of UNIX ATAP in Oracle cluster environment, instances
must be restarted as well as all inter-cluster processes.
Restart/load/instrumented/activated requirements for ATAP
ATAP or S-TAP or KTAP are not rebooted.
S-TAP is stopped/started/restarted. KTAP is loaded/unloaded. ATAP is
instrumented/activated/deactivated/de-instrumented.
Database instances which require ATAP must be stopped prior to
instrumenting (if required) and activation.
De-instrumenting or deactivation of ATAP also requires appropriate
database instances to be stopped.
ATAP should be deactivated (and de-instrumented, if applicable), prior to
any upgrades of the database, such as when a Fixpack is applied.
Finally, ATAP should be deactivated and de-instrumented prior to any
S-TAP upgrades (not necessary for GIM bundle upgrades).

Enterprise load balancing


The enterprise load balancer is a Central Manager application that dynamically
allocates managed units to S-TAP agents.

Overview
Load balancing automatically allocates managed units to S-TAP agents when new
S-TAPs are installed and during fail-over when a managed unit is unavailable. The
load balancing application also dynamically re-balances loaded or busy managed
units by relocating S-TAP agents to less-loaded managed units.

The enterprise load balancing application automates several tasks:
v
Load balancing removes the need to manually evaluate the load of managed
units before assigning those managed units to an S-TAP agent.
v
Load balancing eliminates the need to define fail-over managed units as part of
post-installation S-TAP configuration because the load balancer dynamically
manages fail-over scenarios.
v
Load balancing removes the need to manually relocate S-TAP agents from
loaded managed units to less loaded managed units.

Important: When using the enterprise load balancing application, the Guardium
system assumes control over the allocation of managed units to S-TAP agents. This
is an automated and dynamic process: you will see S-TAPs change association
based on the relative load of available managed units. Use the Load Balancer
Events report to review all load balancing activity.

Prerequisites

The enterprise load balancer runs on a Central Manager, listens on port 8443, and
uses Transport Layer Security (TLS). No new firewall or additional system setup is
required.

Load balancing is enabled by default on Guardium systems. For information about


enabling S-TAPs to participate in load balancing, see “Configuring S-TAP
installations for enterprise load balancing” on page 35.

How it works
The enterprise load balancing application works by collecting and maintaining
up-to-date load information from all its managed units.

It uses the load information from managed units to create a load map. This load
map provides the data that directs load balancing and managed unit allocation
activities. Use the GuardAPI command grdapi get_load_balancer_load_map to
view the current load map at any time.

Load information is only collected from managed units that are online and
configured with the parameter LOAD_BALANCER_ENABLED=1. Setting
LOAD_BALANCER_ENABLED=0 disables load balancing and prevents that managed unit
from being dynamically allocated to S-TAP agents during load balancing activities.

Load collection errors from specific managed units are recorded in the Load
Balancer Events report but do not interfere with the overall load collection and
load balancing processes. However, failure to collect load information from a
managed unit excludes that managed unit from participation in load balancing
processes.

Note: When the S-TAP is installed interactively with the Load balancer options for
a specified S-TAP group and a specified Managed Unit (MU) group and with the
client IP being set to anything other than the default value, an incorrect MU may
be allocated by the load balancer and the host name in the S-TAP group will be
incorrect. This is due to the default TAP_IP being sent to the load balancer before
the user modifies the S-TAP local IP during installation.

Using enterprise load balancing


Follow this task sequence to begin using enterprise load balancing functionality.

Configuring S-TAP installations for enterprise load balancing


Learn how to use enterprise load balancing when installing an S-TAP agent.

Procedure

Specify enterprise load balancing parameters during S-TAP installation.


v When using non-GIM S-TAP installers, specify the following parameters (an illustrative command line follows the tables):
Table 10. Parameters for non-GIM S-TAP installers
Parameter Description
--load-balancer-ip load_balancer_ip Required. The load_balancer_ip option
specifies the IP address of the Central
Manager this S-TAP should use for load
balancing.
--lb-app-group app_group Optional. The app_group option specifies the
S-TAP group this S-TAP will belong to.
--lb-mu-group mu_group Optional. The mu_group option specifies the
managed unit group the app-group will be
associated with. An application group must
also be specified to use this parameter.
--lb-num-mus number_of_mus Optional. The number_of_mus option specifies
the number of managed units the load
balancer should allocate for this S-TAP.

v When using GIM to install S-TAP use the following parameters in GIM's module
parameter screen:
Table 11. Parameters for using the GIM S-TAP installer
Parameter Description
STAP_LOAD_BALANCER_IP Required. This option specifies the IP
address of the Central Manager this S-TAP
should use for load balancing.
STAP_INITIAL_BALANCER_TAP_GROUP Optional. This option specifies the S-TAP
group this S-TAP will belong to.
STAP_INITIAL_BALANCER_MU_GROUP Optional. This option specifies the managed
unit group the app-group will be associated
with. An application group must also be
specified to use this parameter.
STAP_LOAD_BALANCER_NUM_MUS Optional. This option specifies the number
of managed units the load balancer should
allocate for this S-TAP.
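
For illustration only, a non-GIM installation might add the load-balancing flags as shown here. The installer file name and any other required options (represented by ...) are placeholders; only the load-balancing flags come from Table 10:

./guard-stap-installer.sh ... --load-balancer-ip 10.10.9.248 --lb-app-group "North American S-TAPs" --lb-mu-group "North American MUs" --lb-num-mus 2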

Associating S-TAP with managed units for load balancing


Learn how to use enterprise load balancing by creating and associating S-TAP
groups with groups of managed units.

About this task

Load balancing creates associations between S-TAP groups and groups of managed
units such that S-TAPs within a group are allowed to be reallocated to the
most-available managed unit within a group. This task introduces you to the
process of establishing associations between S-TAP groups and managed unit
groups for the purposes of enterprise load balancing.

Procedure
1. On a Central Manager, navigate to Manage > Central Management >
Enterprise Load Balancer > Associate S-TAPs and Managed Units.
2. If necessary, create a new S-TAP group.
a. Click the icon to open the Create New S-TAP Group dialog.
b. Provide a name in the Group Name field. For example, North American
S-TAPs.
c. Add group members by selecting from existing host names or adding new
members using the Group Member field. S-TAPs indicated with an icon are
included with the new S-TAP group.
d. Click Create New Group to create the S-TAP group.
3. Associate the S-TAP group with a group of managed units.
a. Select the S-TAP group you want to associate. For example, North American
S-TAPS.
b. Click Associate Managed Units to open the Associate Managed Unit Group
dialog.
c. If necessary, create a new group of managed units.
1) Click the icon to open the Create New Managed Unit Group dialog.
2) Provide a name in the Group Name field. For example, North American
MUs.
3) Add group members by selecting from existing Managed Unit IP
addresses.
4) Click Create New Group to create the new group of managed units.
d. Select the group(s) of managed units to associate with the S-TAP group. For
example, North American MUs.
e. Click Apply.
4. Click Save to complete the association between an S-TAP group and a group of
managed units.

Viewing the enterprise load balancing load map


Learn how to view the current enterprise load balancer load map.

About this task

The enterprise load balancing application uses the load information from managed
units to create a load map. This load map provides the data that directs load
balancing and managed unit allocation activities.

Procedure

Issue the following GuardAPI command: grdapi get_load_balancer_load_map.


The load map should look like the following example:

ID=0
LOAD MAP
Loaded MU List:
Vacant MU List:
{
MU=myguard_01.domain.com
MU_QUEUE_SIZE(MB)=25.0
MU_TIMES_REBALANED=0
MU_EFFECTIVE_MAX_USED_QUEUE(%)=0.0
MU_MAX_LOAD_CONTIB_BY_STAP_TO_MAX_USED_QUEUE(MB)=0.0
MU_ADJUSTED_STAP_CONTRIB_IN_MB=0.0
MU_BASE_MAX_USED_QUEUE_IN_MB=0.0
IS_REBALANCABLE=true
INSTALLED_POLICIES=log full|

APPLIANCE_RESOURCE_INFO={NUM_PROCESSORS=2,CPU_SPEED=2660,CPU_CACHE=24576,CPU_CORES=2,CACHE_READ_RA
}
{
MU=gct1.domain.com
MU_QUEUE_SIZE(MB)=25.0
MU_TIMES_REBALANED=0
MU_EFFECTIVE_MAX_USED_QUEUE(%)=0.0
MU_MAX_LOAD_CONTIB_BY_STAP_TO_MAX_USED_QUEUE(MB)=0.0
MU_ADJUSTED_STAP_CONTRIB_IN_MB=0.0
MU_BASE_MAX_USED_QUEUE_IN_MB=0.0
IS_REBALANCABLE=true
INSTALLED_POLICIES=GATF_STAP_Policy_Ignore|GATF_STAP_Policy_Firewall|GATF_STAP_Policy_SCRUB|
APPLIANCE_RESOURCE_INFO={NUM_PROCESSORS=24,CPU_SPEED=2601,CPU_CACHE=15360,CPU_CORES=6,CACHE_RE
STAP_LIST=
{
STAP_IP=01_gct1.domain.com,
STAP_HOST=01_gctl.domain.com,
CONNECTED_TO_MU=gct1.domain.com,
PARTICIPATES_IN_LOAD_BALANCING=false,
STAP_CONTIBUTION_TO_LOAD_IN_MB=0.0
}
{
STAP_IP=02_gct1.domain.com,
STAP_HOST=02_gct1.domain.com,
CONNECTED_TO_MU=gct1.domain.com,
PARTICIPATES_IN_LOAD_BALANCING=true,
STAP_CONTIBUTION_TO_LOAD_IN_MB=0.0
}
{
STAP_IP=03_gctl.domain.com,
STAP_HOST=03_gctl.domain.com,
CONNECTED_TO_MU=gct1.domain.com,
PARTICIPATES_IN_LOAD_BALANCING=true,
STAP_CONTIBUTION_TO_LOAD_IN_MB=0.0
}
}
03_gctl.domain.com->gct1.domain.com
02_gctl.domain.com->gct1.domain.com
01_gctl.domain.com->gct1.domain.com
ok

Viewing an enterprise load balancing activity report


View a report of enterprise load balancing events and activities.

About this task

The Load Balancer Events report shows all load balancing events and activities,
including successful associations between S-TAP agents and managed units,
changes in managed unit load, and failed associations.

Procedure

To view the report, navigate to Manage > Reports > Activity Monitoring > Load
Balancer Events.

Enterprise load balancing configuration parameters


This reference information provides detailed descriptions of configuration
parameters for enterprise load balancing.

STATIC_LOAD_COLLECTION_INTERVAL
Default value (valid values): 720 (≥10)
Description: Static managed unit load collection interval (in minutes). If ENABLE_DYNAMIC_LOAD_COLLECTION is set to 0, the load balancer collects the load from all the managed units at the interval specified by STATIC_LOAD_COLLECTION_INTERVAL.

LOAD_BALANCER_ENABLED
Default value (valid values): 1 (0 or 1)
Description: Controls the load balancer feature. 0 disables the feature; 1 enables the feature. If disabled on the managed unit, the load balancer (running on the Central Manager) will not collect load information from that managed unit, and all the S-TAPs connected to that managed unit will not participate in load balancing. Enabling this parameter (after it was disabled) triggers an immediate full load collection from all the managed units.

ENABLE_DYNAMIC_LOAD_COLLECTION
Default value (valid values): 1 (0 or 1)
Description: Controls the load collection method. 0 disables the dynamic load collection interval (STATIC_LOAD_COLLECTION_INTERVAL is used as the collection interval); 1 enables the dynamic load collection interval. When this parameter is enabled (set to 1), the collection intervals are proportional to the number of managed units (1 hour per 10 connected managed units). Changes to this parameter trigger an immediate recalculation of the next full load collection time.

USE_APPLIANCE_HW_PROFILE_FACTOR
Default value (valid values): 1 (0 or 1)
Description: The load balancer can use managed units' hardware profile indicators (specified by the parameter APPLIANCE_HW_PROFILE_INDICATORS) when evaluating vacant managed units for relocating S-TAPs. 0 ignores hardware profile indicators; 1 uses managed unit hardware profile indicators.

MAX_RELOCATIONS_BETWEEN_FULL_LOAD_COLLECTIONS
Default value (valid values): 3 (≥-1)
Description: Defines the maximum number of S-TAP relocations (between managed units) allowed after a full load collection. Negative values mean that unlimited relocations are allowed.

ALLOW_POLICY_MISMATCH_BETWEEN_APPLIANCES
Default value (valid values): 1 (0 or 1)
Description: The load balancer can take into account the managed units' installed policies. 0 does not allow S-TAP relocation when there is a policy mismatch between the source and target managed units; 1 allows the relocation of S-TAPs even if there is a policy mismatch between the source and target managed units.

TIME_TO_IGNORE_STAP_CONNECTION_RELATED_LOAD
Default value (valid values): 10 (≥5)
Description: When collecting the load statistics for the S-TAPs of each managed unit, data that represents the S-TAP connection to the managed unit should not be included, because it can indicate traffic spikes that create a false positive for the load balancer. This parameter tells the load balancer to ignore S-TAP load for the specified number of minutes after the S-TAP has connected to the managed unit.

ENABLE_RELOCATION
Default value (valid values): 1 (0 or 1)
Description: Relocation of resources (rebalancing) is a process that the load balancer executes after a full load collection. Relocation here means transferring S-TAPs from loaded managed units to vacant managed units. 0 does not allow relocating S-TAPs to vacant managed units; 1 allows relocating S-TAPs to vacant managed units.

LOADED_SNIFFER_QUEUE_USAGE_THRESHOLD
Default value (valid values): 0.6 (0.1 to 1 in increments of 0.1)
Description: A managed unit is considered loaded if its sniffer has at least one queue that uses the amount of the sniffer queue size specified by LOADED_SNIFFER_QUEUE_USAGE_THRESHOLD. The sniffer queue size is defined by the ADMINCONSOLE_PARAMETER_DEFAULT_QUEUE_SIZE parameter. This parameter should not be changed under normal circumstances.

DEFAULT_STAP_MAX_QUEUE_USAGE
Default value (valid values): 0.15 (0.10 to 1 in increments of 0.10)
Description: When a new S-TAP is being assigned to a managed unit, the load balancer does not initially have load information about it. The value of this parameter defines the temporary sniffer max used queue until the real load is collected from the managed unit (after the interval defined by the TIME_TO_IGNORE_STAP_CONNECTION_RELATED_LOAD parameter). This parameter should not be changed under normal circumstances.

DEFAULT_STAP_MAX_CONTRIBUTION_TO_MAX_QUEUE_USAGE
Default value (valid values): 0.1 (0.1 to 1 in increments of 0.1)
Description: When a new S-TAP is being assigned to a managed unit, the load balancer does not initially have load information about it. The value of this parameter defines the temporary maximum S-TAP load contribution to the temporary max used queue until the real load is collected from the managed unit (after the interval defined by the TIME_TO_IGNORE_STAP_CONNECTION_RELATED_LOAD parameter). This parameter should not be changed under normal circumstances.

REBALANCE_IF_MU_CLASSIFIED_AS_LOADED_N_TIMES_IN_M_HOURS
Default value (valid values): 1:168 (≥0 : ≥0)
Description: Loaded managed units can be rebalanced only if they have been classified as loaded a specified number of times over a specified period of hours. For example, a value of 1:168 requires that a managed unit be classified as loaded at least 1 time during a period of 168 hours.

APPLIANCE_HW_PROFILE_INDICATORS
Default value (valid values): NUM_PROCESSORS:CPU_SPEED:CPU_CACHE:CPU_CORES:MEMORY_SIZE (column names from the table APPLIANCE_RESOURCE_INFO)
Description: The load balancer can take into account the managed units' hardware profile indicators. A colon-delimited list of indicators (column names from the table APPLIANCE_RESOURCE_INFO) is used by the load balancer to evaluate the hardware profile. This parameter should not be changed under normal circumstances.

MAX_CONCURRENT_LOAD_COLLECTIONS
Default value (valid values): 10 (≥1)
Description: Sets the maximum number of concurrent load collection processes that the load balancer runs at a given time. That is, this value sets the number of concurrent, non-persistent, remote SQL connections from the Central Manager to the managed units.

MAX_RELOCATIONS_PER_MU_BETWEEN_FULL_LOAD_COLLECTIONS
Default value (valid values): 3 (≥-1)
Description: The maximum number of S-TAP relocations allowed from a specific managed unit. Negative values allow unlimited relocations.

S-TAP administration guide


The Guardium S-TAP is a lightweight software agent installed on a database server
system.

S-TAP monitors database traffic and forwards information about that traffic to a
Guardium system.

S-TAP Overview
v S-TAP can monitor database traffic that is local to that system. This is important
because local connections can provide back door access to the database, and all
such access needs to be monitored and audited.
v S-TAP can be used to monitor any network traffic that is visible from the
database server on which it is installed. It can thus act as a collector on remote
network segments, where it is not practical to install a Guardium system.
v S-TAP can be installed remotely from the command line on both Windows or
Unix servers as well as installed through the Guardium Installation Manager.
Upgrades can be configured to be applied at the next server reboot. Under
Linux, S-TAP takes care of upgrading S-TAP kernel components at boot time
--adjusting to kernel upgrades in Linux environments.

Failover processing
S-TAP collects and sends data to a Guardium host in near real time. S-TAP buffers
the data, so that it can continue to work if the Guardium host is momentarily
unavailable. If the primary host is unavailable for an extended period of time (time
can be shorter if the buffer is filling up), S-TAP can fail over to a secondary
Guardium host. It will continue to send data to the secondary host until either that
Guardium system becomes unavailable, until the S-TAP is restarted (which will
attempt to connect to its primary host first), or a connection to the primary server
has been reestablished and remains up for a period of 5*connection_timeout_sec
seconds (configurable in guard_tap.ini , default is 60 seconds). In this case S-TAP
will fail over from secondary Guardium host back to Primary Guardium host.

Note: While S-TAP is normally deployed on a database server, S-TAP can be


installed on client side systems such as application servers and database clients.

Note: If S-TAP is installed both on the application side (see previous note) and on
the database server, additional precautions should be taken so as to not monitor
duplicate traffic.

Session Data Failover

When a failover of S-TAP occurs, session information can also be sent over to the
current active Guardium host. See Edit the S-TAP Configuration File for more
information for setting tap_failover_session_size (0 will disable feature) and
tap_failover_session_quiesce.
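
For example, to turn the session data failover feature off entirely, the guard_tap.ini setting described above would be:

tap_failover_session_size=0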

Restartability

When wait_for_db_exec is greater than 0 and S-TAP restarts, either from a system
reboot or from user-initiated S-TAP stop/start commands, S-TAP polls all databases
that have been configured to be monitored and begins monitoring them when they
are available. A configuration anomaly (on either the database side or the S-TAP
side) that limits S-TAP's ability to monitor one database does not prevent S-TAP
from monitoring other databases with valid configurations. Instead, S-TAP starts
successfully, monitors all valid configurations, and continues to poll the other
databases until they become available, at which point it starts monitoring them as
well. It is advisable to use the existing alerts and reports to monitor and report on
any failed statuses.

For Oracle, BEQ traffic will not be logged for 15 minutes after relinking Oracle; this
is the time it takes for S-TAP to check whether an Oracle device node has been
changed.

Proxy Firewall
While S-TAP is normally deployed on a database server, a K-TAP based firewall
can be deployed to a proxy server. By setting the parameter app_server=1 and
utilizing S-GATE, you can monitor traffic that originates from the proxy server. See
Edit the S-TAP Configuration File and S-GATE Actions (Blocking Actions) in the
Policies help topic for more information on setting app_server and using S-GATE
within Policies.

Secondary Guardium Hosts for S-TAP Agents
If the Guardium system designated as the primary host for S-TAP becomes
unavailable, S-TAP can fail over to a secondary host. It remains connected to the
secondary host until either that connection is lost, until the S-TAP is restarted
(which will attempt to connect to its primary host first), or a connection to the
primary server has been reestablished and remains up for a period of
5*connection_timeout_sec seconds (configurable in guard_tap.ini file, default is
60 seconds).

S-TAP restarts under slightly different conditions, depending on the database


server operating system:
v Unix: S-TAP restarts each time configuration changes are applied from the active
host.

Before designating a Guardium system as a secondary host for an S-TAP, verify


these items.
v The Guardium system must be configured to manage S-TAPs. To check this and
re-configure if necessary, see Configure Guardium system to Manage Agents.
v The Guardium system must have connectivity to the database server where
S-TAP is installed. When multiple Guardium systems are used, they are often
attached to disjointed branches of the network.
v The Guardium system must not have a security policy that will ignore session
data from the database server where S-TAP is installed. In many cases, a
Guardium security policy is built to focus on a narrow subset of the observable
database traffic, ignoring all other sessions. Either make sure that the secondary
host will not ignore session data from S-TAP or modify the security policy on
the Guardium system as necessary.

To define secondary hosts for an S-TAP, see Define Secondary Guardium Hosts for
an S-TAP, under Configure S-TAPs from the GUI.

Note: While S-TAP is normally deployed on a database server, S-TAP can be


installed on client side systems such as application servers and database clients.

S-TAP and Certificates

Note: Guardium does not provide Certificate Authority (CA) services and will not
ship systems with different certificates than the one installed by default. A
customer that wants their own certificate will need to contact a third party CA
(such as VeriSign or Entrust).

Note: In addition to ensuring that the S-TAP feed to a collector is encrypted, the
S-TAP client can also be configured to authenticate the Guardium system it is
trying to talk to. This way, in addition to ensuring that the traffic is encrypted, it is
ensuring that the S-TAP is not feeding information to a non-authorized server.

S-TAP Setup

In order to enable Guardium system authenticity verification, three settings need to
be enabled in guard_tap.ini in addition to use_tls=1 (a combined example follows this list):
1. guardium_ca_path

If guardium_ca_path is set to point to a file containing one or more trusted CA
self-signed certificates in PEM format, a verification of the Guardium system is
performed.
A system certificate installed on the Guardium system has to be signed by one
of the CAs provided in the file, and the Guardium system has to have the
correct corresponding key.
By default, a Guardium self-signed root certificate is provided in our S-TAP
installation (either classic or GIM based). Pointing guardium_ca_path to the file
provided by Guardium will ensure that the Guardium system has a
key/certificate pair signed by Guardium.
In order to use a third party signed certificates and keys, the guardium_ca_path
needs to be set to a file containing the CA certificates of the given third party
(for example, Verisign). The Guardium system in that case has to have the
key/certificate pair signed by the same third party.
2. sqlguard_cert_cn
In addition to verifying the Guardium system certificate's signature and its
possession of the respective private key, a customer can choose to reject
certificates whose CN (Common Name) does not match a regular expression
pattern set by sqlguard_cert_cn.

Note: The same certificate/key pair can be installed on several machines. The
customer does not have to buy N certificate for N machines.
3. guardium_crl_path
If this path points to a PEM-encoded file with Certificate Revocation List from
the CA, any Guardium system certificate that has been revoked will be rejected.
The Guardium CRL is provided in the STAP installation (or GIM) and can be
and will be updated via software patches and upgrades.
In addition a customer can manually install a CRL provided by the CA
(Guardium or third party).
Since Guardium systems are not assumed to have internet access, no web-based
CRL servers are queried automatically.
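
A minimal sketch of the relevant guard_tap.ini settings, assuming the default installation directory; the file paths and the CN pattern are placeholders, and only the parameter names come from this topic:

use_tls=1
guardium_ca_path=/usr/local/guardium/guard_stap/guardium-ca.pem
sqlguard_cert_cn=.*DataCenterGuardiumIBM
guardium_crl_path=/usr/local/guardium/guard_stap/guardium-crl.pem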

Guardium system CLI System Certificate-related Commands

There are four CLI commands related to system key and certificate management:
v show certificate sniffer
This command will print the system certificate in a text format, followed by the
Base64 encoded PEM form encoding. The text format only serves the purpose of
viewing the certificate details (in particular the CN and the Signer/Serial that
can be filtered by the S-TAP). The PEM encoded part between ---BEGIN
CERTIFICATE--- and ---END CERTIFICATE--- is the one that should be used to
backup/store/email the certificate to other machines and parties (BEGIN and
END delimiters should always be included together with the Base64 encoded
part).
v store certificate sniffer <console | import>
This command enables a user to set the system certificate used by the Guardium
system (in communication with S-TAP). The certificate can either be pasted from
the console or imported via one of the standard import protocols. The certificate
should format should be PEM and should include the BEGIN and END
delimiters. This certificate needs to be signed by a CA whose self-signed
certificate is available to S-TAP software through the guardium_ca_path.
v store certificate keystore <console | import>

This command enables a user to set the system key used. The key needs to
match the public part in the certificate. In addition the key needs to be in an
encrypted envelope. The user password used to encrypt it needs to be supplied
during the store process. The store command re-encrypts the key using the
Guardium's internal code before finally storing it in the system.

Note: Only once both the certificate and the matching key are available on the
Guardium system can S-TAP successfully perform Guardium system
authentication.
v create csr sniffer
This command can be used to create a Certificate Signing Request in PEM
format. The command will internally generate the 2048-bit key and issue a set of
questions to the user to fill out the CSR form (Country, State/Province,
Locality/City, Organization and Organizational Unit). Finally the user needs to
provide the Common Name. As a rule, the common name should include only
letters, digits, underscores and dots. It should be a unique identifier for a
particular installation and include the company name, department, cluster or
Guardium system specific name. However, the instructions from the external CA
override those recommendations. For example:
GCluster1DataCenterGuardiumIBM - which stands for GCluster1 in the
DataCenter at Guardium, an IBM company
SqlGuard1DataCenterGuardiumIBM - which stands for SqlGuard1 machine
system (might have a failover too)
Provide a valid email when asked, so that you can be contacted by support
personnel.
You can leave the Challenge Password and Optional Company Name blank.
Finally the Certificate Signing Request will be displayed in the readable and
PEM encoded forms.
You should verify the details and send the PEM encoded part (between ---BEGIN
CERTIFICATE REQUEST--- and ---END CERTIFICATE REQUEST---, inclusively) to the
CA for signing.

Note: At this point, the system has a new, internally generated key, that does
not correspond to the system certificate previously installed. This is to ensure
that S-TAPs will not feed the information while the certificate is being submitted
for signing. If you need to ensure continuous operation and S-TAP feed, you will
need to disable the Guardium system authentication on the S-TAP side during
this period.
Once CSR has been verified the CA will issue the signed certificate in the PEM
format. You need to install this certificate using the store system certificate
command.
At this point the new certificate and the internally generated key (during the
create csr sniffer command) will be matching and ready to use for Guardium
system authentication by S-TAPs.
Ensure that all certificate-related parameters in the S-TAP configuration file are
correct.
If you need to install the same key/certificate on more than one Guardium
system, you can use the show system certificate | key command to export and
back them up.
Be extra careful when storing the key (which is encrypted by a user-supplied
password) on an external computer or device. Use non trivial passwords when
asked by the show system key.

Configuring the Guardium system to manage S-TAPs
Before you can manage an S-TAP from the administrator portal, you must
configure the Guardium system.

About this task

Procedure
1. Log out of Guardium.
2. From an SSH client window, log in to the Guardium system command line
interface (CLI), as the cli user.
ssh -l cli 192.168.2.16
See CLI Overview for more information on using the Guardium CLI.
3. Enter the following two commands:
store unit type stap
restart inspection-core
See Configuration and Control CLI Commands and Inspection Engine CLI
Commands respectively for more detailed information on these two commands.
4. Enter the quit command to log out of the Guardium CLI.
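For reference, a complete session might look like the following (the IP address is an example; command output is omitted):

ssh -l cli 192.168.2.16
cli> store unit type stap
cli> restart inspection-core
cli> quit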

S-TAP Certification
Use this function to block unauthorized S-TAPs from connecting to the Guardium
system.

If there is a check mark in the S-TAP Approval Needed box, then S-TAPs cannot
connect until they are specifically approved.

If an unapproved S-TAP connects, it is immediately disconnected until someone
goes to this GUI screen and specifically authorizes the IP address of that S-TAP.

The S-TAP Approval Needed function can be controlled by using the CLI
command store stap approval or by the GuardAPI command grdapi
store_stap_approval.

If you use the CLI command stap approval ON | OFF, the new configuration takes
effect after you run the restart inspection-core command.
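A minimal CLI sketch, using the store stap approval form named above (confirm the exact argument syntax on your system), followed by the required restart:

cli> store stap approval on
cli> restart inspection-core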

Approve S-TAPs
1. Place a check mark in the box for S-TAP Approval Needed.
2. Specify the approved S-TAP clients.

Note:

Use the valid IP address, not the host name.

Within a Central Management environment, after you add the IP addresses to the
approved S-TAPs, there is a wait time for synchronization that might take up to an
hour. After synchronization is complete, the status of the approved S-TAPs appears
green in the GUI.

How to Set Up S-TAP Authentication with SSL Certificates


Set up authentication between an S-TAP server and Guardium system.
Value-added: S-TAPs can be configured to connect only to a machine, or group of
machines, that authenticates with a given certificate or set of certificates. These
certificates can either be generated locally on the Guardium system and sent off to
the Certificate Authority (CA) for signing, or created at the CA and installed
whole on the Guardium system.

Before you begin: You need to know who or what is acting as your CA. If the CA
is sending you a whole certificate to install, you need two files: the private key in
PKCS#8 (password-protected) format, and the public key in PEM format. The
generated certificate needs to be a 2048-bit RSA key.

Generating certificate signing request (CSR) on Guardium system

Log into your Guardium system with CLI

cli> create csr sniffer

[fill in asked for data]

When you've finished, the Certificate Signing Request is displayed in readable and
PEM-encoded forms. Copy everything from -----BEGIN CERTIFICATE REQUEST-----
to -----END CERTIFICATE REQUEST----- into a file and send this file to your CA
for signing.
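For illustration only, the PEM-encoded portion has the following general shape (the content shown is a placeholder, not a real request):

-----BEGIN CERTIFICATE REQUEST-----
MIIC... (several lines of base64-encoded request data) ...
-----END CERTIFICATE REQUEST-----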

The CA will sign the certificate and send you back a public key.

The public key the CA sends you back will look something like:
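For illustration only, a signed certificate in PEM format has this general shape (placeholder content):

-----BEGIN CERTIFICATE-----
MIID... (several lines of base64-encoded certificate data) ...
-----END CERTIFICATE-----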

Have this file handy, either to copy its contents or to import it to the Guardium
system.

cli> store certificate sniffer [console | import]

If you choose console, copy everything from -----BEGIN CERTIFICATE----- all the
way to -----END CERTIFICATE----- (including those delimiters) and paste it into
the CLI when prompted. If you choose import, tell the Guardium system where to
import the file from.

It will ask you to confirm that you want to store the certificate, and when you
confirm, it does.

You need to restart the inspection-core for the new certificate to take effect.

Installing a certificate generated outside of the Guardium system

You will receive a pair of files from your CA (plus the public certificate for your
CA), which together form your certificate.

One will be the public certificate of your CA.

The next will be the public certificate specific to you/this Guardium system.

The last will be a private key (encrypted in PKCS#8 format).
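For illustration only, the certificates and the encrypted key have the following general shapes (placeholder content; the CA certificate and the system certificate differ only in their subject details):

-----BEGIN CERTIFICATE-----
... base64-encoded certificate data ...
-----END CERTIFICATE-----

-----BEGIN ENCRYPTED PRIVATE KEY-----
... base64-encoded, password-protected PKCS#8 key data ...
-----END ENCRYPTED PRIVATE KEY-----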



Have these files handy to either import (via scp/ftp/etc) to the Guardium system
or to copy-paste into the cli interface on the Guardium system.

Log in to the Guardium system via cli.

To store the private key do:

cli> store certificate keystore [import | console]

If you choose import, the Guardium system reads the saved file; if you choose
console, copy and paste the contents of the file into your console interface when
prompted.

It will ask for the password that the file was saved with. Either you provided this
to the CA for creation of the certificate, or more likely, they provided you with a
password when they sent your files.


Next you need to import the signed certificate with:

cli> store certificate sniffer [import | console]

It will display the information on the cert and then ask you to confirm storing the
cert.


You need to restart the inspection-core for the new certificate to take effect.

Configuring the S-TAP to use x.509 certificate authentication

First, take note of what you have assigned as the CA and the CN of the certificate.
If you don't remember, use the show system certificate cli command to display
the values.

You need the CN of the cert installed on the Guardium system and the public-key
for the CA that signed the certificate on the Guardium system. You also might
want a Certificate Revocation list signed by the same CA that signed the Guardium
system cert, but it's not necessary.

The parameters in the guard_tap.ini you're concerned with look the same in Unix
vs. Windows:
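The relevant entries, described in the steps that follow, are sketched here (the bracketed values are placeholders):

guardium_ca_path=<path to the CA public certificate, in PEM format>
sqlguard_cert_cn=<full or partial CN of the Guardium system certificate>
guardium_crl_path=<path to a certificate revocation list; optional>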

The only functional difference between UNIX and Windows is that, on Windows,
if you do not want to use a value for a parameter, simply omit it from the
guard_tap.ini instead of setting the parameter equal to NULL. (This is pertinent
to the CRL path in particular, or if you want to turn off certificate authentication
and go back to TLS.)



Copy the CA public key (and the CRL, if wanted) that the CA sent you to a
directory on the S-TAP host. Take note of this directory.

Set guardium_ca_path=[path-to-CA.pem]

Set sqlguard_cert_cn=[the full CN or partial CN (using * as a wildcard) of the
Guardium system]

Set guardium_crl_path=[path-to-crl.crl] <-- only if you want to use a certificate
revocation list at this time.

In real life it would look like:


guardium_ca_path=/var/tmp/pki/Victoria_QA_CA.pem
sqlguard_cert_cn=sample1_qa.victoria
guardium_crl_path=/var/tmp/pki/Victoria_QA_CA.crl

Once those parameters are set, set tls=1 and restart the S-TAP.

The connection will now use OpenSSL.

Increasing S-TAP throughput


You can configure your S-TAPs that report to multiple Guardium systems to
increase the throughput of data.

You can configure any S-TAP to create multiple threads to increase the throughput
of data. If the S-TAP configuration file defines more than one Guardium system, a
thread can be created for each Guardium system. This feature is activated by
setting the value of the participate_in_load_balancing parameter to 4. When this
value is set to 4, the S-TAP creates extra threads, matching the number of
Guardium systems, and the K-TAP creates a similar number of buffers. The K-TAP
alternates between the buffers, placing entire packets in each buffer. Each S-TAP
thread reads from a different K-TAP buffer, and sends traffic data to a single
Guardium system.

In this configuration, no single Guardium system receives all the data from the S-TAP. The
distribution is similar to that used when participate_in_load_balancing is set to
1. However, when a Guardium system becomes unavailable, no failover is
provided. Data that was being sent to that Guardium system is lost until the
system becomes available or the configuration is changed.

Also, as when participate_in_load_balancing is set to 1, encrypted and
unencrypted A-TAP traffic cannot be sent to the same Guardium system.
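As a minimal sketch, assuming the usual guard_tap.ini layout with a [TAP] section and one [SQLGUARD_n] section per Guardium system (verify the section names against your installed file; the IP addresses are placeholders), a two-system configuration with per-system threads might look like:

[TAP]
tap_ip=10.10.9.240
participate_in_load_balancing=4

[SQLGUARD_0]
sqlguard_ip=10.10.9.248

[SQLGUARD_1]
sqlguard_ip=10.10.9.249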

UNIX S-TAP
Install and configure UNIX S-TAP.

A UNIX S-TAP is a userspace daemon that collects data from various sources and
sends it to the Guardium system for analysis and logging. It collects traffic from:
K-TAP, a kernel module that performs interception in the kernel; A-TAP,
userspace libraries that are loaded when a database starts in order to collect
traffic; and EXIT libraries, which send traffic directly from the database server to S-TAP.

New or enhanced for V10



64-bit S-TAP
v The S-TAP binary for 64-bit platforms is now built as a 64-bit binary
v Increases the amount of data S-TAP can map for the buffer
v The maximum buffer size is the maximum signed 32-bit integer, approximately 2 GB (allocated per sqlguard_ip)

64-bit session keys

v Decreases the likelihood of key collisions causing traffic losses
v A side effect is that a V10 S-TAP can only be connected to a V10 Guardium
system

Fast SHMEM verdict

v On by default
v Pushes the information needed about DB2 to the kernel to determine whether a
segment belongs to the database
v Inspection Engine configuration is unchanged
v Pushes the db2_shmem_size, db2_shmem_client_position, and
db2_fix_pack_adjustment parameters, as well as the dev/ino of the executable in
db_exec_file
v Can be difficult to debug, so disabling it is recommended if you have trouble
intercepting SHMEM traffic
v K-TAP Informix support is improved to capture traffic, and limitations on
segments have been removed

Default fast TCP verdict

v On by default
v Network/Exclude_Network parameters are supported with the fast verdict
v A maximum of 20 Inspection Engines, each with a maximum of 20
Network/Exclude_Network entries, is supported; beyond that, the feature
gracefully degrades to fast_verdict off. If fast_tcp_verdict is on and more than
the supported maximum of 20 {Inspection Engines, network masks, exclude
network masks} are configured to intercept TCP traffic, then fast_tcp_verdict is
disabled.

Note: Because of the complexity and diversity of environments, there are notes
that, if not read carefully and followed, might cause installations or upgrades to
fail or work improperly. While not all-inclusive, the following sections are listed
to help the reader pinpoint areas that require careful and special attention.
v Live vs. Non-Live K-TAP upgrade (Solaris, AIX, HP-UX)
v UID Chains (Solaris Zones, AIX WPAR, Solaris 8/9, Solaris 11 SPARC)
v Before Installing S-TAP on a UNIX Host (Solaris Zones)
v Maintain UNIX S-TAP with GIM (IBM DB2 pureScale®)
v Install UNIX S-TAP (Linux, AIX)
v Upgrade Procedure Utility (SUSE 11, HP-UX)
v Remove Previous UNIX S-TAP (Manual) (HP-UX, AIX WPAR)
v A-TAP Installation (Solaris Zones)
v A-TAP Configuration (Oracle, DB2)
v A-TAP DB Instance Activation (Solaris Zones)
v A-TAP Configuration Pitfalls and Mistakes (Oracle, DB2, Informix)
v A-TAP Procedure to help ensure A-TAP works with Solaris Zones/Aix Wpars
(Solaris Zones, AIX WPAR, Solaris 10/11)



v A-TAP Procedure when working with Oracle Patch Installations (Solaris, Solaris
Zones)

UNIX S-TAP monitoring mechanisms


Depending on how it is installed and configured, UNIX S-TAP collects traffic by
using various mechanisms. Regardless of the mechanism used, the traffic is
filtered, so that only database-related traffic for specific sets of client and server IP
addresses is collected.
PCAP
PCAP is a packet-capturing mechanism that listens to network traffic from
and to a database server. In a UNIX environment, since the K-TAP captures
all network traffic, PCAP is rarely used. In a Windows environment, PCAP
is used to capture non-encrypted network traffic (except for IA64). Also, on
Linux, PCAP is used to capture local TCP/IP traffic on the lo device.
To use PCAP in a Solaris Zones environment, you must add the IP
addresses of all zones that you want to monitor to the alternate_ips
parameter in the guard_tap.ini file on the Solaris machine.

Tip: The PCAP uses the client IP/mask values for all local inspection
engines to determine what to monitor and report. If the PCAP is installed
with an S-TAP with multiple inspection engines, and those inspection
engines have different client IP/mask values, the PCAP captures traffic
from all clients that are defined in all inspection engines. This can result in
more information being processed and sent to the Guardium system than
you intend.
K-TAP
K-TAP is the recommended mechanism to collect local and network traffic
on a UNIX database server. Unlike the Tee, with K-TAP you do not need to
change how database clients connect to the server. K-TAP is a kernel
module that is installed into the operating system. After it is installed, it
can be enabled or disabled by using a configuration file setting. When
enabled, it observes access to a database server by hooking the
mechanisms used to communicate between the database client and server.
When K-TAP is disabled, the Tee can be used to monitor local traffic.
K-TAP and Tee are almost always mutually exclusive - to monitor local
access you either use K-TAP or the Tee.
At installation time, you will choose whether or not to load the K-TAP
kernel module to the server operating system. This is the only way to load
that module. If you do not load K-TAP initially, and decide later that you
want to use it (instead of the Tee), you will need to remove S-TAP, and
then re-install it.

Note: If K-TAP fails to load properly during installation, possibly because of
hardware or software incompatibility, Tee will be installed as the default
collection mechanism. To switch back to K-TAP after the compatibility issues
have been resolved, follow the steps outlined in Switching from Tee to K-TAP.

Note: On Solaris 11 only - if Tee is not installed initially, a re-install is
required, or the Tee must be installed manually.
A-TAP



Some traffic can only be tapped at the database server application level.
This may be required because the DBMS uses its own encryption, or
because of other internal database implementation details. For these cases,
the A-TAP (application-level tapping) mechanism monitors communication
between internal components of the database server. A-TAP uses K-TAP as
a proxy to pass data to S-TAP, and it must be configured separately for
each database instance to be monitored.
A-TAP is used for monitoring these types of traffic:
v ASO encrypted traffic for Oracle (versions 9, 10 and 11) on AIX, HPUX,
Solaris, and Linux
v SSL encrypted traffic for Oracle (versions 10 and 11) on Linux, AIX,
Solaris, and HPUX (platforms supporting LD_PRELOAD)
v SSL encrypted traffic for Sybase (version 15) on AIX (AIX 5.3 with
LDR_PRELOAD patch or newer only), Solaris (SPARC), and Linux
(32bit).
v Shared memory traffic for DB2 and Informix on Linux
Tee
Tee is a proxy mechanism that reads and forwards traffic from local clients
to a database server. As the Tee receives database traffic, it forwards one
copy to the database server and one copy to S-TAP. When the Tee is used,
database clients must connect to the Tee listening port instead of the
database listening port. This means that you must either modify how the
database client connects to the server, or how the database server accepts
client connections. In either case, this is usually a minor configuration
change to one or two files (depending on the database type) and the end
result is that, as far as the clients are concerned, the Tee is the database,
and as far as the database is concerned, the Tee is the client. All this is
transparent to both the clients and the server - but the configuration
change is required to ensure that the connection is made through the Tee.
When the Tee is used, database clients can bypass the Tee by connecting to
the database listening port (instead of the Tee listening port), or by using
named pipes, shared memory, or other inter-process connection
mechanisms, depending on the database type. For detailed information
about configuring clients to connect to the Tee, see Prepare for Local
Clients to Use the Tee.
We refer to any connections that are not made through the Tee listening
port as rogue connections. When the Tee is used, you can enable an
optional component called the Hunter to watch for, report upon, and
optionally disable rogue connections. The Hunter runs at random intervals,
so it may not detect all such connections, and while it can report on rogue
connections and optionally disable them, it cannot audit what actions were
performed by those connections. Another aspect of the Hunter is that when
it wakes up to hunt for rogue connections, it can be CPU-intensive, so if
you look at the Hunter process at that instant, it may appear to be
consuming a lot of server CPU resources. (The CPU use will drop quickly
after a momentary spike.)

Note: To use the Hunter, version 5.8.0 or later of Perl must be installed in
the /usr/bin/ directory.
K-TAP upgrades - live vs. non-live



K-TAP upgrades support a live and reboot-less upgrade through the use of
the mandatory parameter KTAP_LIVE_UPDATE. This parameter must be set
during every upgrade and is controlled through the GUI or
BUNDLE-STAP/KTAP installers.
v Before running live update, either through GIM or shell installers, you
must make sure no process is using the K-TAP device. S-TAP must be
stopped and A-Tap must be deactivated. Run fuser /dev/ktap_xxx or
lsof | grep ktap_xxx (where xxx is the old version number) to see if
any process is holding the device open. Failure to do so can result in
unpredictable behavior.
v From the GUI, the new K-TAP parameter KTAP_LIVE_UPDATE is uninitialized
(blank) every time you want to upgrade. Just like any other uninitialized
mandatory parameter, it must be set before continuing with the upgrade
process.
The valid values for the new parameter are:
– Y/y – For a live (reboot-less) K-TAP upgrade
– N/n – non-live K-TAP upgrade (requires system reboot in order to
complete the upgrade)
v When upgrading K-TAP by running the KTAP/BUNDLE-STAP installers
directly on the DB server, you must specify a new argument on the installer
command line: --live_update [Y|N].
v After a K-TAP live upgrade:
– The first SQL for an existing session after updating K-TAP will not be
captured.
– Existing A-TAP sessions on Solaris local zones will not be logged.
– It is possible that some processes are still referencing memory in the old
K-TAP module. In this scenario, the module refuses to free the resources,
to prevent future instability. When this happens, the user should, after
those resources are no longer being used, try a manual cleanup by running
the guard_ktap_cleanup script that is kept in the ktap directory.
– On HP-UX 11.11, the old K-TAP module will no longer be installed,
but it will still show up as registered when you execute kmadmin -s |
grep tap. The module needs to be manually unregistered with
kmmodreg -U ktap_<version>.
– On Solaris and AIX, the old dev-nodes will not be automatically
deleted after a reboot and they need to be removed manually.
Exceptions:
v If the DB server has a K-TAP version that was not installed through GIM,
and that non-GIM K-TAP version is not the same as the K-TAP version
being installed, the value of KTAP_LIVE_UPDATE is ignored, since an
upgrade from a non-GIM version requires a system reboot
v In a scratch installation, the KTAP_LIVE_UPDATE value is ignored
v If the system is being upgraded from a non-GIM version to the same
GIM version, the system does not need to be rebooted
v You can NOT reinstall a previously installed K-TAP version without
rebooting the machine
Error Handling:



v In the event of a failure, it is extremely important to check the GIM
Events List report, since some failures require a system reboot in order to
fully recover.

Note: On AIX only, K-TAP will fail to load during an S-TAP installation or
upgrade if the ODMDIR environment variable is not defined. ODMDIR is the
Object Data Manager Directory. ODM is a database of system and device
configuration information integrated into the OS. It is intended for storing system
information, software information, and device information. All ODM
commands use the ODMDIR environment variable, which is set in the file
/etc/environment. The default value of ODMDIR is /etc/objrepos.
K-TAP and UID Chains
UID chain is a mechanism that allows S-TAP (by way of K-TAP) to track
the chain of users that occurred prior to a database connection. For
example, a user may have changed users several times before connecting
to the database; perhaps they ran ssh informix@barbet, su - db2inst1, su -
, su - oracle9, before finally running sqlplus scott/tiger@onora1. With
UID chains, Guardium can trace this process back to the process that
called it, and so on, back to the original (offending) user.

Note:
v For Solaris Zones, we may have the user ids instead of user names in
the UID Chain.
v For Solaris Zones and AIX WPAR, db2bp_path in the guard_tap.ini file
should point to the full path of the db2bp executable, the full path of the
relevant db2bp as seen from the global zone/wpar.
v No UID Chains for Inter-process Communication (IPC) on Solaris 8/9.
v UID chains are not detected for Hadoop databases.
v When using any database, the UID chain is not logged for all sessions if
the session is very short.
v Setting hunter_trace is required for TCP/IP connections on UNIX
S-TAP and should be set as follows:
– The HUNTER_TRACE parameter can be set to 0 to disable or 1 to
enable UID chain
– For regular installations, setting hunter_trace=1 enables uid_chain
for local TCP/IP connections.
– For appserver connections, you need to set hunter_trace=2.
– For Solaris zones and AIX WPARs, you need to set hunter_trace=3 to
capture zone/WPAR connections.
v Local TCP is not supported for UID chain on Linux for DB2. In addition,
DB2 exit requires a specific version of the database to support UID
chains.
v When S-TAP is running as a user (not root), UID chain does not work for
DB2 Shared Memory (SHM).
Purging of UID Chain Records
UID Chain Records older than 2 hours are purged when the regular
inference process runs. Also, records older than one day are purged on a
nightly basis.
Discovery Agent



The Guardium Discovery Agent is an optional software agent installed on
a database server system. Its purpose is to detect database instances
running on the database server and report them to the Guardium system.
For the Discovery Agent to work, the following steps and prerequisites
must be followed:
v GIM client must be installed on the database server
v S-TAP bundle must be installed first on the database server utilizing the
GIM installation method
v All databases on the database server that you would like discovered
must be started
v Install the Discovery bundle through GIM
v On Solaris zones architecture, when DB2 instances are running on slave
zones, Discovery will not discover the DB2 shared memory parameters
After the Discovery Agent has been installed, newly discovered databases
can be seen in the Discovered Instances report. From this report,
datasources and inspection engines can quickly be added by using the
GuardAPI Input Generation tools.
If databases on the database server are not operational (started) or will be
added later, the Discovery Agent can still discover these instances if the
Discovery Agent is cycled (disabled and then enabled) on that database
server. See the “GIM - GUI” on page 172 for more information on
modifying installed module parameters. For the Discovery Agent, this
requires changing the discovery_enabled parameter to 2 (disabled), and
then back to 1 (enabled), to cycle properly.

Note: The Discovery Agent reports its findings back to the primary S-TAP
target, NOT to the system listed as GIM_URL or secondary S-TAP target.

Maintain Unix S-TAP with GIM

The automatic and simple installation capabilities of the Guardium Installation
Manager (GIM) make it the primary installation method for Guardium modules
such as S-TAP and CAS in a Unix environment. After a simple wizard-driven
installation of a GIM Client on the database server, installation of modules can
easily be scheduled from the Guardium system (GIM Server).

See Installing GIM on the Database Server (Unix) and Guardium Installation
Manager (GIM) - GUI for additional information on installing and using GIM to
install Guardium components in a UNIX environment.

Note: If A-Tap is being used, A-Tap must first be disabled on the database server
before performing a GIM-based S-TAP upgrade or uninstall.

Note: In an IBM DB2 pureScale environment, perl must be accessible during a
reboot to ensure S-TAP is started.

Maintain UNIX S-TAP without GIM



While GIM has been provided for ease of installation and management of
Guardium components, there are still environments that may benefit from a more
manual approach or fine-tuning of the installation at a more granular level. The
following section is provided for those environments.



v Install Unix S-TAP
v Install S-TAP from the Command Line
v Install CAS from the Command Line
v Upgrade Procedure Utility
v Remove Previous Unix S-TAP (Manual)
v Command Line Update for K-Tap (Manual)
v Stop Unix S-TAP
v Restart Unix S-TAP
v Determine Unix S-TAP Version Number
v Use Unix S-TAP Native Installers

Stop Unix S-TAP


Depending on the method of S-TAP installation, you can stop S-TAP by:
GIM Installation
You can use GIM to stop S-TAP without ever having to log into the
database server. Complete the following steps to change the STAP_ENABLED
parameter and schedule the change on the database server.
1. Click Manage > Install Management > Setup by Client to open the
Client Search Criteria.
2. Perform a filtered search of registered clients or click Search to view
all of the registered clients.
3. Select the clients that will be the target for the action (stopping S-TAP)
v If there are more than 20 clients, then the list of clients will be split
onto additional pages.

Note: Clicking the Select All button will only select the clients on
the current page being viewed.
4. Click Next to open the Common Modules panel.
5. Select the Module for S-TAP
6. Click Next button to open the Module Parameters panel.
7. Select the client that will be the target for the action (stopping S-TAP).
8. Change the STAP_ENABLED parameter to 0 (zero).
9. Click Apply to Clients to apply to the targeted clients.
10. Click the Install/Update button to schedule the update to the targeted
clients. This update can be scheduled for NOW or some time in the
future.
On the database host itself, you can stop S-TAP (and all other GIM
modules except GIM itself) by stopping GIM's supervisor service with the
command stop gsvr_<release number>. Use initctl list to get the list of
service statuses.
Non-GIM Installation
1. Log on to the database server system by using the root account.
2. For all non-Red Hat Enterprise Linux 6
a. Open the /etc/inittab file for editing.
b. Locate and comment the following two statements in the
/etc/inittab file, by inserting a comment character (: for AIX, # for
all others) at the start of each statement:



utap:2345:respawn:/usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/guard_
c. Optional. If you are using the TEE monitoring mechanism, comment
the following two statements by inserting a comment character
(colon : for AIX, pound sign # for all others) at the start of each
statement.

Note: These processes are not used in the default configuration, so
the statements may be commented already.
#utee:2345:respawn:/usr/local/guardium/guard_stap/guard_tee /usr/local/guardium/guard_
#hsof:2345:respawn:/usr/local/guardium/guard_stap/guard_hnt
d. Run the init q command to restart the S-TAP processes.
3. For Red Hat Enterprise Linux 6
a. List the currently running agents by using the operating system
command initctl list. The output shows the agents that are listed
as in the following example:
gim_33264 start/running, process 910
gsvr_33264 start/running, process 2552
b. Stop each of the agents that might be running by using the stop
<agent> command where agent would be the first entry in the list
from Step 3a.
stop gim_33264
stop gsvr_33264
stop guard_utap
Use stop guard_utap to stop the S-TAP or stop guard_tee to stop
the TEE mechanism of the S-TAP agent.
4. Run ps -ef | grep stap to verify that the S-TAP processes have been
stopped.
5. From the administrator portal of the Guardium system to which this
S-TAP was reporting, verify that the Status light in the S-TAP control
panel is now red.

Restart Unix S-TAP

Depending on the method of S-TAP installation, you can restart S-TAP by:
GIM Installation
Use GIM to start S-TAP without ever having to log into the database
server. Complete the following steps to change the STAP_ENABLED parameter
and schedule the change on the database server.
1. Click Manage > Install Management > Setup by Client to open the
Client Search Criteria
2. Perform a filtered search of registered clients or click Search to
perform an unfiltered search of all registered clients.
3. Select the clients that will be the target for the action (starting S-TAP)
v If there are more than 20 clients, then the list of clients will be split
onto additional pages.

Note: Clicking Select All will only select the clients on the current
page being viewed.
4. Click Next to open the Common Modules panel.
5. Select the Module for S-TAP.
6. Click Next to open the Module Parameters panel.



7. Select the client that will be the target for the action (starting S-TAP).
8. Change the STAP_ENABLED parameter to 1 (one).
9. Click Apply to Clients to apply to the targeted clients.
10. Click Install/Update to schedule the update to the targeted clients.
This update can be scheduled for NOW or some time in the future.
Non-GIM Installation
1. Log on to the database server system by using the root account.
2. For all non-Red Hat Enterprise Linux 6
a. Open the /etc/inittab file for editing.
b. Un-comment the following two statements by deleting the comment
character (: for AIX, # for all others) at the start of each line:
#utap:2345:respawn:/usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/gu
c. Optional. If you are using the TEE monitoring mechanism,
un-comment the following two statements by deleting the comment
character (: for AIX, # for all others) at the start of each line.

Note: These processes are not used in the default configuration and
must not be started if you are using the K-Tap monitoring
mechanism.
#utee:2345:respawn:/usr/local/guardium/guard_stap/guard_tee /usr/local/guardium/guar
#hsof:2345:respawn:/usr/local/guardium/guard_stap/guard_hnt
d. Run the init q command to restart the S-TAP processes.
3. For Red Hat Enterprise Linux 6
a. List the currently running agents by using the operating system
command initctl list. The output shows the agents that are listed
as in the following example:
gim_33264 start/running, process 910
gsvr_33264 start/running, process 2552
b. Start each of the agents that is not running by using the start
<agent> command, where agent is the first entry in the list
from a. See the following example.
start gim_33264
start gsvr_33264
start guard_utap
Use start guard_utap to start the S-TAP or start guard_tee to start
the TEE mechanism of the S-TAP agent.
4. Run ps -ef | grep stap to verify that S-TAP is running.
5. From the administrator portal of the Guardium system to which this
S-TAP reports, verify that the Status light in the S-TAP control panel is
green.

Stop and start S-TAP using Solaris services in Solaris 10 and 11


Solaris 10 and 11 no longer use inittab. Instead, Solaris services are used to stop
and start the S-TAP. Use the svcadm utility.
Stop
-bash-3.00# svcadm -v disable guard_utap
svc:/site/guard_utap:default disabled.
-bash-3.00# ps -eaf | grep stap
root 2375 1930 0 14:25:36 pts/2 0:00 grep stap



Restart
-bash-3.00# svcadm -v enable guard_utap
svc:/site/guard_utap:default enabled.
-bash-3.00# ps -eaf | grep stap
root 2379 1 0 14:25:57 ? 0:00
/usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/guard_
root 2396 1930 0 14:26:00 pts/2 0:00 grep stap
-bash-3.00# svcs guard_utap
STATE STIME FMRI
online 14:25:56 svc:/site/guard_utap:default
-bash-3.00#

Determine UNIX S-TAP Version Number


From the administrator portal of a Guardium server for an S-TAP, the S-TAP
version number is displayed in the S-TAP Status Monitor report on the System
View tab.

If the administrator portal is not available, you can display the S-TAP version
number from the UNIX command line of the database server, by running the
guard_stap binary with the -version or --version argument.

To check the UNIX S-TAP version, assuming S-TAP has been installed in the
default installation directory, enter the following command:
-bash-3.2# <guardium_base>/modules/STAP/current/guard_stap --version
or
-bash-3.2# <guardium_base>/guard_stap/guard_stap --version
STAP-doberman_r20511_1-20100728_0514

How to determine DB2 parameters

DB2 interception in S-TAP relies on the following parameters:


Table 12. DB2 Parameters

Packet header size
S-TAP name: db2_fix_pack_adjustment
A-TAP name: db2_header_offset
Default value: 20
Comment: The default value is tested for DB2 8.2 and newer on various 64-bit platforms. Other versions of DB2 and 32-bit platforms might need a different offset. The usual suspects are 16 and 12.

Client I/O area offset
S-TAP name: db2_shmem_client_position
A-TAP name: db2_c2soffset
Default value: 61440
Comment: This parameter is derived from the ASLHEAPSZ DB2 parameter.

DB2 shared memory segment size
S-TAP name: db2_shmem_size
A-TAP name: db2_shmsize
Default value: 131072
Comment: This parameter is determined empirically. See the sequence of commands that can be used to get it.

Note: If there are multiple DB2 instances that are configured for a single WPAR in
guard_tap.ini file and they have the same db2_shmem_size, then the
db2_fix_pack_adjustment and db2_shmem_client_position configured in the first
DB2 section for that WPAR will be returned. So in cases where there are multiple
DB2 instances running on the WPAR:
1. If all DB2 instances have the same db2_shmem_size, db2_fix_pack_adjustment,
and db2_shmem_client_position, the packets from all instances will be collected
even if only one instance is configured.



2. If all DB2 instances have the same db2_shmem_size, but different
db2_fix_pack_adjustment or db2_shmem_client_position, then only packets
from the first configured DB2 instance will be collected.

Computing Client I/O area offset (db2_shmem_client_position)


1. Open a new bash shell as the db2 instance user.
2. Run the ps -x command to verify that the db2bp command processor is not
currently running for this shell. You should not see a command called db2bp
running. If it is running, either kill it or start a new shell.
3. Run the following command:
db2 get database manager configuration | awk '/ASLHEAPSZ/{print $9 * 4096}'
The output is the required value for db2_shmem_client_position.
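For example, assuming ASLHEAPSZ is at the common DB2 default of 15 (an assumption for illustration), the command prints the documented default offset:

db2 get database manager configuration | awk '/ASLHEAPSZ/{print $9 * 4096}'
61440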

The ASLHEAPSZ parameter is specified in 4K memory pages in DB2. It determines
the size of the application support layer heap; the client I/O area starts after the
application heap in the Agent/Application shared memory segment.

Note: The theory behind this computation is based on the IBM DB2 Universal
Database Administration Guide: Performance manual, which describes the DB2
shared memory layout.

Finding DB2 shared memory segment size (db2_shmem_size)

ATAP and KTAP rely on the size for identification of the Application/Agent shared
memory segments. These segments are then tapped for C2S and S2C packets.

The segment size is not simply the sum of the ASLHEAPSZ and RQRIOBLK parameters;
DB2 allocates much larger segments. In most cases, the size is equal to (ASLHEAPSZ + 1) *
2 pages, or (ASLHEAPSZ + 1) * 8192 bytes. The exact size can be determined by
observing the shared memory segments in the system before and after a new
DB2 local connection is created.

The following sequence of commands helps you to determine the shared memory
segment size.

The ipcs command parameters and output format differ from platform to platform. The
following script is based on the AIX version.
ipcs -ma | sort -n -2 +3 > /tmp/before.txt
db2 connect to <some_existing_database>
ipcs -ma | sort -n -2 +3 > /tmp/after.txt
db2 terminate
diff /tmp/before.txt /tmp/after.txt | awk '{if ($10 == 2) print $11}'

It is always a good idea to verify the result. It should be equal to, or at least close to, the
output of the following command:
db2 get database manager configuration | awk '/ASLHEAPSZ/{print ($9 + 1) * 8192}'
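Continuing the same assumption (ASLHEAPSZ = 15), this verification command prints 131072, which matches the db2_shmem_size default:

db2 get database manager configuration | awk '/ASLHEAPSZ/{print ($9 + 1) * 8192}'
131072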

Finding the DB2 shared memory segment size


Follow these steps to find the segment size:
1. Start a DB2 shared memory connection and keep it open.
2. Run this command to get the process ID for db2sysc: ps -eaf | grep db2sysc
The output looks like the following:
db2inst1 5309370 5505772 0 Nov 11 - 1232:12 db2sysc 0



In this example, the process ID is 5309370.
3. Run this command to retrieve information about shared-memory processes:
ipcs -ma The output looks like the following:
IPC status from /dev/mem as of Wed Nov 20 13:21:45 CST 2013
T ID KEY MODE OWNER GROUP CREATOR CGROUP NATTCH SEGSZ CPID
m 2097152 0xffffffff D-rw------- pconsole system pconsole system 1 536870912 4522088
m 1 0x78000015 --rw-rw-rw- root system root system 3 16777216 3605314
m 2 0x78000016 --rw-rw-rw- root system root system 3 268435456 3605314
m 219152387 0xffffffff D-rw------- root system root system 1 536870912 5243842
m 1048580 0x61013002 --rw------- pconsole system pconsole system 1 10485760 4522088
m 10485765 0xd9fd8a61 --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 5 47644672 5571082
m 9437190 0xd9fd8a74 --rw-rw-rw- db2inst1 db2iadm1 db2inst1 db2iadm1 9 140852104 5571082
m 9437191 0xe1bd8858 --rw-rw---- oracle dba oracle dba 40 53687107584 3801352
m 3145736 0x52594801 --rw-rw---- root informix root informix 13 223019008 5702650
m 3145737 0xd9fd8b68 --rw-rw---- db2inst1 db2iadm1 db2inst1 db2iadm1 1 58720256 6619354
m 3145738 0xffffffff --rw------- db2fenc1 db2fadm1 db2inst1 db2iadm1 7 268435456 5505772
m 11 0x52594802 --rw-rw---- root informix root informix 13 33439744 5702650
m 12 0x52594803 --rw-rw-rw- root informix root informix 13 573440 5702650
m 13 0xf2033f7e --rw------- sybase15 sybase sybase15 sybase 1 115564544 5178168
m 409993231 0x52594804 --rw-rw---- informix informix informix informix 13 8388608 5702650
m 763363344 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 268435456 5309370
m 125829140 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 2 131072 5309370
m 201326613 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 163905536 5309370
m 103750230 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 134217280 5309370

The output contains several columns beyond those shown here, but they do not
affect this procedure. Find the line that contains the process ID that was
identified in step 2 and also has a value of 2 under NATTCH. The DB2
shared-memory segment size is the value in the SEGSZ column. In this
example, it is 131072.
4. Tip: if the list returned in step 3 is too long, you can filter it by using the
process ID. In this case, you would enter ipcs -ma | grep 5309370. The results
do not contain the column headers, but you can look at the previous results to
see the column headers and identify the correct line and column. In this
example, it is the last line.
m 131072014 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 1342177280 5309370
m 763363344 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 268435456 5309370
m 227541013 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 1 163905536 5309370
m 106353238 0xffffffff --rw------- db2inst1 db2iadm1 db2inst1 db2iadm1 2 131072 5309370

Sybase 15 SSL Encryption Support


SSL-encrypted traffic interception for Sybase 15 is supported on the following
platforms:
v Linux: all supported distributions and versions
v Solaris 8, 9, 10 (SPARC only)
v AIX 5.3 (with LDR_PRELOAD patch)
v HPUX 11.00, 11.11, 11.23 and 11.31 (PARISC only)
Both guardctl and GUI-based invocation methods are supported. If the GUI-based
method is used, the Sybase OS user must be added to the Guardium group before
invocation. In the Sybase inspection engine definition, the DB executable path
parameter (db_exec_path in guard_tap.ini) must be specified and contain the full
path to the Sybase executable (dataserver).
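For example, the corresponding guard_tap.ini entry might look like the following (the installation path is an illustrative placeholder):

db_exec_path=/opt/sybase/ASE-15_0/bin/dataserver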



S-TAP Discovery
Enable S-TAP to periodically discover database instances and send the results to
the S-TAP's currently active Guardium system.

Overview

Use the Guardium S-TAP Discovery application to periodically discover database
instances and send them to the S-TAP's currently active Guardium system. If the
S-TAP is being run as a user, the discovery functionality is limited, and the
following message is displayed:
WARNING: Discovery is enabled and STAP is running as user guardium.
The discovery function is limited when STAP runs as user guardium.
Discovery is most effective when 'tap_run_as_root=1'

Note: S-TAP Discovery is not supported on AIX 5.3 because static libraries are
needed on that platform.

Note: To avoid cases where S-TAP Discovery cannot open the Informix database,
it is recommended to start the databases by using the full path to the
executable.

The parameters of the S-TAP Discover application are described in the following
table.
Table 13. Parameters
Parameter Description
tap_ip The value of this parameter determines the name of the
S-TAP that the S-TAP Discovery application uses in its
results. This parameter does not affect the inspection
engines that the S-TAP Discovery creates. It is used for
associating discovered instances with an S-TAP host.
sqlguard_ip This parameter determines where to send the results of
the S-TAP Discovery application.
discovery_interval This parameter specifies how long the S-TAP waits in
between runs of the S-TAP Discovery application. The
unit is in hours. Specifying 0 disables S-TAP Discovery
from running automatically. The default is 24 hours.
discovery_dbs This parameter is a colon (':') separated list of database
types for S-TAP Discovery to look for. The default is
"oracle:db2:informix:mysql:postgres:sybase:hadoop".
discovery_debug This parameter determines the level of logging for S-TAP
Discovery. The default value of 0 logs only errors. A value
of 1 logs both errors and debug statements.
discovery_ora_alt_locations This parameter specifies alternative locations to look for
listener.ora files in a comma (',') separated list.
discovery_port This parameter defines which port S-TAP Discovery uses
when it connects to the Guardium system. The default
port number is 8443.
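As a sketch, the discovery-related entries in guard_tap.ini might look like the following (the values are illustrative; the parameter names are those listed in Table 13):

discovery_interval=24
discovery_dbs=oracle:db2:informix
discovery_debug=0
discovery_port=8443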

If a scheduled discovery is running, and a new request comes in from the user
interface for running discovery, the new request is ignored.



Configuration
The S-TAP Discovery application uses the UNIX configuration file, guard_tap.ini,
for its configuration. The application has several of its own parameters in this file
and relies on them to function.

S-TAP Discovery can be run manually, but this is not suggested; the main reason
to run it manually is for debugging purposes.

To send results to the Guardium system, run:
<absolute path to guard_discovery binary>/guard_discovery <path to guard_tap.ini>/guard_tap.ini

To print results to stdout instead, add the --print-output option:
<absolute path to guard_discovery binary>/guard_discovery <path to guard_tap.ini>/guard_tap.ini
--print-output

A-TAP
The A-TAP monitors communication between internal components of the database
server.

Some traffic can only be tapped at the database server application level. This may
be required because the DBMS uses its own encryption, or because of other
internal database implementation details. For these cases, the A-TAP
(application-level tapping) mechanism monitors communication between internal
components of the database server. A-TAP uses K-TAP as a proxy to pass data to
S-TAP, and it must be configured separately for each database environment.

A-TAP can be controlled from the guard_tap.ini parameter file, by the guardctl
utility, or, on some platforms, A-TAP can also be activated from the S-TAP
configuration.

The guardctl utility is installed under the <guardium_base>/bin directory, where
<guardium_base> is the directory where the Guardium software is installed. By
default, <guardium_base> is /usr/local/guardium. In the case of a GIM
installation, guardctl is installed under <guardium_base>/modules/ATAP/
current/files/bin.

The guardctl utility provides commands that facilitate different aspects of A-TAP
installation, activation, deactivation, uninstallation and upgrade.

To use the guardctl utility, you must log in as root, since it requires superuser
privileges.

Note: The guardctl utility requires version 3 or greater of bash. Enter bash
--version at the command prompt to display the current version.

Syntax
<guardium_base>/xxx/guardctl [<name>=<value>] [<name>=<value> ...] [command]

Commands
v help - default command, prints the list of supported commands, parameters and
their default values



v store-conf - allows storing parameter values as defaults
v list-active - lists DB instance user names of all active DB instances
v is-active - returns 1 if there is at least one active instance, 0 otherwise
v activate - activates DB instance. Requires DB-specific parameters
v deactivate - deactivates one DB instance
v deactivate-all - deactivates all active DB instances
v instrument - Create relinked instrumented Oracle for AIX ATAP activation
v deinstrument - remove instrumented Oracle
v is-user-authorized - checks if db-user is authorized to log information
v authorize-user - adds the user to 'guardium' authorization group

For most of these commands the db_instance parameter is mandatory.
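For illustration, assuming a default non-GIM installation under /usr/local/guardium and a hypothetical instance identifier orcl, typical invocations look like:

/usr/local/guardium/bin/guardctl help
/usr/local/guardium/bin/guardctl list-active
/usr/local/guardium/bin/guardctl db_instance=orcl deactivate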

Configuring the A-TAP


After installing and configuring the S-TAP, then configure the A-TAP.

The following table summarizes the configuration parameters that have to be
specified for different databases.

Table 14. A-TAP Configuration

Common (all database types):
v db_instance - A unique identifier of the instance. For Oracle, use $ORACLE_SID; for Informix and DB2, use the OS user name or DB instance name. No default value. Mandatory: yes.
v db_user - OS user name for this DB instance. Has to be specified explicitly even if the user name is used as db_instance. The value must match the value for db_install_dir in the guard_tap.ini file. No default value. Mandatory: yes.
v db_type - DB type (oracle, informix, or db2). No default value. Mandatory: yes.
v db_base - The db_user home directory. The value for db_base must match the correct path for $ORACLE_BASE or the DB instance user home directory. It cannot be ~DB_USER. Default value: home directory of the DB instance user. Mandatory: no.
v db_bits - DB instance architecture (32 for 32-bit, 64 for 64-bit). Default value: guessed based on the DB executable. Mandatory: no.

Oracle (oracle) (1):
v db_home - Where the DB software is installed. Default value: the $ORACLE_HOME value if defined, db_base otherwise. Mandatory: yes.
v db_version - DB instance version. Default value: any (has to be set to a numerical value on AIX). Mandatory: no (yes on AIX).
v db_relink - A-TAP activation method. Default value: no (yes on AIX). Mandatory: no.
v db_use_instrumented - A-TAP activation uses the relinked version of Oracle previously created with the instrument command. Default value: no (yes on AIX). Mandatory: no.

Informix (informix):
v db_home - Where the DB software is installed. Default value: db_base. Mandatory: no.
v db_version - DB instance version. Default value: any (has to be set to a numerical value). Mandatory: yes.
v db_info - Additional DB info file. Default value: /INFORMIXTMP/.inf.sqlexec. Mandatory: no.

DB2 (db2) (2):
v db_home - Where the DB software is installed. Default value: db_base. Mandatory: no.
v db2_shmsize - DB2 shared memory size. Default value: 131072. Mandatory: yes.
v db2_c2soffset - DB2 shared memory client-to-server area offset. Default value: 61440. Mandatory: yes.
v db2_header_offset - DB2 shared memory header offset. Default value: 20. Mandatory: yes.
v db_version - DB instance version. Default value: any. Mandatory: no.

Sybase (sybase):
v db_home - Where the DB software is installed. Default value: db_base. Mandatory: no.
v db_version - DB instance version. Default value: any (has to be set to a numerical value). Mandatory: yes (has to be set to 15).

(1) If the Oracle Listener and all Oracle instances are not running under the same
user, all users must belong to the same group (a shared one) in order to capture
Oracle TCP traffic. In addition, on HPUX, the HP-2005-security-patch is required.
(2) The DB2 shared memory-related parameters should be determined at installation
time using the procedure described in DB2 Linux S-TAP Configuration Parameters.
store-conf command
Use the store-conf command to name and store the configuration of an
instance of the database for future use. These stored configurations may
later be used for A-TAP activation and deactivation.
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [<name>=<value> ...] store-conf
The value specified for instance (db_instance parameter) can be used later
to reference this configuration in other guardctl commands.



The stored configuration may later be retrieved by using the default command:
<guardium_base>/xxx/guardctl db_instance=<instance>
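As a sketch, storing and then retrieving a configuration for a hypothetical Oracle instance orcl (the paths and values are placeholders; see Table 14 for the parameters):

/usr/local/guardium/bin/guardctl db_instance=orcl db_user=oracle db_type=oracle db_home=/u01/app/oracle/product/11.2.0/dbhome_1 store-conf
/usr/local/guardium/bin/guardctl db_instance=orcl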

ATAP configuration on AIX and Oracle


AIX
On AIX, there is a step in the configuration of A-TAP that does not exist on
other platforms - instrumentation. This step is only used for the Oracle
database (the step is also called Oracle relink). This extra step must be
done prior to activating A-TAP.

Note: The database must be stopped before changing the guard_tap.ini
parameter encryption from 1 to 0. Otherwise, decrypted traffic will
continue to be intercepted. Stop the database, make the change, and then
restart the database.
To configure A-TAP with instrumentation (overview):
v Authorize the user with guardctl authorize.
v Instrument with guardctl instrument, specifying db-use-
instrumented=yes
v Activate either (1) manually, with guardctl activate, specifying
db-use-instrumented=yes or (2) automatically, by setting encryption=1 in
the inspection engine

Note: For the installation directory, db_base in the A-TAP configuration
needs to be the same as the directory in the Inspection Engine; otherwise
the S-TAP will complain that A-TAP is active for a database that has no
Inspection Engine.
To configure A-TAP without instrumentation (overview):
v Authorize the user with guardctl authorize.
v activate either (1) manually, with guardctl activate, specifying
db-use-instrumented=no, or (2) automatically, by setting encryption=1 in
the inspection engine.

Note: In a GIM installation, when configuring A-TAP, always run guardctl
from this directory: <installdir>/modules/ATAP/current/files/bin/guardctl
For all versions of AIX, authorize the user first - for example, guardctl
authorize_user oracle11.
For AIX 5.2: no support for A-TAP.
For AIX 5.3 with oslevel (technology level) <= 4: no support for A-TAP.
For AIX 5.3 with oslevel >= 5, and for AIX 6.1 and 7.1: A-TAP is supported.
Oracle
Instrumentation is required in the following cases: Oracle versions 7, 8, 9,
10, and 11.1; and Oracle versions 11.2 or 12 with SSL encryption.
Instrumentation is not required in the following case: Oracle version 11.2
with ASO encryption.
For Oracle 11.1, static instrumentation (Oracle relink) is required.



On Oracle 11.2, if the user is using Oracle ASO encryption, then no
instrument step is needed. However, if the user is using SSL encryption,
then the instrument step IS needed.
For Oracle 8/9/10/11.1, to install A-TAP you need to: authorize the user
(guardctl authorize), instrument the database (guardctl instrument), and
either (a) set encryption=1 in guard_tap.ini or (b) activate the database
(guardctl activate with db-use-instrumented=yes).
For Oracle 11.2, to install A-TAP you need to authorize the user (guardctl
authorize) and activate the database (guardctl activate with
db-use-instrumented=no).
Note that the first method (with instrumentation) also works for Oracle 11.2
and can be safely used in all cases.
For Oracle 12, all platforms must be instrumented in order to capture SSL
traffic.

To show the technology level, use the oslevel -s command

A-TAP Configuration Mistakes

This section summarizes common mistakes made during A-TAP configuration,
their symptoms, and how to avoid them.
Table 15. Oracle Common Mistakes

Wrong db_home parameter (all platforms)
Symptoms: Activation command fails.
How to avoid: Always specify the value of $ORACLE_HOME as the db_home name.

OS user logged in (all platforms)
Symptoms: Activation command fails.
How to avoid: Always make sure the OS user is not logged in. Use the w command to see which users are logged in.

Wrong instance name (all platforms)
Symptoms: Database fails to start.
Error message(s): Failed to execute oracleon1jumbo-guard: No such file or directory: No such file or directory ERROR: ORA-12547: TNS:lost contact
How to avoid: Always specify the value of $ORACLE_SID as the db_instance name.

Wrong or missing db_version (AIX)
Symptoms: Traffic is not logged.
How to avoid: Always specify a numeric version (for example, 10.2 or 9.2). Note - specify the version as one number after the decimal (1-decimal only).

Missing Oracle-guard-instrumented (AIX)
Symptoms: Fails to activate.
Error message(s): Missing Oracle-guard-instrumented.
How to avoid: The instrument command must be run first to create a re-linked instrumented Oracle executable.

Table 16. DB2 Common Mistakes

Wrong or missing db2_* parameter (Linux)
Symptoms: Traffic is not logged.
How to avoid: See How to determine DB2 parameters.

Table 17. Informix Common Mistakes

Wrong or missing db_version (Linux)
Symptoms: Traffic is not logged properly.
How to avoid: Always specify a numeric version (for example, 7 or 11).

Configuring A-TAP to run under Solaris Zones/AIX WPARs

For WPAR: run activate command as root using guardctl utility.

Installing
1. Install STAP/KTAP on the master/global Zone/WPAR by the normal method
2. For Solaris Zones:
v for each sub-zone where Oracle is installed, make sure Guardium device is
mapped:
zoneadm -z <zonename> halt
zonecfg -z <zonename>
<zonename>> add device
<zonename>device> set match=/dev/ktap_xxx (for Solaris 10)
<zonename>device> set match=/dev/guard_ktap (for Solaris 11 on v8.2 and later)
<zonename>device> end
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
v With multiple KTAP devices, repeat the steps for each KTAP device by using
the name, ktap_xxxx (Solaris 10) or guard_ktap_x (Solaris 11).
3. Copy the entire A-TAP installation directory to a sub-Zone/sub-WPAR:
v On the master/global Zone/WPAR:(assuming Guardium software is installed
on the master Zone/WPAR under /usr/local/guardium, and there exists a
writable directory /usr/local with enough free space on the
sub-Zone/sub-WPAR)
cd /usr/local; tar -cvf - guardium | ssh root@subzonehost 'cd /usr/local && tar -xvf -'

Note: For GIM installations, the installation path on the master/global
and sub-Zones/sub-WPARs must be identical. For non-GIM installs, the paths
may be different, although this is not recommended.
4. Copy the A-TAP libraries to each sub-Zone/sub-WPAR:
v If an A-TAP is to be activated on the master Zone/WPAR, activate it
normally using guardctl. (Activation must be done using guardctl; it cannot
be done by setting encryption=1 in the guard_tap.ini file).
v If A-TAP will not be used on the master Zone/WPAR, use guardctl to
prepare the libraries for use. On the master Zone/WPAR:
/usr/local/guardium/bin/guardctl --db_instance=<instance-name> --db_type=<database-type> --d
5. Normally activate A-TAP for database instances using guardctl on each desired
sub-Zone/sub-WPAR:



Note: Activation must be done by using guardctl; it cannot be done by setting
encryption=1 in the guard_tap.ini file.

Note: A-TAP (guardctl) activation may complain and issue warnings about the
following:
v errors installing libraries under /usr/lib (since that directory belongs to the
global/master zone)
v not being able to change the guard_tap.ini to monitor oracle-guard instead
of oracle.
v not being able to restart stap (since it is running only on the master zone)

Note: After A-TAP activation, if the database indicates that libguard-xxx.so
cannot be found, re-check step 4.
6. Adjust the guard_tap.ini file in the master/global Zone/WPAR. Do this by
manually editing the guard_tap.ini file, and changing the appropriate
db_exec_path line as follows:
v For Oracle on Solaris:
– set db_exec_path to oracle-guard-original instead of oracle
v For Oracle on AIX:
– set db_exec_path to oracle-guard-instrumented instead of oracle
Restart S-TAP when changes are completed.
7. For Solaris, verify the guard_ktap link and permissions on each sub-Zone. This
operation needs to be performed as root from the global/master Zone.
v cd to the sub-zone device directory
for example sub-zone device directory=/export/home2/zones/iris3/dev
cd /export/home2/zones/iris3/dev
v Verify that the KTAP device exists (if it does not, there was a problem with
the installation in step 2):
ls -l ktap_*
v Verify that the guard_ktap symbolic link exists:
ls -l guard_ktap
v If it does not exist, create it. (Note: ktap_xxxxx is the device just listed):
ln -fs ktap_xxxx guard_ktap
v Make sure that guard_ktap and ktap_xxxxx are usable by everyone:
chmod 0666 ktap_xxxxx
chmod 0666 guard_ktap
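
As an illustration of the db_exec_path edit in step 6, assume a hypothetical Oracle
home of /u01/app/oracle/product/12.1.0/dbhome_1 (your path and inspection engine
section will differ). The change turns a line such as:
db_exec_path=/u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
into, for Oracle on Solaris:
db_exec_path=/u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle-guard-original
or, for Oracle on AIX:
db_exec_path=/u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle-guard-instrumented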

Uninstalling
1. On every sub-Zone/sub-WPAR with A-TAP installed/active:
v Deactivate (and deinstrument if necessary, i.e. for Oracle on AIX) all A-TAPs
using guardctl
v Manually remove (rm -rf) the installation directory (usually
/usr/local/guardium)
v Manually remove the ATAP libraries:
find /usr/lib -type f -name 'libguard-*.so' | xargs rm -f

Note: removing the libraries may give errors; these can be ignored.
2. On the master/global Zone/WPAR:
v Uninstall STAP/KTAP using the normal method
v Remove the libraries:

find /usr/lib -type f -name 'libguard-*.so' | xargs rm -f
v With multiple KTAP devices, repeat the steps for each KTAP device by using
the name, ktap_xxxx (Solaris 10) or guard_ktap_x (Solaris 11).

Upgrading K-TAP device


1. For Solaris Zone, On the master/global-zone, remove previous installed K-TAP
device
v — remove previous K-TAP device from zone config
zoneadm -z <zonename> halt
zonecfg -z <zonename>
<zonename>> info
if K-TAP device is found, remove it
<zonename>> remove device match=/dev/ktap_xxxx (for Solaris 10)
<zonename>> remove device match=/dev/guard_ktap (for Solaris 11 on v8.2 and later)
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
v — For Solaris 10, remove the previous K-TAP device file and link from the
sub-zone device directory, for example, /export/home2/zones/iris3/dev
cd /export/home2/zones/iris3/dev
rm -f ktap_xxxx guard_ktap
2. For Solaris Zone, On the master/global-zone, add new K-TAP device
v — add new K-TAP device to zone config
zoneadm -z <zonename> halt
zonecfg -z <zonename>
<zonename>> add device
<zonename>device> set match=/dev/ktap_xxxx (for Solaris 10)
<zonename>device> set match=/dev/guard_ktap (for Solaris 11 on v8.2 and later)
<zonename>device> end
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
v — add guard_ktap link and change permission
v — On Solaris 10, go to sub-zone device directory for example sub-zone
device directory=/export/home2/zones/iris3/dev
cd /export/home2/zones/iris3/dev
ln -fs ktap_xxxx guard_ktap
chmod 0666 ktap_xxxx
chmod 0666 guard_ktap
v — On Solaris 11, change guard_ktap permission
v — go to sub-zone device directory, for example sub-zone device
directory=/export/home2/zones/iris3/root/dev
cd /export/home2/zones/iris3/root/dev
chmod 0666 guard_ktap
3. For AIX WPARs, change permission on ktap devices
v — go to wpars device directory, for example:
cd /wpars/odin3/dev
ln -fs ktap_xxxx guard_ktap
chmod 0666 ktap_xxxx
chmod 0666 guard_ktap

A-TAP Procedure when working with Oracle Patch Installations

Oracle patches may invoke relink and will replace the Oracle executable, causing
the A-TAP to stop functioning.

The correct procedure is:
1. Make sure all A-TAP instances are deactivated
2. Apply Oracle patch(es).
3. Activate A-TAP

However, in case A-TAP was not properly deactivated prior to Oracle patch
installation, DO NOT try to deactivate it after patch installation. Instead follow
these steps:
1. Check if A-TAP IS OK.
grep guardium $ORACLE_HOME/bin/oracle >& /dev/null && echo "ATAP IS OK"
a. If ATAP IS OK is displayed, the A-TAP is still active and there is no need to
do anything.
b. If ATAP IS OK is NOT displayed, remove $ORACLE_HOME/bin/oracle-guard
and activate the A-TAP.

In case everything else fails:


v Remove $ORACLE_HOME/bin/oracle-guard
v Run relink all
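
In shell terms, this fallback amounts to the following (a sketch only, run as the
Oracle software owner with ORACLE_HOME set):
rm -f $ORACLE_HOME/bin/oracle-guard
$ORACLE_HOME/bin/relink all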

A-TAP Problems And Solutions associated with Oracle


Permissions

Several problems may occur that have to do with user and group permissions.
v For 'BEQUEATH' access from a user other than the one that installed the
database, the permissions have to be set manually:
– add the user running sqlplus to the group 'guardium'
– open the read permissions ('chmod a+rx') on the following two directories:
/usr/local/guardium/xxx/etc/guard
/usr/local/guardium/xxx/etc/guard/executor
– make sure that the SUID and SGID bits are set on ${ORACLE_HOME}/bin/oracle.
If not, run the command chmod ug+s ${ORACLE_HOME}/bin/oracle
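
As an illustration only, assuming the xxx placeholder in the paths above is
guard_stap on your system and the local user running sqlplus is appuser (both
hypothetical, verify locally), the manual setup would look like this (list any
other secondary groups the user already belongs to in the -G value, as noted
elsewhere in this chapter):
usermod -G dba,guardium appuser
chmod a+rx /usr/local/guardium/guard_stap/etc/guard
chmod a+rx /usr/local/guardium/guard_stap/etc/guard/executor
chmod ug+s ${ORACLE_HOME}/bin/oracle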

Adding all local users of BEQUEATH traffic


It is necessary to add all local users of BEQ traffic to the Guardium group in order
for their decrypted traffic to be intercepted. For some customers, this is an
unwieldy process given the number of users. In this case, those customers can
instead set the permission of the KTAP device node pointed to by
/dev/guard_ktap (/dev/ktap_revision) to be world read/write.

For example

root@ub10u4x64t:~# ls -lL /dev/guard_ktap

crw-rw---- 1 root guardium 251, 0 Jul 21 14:22 /dev/guard_ktap

root@ub10u4x64t:~# chmod 0666 /dev/guard_ktap

root@ub10u4x64t:~# ls -lL /dev/guard_ktap

crw-rw-rw- 1 root guardium 251, 0 Jul 21 14:22 /dev/guard_ktap

root@ub10u4x64t:~#
Activate and deactivate your A-TAPs
A-TAP Database Instance Activation

Use the activate command to activate an A-TAP. The A-TAP must be activated for
each DB instance to be monitored on the server. Note the following:
v A-TAP cannot be activated or deactivated while the DB instance is up and
running.
v A-TAP activation relies on stored configuration for given instance.
v A-TAP parameters may also be specified on the command line. Command line
parameters override the stored ones.
v Operating system users for the DB instances have to be completely logged off
from the system during DB instance activation.
v A-TAP has to be deactivated prior to any upgrade of the Database server.
v For Oracle on AIX, the instrument command must be used before activating the
A-TAP, whether activation is done with the activate command or by setting
encryption=1 in the .ini file.
v Enabling encryption in the inspection engine is only supported on AIX, HP-UX,
and Solaris. It is not supported in Linux, WPAR, or zones environments. Enable
encryption using encryption=1 in the guard_tap.ini file or from the S-TAP
Control > Edit S-TAP Configuration screen in the Guardium user interface.
v In a GIM installation, every zone has to be populated with libguard-* as well
(see Solaris Zones 2.)
v For a multi-instance configuration where a single executable is used for all of the
instances, guardctl activate should only be done once as it will be effective for
all instances.
v For Solaris Zones and WPARs, to make A-TAP work on zone architectures, the
/usr/local file system on the sub-zone system has to be readable and writable.
Instrument command
To instrument an Oracle executable (needed on AIX), use the following
syntax:
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ <name1=value1> ... <nameN=valueN> ] instrument
activate command
A-TAP activation can either be done from the guard_tap.ini file (via
encryption=1) on Solaris (not on Solaris zones) and HP-UX only, or by
issuing the following command:
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ <name1=value1> ... <nameN=valueN> ] activate

Note: Command line parameters (if specified) supersede those stored for
given instance. The parameters are stored for future use, overwriting
previously specified ones.

Note: After Oracle instrument has been issued, the monitoring has to be
activated as well.
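
For example, to instrument (AIX only) and then activate a hypothetical Oracle
instance named orcl, using the installation path shown earlier in this chapter
(instance name, database type, and path are placeholders; parameter spelling
follows the syntax shown above):
/usr/local/guardium/bin/guardctl --db_instance=orcl --db_type=oracle instrument
/usr/local/guardium/bin/guardctl --db_instance=orcl --db_type=oracle activate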

Activate A-TAPs from the Guardium system
The following table summarizes a list of platforms and supported versions.

Solaris: versions 9, 10, 11. See special instructions for Solaris zones.
HP-UX: versions 11.11, 11.23, 11.31.
AIX: versions 5.3, 6.1, 7.1. Use the new instrument command first to create a
relinked Oracle, and be sure db_use_instrumented=yes and db_relink=yes
(defaults for AIX).
Linux: not supported.

On the S-TAP control screen of the Guardium system, check the 'Encryption'
checkbox in the inspection engine definition screen. Note that A-TAP activation
through 'Encryption=1' is not possible within a Solaris subzone.

In order to enable A-TAP using this method on supported platforms, follow these
steps:
v Upon installation of S-TAP on the host, the Oracle OS user has to be added to
the Guardium group. The group is created by S-TAP install script. Some
platforms require the user to be completely logged off in order for this change to
take effect.
– On Solaris, the user has to be completely logged off from the system.
– No process should be running in the system under this user id.
– In order to verify this, use the following command (assuming the user is
Oracle):
ps -efU oracle
– If the output is empty, use the following command to add the user to the
group:
usermod -G dba,guardium oracle
– Please note that if the user belongs to groups other than dba, they should be
listed as well. The latter can be verified using the following command:
id -a oracle
– Once the user is added to the Guardium group, the encrypted traffic should
be logged for this user.
v On Solaris zone architecture, the following extra steps are to be taken:
– On the local zone with the Oracle instance, create new user group named
guardium with GID equal to the GID of group guardium on the global zone.
– On the local zone with the Oracle instance, create the /var/guard directory
like this:
mkdir -p /var/guard
chown root:guardium /var/guard
chmod ug+wx /var/guard
– On the local zone with the Oracle instance, add Oracle OS user to the group
Guardium.
– On global zone, edit guard_tap.ini file. Prepend the global zone path to local
zone /var/guard directory to atap_exec_location parameter. Use ':' (colon) as
a separator.
atap_exec_location=/data/zones/oracle10/root/var/guard:/var/guard

– On the S-TAP control screen of the Guardium system, specify global zone
path and local zone path for Process Name parameter. Use ':' (colon) as a
separator.
/data/zones/oracle10/root/data/oracle10/product/10.2.0/Db_1/bin/oracle:/data/oracle10/product

Deactivate A-TAPs

Use the deactivate command to deactivate an A-TAP for a specific database
instance. Note the following:

Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ --force-action=yes ] deactivate

If the optional --force-action parameter is specified and its value is set to yes,
forced deactivation will be attempted. In particular, it will try to deactivate a DB
instance even if it is running or the OS user is logged in. This can be beneficial to
use if a normal deactivate attempt is unsuccessful. The --force-action parameter
must precede the deactivate command, as shown in the example, or an error will
be issued.

In addition, the --force-action option may be used to clean up leftovers of previous
activations; for example, if a database instance has been uninstalled or reinstalled
without deactivation, the --force-action switch instructs guardctl to clean up its
records and get rid of stale information.
v A-TAP cannot be activated or deactivated while DB instance is up and running.
v DB users have to be completely logged off from the system during DB instance
deactivation.
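
For example (a sketch only; orcl is a placeholder instance name and the path
follows the earlier examples), a normal and a forced deactivation would be:
/usr/local/guardium/bin/guardctl --db_instance=orcl deactivate
/usr/local/guardium/bin/guardctl --db_instance=orcl --force-action=yes deactivate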
deinstrument command
On instrumented instances, the deinstrument command should be run as
well:
Syntax
<guardium_base>/xxx/guardctl db_instance=<instance> [ --force-action=yes ] deinstrument

deactivate-all Command

Use the deactivate-all command to deactivate A-TAP for all database instances
on the server.

Syntax
<guardium_base>/xxx/guardctl [ --force-action=yes ] deactivate-all

Note: The --force-action parameter may be specified if any of the instances
fail to deactivate after a normal deactivate-all is attempted.

Setup Teradata ATAP


1. Determine the user running gtwgateway and the path.
su11u1x64-tera:~ # ps -ef | grep gtwgateway
teradata 5000 4608 0 Jan03 ? 00:00:05 /usr/tgtw/bin/gtwgateway
root 20128 20063 0 12:35 pts/0 00:00:00 grep gtwgateway
su11u1x64-tera:~ #
gtwgateway runs as user teradata
Use parameter --db-user=teradata to guardctl
Path to gtwgateway is /usr/tgtw/bin/gtwgateway. This is the default value for
the parameter tdc_gtwgateway and as such does not need to be specified.
Otherwise, the parameter should be --tdc_gtwgateway=/usr/tgtw/bin/
gtwgateway
2. Determine the path to pdemain. Typically, this will be /usr/pde/bin/pdemain
su11u1x64-tera:~ # ps -ef | grep pdemain
root 4608 1 0 Jan03 ? 00:00:25 pdemain -debug
root 20620 20063 0 12:40 pts/0 00:00:00 grep pdemain
su11u1x64-tera:~ # ls -l /proc/4608/exe
lrwxrwxrwx 1 root tdtrusted 0 2015-01-03 01:20 /proc/4608/exe ->
/opt/teradata/tdat/pde/15h.00.00.07/bin/pdemain
Checking the inodes for this file and /usr/pde/bin/pdemain, we see that they
are the same.
su11u1x64-tera:~ # ls -li /opt/teradata/tdat/pde/15h.00.00.07/bin/pdemain
1638875 -r-xr-xr-x 1 teradata tdtrusted 1294666 2014-01-22 01:40
/opt/teradata/tdat/pde/15h.00.00.07/bin/pdemain
su11u1x64-tera:~ # ls -li /usr/pde/bin/pdemain
1638875 -r-xr-xr-x 1 teradata tdtrusted 1294666 2014-01-22 01:40
/usr/pde/bin/pdemain
su11u1x64-tera:~ #
Since the inodes are the same and the default value for --db-home is /usr/pde,
the parameter in this case does not need to be specified. Otherwise, you can
specify --db-home=/opt/teradata/tdat/pde/15h.00.00.07 or
--db-home=/usr/pde since bin/pdemain in both paths is the same file
hardlinked in this case.
3. Store the configuration for ATAP using the parameters determined in steps 1
and 2.
/usr/local/guardium/guard_stap/guardctl --db-instance=teradata
--tdc_gtwgateway=/usr/tgtw/bin/gtwgateway --db-type=teradata
--db-home=/opt/teradata/tdat/pde/15h.00.00.07 --db-user=teradata store-conf
4. Authorize the DB user to the guardium group:
/usr/local/guardium/guard_stap/guardctl --db-instance=teradata authorize-user
5. Stop the Teradata instance.
su11u1x64-tera:~ # /etc/init.d/tgtw stop
tgtw Shutdown complete
su11u1x64-tera:~ # /etc/init.d/tpa stop
PDE stopped for TPA shutdown
su11u1x64-tera:~ #
6. Activate ATAP.
/usr/local/guardium/guard_stap/guardctl --db-instance=teradata activate
7. Restart the Teradata instance.
su11u1x64-tera:~ # /etc/init.d/tpa start
Teradata Database Initiator service is starting...
Teradata Database Initiator service started successfully.
su11u1x64-tera:~ # /etc/init.d/tgtw start
tgtw Startup complete

su11u1x64-tera:~ #

Tee
The Tee is a non-kernel-based data collection mechanism that can be used as an
alternative to K-TAP.

Prepare for Local UNIX DB2 Clients to Use the Tee

This topic does not apply if the K-TAP mechanism will be used to monitor local
connections. The Tee is a non-kernel-based data collection mechanism that can be
used as an alternative to K-TAP, and as such, requires the clients to explicitly
connect to the Tee.

Do not perform this procedure until the S-TAP has been installed on the DB2
server, and you are ready to start collecting data. For the local DB2 clients to use
the Tee, you will create a database alias named tee, and the clients will change
their login sequence to log into tee (instead of the DB2 server).
1. Log on to the database server system using an administrative account.
2. Locate the entry in the /etc/services file for the node name that clients use to
connect to the database. Each entry in this file is in the following format:
node_name port_number/protocol [aliases]
For example:
db2inst1 50000/tcp # DB2 connection service port

Note: Record the node name (db2inst1, in this example) and the port number
(50000). When you configure the inspection engine, this is the port number
you will specify as the Tee Real Port.
3. Select an unused port number in the range of 1025-65535 for use by the S-TAP.
Search the /etc/services file for the selected port number to be certain that it
is not used. When you configure the inspection engine, this is the port
number you will specify as the Tee Listen Port.
4. Enter the db2 command to start the db2 command-line interface. To execute
this command, you may need to add the command to the $PATH, or switch
users to a db2 user on the system.
5. Enter the list node directory command to list all nodes defined. A very simple
example:
db2 => list node directory
Node Directory
Number of entries in the directory = 2
Node 1 entry:
Node name = GACCTEST
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = merlin
Service name = 50000
Node 2 entry:
Node name = LOCGOOSE
Comment =
Directory entry type = LOCAL
Protocol = LOCAL
Instance name = db2inst1

Note: The /etc/services entry that we looked at previously related the
instance name db2inst1 to the service name 50000.

6. Use the catalog command to create a node on the local server for the port to
be assigned as the Tee Listen Port. For example, to define a node named
localtee for port 12344 on the server named goose, we would enter the
following command:
db2 => catalog tcpip node localtee remote goose server 12344
DB20000I The CATALOG TCPIP NODE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
7. Enter the terminate command to update the directory. (This closes the db2
utility.)
db2 => terminate
DB20000I The TERMINATE command completed successfully.
8. Restart the db2 utility using the db2 command, and then enter the list node
directory command again to verify that the new node has been defined
correctly. Continuing with our simple example, the new node now appears in
the list:
db2 => list node directory
Node Directory
Number of entries in the directory = 3
Node 1 entry:
Node name = GACCTEST
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = merlin
Service name = 50000
Node 2 entry:
Node name = LOCALTEE
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = goose
Service name = 12344
Node 3 entry:
Node name = LOCGOOSE
Comment =
Directory entry type = LOCAL
Protocol = LOCAL
Instance name = db2inst1
9. Configure a database alias named tee for the database. In our example we will
use a database named SAMPLE (replace this with the name of your database):
db2 => catalog database SAMPLE as tee at node localtee
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
10. Enter the terminate command to update the directory. (This closes the db2
utility.)
db2 => terminate
DB20000I The TERMINATE command completed successfully.
11. Restart the db2 utility using the db2 command, and then enter the list
database directory command to verify that the tee database alias has been
defined correctly. Continuing with our simple example, the new database
should appear in the list of databases (only a partial list is shown):
db2 => list database directory
System Database Directory
Number of entries in the directory = 6
Database 1 entry:
...
Database 3 entry:
Database alias = DN0GOOSE
Database name = SAMPLE

Node name = DN0GOOSE
Database release level = a.00
Comment =
Directory entry type = Remote
Catalog database partition number = -1
Database 4 entry:
...
Database 5 entry:
Database alias = TEE
Database name = SAMPLE
Node name = LOCALTEE
Database release level = a.00
Comment =
Directory entry type = Remote
Catalog database partition number = -1
12. Enter the quit command to close the db2 utility:
db2 => quit

Do not log out of the database server system yet. After configuring an
inspection engine, you will enter one or more SQL commands using the DB2
command-line SQL utility to verify the alias connection.
13. When you are ready to start collecting data, define a DB2 inspection engine to
listen on the selected Tee Listen Port (12344 in the example), and forward
messages to the Tee Real Port (50000 in the example). Be sure to set all other
properties required for a DB2 inspection engine, as described elsewhere.
14. Use the DB2 command-line to verify that the database connection through the
local tee process works correctly. Log in to the database from the command
line using a command like the following (where sample is the database (or
some tee catalog name), db2inst1 is user name, passwd is the password, and
tee is the database alias):
$ db2 connect to sample user db2inst1 using passwd
Database Connection Information
Database server = DB2/LINUXX8664 9.7.0
SQL authorization ID = DB2INST1
Local database alias = SAMPLE
15. Enter a command that you know will create an SQL exception (for example,
select * from my_mistake), and then quit the session.
16. Log in to a user portal on the Guardium system, and navigate to the Reports
& Alerts - Report Templates - Exceptions tab, and select the SQL Errors report.
You should be able to locate your SQL error near the beginning of the report,
and thus verify that the tee is seeing the DB2 traffic.
17. Now modify all client logins to log into the tee alias (instead of the DB2
server).

Prepare for Local Unix Informix Clients to Use the Tee


This topic does not apply if the K-Tap mechanism will be used to monitor local
connections. The Tee is a non-kernel-based data collection mechanism that can be
used as an alternative to K-TAP, and as such, requires the clients to explicitly
connect to the Tee.

Do not perform this procedure until the S-TAP has been installed on the Informix
server, and you are ready to start collecting data. For the local Informix clients to
use the Tee, you will create an staptcp service name in the /etc/services file, create
an stap_sqlhosts file, and modify several environment variables such that local
Informix clients will connect to the Tee Listen Port instead of to the Informix
server.

1. Locate the sqlhosts file. The default file name is sqlhosts, and by default it is
located in the $INFORMIXDIR/etc/ directory. The default filename and location
can be overridden using the INFORMIXSQLHOSTS environment variable,
which when present, defines the full path name for the file.
2. Make a copy of the sqlhosts file and name it stap_sqlhosts. You will modify
the copy. Do not modify the original sqlhosts file. There are no naming
requirements for the file. We will use the name stap_sqlhosts for the
remainder of this section.
3. Open the stap_sqlhosts file in a text editor.
4. Locate the entry that local clients use to connect to the database. Each entry
contains several positional parameters, in the following format:
dbservername netptype hostname servicename [options]

For example:

jumboinformix onsoctcp jumbo nettcp


5. Note the servicename parameter value (nettcp in this example). It identifies a
servicename that maps to a port number for this database server, in the
services file. You will use this name later to locate an entry in that file.
6. Replace the servicename specified with a new servicename for S-TAP. There
are no naming requirements. We will use staptcp for our example. Continuing
the example, the entry would be changed as follows:
jumboinformix onsoctcp jumbo staptcp
7. Save the stap_sqlhosts file.
8. Locate the services file. By default, it is in the /etc directory, but if Network
Information Service (NIS) is used, you must edit the services file on the NIS
server.
9. Make a backup copy of the file, and open the original for editing.
10. Locate the entry in the services file for the servicename that you replaced in
the stap_sqlhosts file. Each entry in this file is in the following format:
servicename port_number/protocol [aliases]
In our example services file, the nettcp entry is defined as follows:
nettcp 1400/tcp

Note: Pay attention to the port number (1400, in the example). When you
configure the inspection engine, this is the port number you will specify as the
Tee Real Port.
11. Select an unused port number in the range of 1025-65535 for use by the S-TAP.
Search the services file for the selected port number, to be certain that it is not
used. In our example, we will use 12344. When you configure the inspection
engine, this is the port number you will specify as the Tee Listen Port.
12. Add a line to the services file for S-TAP listening port, staptcp in the example:
staptcp 12344/tcp
13. Save the services file.
14. Set the environment variable INFORMIXSQLHOSTS, to specify the full path
name for the cloned version of the sqlhosts file that you created earlier. For
example:
setenv INFORMIXSQLHOSTS $INFORMIXDIR/etc/stap_sqlhosts
15. When you are ready to start collecting data, define an Informix inspection
engine to listen on the selected Tee Listen Port (12344 in the example), and
forward messages to the Tee Real Port (1400 in the example).

16. You can use the dbaccess command to verify that client SQL requests are
being seen by S-TAP. To use dbaccess, three environment variables must be set
appropriately: INFORMIXDIR, INFORMIXSERVER, and
INFORMIXSQLHOSTS. Verify that those variables are set correctly using the
following command:
-bash-3.00# env | grep INFO
INFORMIXDIR=/data/informix
INFORMIXSERVER=jumboinformix
INFORMIXSQLHOSTS=/data/informix/etc/stap_sqlhosts
INFORMIXSERVER identifies the database server that we are trying to
connect to (jumboinformix in this example).
INFORMIXSQLHOSTS identifies the sqlhosts file used to resolve connections
to jumboinformix. During this resolution it will be either shared memory or a
TCP connection. In our previous definition, it is a TCP connection with a
service name of staptcp. This will connect to the correct TCP port of 12344
which is resolved in the /etc/services file.
17. Enter the dbaccess command.
18. Navigate to connection - connect - Select Database Server and select the
database servername (jumboinformix in our example).
19. When prompted, enter an appropriate database user name and password.
20. Exit from the connection portion of the configuration and select
Query-language - select database.
21. Select New and enter an SQL command, for example: select * from
my_mistake.
22. Log in to a user portal on the Guardium system, and navigate to the Reports
& Alerts - Report Templates - Exceptions tab, and select the SQL Errors report.
You should be able to locate your SQL error near the beginning of the report,
and thus verify that the tee is seeing the Informix traffic.

Prepare for Local Unix Oracle Clients to Use the Tee

This topic does not apply if the K-Tap mechanism will be used to monitor local
connections. The Tee is a non-kernel-based data collection mechanism that can be
used as an alternative to K-Tap, and as such, requires the clients to explicitly
connect to the Tee.

Do not perform this procedure until the S-TAP has been installed on the Oracle
server, and you are ready to start collecting data. Use the procedure outlined
here to modify the tnsnames.ora file, which maps service aliases to ports. Do
not change this file until the S-TAP has been installed and you are ready to start
collecting data.
1. Make a backup copy of the tnsnames.ora file, which is located in the
$ORACLE_HOME/network/admin directory.
2. Open the tnsnames.ora file for editing in a text editor program.
3. Locate the entry in this file for the service alias used to access the database.
An entry named EAGLE10 on the EAGLE host is illustrated here:
EAGLE10 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eagle)(PORT = 1521)))
(CONNECT_DATA = (SERVICE_NAME = GUARD10))
)

4. Note the port number used (1521, in this example). When you configure the
inspection engine, this is the port number you will specify as the Tee Real
Port. Do not change the entry until you have verified that the S-TAP is
configured correctly.
5. Select an unused port number in the range of 1025-65535 for use by S-TAP.
Search the file for the selected port number, to be certain that it is not used. In
our example, we will use 12344. When you configure the inspection engine,
this is the port number you will specify as the Tee Listen Port.
6. Create a duplicate entry for the service by copying and pasting the entry, and
changing the alias name and port number as shown. We will name our duplicate LOCALTEE:
LOCALTEE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eagle)(PORT = 12344)))
(CONNECT_DATA = (SERVICE_NAME = GUARD10))
)
7. Save the tnsnames.ora file.
8. When you are ready to start collecting data, define an Oracle inspection
engine to listen on the selected Tee Listen Port (12344 in the example), and
forward messages to the Tee Real Port (1521 in the example).
9. Log on to the database server locally, using sqlplus to verify that the S-TAP is
configured properly and will see a local access. For example:
# sqlplus scott/tiger@LOCALTEE

Where scott is the database user name, tiger is the password, and LOCALTEE
identifies the service.
10. Enter an invalid SQL command to create an SQL exception that will be easy to
find. For example: select * from my_mistake
11. Log in to a user portal on the Guardium system, and navigate to the Reports
& Alerts - Report Templates - Exceptions tab, and select the SQL Errors report.
You should be able to locate your SQL error near the beginning of the report,
and thus verify that the tee is seeing the local Oracle traffic.
12. Reopen the tnsnames.ora file and replace the database service port number
with the selected number. Continuing our example, the EAGLE10 entry would
be updated as follows:
EAGLE10 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eagle)(PORT = 12344)))
(CONNECT_DATA = (SERVICE_NAME = GUARD10))
)
13. Save the tnsnames.ora file. All local clients connecting to EAGLE10 will now
connect to port 12344 (the Tee Listen Port) instead of the actual database port
(the Tee Real Port).

Prepare for Local Unix Sybase Clients to Use the Tee


This topic does not apply if the K-Tap mechanism will be used to monitor local
connections. The Tee is a non-kernel-based data collection mechanism that can be
used as an alternative to K-Tap, and as such, requires the clients to explicitly
connect to the Tee.

Follow these steps to modify the local interface file, which maps servers to ports.
Do not change this file until the S-TAP has been installed and you are ready to
start collecting data.

1. Make a backup copy of the interface file, which is located in the $SYBASE/
directory.
2. Open the interface file for editing in a text editor program.
3. Locate the entry in this file whose name matches the Sybase server name. For
example, a server named parrot might be defined as follows:
parrot
master tcp ether parrot 4100
query tcp ether PARROT 4100
4. Note the port number (4100, in the example). When you configure the
inspection engine, this is the port number you will specify as the Tee Real Port.
5. Select an unused port number in the range of 1025-65535 for use by the S-TAP.
Search the file for the selected port number, to be certain that it is not used. In
our example, we will use 12344. When you configure the inspection engine, this
is the port number you will specify as the Tee Listen Port.
6. Replace the port number with the selected number. For example:
parrot
master tcp ether parrot 12344
query tcp ether PARROT 12344
7. Save the interface file.
8. When you are ready to start collecting data, define a Sybase inspection engine
to listen on the selected Tee Listen Port (12344 in the example), and forward
messages to the Tee Real Port (4100 in the example).

Switching from Tee to K-Tap

Use the following steps to switch from Tee to K-Tap without an uninstall or
reinstall. This condition may exist after an unsuccessful loading of K-Tap.
1. Disable S-TAP. See Stop UNIX S-TAP for more information.
2. Comment the guard_tee and guard_hnt lines out of inittab, or make the appropriate
change for Red Hat 6, which does not use inittab.
3. Run 'init q', or the Red Hat equivalent; alternatively, just kill the tee and hunter jobs.
4. Edit guard_tap.ini and change ktap_installed to 1 and tee_installed to 0 (see the
example after this list).
5. Run the 'guard_ktap_loader install' command.

Note: For Linux, set the environment variable NI_ALLOW_MODULE_COMBOS="Y" first.

Linux example:
NI_ALLOW_MODULE_COMBOS="Y" /usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader install

non-Linux example:
/usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader install

6. Run the 'guard_ktap_loader start' command.
example: /usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader start
7. Re-enable S-TAP. See Restart UNIX S-TAP for more information.
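
For reference, the guard_tap.ini change described in step 4 leaves the two
parameters set as follows:
ktap_installed=1
tee_installed=0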

Windows S-TAP
Use this section for Windows S-TAP configuration information.

IIS Tap
IIS Tap monitors web traffic and is used to associate web user usernames and IP
addresses with the database traffic they generate.
v IIS Tap is installed as part of the general Windows S-TAP installation package.
v IIS Tap is installed on the web server instead of the database machine, unless
they are one and the same.
v IIS Tap software logs significant events to the Windows Application eventlog.

Start Windows S-TAP


Depending on the method of S-TAP installation you may start S-TAP by:
GIM Installation
GIM allows you to start S-TAP without ever having to log into the
database server. Use the following steps to change the
WINSTAP_ENABLED parameter and schedule the change on the database
server.
1. Click Install Management > Setup by Client to open the Client
Search Criteria.
2. Click Search to perform a filtered search.
3. Select the Clients that will be the target for the action (starting S-TAP)
4. Click Next to open the Common Modules panel.
5. Select the Module for WINSTAP.
6. Click Next to open the Module Parameters panel.
7. Select the client that will be the target for the action (starting S-TAP).
8. Change the WINSTAP_ENABLED parameter to 1 (one).
9. Click Apply to Clients to apply to the targeted clients.
10. Click Install/Update to schedule the update to the targeted clients.
This update can be scheduled for NOW or some time in the future.
When the schedule is run for this update the S-TAP service on the
targeted clients will be started.
Non-GIM Installation
1. Log on to the database server system using a system administrator
account.
2. From the Services control panel, start the GUARDIUM_STAP service.
You may also notice the GUARDIUM_TEE service. DO NOT start that
service. It is a rarely used component of Guardium, and if needed, it
will be started by the GUARDIUM_STAP service.
3. Log in to the Guardium system to which this S-TAP reports. Verify that
the Status light in the S-TAP control panel is green.

Note: When Windows S-TAP encounters a fatal error during start up that
is due to configuration problems (unknown local IP address, more than 1
primary SQL-Guards defined, etc.) it will log the reason to the Windows
event log. In some cases an exit after a failure may cause a crash and
another logged event. This crash should not cause any concern if it is
preceded by the event explaining the reason for the failure.

Stop Windows S-TAP
Depending on the method of S-TAP installation you may stop S-TAP by:
GIM Installation
GIM allows you to stop S-TAP without ever having to log into the
database server. Use the following steps to change the
WINSTAP_ENABLED parameter and schedule the change on the database
server.
1. Click Install Management > Setup by Client to open the Client
Search Criteria.
2. Enter Client Search Criteria if you want to perform a filtered search of
registered clients.
3. Click Search to perform filtered search and display the Clients panel.
4. Select the clients that will be the target for the action (stopping
S-TAP).
5. Click Next to open the Common Modules panel.
6. Select the Module for WINSTAP.
7. Click Next to open the Module Parameters panel.
8. Select the client that will be the target for the action (stopping S-TAP)
9. Change the WINSTAP_ENABLED parameter to 0.
10. Click Apply to Clients to apply to the targeted clients
11. Click Install/Update to schedule the update to the targeted clients.
This update can be scheduled for NOW or some time in the future.
When the schedule is run for this update the S-TAP service on the
targeted clients will be stopped.
Non-GIM Installation
1. Log on to the database server system using a system administrator
account.
2. From the Services control panel:
v Stop the GUARDIUM_STAP service.
v If it is running, stop the optional GUARDIUM_TEE service (typically,
it will not be running).
3. Log in to the administrator portal of the Guardium system to which
this S-TAP was reporting, verify that the Status light in the S-TAP
control panel is now red.

MS SQL Server Encryption and Kerberos


The method for dealing with encryption and Kerberos can be configured after
installing the S-TAP agent. MS SQL Server encryption and Kerberos authentication
are widely used in the MS SQL Server environment. In some cases, one or both
options (encryption and/or Kerberos authentication) may be used by default, and
most users will be unaware of that fact.
v If you are missing MS SQL Server traffic, it may be encrypted, and without
decrypting that traffic, Guardium will not recognize sessions.
v If you are seeing MS SQL Server traffic, but where you expect to see database
usernames you are seeing strings of hexadecimal characters, Kerberos
authentication is being used.

For the sake of simplicity, we will refer to the hexadecimal character strings that
appear in the username field as Kerberos names. These are not permanent
substitutions for database usernames, so it is not a simple matter of creating a
one-time mapping; Guardium needs to maintain a dynamic mapping of Kerberos
names to actual database usernames by constantly monitoring Kerberos. There are
two general methods for doing this.

Kerberos Name Handling Option

On a Windows MS SQL Server database server, configure S-TAP to automatically
replace Kerberos names with real database user names in the traffic, before
forwarding that traffic to the Guardium system.

Under normal conditions, the Kerberos names will never be seen on the Guardium
system. In heavy volume situations, if names have not yet been resolved by the
time messages must be sent to the Guardium system, traffic with Kerberos names
can either be sent as is (with the Kerberos names), or dropped (your choice).

Map Kerberos Names at the S-TAP

Perform this step from the Administrator GUI after installing the S-TAP agent on
the database server system.
1. Log on to the active Guardium host for the S-TAP just installed. (The active
host is the only host from which you can modify an S-TAP configuration.)
2. Click Manage > Activity Monitoring > S-TAP Control to open the S-TAP
Control panel.
3. Locate the database server on which the S-TAP was installed, in the S-TAP
Host column, and click Edit S-TAP Configuration to open the S-TAP
Configuration panel.
4. Expand the S-TAP Control Details pane.
5. Check the MSSQL Encryption box.
6. When Kerberos authentication is used, Kerberos Credentials Mapping
controls how S-TAP obtains the database user names. If either Sync option
(below) is selected, S-TAP will not forward messages to the Guardium system
until it resolves the real database user name, so in high-message-volume
situations some messages may be lost. When the Async option is used, all
messages are forwarded to the Guardium system, but initial sessions for users
with new Kerberos tickets will have strings of hexadecimal characters in the
database username field until S-TAP resolves the actual database user name.
At Startup, Sync: During startup processing, S-TAP obtains all authenticated
users from the domain controller. This can be time consuming. After all users
have been obtained and tabled, S-TAP starts sending data to the Guardium
system. When it encounters a message from a user it does not recognize, it
obtains that database user name as described for On Demand, Sync, below.
On Demand, Sync: When S-TAP encounters a Kerberos message for an
unrecognized user, S-TAP fetches the user name from the domain controller. It
does not forward any traffic from that user to the Guardium system until it
has the actual database user name.
On Demand, Async: Like On Demand, Sync, except that messages are not held
while waiting to obtain the database user name.
See MSSQL Encryption and Kerberos Credentials Mapping in the S-TAP Control -
Details table in the “Configure S-TAP from the GUI” on page 90 help topic for
more information.

7. Click Apply to save changes to the Details pane.
If you have not already done so, define an MS SQL Server inspection engine
on this S-TAP, as described in the following steps. Otherwise, skip the
remainder of the procedure.
8. Click Manage > Activity Monitoring > Inspection Engines
9. Select MSSQL from the Protocol menu.
10. Enter 1433 and 1434 in the Port Range.
11. Enter SQLSERVR.EXE for Process Names.
12. Enter MSSQLSERVER for Instance Name. When you select MSSQL as the
Protocol, this name appears by default - you will only need to change it if
your server does not use the default instance name.
13. Define one or more Client IP/Mask sets, and Add.

Note: To monitor all clients, enter 1.1.1.1 and 0.0.0.0 in the Client IP and
Mask fields.
14. Click Apply to save the inspection engine definition.

CAS Re-configuration of JAVA_HOME location

In most cases the installation program takes care of finding the JAVA_HOME
value. This value is placed in the CAS configuration file.

If for any reason (for example, you install a new Java version after installing the
Guardium CAS product) you need to change the location of JAVA_HOME, use the
following procedure.
1. Locate and open the CAS configuration file for editing. Its full path name is:
<installation directory>/case/conf/wrapper.conf
2. Locate the following entry: wrapper.java.command=<value>
3. Replace <value> with the JAVA_HOME directory.
4. Save the file.
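
For example, assuming Java is installed under the hypothetical directory
/opt/java/jdk1.8.0, the edited entry would read:
wrapper.java.command=/opt/java/jdk1.8.0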

CAS and the 64-bit Windows Registry

On Windows, software configuration parameters are stored in the Registry tree in
the key HKEY_LOCAL_MACHINE\SOFTWARE. Since a 64-bit machine can run both 64-bit
and 32-bit versions of the same application, there is a need to distinguish the
configuration parameters of the 64-bit and 32-bit applications.

Microsoft's solution to the problem is to partition the registry. A special key,
labelled WOW6432Node, is added to the Registry tree within the key
HKEY_LOCAL_MACHINE\SOFTWARE. When a 32-bit application tries to access the
Registry through a path within the key HKEY_LOCAL_MACHINE\SOFTWARE, Windows
inserts the special key WOW6432Node into the path. This way the 32-bit application
deals with the Windows Registry just as it would on a 32-bit machine, and
Windows takes care of redirecting to the correct partition.

CAS is a 32-bit Java application, so it would not normally have access to the 64-bit
software configuration parameters. CAS has been enhanced to detect a 64-bit
environment and handle the partitioned Registry. CAS's interest in the Registry is to
retrieve values of Registry keys to detect changes or to compare against
recommended values.

As an example, suppose that CAS is to retrieve the value of HKEY_LOCAL_MACHINE\
SOFTWARE\MyApp\Parameter1. That value could be in either, both, or neither
partition. If it is in neither partition, CAS will retrieve null. Otherwise, it returns a
string which is the concatenation of the two values separated by the string
WOW6432Node. If the value is in the 64 but not the 32-bit partition, the string
retrieved would look like Value64WOW6432Nodenull. Conversely, if the value is in
the 32 but not the 64-bit partition, the string is nullWOW6432NodeValue32. Finally,
if the value is in both partitions, the string returned is
Value64WOW6432NodeValue32. This new Registry value pattern search will search
both Registry partitions when appropriate.

Collect SIDs from Domain Collector

GuardiumDC is a service that collects updates of user accounts (SIDs and
usernames) from the primary domain controller and then signals the changes to
Guardium_S-TAP to update the S-TAP internal SID and UserName map. If S-TAP
cannot find a resolved SID in the map, it tries to get it from the primary Domain
Controller, in which case the S-TAP logs a message into the debug log (level 7):
The account name *** has been retrieved for SID ***.

DC_COLLECT_FREQ in the TAP section specifies the frequency of collection in hours;
minimum 1, maximum 24 (default).

DC_COLLECT_MAXUSERS in the TAP section specifies the maximum number of users to
collect; default is 200,000, minimum is 10,000.

DOMAIN_CONTROLLER example: DOMAIN_CONTROLLER=\\atari.
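
For example, the relevant settings in the TAP section of the S-TAP configuration
file might look like the following (the values shown are illustrative only;
DOMAIN_CONTROLLER must name your own primary domain controller):
DC_COLLECT_FREQ=12
DC_COLLECT_MAXUSERS=200000
DOMAIN_CONTROLLER=\\atari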

Registry changes when not seeing Named Pipes traffic

Problem: Named Pipes traffic on TCP port 445 not seen

This occurs when the LhmonProxy driver is loaded AFTER the NetBT driver by
the operating system during the boot process. To determine the relative boot order
of the drivers, you need the following information from the registry:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip Tag

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NetBT Tag

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LhmonProxy Tag

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GroupOrderList
PNP_TDI

To determine if NetBT is loaded before LhmonProxy, find their Tag values in the
PNP_TDI list. If the LhmonProxy Tag number comes after the NetBT tag value
(even if LhmonProxy's tag value is smaller) in the list, then LhmonProxy loads
after NetBT.

For example, let's say the Tcpip Tag is 4, NetBT Tag is 6, LhmonProxy Tag is 7, and
the PNP_TDI list looks like:

03 00 00 00 04 00 00 00 06 00 00 00 07 00 00 00

The first longword in the list (03 00 00 00) is the number of tags in the list. The
subsequent longwords are the tags themselves, and the drivers load in the order
that the tags appear in the list. So in this case LhmonProxy (07 00 00 00) is after
NetBT (06 00 00 00) in the list, so the LhmonProxy driver starts after the NetBT
driver.

The solution to the problem is to force the system to load LhmonProxy after Tcpip
but before NetBT by editing the PNP_TDI entry. The solution for the previous
example would have the PNP_TDI entry look like:

03 00 00 00 04 00 00 00 07 00 00 00 06 00 00 00

Once changed, reboot the system.

Be careful when editing the PNP_TDI entry to ensure that you put the proper
number of zeros after the tag value (three pairs of zeros). Each number in the entry
is in hexadecimal, so tag 10 would look like 0A 00 00 00.

OQCR and microsecond timestamp

Collected DRDA traffic can be sent to Optim Query Capture Replay with a
microsecond timestamp, since OQCR requires a granularity of 1 microsecond. Use
the CLI command store unit type sink to switch from a granularity of 1
millisecond to 1 microsecond.

Configuration option for Windows S-TAP.

HIGH_RESOLUTION_TIMER

With the following values:

0 (default) - Send time stamps in milliseconds (Guardium Version 7.0 and Version
8.0 behavior)

1 - Send time stamps in microseconds, but use the milliseconds system timer to
reduce the system performance hit (millisecond values are multiplied by 1000)

2 - Send time stamps in microseconds, use high resolution windows timer (most
accurate)

For values 1 and 2, the S-TAP will indicate to Sniffer that microseconds are sent, by
setting the reserved byte in PacketData to 1.

The S-TAP will send the same time stamp values to all connected Guardium
systems.
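
For example, to have the Windows S-TAP send the most accurate timestamps, the
configuration option described above would be set in the S-TAP configuration file
(guard_tap.ini) as follows (a sketch; value 2 uses the high-resolution Windows
timer):
HIGH_RESOLUTION_TIMER=2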

What's new or enhanced in V10


Native images (32 / 64 bit)
v All of the STAP services are native 32- or 64-bit (or Itanium). All images (32/64)
are contained in one universal installer.

New TCP and NP drivers (WFP, NmpMonitor)

Windows Filtering Platform (WFP) is replacing Transport driver interface (TDI)
based TCP drivers.

Wfpmonitor is the new S-TAP TCP driver, replacing lhmonproxy and lhmon

WFP provides the following benefits:


v Can be upgraded with no need to reboot
v No need to restart database instances to pick up TCP traffic after installation
v All Drivers now provide a (cyclic) logging facility. Log files can be found under
/logs. This is to enhance supportability, as driver errors/warnings were not
visible.

The named pipes driver has been redesigned and is now split into a proxy and a
monitor, NmpProxy and NmpMonitor, replacing Nptrc. This splits the functionality
into basic OS handling (NmpProxy) and Guardium logic (NmpMonitor).

Configure S-TAP from the GUI


After installing an S-TAP agent on a database server, S-TAP configuration can be
performed from the administrator portal.

S-TAP Control - Complete an S-TAP Configuration

To make changes to the S-TAP configuration, you must be logged into the
Guardium system that is the active host for the S-TAP. You can only edit an S-TAP
configuration from its active host. Some configuration changes require that the
S-TAP agent be restarted manually.

Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.

If there is no Local Taps section, you must first configure your Guardium system
to manage S-TAP agents. Refer to “Configuring the Guardium system to manage
S-TAPs” on page 47 for more information.

Locate the S-TAP to be configured in the S-TAP Host column by looking for its IP
address or the symbolic host name of the database server on which it is installed.
Each S-TAP has its own controls which are detailed in the following table.

Control Description
Delete
    Click Delete to remove an S-TAP. You cannot remove an active S-TAP from
    the list.
    Clicking Delete does not stop an S-TAP from sending information, nor does it
    remove the Guardium host from the list of hosts stored in the S-TAP's
    configuration file.
    Deleting S-TAPs is useful to clean up your display when you know that an
    S-TAP has become inactive, or when the Guardium unit is no longer listed as
    a host in the S-TAP's configuration file. In either of these cases, the S-TAP
    will display indefinitely with an offline status.

Refresh
    Click Refresh to fetch a copy of the latest S-TAP configuration from the agent.
Send
    Click Send to restart, reinitialize buffer, run diagnostics, or change the
    parameters for S-TAP or K-TAP logging.
S-TAP Logging
    S-TAP Logging parameters:
    Debug Level:
    0 - only critical error information
    1 - all included at prior level plus repeatable non-critical error information
    2 - all included at prior level plus lost data information (discontinued from
    version 4.03 and later)
    3 - all included at prior level plus brief information about packets sent to a
    Guardium collector
    4 - all included at prior level plus local sniffing log
    5 - all included at prior level plus network sniffing log
    6 - all included at prior level plus heartbeat receiving log
    7 - all included at prior level plus miscellaneous debugging information
    Debug Duration Sec: length of debug session (in seconds)
Reinitialize Buffer
    Sends a message to the S-TAP requesting that it reinitialize its buffer and
    restart.
K-TAP Logging
    Enter the Function Name and Debug Duration Sec.
Run Diagnostics
    Runs diagnostics on the S-TAP.
    For Windows, debug files are located in STAP_HOME/TempSnapshot/ with a
    filename like debug_Feb_25_2009_10_59_59.txt.
    For UNIX, debug files are located in the /tmp directory with filename
    /tmp/guard_stap.stderr.txt.
Edit
    Edit is only enabled when the Guardium system is the active host for the
    S-TAP.
    Note: In an IP load balancer environment, the editing of the S-TAP
    configuration may be disabled.
Information Log
    Displays the S-TAP event log.
S-TAP Host
    The IP address or host name.

Status
    Status of the S-TAP. One of the three lights will be illuminated:
    Green (Online) – The S-TAP is functioning normally.
    Red (Offline) – The S-TAP is not responding.
    Yellow (Not Synchronized) – Configuration changes have been sent to the
    S-TAP, but the S-TAP has not yet acknowledged that the changes were
    applied.
    If the light remains yellow for an extended period of time, you can assume
    that the S-TAP did not accept the new configuration. When this happens, the
    S-TAP attempts to restart using the last good configuration.
    When an error has occurred, you can open the S-TAP Events panel in a
    separate window by clicking Show Log. In many cases the event log will
    contain error messages indicating what was wrong with the new
    configuration.
    To reload the last good configuration from the S-TAP host, click Refresh
    S-TAP information.
    Note: If you have trouble determining the color of the light, hold the mouse
    pointer over the set of lights to display the current status of the S-TAP
    (Offline, Not Synchronized, or Online).
Last Response
    Date and time of the last response from the S-TAP.

Make your desired changes as follows:
1. Verify that the Status indicator is green. If it is not, the Guardium system and
S-TAP are not connected.
2. Click the (Edit) button for that S-TAP. If the Edit button is not active, this
Guardium system is not the active host for this S-TAP. You must log on to the
active host for this S-TAP to make any changes.
3. Expand and make modifications to any of the following sections of the S-TAP
configuration. Typically, the only additional task at this point is to define one or
more inspection engines. (An inspection engine identifies a set of database
connections to monitor.) Click any of these sections for a detailed description of
its use.
v S-TAP Control - Details
v S-TAP Control - Hunter
v S-TAP Control - Change Auditing
v S-TAP Control - Application Server User Identification

v S-TAP Control - Guardium Hosts
v S-TAP Control - Inspection Engines
4. If you have updated any information, and want to save the new configuration,
click Apply.
5. Check again that the status light in the S-TAP control panel is green. If the
status light has turned yellow, try refreshing the full S-TAP screen with the
refresh button. If the light remains yellow, S-TAP was unable to restart using
the new configuration. When that happens, S-TAP attempts to restart using the
last good configuration. The configuration in your S-TAP Control panel still
contains any changes that you have applied, including any errors. To reload the
last good configuration from the S-TAP host to your S-TAP Control panel, click
(Refresh S-TAP Information).

S-TAP Control - Details

The details section of the S-TAP Control panel applies to basic configuration
settings for the S-TAP agent.

Control Description
Version
    The S-TAP version installed.
Devices
    Always blank for a Windows server.
    For a UNIX server, identifies an interface to monitor for database traffic:
    v none (blank): monitor local traffic only, except for a Linux server.
    v lo: monitor local traffic only, on a Linux server only.
    v tap_ip_device_name: monitor both local traffic and network traffic; the
    device name on which the database server IP address is defined. To find the
    device name on an HPUX system, use the lanscan command. On all other
    UNIX systems, use the ifconfig -a command.
Load Balancing
    This box controls how the S-TAP reports traffic to the Guardium system:
    v 0 = Report all traffic to a single Guardium system (the default).
    v 1 = Load balancing; distribute sessions evenly to all Guardium systems, by
    client port number (all traffic for a single session must go to the same
    Guardium system).
    v 2 = Full redundancy; report all traffic to all Guardium systems.
    v 3 = In an IP load balancer environment, if the Guardium system goes
    down, this allows the IP load balancer to reconnect the S-TAP to a different
    Guardium system.
    Note: There is no need to set more than one primary Guardium system,
    because primary or secondary machines have the same priority.

Messages Controls where S-TAP processing messages
(not database traffic) will be written:
v Remote To the active Guardium host.
v Syslog To the syslog file on the database
server.

As AIX does not support syslog messages,


S-TAP messages that would normally go to
syslog will be written to the following
locations: (1) K-TAP logs will go to
/var/log/ktap.log, and (2) S-TAP logs
(when S-TAP is in debug mode) will go to
stdout/stderr.

Trace Files Dir The directory to which trace files will be written.
Alternate IPs One or more alternate or virtual IP addresses used to connect to this
database server. This is used only when your server has multiple network cards
with multiple IPs or virtual IPs.

S-TAP only monitors traffic when the destination IP matches either the S-TAP
Host IP defined for this S-TAP, or one of the alternate IPs listed. Because of this, it
is recommended that you list all virtual IPs here.
Shared Memory Windows only.

Controls the action to be taken when a shared memory connection is detected:
v Disable Disconnect the session.
v Alert Send an alert.
Shared Memory Monitor, Named Pipes Monitor, Local TCP Monitor Windows only.

Mark the checkboxes to enable shared memory drivers. To improve performance,
you should disable any drivers that are not used.

Note: After enabling or disabling any shared memory driver, you must restart the
S-TAP service for the change to take effect.

App. Server User Identification Windows only.

Note: After enabling or disabling this feature, you must restart the S-TAP service
for the change to take effect.

0 = Disabled.
1 = Reserved for future use.
2 = Windows Terminal Server or Citrix Terminal Server is used. The terminal
session user of the database server is stored in the OS user name, in the following
format:
OS_USER[ ;WTS:<DOMAIN NAME>;<WTS SESSION LOGIN NAME>;[RE

There will be no REMOTE CLIENT HOST NAME when using a console
application.
Oracle Encryption For Windows.

For UNIX environments: Oracle (versions 9, 10, and 11) on AIX, HPUX, Solaris,
and Linux.

0 = disabled, 1 = enabled; controls the automatic handling of Oracle Encryption.
When used, the instance name value must be set in the inspection engine
configuration.

Note: Oracle Encryption through the GUI is not supported for AIX and Linux.

Note: After enabling or disabling this feature, you must restart the S-TAP service
for the change to take effect, and you must restart the Oracle Monitor Service.

MSSQL Encryption Windows only.

Note: After changing any of these parameters, you must restart the S-TAP service
for the change to take effect, and you must restart the MSSQL Monitor Service.

Controls the type of automatic decryption applied to the traffic seen by S-TAP:

None No automatic decryption. All SQL in SSL traffic will be ignored. All SQL in
Kerberos traffic will be seen, but the database user name will be replaced by a
string of hexadecimal characters (by Kerberos).

Kerberos and SSL Automatically decrypts SSL and maps Kerberos names. Use this
option if some traffic of interest uses Kerberos, but does not also use SSL. If both
Kerberos and SSL are used for all traffic of interest, use the SSL Only option.

SSL Only Automatically decrypts SSL traffic. Use this option if all traffic of
interest is SSL traffic. In this situation, even if Kerberos authentication is also used,
it is of no consequence, because S-TAP obtains all of the information it needs
before the message is encrypted, and before Kerberos replaces the real database
username.

Kerberos Credentials Mapping Windows only.

When Kerberos authentication is used, controls how S-TAP obtains the database
user names. If either Sync option is selected, S-TAP does not forward messages to
the Guardium system until it resolves the real database user name. So in
high-message-volume situations, some messages may be lost. When the Async
option is used, all messages will be forwarded to the Guardium system, but initial
sessions for users with new Kerberos tickets will have strings of hexadecimal
characters in the database username field until S-TAP resolves the actual database
user name.

At Startup, Sync During startup processing, S-TAP obtains all authenticated users
from the domain controller. This can be time consuming. After all users have been
obtained and tabled, S-TAP starts sending data to the Guardium system. When it
encounters a message from a user it does not recognize, it obtains that database
user name as described for On Demand, Sync.

On Demand, Sync When S-TAP encounters a Kerberos message for an
unrecognized user, S-TAP fetches the user name from the domain controller. It
does not forward any traffic from that user to the Guardium system until it has
the actual database user name.

On Demand, Async Is like On Demand, Sync, except that messages are not held
while waiting to obtain the database user name.

TLS Use - Mark to use a TLS (encrypted) connection. This applies to both the
S-TAP and CAS agents. Before changing this setting, verify that the ports used for
this purpose are not being blocked by a firewall between the server and the
Guardium system. See the Guardium Port Requirements table in “Installing
S-TAPs” on page 1.

Failover - Mark to indicate that if no TLS connection can be established, a
non-TLS connection can be used.

TCP Alive Message - If checked, the S-TAP sends an alive message to the
Guardium system every five seconds over the existing TCP connection. If blank,
the S-TAP sends alive messages over TCP in response to UDP messages from the
Guardium system.

Note: Windows only. After changing this setting, you must restart the S-TAP
service for the change to take effect.

Note: Because TLS connections can experience a lot of traffic and have the
additional overhead of encryption, additional connections can be opened, if
supported by additional CPUs, by using the connection_pool_size parameter.
Auto Discover On Windows only, perform discovery of
MSSQL databases each time the S-TAP is
started. If a new MSSQL database is
discovered, an inspection engine is created
and a new record is written to the discovery
log. The default is off.

S-TAP Control - Hunter


The Hunter component of S-TAP is not used for Windows servers, and is not used
in the recommended UNIX S-TAP configuration. It is optionally used by UNIX
S-TAPs, when the TEE monitoring mechanism, rather than the recommended
kernel-level monitoring mechanism, is used.

Note: On Solaris 11 only - If Tee is not installed initially, a re-install is required, or
TEE should be installed manually.

Note: The Hunter component is not visible when K-Tap is installed.

When used, the Hunter component can be configured to report and optionally kill
any rogue connections that it discovers on the database server. A rogue connection
is any connection that bypasses the TEE mechanism.

Control Description
Hunt Identifies any processes to be killed, using
the following syntax: db_type:process
[,db_type:process]

db_type can be:


v DB2
v Informix
v Oracle
v Sybase
v PostgreSQL
v Teradata

process can be:


v SHM Shared memory
v IPv4 Internet Protocol version 4
v IPv6 Internet Protocol version 6
v FIFO A named pipe IPC mechanism
v PIPE A simple (unnamed) pipe IPC
v INET Internet Protocol (HPUX)

These values are not case-sensitive, and each entry is separated from the next by a
comma.

Example: To kill Oracle Bequeath processes, which use a simple pipe, you would
enter:

oracle:pipe
Sleep Time Maximum number of seconds between the
randomized starting time of the Hunter’s
rogue process search routine. The start time
is random to increase the difficulty of
defeating it by running in fixed time slots or
intervals. The recommended value for sleep
time is anywhere between 60 and 300.
DBs Using a comma separated list, the database
types to be reported:
v DB2
v Informix
v Oracle
v Sybase
v PostgreSQL
v Teradata
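
For example (illustrative values only, not recommended settings), a Hunter
configuration that kills Oracle Bequeath and DB2 shared memory connections,
searches at a randomized interval of up to 120 seconds, and reports on Oracle and
DB2 could use:

Hunt: oracle:pipe,db2:shm
Sleep Time: 120
DBs: Oracle,DB2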

S-TAP Control - Change Auditing

The Change Auditing pane of the S-TAP Control panel applies to the CAS
(Configuration Auditing System) agent only. The CAS product is an optional
component that is unrelated to S-TAP, but all Guardium components installed on
the database server share a single configuration file.

Control Description
Task Baseline, Client Baseline These files are reserved for future use. The UNIX
defaults are task_baseline and client_baseline, respectively, and the Windows
defaults are the same, but with all capital letters.
Task Checkpoint, Client Checkpoint These files are used for restart processing. For
each of these two file names, a series of files will be created, and each version of
the file will end with a uniqueness number. The UNIX defaults are
task_checkpoint and client_checkpoint, respectively, and the Windows defaults are
the same, but with all capital letters.
Checkpoint Period The maximum number of seconds between
checkpoints. The default is 60.
Fail Over File Name of the file to which data is written
when the Guardium system cannot be
reached. During this time, the file may grow
to the maximum size specified. When the
limit is reached, a second file is created,
using the same name with the digit 2
appended to the end of the name. (This is
the point at which CAS begins trying to
connect to a secondary server.) If that file
also reaches the maximum size, the first file
is overwritten, and if the first file fills again,
the second file is overwritten. Thus,
following an extended outage, you may lose
data, but you will have an amount of data
up to twice the size of the Failover File Size
Limit. The default is fail_over_file.
Fail Over File Size Limit Failover file maximum size, in KB (the
default is 5000). There are two of these files,
so the disk space requirement will be twice
what you specify here. If you specify -1,
there will be no limit on the file size, but we
recommend not doing this so that the file
size is capped.
Max Reconnect Attempts After losing a connection to the Guardium
system, the maximum number of times CAS
will attempt to reconnect. Set this value to -1
to remove any maximum (CAS will attempt
to reconnect indefinitely). The default is 5000
times, so using the default reconnect
interval, this is about 3.5 days. After the
maximum has been met, CAS will continue
to run, writing to the failover files, as
described previously, but it will not attempt
to reconnect with a host.
Reconnect Interval Number of seconds between reconnect
attempts (60). See the description of the
re-connection process in Max Reconnect
Attempts.

Raw Data Limit Maximum number of kilobytes written for
an item when the Keep data checkbox is
marked in the item template (1000). If you
specify -1, there will be no limit.
Md5 Size Limit Maximum size of a data item, beyond which
the MD5 checksum calculation will not be
performed (1000). If you specify -1, there
will be no limit.

S-TAP Control - Application Server User Identification

The Application Server User Identification pane is used by the End-user
Application ID Monitoring product. Contact Guardium Sales or Support if you
want more information about the End-user Application ID Monitoring product.

Control Description
Session Timeout Number of minutes for a timeout. Default is
1800.
Ports Application server ports. Use commas to
separate entries, or hyphens for inclusive
ranges. The default is 8080.
Login Pattern Pattern used to identify a user login.
Username Prefix Start of user name in the Post/Get data.
Username Postfix End of user name in the Post/Get data.
Session Pattern Pattern used to identify a new session.
Session Prefix Start of session ID in the Post/Get data.
Session Postfix End of session ID in the Post/Get data.
Session ID Pattern Pattern used to identify an existing session.
Session ID Prefix Start of session ID in the Post/Get data.
Session ID Postfix End of session ID in the Post/Get data.

S-TAP Control - Guardium Hosts

This pane lists all Guardium systems defined as hosts for the S-TAP. In many cases
only a single Guardium system will be defined as the host for an S-TAP.
Additional hosts can be defined to provide a fail over and load balancing
capability. Guardium S-TAP hosts are referred to using three terms:

Term Guardium Host


Active Host The host to which this S-TAP is currently
connected. If you want to modify the S-TAP
configuration, you must be logged into the
active host. Usually, the active host will be
the primary host.

Primary Host The preferred Guardium system to receive
data from (and control) this S-TAP. This is
the host that the S-TAP attempts to connect
to each time that the S-TAP restarts, or
when re-establishing a lost connection.
Secondary Host If multiple Guardium systems are defined as
hosts for the S-TAP, any Guardium system
not designated as the primary host is a
secondary host. If the S-TAP loses its
connection to the active host, and it cannot
re-connect to the primary host, it will
attempt to connect to a secondary host, in
the order listed. When you are logged into
the administrator console of a secondary
host, you can view the S-TAP configuration,
but you cannot edit it unless that host is also
the active host at that moment.

In the S-TAP Configuration panel, the Guardium Host pane contains the controls
described here. Note that the buttons shown are available only in the S-TAP
Configuration panel (and not in the S-TAP Control panel):

Control Description
Active A check mark in this column indicates the
active host for this S-TAP.
Guardium Host Identifies a Guardium system by using
either the IP address or the symbolic host
name.
Delete Click to delete the associated host. This
control does not appear on the active host
row.
Down Click to move the associated host one
position down in the list.
Up Click to move the associated host up one
row in the list.
Check Set primary. Move this host to the beginning
of the list, designating it as the primary host.

S-TAP Control - Define Secondary Guardium Host

Before defining a secondary host, be sure that you understand how secondary
hosts are used. See Secondary Guardium hosts for S-TAP agents in the overview of
“S-TAP administration guide” on page 42.

To define a secondary host:


1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.
2. The first host listed is the primary host for the S-TAP. Following any outage or
restart, S-TAP attempts to connect to the primary host first.
3. Enter the IP address of the secondary Guardium host in the text box.
4. Click Add.

5. Optional. To change the host designated as the primary host (the first in the
list), or to change the order of secondary hosts, do one of the following:
v Click Down or Up to reorder the list.
v Click Set Primary on the line for a host to move it to the beginning of the
list.
6. Optional. To remove a secondary host, click its Delete button. You cannot
remove the active host.
7. When you are done, click Apply.

Note: If you have changed the primary host, and you want the S-TAP to begin
using the new primary host immediately, and this is a Windows server, you will
need to restart the GUARD_STAP service. Restarting the service is not required on
UNIX servers.

S-TAP Control - Inspection Engines


The layout of the Inspection Engines pane varies depending on the server
operating system, the database protocol, and for UNIX systems, whether the K-Tap
or TEE mechanism is being used.

Note: Do not configure an S-TAP inspection engine to monitor network traffic that
is also monitored directly by a Guardium system that is hosting the S-TAP, or by
another S-TAP reporting to the same Guardium system. If that happens, the
Guardium system will receive duplicate information, will not be able to reconstruct
sessions, and will ignore that traffic.

Note: Click Modify to edit an Inspection Engine.

Control Description
Protocol The type of database server being monitored
(Cassandra, CouchDB, DB2, DB2 Exit,
exclude IE, FTP, GreenPlumDB, Hadoop,
HTTP, ISERIES, Informix, KERBEROS,
MongoDB, MS SQL, Mysql, Named Pipes,
Netezza, Oracle, PostgreSQL, SAP Hana,
Sybase, Teradata, or Windows File Share).
Port Range The range of ports monitored for this
database server. There is usually only a
single port in the range. For a Kerberos
inspection engine, this value should always
display as 88-88. If a range is used, do not
include extra ports in the range, as this may
result in excessive resource consumption
while the S-TAP attempts to analyze
unwanted traffic.
TEE Listen Port, Real Port Not used for Windows. Under UNIX,
replaced by the K-TAP DB Real Port when
the K-Tap monitoring mechanism is used.
Required when the TEE monitoring
mechanism is used. The Listen Port is the
port on which S-TAP listens for and accepts
local database traffic. The Real Port is the
port to which S-TAP forwards traffic.

K-TAP DB Real port Not used for Windows. Under UNIX, used
only when the K-Tap monitoring mechanism
is used. Identifies the database port to be
monitored by the K-Tap mechanism.
Client IP/Mask A list of Client IP addresses and corresponding masks to specify
which clients to monitor. If the IP address is the same as the IP address for the
database server, and a mask of 255.255.255.255 is used, only local traffic will be
monitored. An address/mask value of 1.1.1.1/0.0.0.0 will monitor all clients.

When editing the list, to create an additional Client IP/Mask entry, click Add. To
delete the last Client IP/Mask entry, click Delete.

If Client IP/Mask is specified, then Exclude Client IP/Mask cannot also be
specified at the same time.
Exclude Client IP/Mask A list of Client IP addresses and corresponding masks to
specify which clients to exclude. This option allows you to configure the S-TAP to
monitor all clients, except for a certain client or subnet (or a collection of these).

When editing the list, to create an additional Exclude Client IP/Mask entry, click
Add. To delete the last Exclude Client IP/Mask entry, click Delete.

If Exclude Client IP/Mask is specified, then Client IP/Mask cannot also be
specified at the same time.
Connect to IP The IP address for S-TAP to use to connect
to the database. Some databases accept local
connection only on the real IP address of the
machine, and not on the default (127.0.0.1).
DB Install Dir UNIX only. DB2, Informix, or Oracle: Enter
the full path name for the database
installation directory, for example,
/home/oracle10. All other database types:
enter NULL.

Process Name For a Windows Server: For Oracle or MS SQL Server only, when
named pipes are used. For Oracle, the list usually has two entries:
oracle.exe,tnslsnr.exe. For MS SQL Server, the list is usually just one entry:
sqlservr.exe.

For a UNIX Server: For a DB2, Oracle, or Informix database, enter the full path
name for the database executable.

For example:
v Oracle: /home/oracle10/prod/10.2.0/db_1/bin/oracle
v Informix: /INFORMIXTMP/.inf.sqlexec (applies to all Informix platforms but
Linux). For Informix with Linux, example: /home/informix11/bin/oninit
v MYSQL: mysql
v PostgreSQL: POSTGRES.EXE, PG_CTL.EXE
v Teradata: GTWGATEWAY.EXE
v For all other database types, enter NULL
Encryption Activate ASO encrypted traffic for Oracle
(versions 9, 10 and 11) and Sybase on Solaris
or HPUX.
Named Pipes Windows only. Specifies the name of the
named pipe used by MS SQL Server for
local access. If a named pipe is used, but
nothing is specified here, S-TAP attempts to
retrieve the named pipe name from the
registry.
Instance Name The database instance name is required for:
v MS SQL Server 2005 using encryption, or
MS SQL Server using Kerberos
Authentication (MSSQLSERVER is the
default)
v Oracle using database encryption (there is
no default)
DB2 Shared Memory The following three fields apply only when
DB2 is selected as the database type. If
shared memory connections are monitored,
the following three parameters must be set.
Adjustment Default is 20
Client Position Default is 61440
Size Default is 131072
Identifier Identifier is an optional field that can be
used to distinguish inspection engines from
one another. If you do not provide a value
for this field, Guardium will auto populate
the field with a unique name using the
database type and GUI display sequence
number.

Add When adding an inspection engine, be sure
to click Add in the Add Inspection Engine
panel before clicking Apply in the
Configuration panel.

Note: For Informix versions 7 or 11, the Informix version must be set for the
inspection engine through the use of the API (create_stap_inspection_engine) or
through editing the guard_tap.ini file (informix_version parameter).

Note: The Hadoop protocol facilitates the auditing of big data datacenters. Apache
Hadoop is the open source software framework, used to reliably manage large
volumes of structured and unstructured data.
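
For example, to declare Informix version 11 for an inspection engine by editing
guard_tap.ini, the inspection engine section would include a line such as the
following (a sketch only; the exact placement depends on how the inspection
engine section is laid out in your file, and the version value shown is illustrative):

informix_version=11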

Ignore response at inspection level

Use this function to ignore all database responses at the S-TAP level, without
sending anything to the Guardium system.

In certain environments where you are only interested in client transactions, this
function saves bandwidth and processing time for the S-TAP and the Guardium
system.

Use this function for an easier configuration for ignoring unwanted responses from
the database, without loading the network.

[TAP] section DB_IGNORE_RESPONSE

Database types can be listed comma-separated, or ALL can be specified to ignore
responses from all types of databases; see the following examples. The default is
none.

If it is set to none, this means that no response is ignored.

If it is set to all, this means that the responses from all DBs are ignored.

DB_IGNORE_RESPONSE=MSSQL,SYBASE,DB2

DB_IGNORE_RESPONSE=all

DB_IGNORE_RESPONSE=none

DB_IGNORE_RESPONSE_BYPASS_BYTES, the default is 1000

DB_IGNORE_RESPONSE_BYPASS_TIMEOUT, the default is 5 seconds

The following are valid as database types: ALL, CIFS, FTP, DB2_EXIT, PGSQL,
MSSQL_NP, MSSQL, MYSQL, TRD, SYBASE, INFORMIX, DB2, ORACLE,
KERBEROS.
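
For example, a [TAP] section that ignores responses from MS SQL Server and
DB2, with the bypass settings left at their stated defaults, might contain (a sketch
only):

DB_IGNORE_RESPONSE=MSSQL,DB2
DB_IGNORE_RESPONSE_BYPASS_BYTES=1000
DB_IGNORE_RESPONSE_BYPASS_TIMEOUT=5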

To add CIFS/FTP inspection, use fixed ports for CIFS or FTP. FTP always uses port
21, CIFS uses port 139 or port 445.

Configuring inspection engines for FTP traffic is easy. For net inspection, simply
select Protocol FTP, enter port 21, and enter the IPs/Masks as you normally would.

For S-TAP, select Protocol FTP, enter 21 for all ports, and enter the IPs/Masks (DB
Install Dir, Process Name, and Named Pipe are not required).

FTP Sniffing is the ability to sniff FTP traffic between a client and server as if it
were database traffic. With FTP, any machine can be a client (UNIX or Windows)
and also any system can also be a server, as long as there is a valid user to login
with. Note that there is no local FTP. However, FTP can be sniffed by either
network inspection or by network S-TAP sniffing. FTP traffic will typically appear
on port 21. In GDM_CONSTRUCT, FTP traffic will appear as "_FTP" followed by
the RAW FTP command that was sent (note that the raw FTP command is different
from the actual FTP that was sent).

CIFS Sniffing (or Windows File Share Sniffing) is the ability to sniff the sharing of
Windows files between a client and server as if it were database traffic. When
sharing out directories and files in Windows, this sharing system is based on the
smb or Samba language, which the Guardium system sniffs and translates into a
CIFS language. Use the smbclient function to sniff Windows File Share traffic, as
well as UNIX connections to Windows shared folders. Note that there is no local CIFS.
However, CIFS can be sniffed by either network inspection or by network
Windows S-TAP sniffing. Also note that there is no such thing as a CIFS Server.
Any Windows machine can either share files or access shared files, so any
Windows machine can be a client or server. CIFS traffic will typically appear on
either port 139 or port 445. In GDM_CONSTRUCT, CIFS traffic will appear as
"_CIFS" followed by the CIFS command that was sent.

S-TAP Control - Reload Last Good Configuration

After changing an S-TAP configuration, you may notice its status light in the
S-TAP Control panel turn yellow. A yellow light means that there is a mismatch
between the configuration on the Guardium system and the configuration on the
S-TAP. A temporary yellow light is acceptable, as it takes some time for the S-TAP
to receive and approve the new configuration. If the yellow light persists, it usually
means that the S-TAP did not accept the new configuration and reverted to the last
known good configuration.

When an error has occurred, you can review the errors by opening Reports >
Real-time Guardium Operational Reports > S-TAP Events. In most cases the
event log will contain error messages indicating what was wrong with the new
configuration. See Viewing the S-TAP Events Panel for a description of error
messages.

S-TAP and A-Tap Configuration - required parameters for monitoring DB2 shared
memory on Linux

The DB2-specific S-TAP and A-Tap parameters apply only when all of the
following conditions are met:
v The DB2 server is running under Linux.
v The K-Tap monitoring mechanism is installed.
v Clients connect to DB2 using shared memory.

The DB2-specific S-TAP parameters are set on the Inspection Engine definition
panel.

Set the Position parameter value according to the shared memory size used by
db2bp, as follows:

v Position=61440 (if db2bp uses 131072)
v Position=671744 (if db2bp uses 327680)
v Position=1064960 (if db2bp uses 524288)

If you do not know the shared memory size used by db2bp, you can use the
following procedure to find it.

How to find the db2 shared memory offsets

The following table summarizes the required parameters used both for S-TAP and
A-Tap when configured to monitor DB2 shared memory on Linux.

Parameter: Packet header size. S-TAP name: db2_fixed_pack_adjustment. A-TAP
name: db2_header_offset. Default value: 20. Comments: The default value is tested
for DB2 8.2 and newer on various 64-bit platforms. Other versions of DB2 and
32-bit platforms may need a different offset. Typical values are 16 and 12.

Parameter: Client I/O area offset. S-TAP name: db2_shmem_client_position. A-TAP
name: db2_c2soffset. Default value: 61440. Comments: This parameter is derived
from the ASLHEAPSZ DB2 parameter.

Parameter: DB2 shared memory segment size. S-TAP name: db2_shmem_size.
A-TAP name: db2_shmsize. Default value: 131072. Comments: This parameter is
determined empirically.

Computing Client I/O area offset (db2_shmem_client_position)


1. Open a new bash shell as the DB2 instance user.
2. Run the ps -x command to verify that the db2bp command processor is not
currently running for this shell. You should not see a command called db2bp
running. If it is running, either kill it or start a new shell.
3. Run the following command:
db2 get database manager configuration | awk '/ASLHEAPSZ/{print $9 * 4096}'

The output is the required value for db2_shmem_client_position.

The ASLHEAPSZ parameter is specified in 4K memory pages in DB2. It
determines the size of the application support layer heap. As shown in the
previous diagram, the client I/O area starts immediately after the application heap
in the Agent/Application shared memory segment.
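
For example, if the command reports that ASLHEAPSZ is 15 (a typical setting),
the computation is 15 * 4096 = 61440, which matches the default value of
db2_shmem_client_position.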

Note: The theory behind this computation is based on the DB2 Administration
Guide: Performance document.

Finding DB2 shared memory segment size (db2_shmem_size)

A-TAP and K-TAP rely on the size for identification of the Application/Agent
shared memory segments. These segments are then tapped for C2S and S2C
packets. For information on finding this value, see Finding the DB2 shared
memory segment size in “UNIX S-TAP” on page 50.

Editing the S-TAP configuration file


There are several ways to modify the configuration of an S-TAP after it is installed.

Sometimes a user is unable to make a decision during the process of installing an
S-TAP, or makes the wrong decision and the mistake goes undetected until after
the installation process is complete. For instance, a user might forget to type in an
IP address, or might use the wrong IP address, when defining a SQL Guard IP.
These types of mistakes can be remedied by editing the S-TAP configuration file
or by modifying the S-TAP configuration.

If you have installed your S-TAP by using the Guardium Installation Manager
(GIM), you can update some parameters through the GIM UI or CLI. If you cannot
use any of these methods to update the parameters, you can edit the configuration
file on the data server.

The following tables provide a detailed description of the S-TAP parameters. They
indicate which parameters can be updated through the Guardium UI and by using
GIM.

If it is necessary to modify the configuration file from the database server, follow
the procedure outlined. The file contains comments that explain many of the
parameters.
1. Log on to the database server system using the root account.
2. Stop the S-TAP.
3. Make a backup copy of the configuration file: guard_tap.ini. It is located in
one of the following directories, depending on the server operating system
type:
v Windows: \Windows\System32
v Unix: /usr/local/guardium/guard_stap
4. Open the configuration file in a text editor.
5. Edit the file as necessary.
6. Save the file.
7. Restart the S-TAP and verify that your change has been incorporated.
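
For example, on a UNIX database server, the backup copy in step 3 might be
made with a command like the following (the backup file name is just an
illustration):

cp /usr/local/guardium/guard_stap/guard_tap.ini /usr/local/guardium/guard_stap/guard_tap.ini.bak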

Windows S-TAP parameters


The following tables define the parameters that are used to control the behavior of
S-TAPs on Windows.

The tables provide this information for each parameter:


Parameter
The name of the parameter
Version
If the parameter was introduced in Version 8.0 or later, the oldest version
with which the parameter can be used.
GUI Yes if the parameter can be modified through the Guardium user interface,
blank if not.

Default value
The default value of the parameter.
Description
The meaning of the parameter, including descriptions of possible values.

Note: If a parameter’s description begins with "Advanced" then you should
modify the value only if you are an expert user or you have consulted with IBM.

SQLGuard parameters
These parameters describe a Guardium system to which this S-TAP can connect.
Table 18. S-TAP configuration parameters for a Guardium system
Parameter   Version   GUI   Default value   Description
sqlguard_ip NULL IP address or hostname of the Guardium
system that will act as a host for the S-TAP
primary 1 Indicates if the server is a primary server:
Windows: 0=NO, 1=YES (1). UNIX: 1=Primary,
2=Secondary, 3=tertiary, etc. If
participate_in_load_balancing=1, there must be
at least one primary server. If
participate_in_load_balancing=0, there must be
exactly one primary server.
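
As an illustration of how these parameters appear in guard_tap.ini (a sketch only;
the IP address is a placeholder, and the section grouping follows the SQLGUARD
section referred to elsewhere in these tables):

[SQLGUARD]
sqlguard_ip=10.10.9.248
primary=1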

General parameters
These parameters define basic properties of the S-TAP running on a Windows
server and the server on which it is installed, and do not fall into any of the other
categories.

These parameters are stored in the [VERSION] section of the S-TAP properties file.
Table 19. S-TAP configuration parameters in the [VERSION] section
Parameter   Version   GUI   Default value   Description
stap_client_build Read only. The build version of the installed
S-TAP
protocol_version Read only. The version of the Guardium system

These parameters are stored in the [TAP] section of the S-TAP properties file.
Table 20. S-TAP configuration parameters in the [TAP] section
Parameter   Version   GUI   GIM   Default value   Description
tap_type Read only. STAP for UNIX, WTAP for
Windows
tap_version Read only. The version of S-TAP installed on
the server
tap_ip IP address or hostname for the database
server system on which S-TAP is installed
all_can_control Yes 0 0=S-TAP can be controlled only from the
primary Guardium system. 1=S-TAP can be
controlled from any Guardium system.

participate_in_load_balancing 0 Controls load balancing to Guardium
systems: 0=NO, 1=YES STAP balances all
traffic to the Guardium systems listed by
Client IP, 2=REDUNDANCY fully mirrored
STAP will send all traffic to all monitoring
SQLGUARD machines. 3=hardware load
balancing is in use. To designate a Guardium
system as a primary server, use the primary
property in the SQLGUARD section. If this
parameter is set to 0, and you have more
than one Guardium system monitoring
traffic, then the non-primary Guardium
systems are available for failover.
connection_timeout_sec 60 Number of seconds after which the S-TAP
will consider a Guardium server to be
unavailable. It can have any integer value.
use_tls Yes Yes 0 1=use SSL to encrypt traffic between the
agent and the Guardium system. 0=do not
encrypt.
failover_tls Yes Yes 1 1= If ssl connection is not possible for any
reason, fail over to using non-secure
connection. 0=use only secure connections.
number_of_processors 4 Read only. Number of processors on the
machine
alternate_ips NULL Comma-separated list of alternate or virtual
IP addresses used to connect to this database
server. This is used only when your server
has multiple network cards with multiple IPs,
or virtual IPs. S-TAP only monitors traffic
when the destination IP matches either the
S-TAP Host IP defined for this S-TAP, or one
of the alternate IPs listed here, so we
recommend that you list all virtual IPs here.
db2_tap_installed 0 Set to 1 for sniffing DB2 shared memory
traffic. Starts the DB2 TAP Service when set
to 1.
db2_exit_driver_installed Yes DB2 integration with S-TAP: set to 1 to
enable DB2 Exit library integration. 1) Lets
S-TAP capture all DB2 traffic directly from
the DB2 engine. Note that this applies only
to specific DB2 releases, 10.1 and onwards. 2)
When using this method, Firewall and
Scrub/Redact functionality are not supported.
Also, stored procedures will not be captured.
3) It picks up all DB2 traffic, regardless of
encryption/network protocol. 4) This solution
simplifies the S-TAP configuration for
customers that will deploy this version of
DB2, and gives them native DB2 support.
db2_shmem_driver_installed Yes This parameter has been deprecated and
replaced by db2_tap_installed. Note that this
parameter is always set to 0 after installation.

db2_shmem_driver_level Deprecated
dc_collect_freq 9 24 Specifies the frequency of collection in hours.
Minimum is 1, maximum is 24. GuardiumDC
is a service that collects updates of user
accounts (SIDs and usernames) from the
primary domain controller and then signals
the changes to Guardium_S-TAP to update
the S-TAP internal SID/UserName map. If
S-TAP cannot find a resolved SID in the map,
it tries to get it from the primary Domain
Controller, in which case S-TAP logs a
message into the debug log (level 7): The
account name *** has been retrieved for SID ***.
dc_collect_maxusers 9 200,000 The maximum number of users to collect.
Minimum is 10,000.
db_ignore_response 9 Ignore response at inspection level. Use this
function to ignore all database responses at
the S-TAP level, without sending anything to
the Guardium system. In certain
environments, where only interested in client
transactions, this function will save
bandwidth and processing time for the S-TAP
and the Guardium system. Use this function
for an easier configuration for ignoring
unwanted responses from the database,
without loading the network. [TAP] section
DB_IGNORE_RESPONSE Database types
may be listed comma separated or ALL can
be specified to ignore responses from all
types of databases, for example,
DB_IGNORE_RESPONSE=ALL or
DB_IGNORE_RESPONSE=MSSQL,DB2.
Supported DB types: ALL, MSSQL_NP,
MSSQL, MYSQL, TRD, PGRS, MSSYB,
ORACLE, DB2, DB2_EXIT, INFORMIX,
KERBEROS, FTP, CIFS.
domain_controller 9 The name of the specific controller from
which the SID/usernames map should be
read.
high_resolution_timer 0 0: send time stamps in milliseconds. 1: send
time stamps in microseconds, but use
milliseconds system timer (to reduce system
performance hit - multiply milliseconds by
1000). 2: send time stamps in microseconds,
use high resolution windows timer (most
accurate). For cases 1 and 2, the S-TAP will
indicate to the Guardium system that micro
seconds are sent, by setting the reserved byte
in PacketData to 1.
buffer_file_size 50 Size in MB of the buffer allocated for the
packets queue

buffer_file_name The full path of the memory mapped file if
BUFFER_MMAP_FILE=1. Default is WSTAP
working folder/StapBuffer/STAP_buffer.dtx
buffer_mmap_file 9 0 1=memory mapped file option. 0=virtual
memory allocation
software_tap_host Windows only - Identifies the database server
host on which S-TAP is installed. It can be an
IP address or a name recognized by the
DNS server. There is no default.
tcp_alive_message 9 0 1=send alive messages to g-machine every 5
seconds relying on existing TCP connection.
0=send alive messages by TCP in response to
UDP messages from g-machine.
stack_trace_file_mode Similar to dump options
minimum_heartbeat_interval 180 Windows only - The number of seconds for
S-TAP to wait for a heartbeat from the active
Guardium system before attempting to
switch to the next server on its list of
Guardium systems.
tracefiles_dir The Directory in which access tracer files will
be stored. The default is INSTALLDIR.
buffer_file_creation_max 3 Not used
capture_client_traffic 0 'Client' mode, if set to 1: switches S2C and
C2S packets and catches all outgoing (LHMON)
traffic from the database server where S-TAP is installed.
compression_level 0 Compression level, from 1 to 9. 0=no
compression.
disable_shared_memory_if_turned_on 0
file_sniffer_frequency 45 Windows only - In seconds, determines how
often S-TAP checks for new SQL trace login
information. Also, this value defines the
frequency for registration attempts with a
Guardium system if a previous attempt was
not successful. In addition, this value defines
the frequency for checking MS SQL Server
configuration parameters (see the description
of the alert_on_shared_memory_enabling and
disable_shared_memory_if_turned_on
properties).
maximum_packet_num 300,000 Deprecated
min_bytes_to_compress 500 Advanced. Minimum size of message to
compress.
network_namedpipes Yes 0 Advanced
not_send_to_sqlguard 0 Advanced. Send nothing to the Guardium
system.
recv_level 0 Advanced.
remote_messages Yes 1 1=Send messages to the active Guardium
system. 0=Do not send messages

send_level 0 Advanced. Used for thread prioritization.
sniffed_udp_ports 88 Deprecated.
synch_flag 1 Read only. Indicates whether parameters are
synchronized with the UI.
tap_dbserver_names
tap_hb_udp_port 8075 Windows only. The UDP port number on
which heartbeats and data are sent to S-TAP
from any Guardium system that will act as a
server for this S-TAP
tap_min_heartbeat_level 180 Number of seconds after which the S-TAP
should fail over
tcp_buffer_size 60000 Advanced. Minimum number of bytes to
collect before sending a message to the
Guardium system
time_network 0 Advanced. Used for debug only.
user_collector_level 0 Advanced.
web_server_connections 1 Maximum number of DB connections by .net
app
web_server_installed 0 Deprecated. Formerly used to enable IIS tap.
web_server_port 9000 Port for web-server
guardium_ca_path NULL Location of the Certificate Authority
certificate.
sqlguard_cert_cn NULL The common name to expect from the
Sqlguard certificate.
guardium_crl_path NULL The path to the Certificate Revocation list file
or directory.
tap_failover_session_quiesce 240 The number of seconds after failover, when
unused sessions in the failover list from the
previous active servers can be removed from
the current active server,
tap_failover_session_size 8192 size in MB of the failover session list. 0=no
failover sessions should be saved
db_ignore_response NULL Comma-separated list of db types to be
response-ignored. If it is set to none, no
response is ignored; if it is set to all, the
responses from all DBs are ignored.

Inspection engine parameters


These parameters affect the behavior of the inspection engine that the S-TAP uses
to monitor a data repository on a Windows server.

These parameters are stored in the Database section of the S-TAP properties file,
with the name of a data repository. There can be multiple sections in a properties
file, each describing one inspection engine used by this S-TAP.

Table 21. S-TAP configuration parameters for an inspection engine on Windows
Parameter   Version   GUI   Default value   Description
db_type Yes The type of data repository being monitored
instance_name Yes The name of the instance of the database on this
server.
port_range_start Yes Starting port range specific to the database
port_range_end Yes Ending port range specific to the database
named_pipe Yes Name of the pipe
networks Yes Identifies the clients to be monitored, using a
list of addresses in IP address/mask format:
n.n.n.n/m.m.m.m. There is no default. To select
all clients, omit the list of addresses. To select
local traffic only, use 127.0.0.1/255.255.255.255 If
an improper IP address/mask is entered, the
S-TAP will not start.
exclude_networks Yes A list of client IP addresses and corresponding
masks to specify which clients to exclude. This
option allows you to configure the S-TAP to
monitor all clients, except for a certain client or
subnet (or a collection of these). When editing
the list, to create an additional Exclude Client
IP/Mask entry, click the Add button. To delete
the last Exclude Client IP/Mask entry, click the
Delete button.
db_install_dir Yes NULL Unix only. DB2, Informix, or Oracle: Enter the
full path name for the database installation
directory. For example: /home/oracle10 All
other database types: enter NULL
tap_db_process_names Yes Database's running executables that are to be
monitored
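
As a rough sketch of what one such section might contain (the section header
name shown here is purely illustrative, since the actual section name is the name
of the data repository as described above; the port and network values are
placeholders):

[DB2_INSTANCE1]
db_type=DB2
port_range_start=50000
port_range_end=50000
networks=127.0.0.1/255.255.255.255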

These additional parameters are used with IBM DB2 databases:


Table 22. Additional S-TAP configuration parameters for a DB2 inspection engine
Parameter   Version   GUI   GIM   Default value   Description
db2_client_offset 8 Yes 61440 The offset to the client's portion of the shared
memory area. The client offset can be
calculated by taking the value of the DB2
parameter ASLHEAPSZ and multiplying by
4096 to get the appropriate offset. The default
for this parameter is 61440 decimal. This
parameter is calculated by taking the DB2
database configuration value of ASLHEAPSZ
and multiplying by 4096. To get the value for
ASLHEAPSZ, execute the following DB2
command: db2 get dbm cfg and look for the
value of ASLHEAPSZ. This value is typically
15 which yields the 61440 default. If it's not
15, take the value and multiply by 4096 to get
the appropriate client offset.

db2_fix_pack_adjustment 8 Yes 80 The offset to the server's portion of the shared
memory area. Offset to the beginning of the
DB2 shared memory packet; depends on the
DB2 version: 32 in earlier versions, 80 in 8.2.1
and later.
db2_log_size 8 Yes 25 The maximum file size, in megabytes, that the
functional DLL can keep buffered before it
starts throwing log entries away.
db2_shmem_client_position Yes It should be set to monitor DB2 shared
memory traffic.
db2_shmem_size Yes 131072 It should be set to monitor DB2 shared
memory traffic.

Firewall parameters
These parameters affect the behavior of the S-TAP with respect to the firewall.
Table 23. S-TAP configuration parameters for firewall
Parameter   Version   GUI   Default value   Description
firewall_installed 0 Firewall feature enabled. 1=yes, 0=no.
firewall_timeout 10 Time in seconds to wait for a verdict from the
Guardium system if timed out. Look at
firewall_fail_close value to know whether to
block or allow the connection. The value can be
any integer value.
firewall_fail_close 0 If the verdict does not come back from the
Guardium system and the firewall_timeout is
passed, then if firewall_fail_close=0 the
connection will go through; if
firewall_fail_close=1 the connection will be blocked.
firewall_default_state 0 What triggers the start of the firewall mode
0=event triggering a rule in the installed policy
happens 1=start in firewall mode enabled
regardless of a triggering event (0). This flag
forces the watch (or enabling) of the firewall
regardless of any rule, but specific actions
(DROP etc) still happen only when triggered by
a rule.
firewall_force_watch 9.0 NULL When the firewall feature is enabled and
firewall_default_state is 0, the session will be
watched automatically when its client IP
matches a list of IP/MASK values. The list
itself is separated with commas, for example,
1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
firewall_force_unwatch 9.0 NULL When the firewall feature is enabled and
firewall_default_state is 1, the session will be
unwatched automatically when its client IP
matches a list of IP/MASK values. The list
itself is separated with commas, for example,
1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2,
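
For example, enabling the firewall feature in the [TAP] section with the
documented defaults, plus an illustrative force-watch subnet, might look like this
(a sketch only):

firewall_installed=1
firewall_timeout=10
firewall_fail_close=0
firewall_default_state=0
firewall_force_watch=10.10.9.0/255.255.255.0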

Application server parameters
These parameters affect the behavior of the S-TAP when it is installed on a client
machine rather than on the database server.

Note: All of these parameters are deprecated for use on Windows servers and
should not be modified. They are listed here because you might see them in your
configuration file.
Table 24. S-TAP configuration parameters for application servers
Parameter   Version   GUI   Default value   Description
appserver_installed 0 Deprecated (for Windows only). 0 is default,
S-TAP acts as normal. 1=S-TAP is set in 'client
mode', switches S2C and C2S packets to reflect
S-TAP being installed on client, not db server.
Also, if 1, checks to see if the other appserver_*
parameters are filled in, and if so, examines http
packets on the supplied port to grab session
information about the end-user of the
java-application that resides on the client system.
appserver_ports Yes 8080 Deprecated (for Windows only).
Comma-separated list of ports on which the java
application is accessed via web browser. For
example, if the URL to a certain estore is
http://woodpecker:8888/estore, then 8888 would
be the value to supply for this parameter.
appserver_login_pattern Yes Deprecated (for Windows only).
Comma-separated list of strings specifying the
login pattern passed to the application. This is
the pattern that the java application is passed
that indicates a login of a user.
appserver_username_prefix Yes Deprecated (for Windows only).
Comma-separated list of strings specifying the
prefix to the username for a given session. This
is the pattern the java application uses to
indicate the username of the given session.
appserver_username_postfix Yes Deprecated (for Windows only).
Comma-separated list of strings specifying the
postfix to the username for a given session. This
is the pattern (or character) used by the java
application to indicate the end of the value for
the given variable that indicates the username.
appserver_session_pattern Yes Deprecated (for Windows only).
Comma-separated list of strings that specify the
start of an end-user session using a particular
database session.
appserver_session_prefix Yes Deprecated (for Windows only).
Comma-separated list of strings specifying
where the session id starts
appserver_session_postfix Yes Deprecated (for Windows only).
Comma-separated list of strings specifying
where the session id ends.
appserver_usersess_pattern Yes Deprecated (for Windows only).
Comma-separated list of strings specifying the
identifier for marking which end-session a given
connection is continuing with.

appserver_usersess_prefix Yes Deprecated (for Windows only).
Comma-separated list of strings specifying what
identifies/precedes the session_id in a given
usersess indicator packet.
appserver_usersess_postfix Yes Deprecated (for Windows only).
Comma-separated list of strings specifying
where the session id ends.

Debug parameters
These parameters affect the behavior of S-TAP debugging.

These parameters are stored in the [DEBUG_OPTIONS] section of the S-TAP
properties file:
Table 25. S-TAP configuration parameters for debugging
Parameter   Version   GUI   Default value   Description
debug_buffer 1 1=log the contents of local packets
debug_firewall 1 1=log firewall events

These parameters are stored in the [TAP] section of the S-TAP properties file:
Table 26. More S-TAP configuration parameters for debugging
Parameter   Version   GUI   Default value   Description
debug_file_name Location of the S-TAP debug file. The default
location is c:/guardium/stap.txt
debug_max_file_size 200
debuglevel 0 Level of debug messages to store. Leave at 0
unless directed by IBM Support.
0 Only critical error information
1 All previous messages plus repeatable
critical error information
2 Not used
3 All messages from level 1, plus brief
information about packets sent to a
Guardium system
4 All messages from level 3, plus local
sniffing log
5 All messages from level 4, plus network
sniffing log
6 All messages from level 5, plus
heartbeat receiving log
7 All messages from level 6, plus
miscellaneous debugging information

dump_file_mode 0 Enables capture of dump files if S-TAP crashes:
0 - no crash dumps generated (default).
1 - crash dumps generated, written to the file
"stap.diag", which is created in the S-TAP
working directory.
2 - crash dumps generated, written to the file
"stap-TIMESTAMP.diag", which is created in the
S-TAP working directory, where TIMESTAMP
includes a time and date string to identify when
the crash dump was generated.
The dump file is opened every time the S-TAP
starts when the parameter is not zero, but is
empty if there was no crash. When using the
single file dump option (1), S-TAP copies any
existing stap.diag file to a backup file before
overwriting the stap.diag file.
stack_trace_file_mode Similar to dump_file_mode.
kernel_debug_level 0
syslog_messages 1 1=send messages to syslog (for UNIX) or the
EventViewer (for Windows). 0=do not send
messages.
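
For example, to raise the S-TAP debug level temporarily on a Windows server (a
sketch only; keep debuglevel at 0 unless directed by IBM Support, as noted
above), the relevant guard_tap.ini entries might look like this:

[TAP]
debuglevel=3
debug_file_name=c:/guardium/stap.txt

[DEBUG_OPTIONS]
debug_buffer=1
debug_firewall=1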

Configuration Auditing System (CAS) parameters


These parameters affect the behavior of CAS on this system.
Table 27. S-TAP configuration parameters for CAS
Parameter   Version   GIM   GUI   Default value   Description
cas_task_baseline task_baseline File name for task baseline. Deprecated.
cas_task_checkpoint Yes task_checkpoint
cas_client_baseline client_baseline File name for client baseline. Deprecated.
cas_client_checkpoint Yes client_checkpoint
cas_checkpoint_period Yes 60 Interval time in seconds for the check
cas_fail_over_file Yes fail_over_file
Name of file containing outgoing messages
buffer
cas_fail_over_file_size_limit Yes 50000 Size of fail over file
cas_max_reconnect_attempts Yes 5000 Number of reconnect attempts when
connection is lost
cas_reconnect_interval Yes 60 Wait time in seconds between reconnect
attempts
cas_raw_data_limit Yes 1000 Limit in kilobytes on size of raw data sent
to Guardium system
cas_md5_size_limit Yes 1000 Largest file size in kilobytes on which to
calculate MD5SUM
cas_command_wait 8.0 300 Wait time in seconds before killing a
long-running data collection process
cas_server_failover_delay 8.0 60 Wait time in minutes before trying to
connect to another Guardium system

cas_server_port Windows only.

Driver parameters
These parameters affect the behavior of several drivers with which the S-TAP
interacts.
Table 28. S-TAP configuration parameters for drivers
Parameter   Version   GUI   Default value   Description
lhmon_driver_installed 1 LHMON can be used for both local and
network TCP traffic. S-TAP on Windows uses
the lhmon driver for local traffic. Use 1 to
turn on, 0 to turn off, local traffic sniffing.
lhmon_driver_level 0 Advanced. Used for thread prioritization.
lhmon_for_network 1 Uses lhmon instead of winpcap for sniffing
network traffic if set to 1
lhmon_log_size 1 Advanced
nptrc_log_size 2 Advanced
shstrc_log_size 4 Advanced
ora_driver_installed 1 Set to 1 for sniffing Oracle ASO and SSL
traffic
ora_driver_level Yes 0 Advanced. Used for thread prioritization.
named_pipes_driver_installed 1 Set to 1 for local named pipes sniffing
named_pipes_driver_level Yes 0 Advanced. Used for thread prioritization.
shared_memory_driver_installed 0 Deprecated
shared_memory_driver_level Yes 0 Advanced. Used for thread prioritization.
krb_mssql_driver_installed 2 Set to 1 for sniffing MSSQL SSL traffic and
Kerberos tickets. Set to 2 if you want to
collect just MSSQL decrypted traffic but not
Kerberos tickets, which saves the time spent
collecting the domain user names when
starting the program. Note that this
parameter is always set to 0 after installation.
krb_mssql_driver_level 0
krb_mssql_driver_nonblocking 0 1=get domain user names from the domain
controller in a separate thread. In this case
the first packet with the new user does not
resolve the user SID into domain user name.
krb_mssql_driver_user_collect_time 30 Time limit for collecting SIDs. In case the old
method is used for pre-collecting
SID/usernames map
(KRB_MSSQL_DRIVER_INSTALLED=1) from
the domain controller, TAP property
KRB_MSSQL_DRIVER_USER_COLLECT_TIME
might be used for limiting the time of
communicating with the domain controller at
STAP start-up (default is 30 sec).

krb_mssql_driver_ondemand 0 Deprecated in V9.0 with V9.0 GPU patch 50.
Set to 1 if you want to save time by
resolving user SIDs into domain user names
only for Kerberos tickets from new users for
the running STAP instance.
sybase_driver_installed 0 Deprecated
sybase_driver_level Yes 0 Deprecated
wpcap_driver_installed 0 Deprecated
wpcap_driver_level Yes 0 Deprecated

UNIX S-TAP parameters


The following tables define the parameters that are used to control the behavior of
S-TAPs on UNIX.

The tables provide this information for each parameter:


Parameter
The name of the parameter
Version
If the parameter was introduced in Version 8.0 or later, the oldest version
with which the parameter can be used
GUI Yes if the parameter can be modified through the Guardium user interface,
blank if not
GIM Yes if the parameter can be modified through the Guardium Installation
Manager, blank if not
Default value
The default value of the parameter
Description
The meaning of the parameter, including descriptions of possible values

Note: If a parameter’s description begins with "Advanced" then you should
modify the value only if you are an expert user or you have consulted with IBM.

SQLGuard parameters
These parameters describe a Guardium system to which this S-TAP can connect.
Table 29. S-TAP configuration parameters for a Guardium system
Parameter   Version   GUI   GIM   Default value   Description
sqlguard_ip NULL IP address or hostname of the Guardium
system that will act as a host for the S-TAP
sqlguard_port 16016 Read only. Port used for S-TAP to connect to
Guardium system

primary 1 Indicates if the server is a primary server:
Windows: 0=NO, 1=YES (1). UNIX:
1=Primary, 2=Secondary, 3=tertiary, etc. If
participate_in_load_balancing=1, there must
be at least one primary server. If
participate_in_load_balancing=0, there must
be exactly one primary server.
connection_pool_size 8.0 Number of opened connections from S-TAP to
the sniffer. When TLS is enabled, the feature is
called Multi TLS. The value is an integer.
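
As an illustration of enabling TLS for a UNIX S-TAP (a sketch only; the IP address
is a placeholder and the pool size is arbitrary), the entries involved are use_tls and
failover_tls in the [TAP] section and connection_pool_size alongside the
Guardium host definition in the SQLGUARD section:

[TAP]
use_tls=1
failover_tls=1

[SQLGUARD]
sqlguard_ip=10.10.9.248
connection_pool_size=4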

General parameters
These parameters define basic properties of the S-TAP running on a UNIX server
and the server on which it is installed, and do not fall into any of the other
categories.

These parameters are stored in the [VERSION] section of the S-TAP properties file.
Table 30. S-TAP configuration parameters in the [VERSION] section
Parameter   Version   GUI   GIM   Default value   Description
stap_client_build Yes The build version of the installed S-TAP
protocol_version The version of the Guardium system

These parameters are stored in the [TAP] section of the S-TAP properties file.
Table 31. S-TAP configuration parameters in the [TAP] section
Parameter   Version   GUI   GIM   Default value   Description
tap_type S-TAP for UNIX, W-TAP for Windows
tap_version The version of S-TAP that is installed on
the server
tap_ip IP address or hostname for the database
server system on which S-TAP is installed
devices Which interfaces to listen on. Use
ifconfig to find the correct interface.
all_can_control Yes 0 0=S-TAP can be controlled only from the
primary Guardium system. 1=S-TAP can
be controlled from any Guardium system.

participate_in_load_balancing 0 Controls load balancing to SQL Guard
servers:
v 0=NO
v 1=YES S-TAP balances all traffic to the
Guardium systems listed by Client IP
address
v 2=REDUNDANCY fully mirrored
S-TAP will send all traffic to all
monitoring Guardium systems
v 3=hardware load balancing is in use.
v 4=create extra threads and K-TAP
buffers to increase throughput. AIX
only. See Increasing S-TAP throughput
for details.
To designate an SQL Guard server as a
primary server, use the primary property
in the SQLGUARD section. If this
parameter is set to 0, and you have more
than one Guardium system monitoring
traffic, then the non-primary Guardium
systems are available for failover.
connection_timeout_sec 60 Number of seconds after which the S-TAP
will consider a Guardium server to be
unavailable. It can have any integer value.
use_tls Yes Yes 0 1=use SSL to encrypt traffic between the
agent and the Guardium system. 0=do not
encrypt.
failover_tls Yes Yes 1 1=if a TLS connection is not possible for any reason, fail over to a non-secure connection. 0=use only secure connections.
wait_for_db_exec Yes -1 Controls delayed loading when the database home is not accessible at S-TAP startup. See “Delayed cluster disk mounting” for details.
tap_run_as_root 8.2 Controls whether S-TAP runs as a regular user. 0=the tap runs as the 'guardium' user, 1=the tap runs as 'root'.
tap_buf_dir NULL Location of S-TAP buffer file. Default is
NULL and will be located at
$inidir/buffers
tap_log_dir NULL Location of S-TAP files. Default is NULL
and will produce log files in /tmp
number_of_processors 4 Read only. Number of processors on the
machine.

alternate_ips NULL Comma-separated list of alternate or
virtual IP addresses used to connect to
this database server. This is used only
when your server has multiple network
cards with multiple IPs, or virtual IPs.
S-TAP only monitors traffic when the
destination IP matches either the S-TAP
Host IP defined for this S-TAP, or one of
the alternate IPs listed here, so we
recommend that you list all virtual IPs
here.
tee_installed 0 1=Tee is in use. 0=Tee is not used.
tee_msg_buf_len 128 Size of the buffer for tee in MB. It can
take any integer value.
buffer_file_size 50 Advanced. Size in MB of the buffer
allocated for the packets queue. If the
buffer size is set too large, the S-TAP
might not be able to start. Files larger
than 2560 MB are known to cause this
problem.
buffer_file_name the full path of the memory mapped file
if BUFFER_MMAP_FILE=1. Default is
WSTAP working folder/StapBuffer/
STAP_buffer.dtx
buffer_mmap_file 9 0 1=memory mapped file option. 0=virtual
memory allocation
tracefiles_dir The Directory in which access tracer files
will be stored. The default is
INSTALLDIR.
compression_level 0 Advanced. compression level, from 1 to 9.
0=no compression.
min_bytes_to_compress 500 Advanced. Minimum size of message to
compress.
remote_messages Yes 1 1=Send messages to the active SQL Guard
host. 0=Do not send messages
tap_failover_session_size 8192 size in MB of the failover session list.
0=no failover sessions should be saved
tap_min_heartbeat_level 180 Number of seconds after which the S-TAP
should fail over
msg_aggregate_timeout 100 time in milliseconds at which K-TAP
sends the packets accumulated in its
buffer to the S-TAP. Can be any integer
value.
msg_count_watermark 64 Number of packets at which K-TAP sends
the packets accumulated in its buffer to
S-TAP. Can be any integer value.

log_program_name 0 To boost performance, you can disable collection of the source program name; if you do, you cannot tell which program was using the connection, but all other connection information (such as user and client address) remains available. 0=do not send the source program name to the Guardium system, 1=send the source program name to the Guardium system.
max_server_write_size 16384 The maximum number of bytes that the
S-TAP sends to the Guardium system at
once. Can be any integer value.
guardium_ca_path NULL Location of the Certificate Authority
certificate.
sqlguard_cert_cn NULL The common name to expect from the
Sqlguard certificate.
guardium_crl_path NULL The path to the Certificate Revocation list
file or directory.
tap_failover_session_size 1024 The maximum number of failover
sessions in the list per Guardium system.
0=failover feature is disabled. Can be any
integer value.
tap_failover_session_quiesce 240 The maximum idle time in minutes for session-list cleanup after a failover. If this many minutes pass after a failover, S-TAP considers the session 'dead': the session is closed and no longer takes part in failover, its policy is cleaned by S-TAP, and it is removed from the firewalled and scrubbed lists.
kerberos_plugin_dir NULL Location of Kerberos files
db_ignore_response NULL Comma-separated list of database types whose responses are ignored. If set to none, no responses are ignored; if set to all, responses from all databases are ignored.
stap_statistic 0 Interval at which S-TAP sends statistics about S-TAP/K-TAP to the sniffer; 0=do not send. Specify a positive integer for hours or a negative integer for minutes.
upload_feature 9.1 Yes 0 If set to 1, when a new K-TAP is built, it is uploaded automatically to the Guardium system to which this S-TAP reports. You can set this parameter by setting the GIM parameter STAP_UPLOAD_FEATURE.
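
As a minimal sketch, a [TAP] section that identifies the database server, encrypts traffic to the collector, and keeps the default failover behavior might look like the following; the host name and values are illustrative only:

[TAP]
tap_ip=dbserver01.example.com
use_tls=1
failover_tls=1
participate_in_load_balancing=0
buffer_file_size=50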



Inspection engine parameters
These parameters affect the behavior of the inspection engine that the S-TAP uses
to monitor a data repository on a UNIX server.

These parameters are stored in the database section of the S-TAP properties file,
with the name of a data repository. There can be multiple sections in a properties
file, each describing one inspection engine used by this S-TAP.
Table 32. S-TAP configuration parameters for an inspection engine on UNIX
Parameter   Version   GUI   GIM   Default value   Description
db_type Yes The type of data repository being monitored
port_range_start Yes Starting port range specific to the database
port_range_end Yes Ending port range specific to the database
networks Yes Identifies the clients to be monitored, using a
list of addresses in IP address/mask format:
n.n.n.n/m.m.m.m. There is no default. To
select all clients, omit the list of addresses. To
select local traffic only, use
127.0.0.1/255.255.255.255 If an improper IP
address/mask is entered, the S-TAP will not
start.
tee_listen_port Yes 12344 Not used for Windows. Under Unix, replaced
by the KTAP DB Real Port when the K-Tap
monitoring mechanism is used. Required
when the TEE monitoring mechanism is
used. The Listen Port is the port on which
S-TAP listens for and accepts local database
traffic. The Real Port is the port onto which
S-TAP forwards traffic.
connect_to_ip Yes 127.0.0.1 IP address for S-TAP to use to connect to the
database. When Tee is enabled, this
parameter will be the IP address for S-TAP to
use to connect to the database. Some
databases accept local connection on
127.0.0.1, while others accept local connection
only on the 'real' IP of the machine and not
on the default (127.0.0.1). When K-TAP is
enabled, this parameter will be used for
Solaris zones and AIX WPARs and it should
be the zone IP address in order to capture
traffic.
exclude_networks Yes A list of client IP addresses and
corresponding masks to specify which clients
to exclude. This option allows you to
configure the S-TAP to monitor all clients,
except for a certain client or subnet (or a
collection of these). When editing the list, to
create an additional Exclude Client IP/Mask
entry, click the Add button. To delete the last
Exclude Client IP/Mask entry, click the
Delete button.
real_db_port Yes 4100 Not used for Windows. Under Unix, used
only when the K-Tap monitoring mechanism
is used. Identifies the database port to be
monitored by the K-Tap mechanism.

db_install_dir Yes NULL Unix only. DB2, Informix, or Oracle: Enter
the full path name for the database
installation directory. For example:
/home/oracle10 All other database types:
enter NULL
db_exec_file Yes NULL For a Windows server: used for Oracle or MS SQL Server only, when named pipes are used. For Oracle, the list usually has two entries: oracle.exe,tnslsnr.exe. For MS SQL Server, the list is usually just one entry: sqlservr.exe. For a UNIX server: for a DB2, Oracle, or Informix database, enter the full path name of the database executable. For example: Oracle: /home/oracle10/prod/10.2.0/db_1/bin/oracle; Informix: /INFORMIXTMP/.inf.sqlexec (applies to all Informix platforms except Linux; for Informix on Linux, for example: /home/informix11/bin/oninit); MySQL: "mysql". For all other database types, enter NULL.
informix_version Yes 9 The version of the Informix database. Used to capture Informix shared-memory traffic.
encryption Yes 0 Activate ASO encrypted traffic for Oracle
(versions 9, 10 and 11) and Sybase on Solaris
or HPUX. Note: You may activate ASO
encryption through guardctl from a
command line for other database & operating
system combinations.
load_balanced Yes Yes 1 1=database participates in load balancing.
0=database does not participate in load
balancing.
unix_domain_socket_marker NULL Used to set marker for Oracle, Mysql and
Postgres UNIX domain sockets. Usually the
default value is correct, but when the named
pipe or UNIX domain socket traffic does not
work then we need to make sure this value
is set correctly. For example, for Oracle,
unix_domain_socket_marker should be set to
the KEY of IPC defined in tnsnames.ora. If it
is NULL or not set, the S-TAP will use
defined default markers identified as: *
MySQL - "mysql.sock" * Oracle - "/.oracle/" *
Postgres - ".s.PGSQL.5432"

instance_running 1 For Solaris zones and AIX WPARs, some zones may be down when S-TAP starts. Instead of stopping the entire S-TAP, if the wait_for_db_exec flag is nonzero, S-TAP checks periodically whether that zone can be brought up. If it can, S-TAP passes the relevant parameter to the K-TAP and sends a new configuration to the Guardium system; the configuration includes which databases are up or down. instance_running=1 means the zone is up; otherwise the zone is down and S-TAP periodically checks whether the instance is up.
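
For example, an inspection engine section for an Oracle instance monitored through K-TAP might resemble the following sketch; the section name, port, and paths are placeholders for your own repository name and installation:

[ORACLE_PROD]
db_type=ORACLE
port_range_start=1521
port_range_end=1521
real_db_port=1521
db_install_dir=/home/oracle10
db_exec_file=/home/oracle10/prod/10.2.0/db_1/bin/oracle
load_balanced=1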

These additional parameters are used with IBM DB2 databases:


Table 33. Additional S-TAP configuration parameters for a DB2 inspection engine
Parameter   Version   GUI   GIM   Default value   Description
db2_fix_pack_adjustment 8 Yes 80 The offset to the server's portion of the shared
memory area. Offset to the beginning of the
DB2 shared memory packet, depends on DB2
version, 32 in the earlier versions, 80 - in 8.2.1
and later.
db2_client_offset 8 Yes 61440 The offset to the client's portion of the shared memory area. This offset is calculated by taking the DB2 database manager configuration value ASLHEAPSZ and multiplying it by 4096. To get the value of ASLHEAPSZ, execute the DB2 command db2 get dbm cfg and look for ASLHEAPSZ. The value is typically 15, which yields the 61440 default. If it is not 15, multiply the value by 4096 to get the appropriate client offset.
db2_log_size 8 Yes 25 The maximum file size, in megabytes, that the
functional DLL can keep buffered before it
starts throwing log entries away.

db2bp_path Yes NULL On Solaris zones and AIX WPARs, providing the path of the db2bp executable makes it possible to activate uid_chain for more than one DB2 instance at a time. The value of this parameter should be the full path of the relevant db2bp as seen from the global zone/WPAR. For example, if the file is /data/db2inst1/sqllib/bin/db2bp and the zone is installed in /data/zones/oracle2nd/root/, then the full path to set in db2bp_path is /data/zones/oracle2nd/root/data/db2inst1/sqllib/bin/db2bp
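
For example, you might derive db2_client_offset as follows; the db2 get dbm cfg command is standard DB2, but the output line shown here is illustrative and its exact wording varies by DB2 version:

db2 get dbm cfg | grep -i ASLHEAPSZ
# Application support layer heap size (4KB)   (ASLHEAPSZ) = 15
# db2_client_offset = 15 * 4096 = 61440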

Firewall parameters
These parameters affect the behavior of the S-TAP with respect to the firewall.
Table 34. S-TAP configuration parameters for firewall
Parameter   Version   GUI   GIM   Default value   Description
firewall_installed Yes 0 Firewall feature enabled. 1=yes, 0=no.
firewall_timeout Yes 10 Time in seconds to wait for a verdict from the Guardium system. If the timeout is reached, the firewall_fail_close value determines whether to block or allow the connection. The value can be any integer.
firewall_fail_close Yes 0 If the verdict does not come back from the Guardium system before firewall_timeout passes, then if firewall_fail_close=0 the connection goes through; if firewall_fail_close=1 the connection is blocked.
firewall_default_state Yes 0 Determines what triggers firewall mode. 0=firewall mode starts when an event triggers a rule in the installed policy (default). 1=start with firewall mode enabled regardless of any triggering event. This flag forces the watch (or enabling) of the firewall regardless of any rule, but specific actions (DROP and so on) still happen only when triggered by a rule.
firewall_force_watch 9.0 Yes NULL When the firewall feature is enabled and firewall_default_state is 0, the session is watched automatically when its client IP matches a list of IP/MASK values. The list is separated with commas, for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
firewall_force_unwatch 9.0 Yes NULL When the firewall feature is enabled and firewall_default_state is 1, the session is unwatched automatically when its client IP matches a list of IP/MASK values. The list is separated with commas, for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
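
A minimal sketch of the firewall-related settings in guard_tap.ini, assuming you want S-TAP to block a connection when no verdict arrives within 10 seconds; the values are illustrative:

firewall_installed=1
firewall_timeout=10
firewall_fail_close=1
firewall_default_state=0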



Application server parameters
These parameters affect the behavior of the S-TAP when it is installed on a client
machine rather than on the database server.
Table 35. S-TAP configuration parameters for application servers
Parameter   Version   GUI   GIM   Default value   Description
appserver_installed Yes 0 0=S-TAP acts as normal. 1=S-TAP is set in
'client mode', switches S2C and C2S packets
to reflect S-TAP being installed on client,
not database server. Also, if 1, checks to see
if the other appserver_* parameters are
filled in, and if so, examines http packets
on the supplied port to get session
information about the end-user of the
java-application that resides on the client
system.
appserver_ports Yes Yes 8080 Comma-separated list of ports on which the
Java application is accessed via web
browser.
appserver_login_pattern Yes Yes Comma-separated list of strings specifying
the login pattern passed to the application.
This is the pattern that the Java application
is passed that indicates a login of a user.
appserver_username_prefix Yes Yes Comma-separated list of strings specifying
the prefix to the username for a given
session. This is the pattern the Java
application uses to indicate the username of
the given session.
appserver_username_postfix Yes Yes Comma-separated list of strings specifying
the postfix to the username for a given
session. This is the pattern (or character)
used by the Java application to indicate the
end of the value for the given variable that
indicates the username.
appserver_session_pattern Yes Yes Comma-separated list of strings specifying the
start of an end-user session, using a
particular database session. This is the
pattern specifying [change of] end-user
session for a given database connection.
appserver_session_prefix Yes Yes Comma-separated list of strings specifying
the session identifier
appserver_session_postfix Yes Yes Comma-separated list of strings specifying
where the session id ends.
appserver_usersess_pattern Yes Yes Comma-separated list of strings specifying
the identifier for marking which
end-session a given connection is
continuing with.
appserver_usersess_prefix Yes Yes Comma-separated list of strings specifying
what identifies/precedes the session_id in a
given usersess indicator packet.
appserver_usersess_postfix Yes Yes Comma-separated list of strings specifying
where the session id ends.
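
As a sketch, an application server (client-side) configuration might look like the following; appserver_ports uses the default, and the pattern strings are hypothetical placeholders that depend entirely on how your Java application encodes logins, user names, and sessions:

appserver_installed=1
appserver_ports=8080
appserver_login_pattern=j_security_check
appserver_username_prefix=j_username=
appserver_username_postfix=&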



Configuration Auditing System (CAS) parameters
These parameters affect the behavior of CAS on this system.
Table 36. S-TAP configuration parameters for CAS
Parameter   Version   GUI   GIM   Default value   Description
cas_task_baseline task_baseline File name for task baseline. Deprecated.
cas_task_checkpoint Yes task_checkpoint File name for client baseline. Deprecated.
cas_client_baseline client_baseline
cas_client_checkpoint Yes client_checkpoint
cas_checkpoint_period Yes 3600 Interval time in seconds for the check
cas_fail_over_file Yes fail_over_file Name of the file containing the outgoing messages buffer
cas_fail_over_file_size_limit Yes 50000 Size of the fail over file
cas_max_reconnect_attempts Yes 5000 Number of reconnect attempts when
connection is lost
cas_reconnect_interval Yes 60 Wait time in seconds between reconnect
attempts
cas_raw_data_limit Yes 1000 Limit in kilobytes on size of raw data sent
to Guardium system
cas_md5_size_limit Yes 1000 Largest file size in kilobytes on which to
calculate MD5SUM
cas_command_wait 8.0 300 Wait time in seconds before killing a
long-running data collection process
cas_server_failover_delay 8.0 60 Wait time in minutes before trying to connect to another Guardium system
cas_server_port Windows only.

Debug parameters
These parameters affect the behavior of S-TAP debugging.

These parameters are stored in the [DEBUG_OPTIONS] section of the S-TAP properties file:
Table 37. S-TAP configuration parameters for debugging
Parameter   Version   GUI   GIM   Default value   Description
debug_buffer 1 1=log the contents of local packets
debug_firewall 1 1=log firewall events

These parameters are stored in the [TAP] section of the S-TAP properties file:
Table 38. More S-TAP configuration parameters for debugging
Parameter   Version   GUI   GIM   Default value   Description
debug_file_name Location of the S-TAP debug file. The
default location is c:/guardium/stap.txt
debug_max_file_size 200

debuglevel 0 Level of debug messages to store. Leave at 0
unless directed by IBM Support.
0 Only critical error information
1 All logged at preceding level plus
repeatable critical error information
2 Not used
3 All logged at preceding level plus
brief information about packets sent
to a Guardium system
4 All logged at preceding level plus
local sniffing log
5 All logged at preceding level plus
network sniffing log
6 All logged at preceding level plus
heartbeat receiving log
7 All logged at preceding level plus
miscellaneous debugging
information
dump_file_mode 0 Enables capture of dump files if S-TAP crashes. 0=no crash dumps are generated (default). 1=crash dumps are generated and written to the file "stap.diag", which is created in the S-TAP working directory. 2=crash dumps are generated and written to the file "stap-TIMESTAMP.diag" in the S-TAP working directory, where TIMESTAMP includes a time and date string that identifies when the crash dump was generated. The dump file is opened every time the S-TAP starts when the parameter is not zero, but is empty if there was no crash. When the single-file dump option (1) is used, S-TAP copies any existing stap.diag file to a backup file before overwriting the stap.diag file.
stack_trace_file_mode similar to the dump_file_mode parameter
kernel_debug_level 0
syslog_messages 1 1=send messages to syslog (for UNIX) or the Event Viewer (for Windows). 0=do not send messages.

tap_debug_output_level Enables the S-TAP log. S-TAP log output goes to the file guard_stap.stderr.txt, located in the directory specified by the tap_log_dir parameter. S-TAP log levels: 0=only critical error information. 1=critical error information plus repeatable non-critical error information. 2=level 1 plus a check of whether K-TAP is down and PCAP will back it up. 3=level 1 plus a check that the guard_stap configuration file is valid. 4=level 1 plus the local sniffing log. 5=level 1 plus the network sniffing log. 6=level 1 plus Appserver debug information. 7=level 1 plus a trigger to run the diagnostic script.
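
For example, a troubleshooting session might temporarily raise logging with settings like these (illustrative only; leave debug settings at their defaults unless directed by IBM Support):

[TAP]
tap_debug_output_level=1

[DEBUG_OPTIONS]
debug_buffer=1
debug_firewall=1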

K-TAP parameters
These parameters affect the behavior of the K-TAP.
Table 39. K-TAP configuration parameters
Parameter   Version   GUI   GIM   Default value   Description
ktap_installed Yes 0 Is Kernel Monitor module installed: 0=NO,
1=YES. ktap_installed and tee_installed are
mutually exclusive; only one can be set on.
ktap_request_timeout 8.0 5 The timeout, in seconds, on waiting for a reply: K-TAP sends an ioctl to S-TAP to ask for some information and waits for the reply from S-TAP. It can have any value.
ktap_dbgev_ev_list 8.0 0 Used to enable the K-TAP trace log, either through the GUI or through the guard_tap.ini file: 0=disable, 1=enable the K-TAP trace log, located under the /var/tmp directory.
ktap_dbgev_func_name 8.0 all List of functions to log in the K-TAP trace log. all=log all functions, or specify a specific function (such as accept) so that only that function is logged. If you specify a function that is not relevant to the K-TAP trace log, nothing is logged.
ktap_fast_tcp_verdict 8.0 0 For TCP connections, K-TAP sends an ioctl to S-TAP to confirm that the session is the database connection configured in the inspection engine by checking IPs. When ktap_fast_tcp_verdict is set to 1, K-TAP does not send the request to S-TAP as long as the session's ports are in the configured range. It can have either 1 or 0 values (default 0).
ktap_fast_file_verdict 8.0 1 For TLI connections, K-TAP sends an ioctl to S-TAP to confirm that the session is the database connection configured in the inspection engine by checking ports and IPs. When ktap_fast_file_verdict is set to 1, K-TAP does not send the request to S-TAP as long as the session's ports are in the configured range. It can have either 1 or 0 values (default 1).

ktap_buffer_size 8.0 4194304 Advanced. The size of the K-TAP buffer in bytes. The range of values is between 1 MB and 16 MB.
ktap_buffer_flush 8.0 0 Advanced. The way messages are sent from K-TAP to S-TAP. If set to 1, S-TAP reads the entire K-TAP buffer and processes all the packets in the buffer. If ktap_buffer_flush=0, S-TAP reads a fixed amount rather than the entire buffer.
ktap_local_tcp 8.2 0 1=only intercept local connections (although
previously intercepted connections will still be
captured) (this parameter is used for TCP
connections)
ktap_use_base_iov 9.0 1 1=get data from the first element of msg_iov only
khash_table_length 8.0 24593 Number of sessions that can be stored in the Khash table. It is an integer and can have any value.
khash_max_entries 8.0 8192 Length of the table that contains all the information for the specific session. It is an integer and can have any value.

Table 40. A-TAP and PCAP configuration parameters


Parameter   Version   GUI   GIM   Default value   Description
atap_exec_location /var/guard Location of the executable that is used when A-TAP is activated by enabling the encryption box in the inspection engine section.
pcap_read_timeout 8.0 0 Applies only to PCAP traffic (non-K-TAP): how long S-TAP should wait between PCAP sampling. Change this value only with the advice of Guardium Services, after examining the problem and determining that the losses (not capturing all the traffic) are caused by a PCAP/S-TAP bottleneck.
pcap_dispatch_count 8.0 16 Optimization of PCAP capturing: the number of packets to bundle (group) before reporting back to S-TAP. Grouping packets together reduces PCAP-to-S-TAP communication and boosts performance. Change this value only with the advice of Guardium Services, after examining the problem and determining that the losses (not capturing all the traffic) are caused by a PCAP/S-TAP bottleneck.

pcap_buffer_size 8.0 -1 Size of the PCAP socket buffer. This parameter is used for Linux only. The default value of -1 means to get the maximal buffer possible. Otherwise, this is the buffer size in kilobytes; 0 is not legal (if set to 0, it is treated as 60), and it can be any value up to 65535. A larger buffer makes losses less likely during bursts of high-volume traffic. The scenario: during a burst of high traffic, PCAP captures everything, but S-TAP (or the PCAP-to-S-TAP flow) is not fast enough to keep up with the traffic. To avoid losses, the yet-to-be-processed packets are buffered; the larger the buffer, the more resilient capture is against higher and longer bursts of high traffic. Change this value only with the advice of Guardium Services, after examining the problem and determining that the losses (not capturing all the traffic) are caused by a PCAP/S-TAP bottleneck.

Delayed cluster disk mounting


This topic applies to Oracle, Informix, and DB2 database servers only.

For these database types, when the S-TAP starts it must have access to the
database home. If your environment uses a clustering scheme in which multiple
nodes share a single disk that is mounted on the active node, but not on the
passive node, the database home is not available on the passive node until failover
occurs.

S-TAP can be configured for delayed loading by setting a configuration file property, WAIT_FOR_DB_EXEC. When starting, if S-TAP finds that there is no access to the database home, it checks the WAIT_FOR_DB_EXEC value and takes the appropriate action.
v WAIT_FOR_DB_EXEC > 0: S-TAP starts regardless of whether it can start the process named in the inspection engine. It tries to start that process every 15 minutes.
v WAIT_FOR_DB_EXEC <= 0: S-TAP tries to start the process named in the inspection engine immediately after it comes up. If it cannot start the process, S-TAP exits.

Before setting this property to a positive value, be sure to set all other necessary
configuration properties and test that the S-TAP starts and collects data correctly.
This property can be set only by editing the configuration file, and not from the
Guardium administrator console.
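
As a sketch, on a clustered node where the database home may not be mounted at S-TAP startup, you might add the following line to the [TAP] section of guard_tap.ini; any positive value enables the delayed-loading behavior described above, and the value shown is illustrative:

wait_for_db_exec=1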

S-TAP Status Monitor


The S-TAP Status monitor tab of the System View enables you to view the current
status of your S-TAPs and investigate any problems.

You can view the status of each S-TAP.



Click any line in the list to view the inspection engines that are configured for this S-TAP. The breadcrumbs show where you are; click ALL S-TAPs to return to the list of S-TAPs.

The list of inspection engines shows whether they have been verified. If an
inspection engine is unverified, you can submit it for verification immediately, or
add it to the existing verification schedule. Verification is supported for these
database types:
v DB2
v Greenplum
v Informix
v MSSQL
v MySQL
v Netezza
v Oracle
v PostgreSQL
v Sybase
v Teradata (advanced verification only)
If you check the box next to a database of an unsupported type, a message is
displayed saying that the type is not supported for verification.

There are two types of verification:


Standard verification
Checks the S-TAP and inspection engine by submitting an invalid login
request, to verify that the appropriate error message is returned.
Advanced verification
If you need to avoid failed login requests, you can use advanced
verification. For this type, you must identify or create a datasource
definition associated with the target database. The datasource definition
includes credentials, which the verification process uses to log in to the
database. Then it submits a request to retrieve data from a nonexistent
table in order to generate an error message.

For both types of verification requests, the results are displayed in a new dialog
that provides information about the tests that were performed and recommended
actions for tests that failed.

By default, the system waits five seconds before displaying verification results. If
your network latency is high, this might not be enough time to receive the
expected response from the database server. If you need to allow more time, you
can use the store stap network_latency CLI command to change the period.
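
For example, the command might be issued from the CLI as shown in this sketch; the argument is assumed here to be the number of seconds to wait, so confirm the exact syntax in the CLI reference before using it:

store stap network_latency 15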

Related topics:
v “Viewing S-TAP verification results”
v “Configuring the S-TAP verification schedule” on page 137
v “Troubleshooting S-TAP problems” on page 138

Viewing S-TAP verification results


When you verify an inspection engine from the S-TAP Status Monitor page, it
checks several configuration parameters and attempts to connect to the database.



Standard verification attempts to log in to your database with an erroneous user
ID and password, to verify that this attempt is recognized and communicated to
the Guardium system. Your S-TAP could be configured in a way that prevents the
message from reaching the Guardium system from which the request was made.

These configuration details include:


v Load balancing: if the S-TAP is configured to return responses to more than one
Guardium system, the error message could be sent to a different Guardium
system.
v Failover: If secondary Guardium systems are configured for the S-TAP, the error
message could be sent to a different Guardium system. The S-TAP can fail over
to a secondary Guardium system if the primary Guardium system is busy.
v Db_ignore_response: if the S-TAP is configured to ignore all responses from the
database, it will not send the error message to the Guardium system.
v Client IP/mask: if any mask is defined that is not 0.0.0.0, it could prevent the
error message from being sent.
v Exclude IP/mask: if any mask is defined that is not 0.0.0.0, it could prevent the
error message from being sent.

Before connecting to the database, the verification process checks whether the
sniffer process is running on the Guardium system. The sniffer is responsible for
communicating with each S-TAP and processing the data that is received. If the
sniffer is not running, responses from the S-TAP will not be recognized.

Next the verification process checks whether it can connect to the selected
inspection engine on the database server. It expects to receive a response that
indicates a failed login. If a different response is received, you might have to
investigate further.

Some error messages from individual databases do not indicate a specific problem.
For example, on several supported databases, the error code returned for a wrong
port can also mean that the database itself is not started.

The results of the verification process are displayed in a dialog. Failed checks are
shown first, with recommendations for next steps. Checks that succeeded are
shown in a collapsed section at the end of the list. In some situations, it might be
useful to review the successful checks in order to choose among possible next
steps.

Related topics:
v “S-TAP Status Monitor” on page 135
v “Troubleshooting S-TAP problems” on page 138
v “Configuring the S-TAP verification schedule”

Configuring the S-TAP verification schedule


You can configure the schedule for running S-TAP verification.

About this task


By default the schedule for verifying S-TAPs is once per hour, every day. The same
schedule is used for all S-TAPs that are scheduled for verification. You can change
this schedule.



Procedure
1. Click Manage > Activity Monitoring > S-TAP Verification Scheduler to open
the S-TAP Verification Scheduler.
2. In the S-TAP Verification Scheduler portion of the page, click Modify Schedule.
3. In the Schedule Definition dialog, use the drop-down lists and check boxes to
schedule when verification runs. This schedule is applied to all S-TAPs that are
scheduled for verification.
4. Click Save to save your changes.

Troubleshooting S-TAP problems


You can use the S-TAP Status monitor tab of the System View to begin
investigating any problems. Sometimes you might need to use other tools,
particularly if you are monitoring databases for which the inspection engines
cannot be verified.
v If an S-TAP is not connected to your Guardium system, check whether the
S-TAP process is running on the database server:
UNIX: Verify that the S-TAP process is running
On the database server, from the command line, run the command ps
-ef | grep stap to verify that the S-TAP process is running. In the
process list, look for /guardium/guard_stap.
Windows: Verify that the GUARDIUM_S-TAP Service is running
1. Log on to the database server system by using a system
administrator account.
2. Verify that the GUARDIUM_S-TAP Service is running.
3. If the GUARDIUM_S-TAP service is not running, start it.
v Verify the connection between the database server and the Guardium system.
– Verify that you can ping the Guardium system at sqlguard_ip from the
database server.
– If the ping is successful, verify that you can telnet to the following ports on
the Guardium system:
- UNIX: 16016/16018
- Windows: 9500/9501
v If there is a firewall between the database server and the Guardium system,
verify that the following ports are open for traffic between these two systems.
– UNIX: TCP Port 16016 or TLS Port 16018 for encrypted connections.

Note: Use the following command to check the port availability: nmap -p
port guardium_hostname_or_ip
– Windows: UDP Port 8075 and TCP Port 9500, or TLS Port 9501 for encrypted
connections.

Note:
- Use the following command to check the port availability: netstat -an
- Verify that any Windows firewall is either turned off or that it is allowing
traffic through those ports.
v Verify that the sqlguard_ip parameter is set to the correct
guardium_hostname_or_ip for the Guardium system that you are connecting to.
1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP
Control.



2. Locate the S-TAP Host for the IP address that corresponds to your database
server.
3. Expand the Guardium Hosts subsection, and verify that the active Guardium
Host is correctly configured.
4. If necessary, click Modify to update the Guardium Hosts.
v Verify that the S-TAP process is not repeatedly restarting:
– UNIX: On the database server, run the command ps -eaf | grep stap to
verify that the process for S-TAP is not changing.
– Windows: From Windows Task Manager, select to view the PID column, and
verify that the PID for guardium_stap.exe is not changing.
v Verify that S-TAP Approval is not turned on. If S-TAP Approval is turned on,
any new S-TAP that connects to the Guardium system is refused.
1. Click Manage > Activity Monitoring > S-TAP Certification to open S-TAP
Certification.
2. Look at the S-TAP Approval Needed check box. If this box is checked, new
S-TAPs can connect to this Guardium system only after they have been
added to the list of approved S-TAPs.
3. If S-TAP Approval is turned on, select Daily Monitor > Approved Tap
Clients to view a list of approved S-TAPs. If the S-TAP that you are
investigating is not on this list, return to the S-TAP Certification pane, enter
the IP address of the S-TAP in the Client Host field, and click Add.

If the S-TAP shows green status but no data is being processed, check the status of
the A-TAP.
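
For example, on a UNIX database server you might combine the checks above into a quick sequence like this sketch; the collector host name is a placeholder:

ps -ef | grep guard_stap
ping guardium-collector.example.com
nmap -p 16016 guardium-collector.example.com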

Related topics:
v “S-TAP Status Monitor” on page 135
v “Viewing S-TAP verification results” on page 136
v “Monitoring S-TAP behavior”

Monitoring S-TAP behavior


The S-TAP monitor is installed with the S-TAP on the database server. You can use
it to monitor the S-TAP process and take actions based on various thresholds.

By default, the S-TAP monitor is disabled. This is an advanced function, for use by knowledgeable users. To enable it, uncomment the guard_monitor line in the /etc/inittab file; on Solaris systems, use the svcadm command to activate it. Before you activate the monitor, choose the options and thresholds that you want to use.

The monitor is controlled by using the guard_monitor.ini file. This file contains
comments showing the meaning of each parameter. Default thresholds are
provided for each function. For example, you might want to monitor CPU usage,
and set one threshold (75%) for gathering diagnostic information and a higher
threshold (85%) at which the S-TAP should be killed. You would set auto_diag=1
to enable gathering of diagnostic information, and diag_high_cpu_level=7500 to
gather diagnostic information when CPU usage reaches 75%. Then set
auto_kill_on_cpu_enable=1 to enable automatic killing of the S-TAP process, and
set auto_kill_on_cpu_level=8500 to kill the process when CPU usage reaches 85%.

But you do not want to keep killing the S-TAP process repeatedly, so you can set a
limit on that as well. You can limit how many times the process can be killed
within one hour by setting kill_num_in_hour=5. Then specify what should happen when the limit is reached: set final_action=1 to disable the S-TAP, or
final_action=2 to allow it to continue running.

Use a similar approach to configure other thresholds and behaviors.
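
A minimal sketch of a guard_monitor.ini fragment that implements the example above; the parameter names are taken from the preceding description, but check the comments in your own guard_monitor.ini for the exact layout:

auto_diag=1
diag_high_cpu_level=7500
auto_kill_on_cpu_enable=1
auto_kill_on_cpu_level=8500
kill_num_in_hour=5
final_action=1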

S-TAP events panel


You can use the S-TAP Events panel to view the event messages output by S-TAP.

To open the S-TAP Events panel for any S-TAP listed in the control panel:
1. Click Reports > Real-Time Guardium Operational Reports > S-TAP Events to
open S-TAP Events.

Column Description
Event Type Success, Error Type, and so on
Event Description Short description of the event
Timestamp Date and time the event occurred

Note: If no messages display in the S-TAP Events panel, the production of event
messages may have been disabled in the configuration file for that S-TAP. If this is
the case, you may be able to locate S-TAP event messages on the host system in
the Event Log (for Windows) or the syslog file (for UNIX/Linux).

S-TAP reports
By default, the reports that are described in this topic appear in the Reports panel.

You can define new queries or reports on the Rogue Connections domain, and you
can create alerts that are based on exceptions that are created by S-TAPs, but other
domains that are used by S-TAP reports are system-private and cannot be accessed
by users.

System View

S-TAP Status Monitor - For each S-TAP reporting to this Guardium system, this
report identifies the S-TAP Host, S-TAP Version, DB Server Type, Status (active or
inactive), Last Response Received (date and time), Primary Host Name, and
true/false indicators for: KTAP, TEE, MS SQL Server Shared Memory, DB2 Shared
Memory, Local TCP monitoring, Named Pipes Usage, and Encryption.

Note: The DB2 shared memory driver has been superseded by the DB2 Tap
feature.

Tap Monitor

Rogue Connections - This report is available only when the Hunter option is
enabled on UNIX servers. The Hunter option is only used when the Tee
monitoring method is used. This report lists all local processes that have
circumvented S-TAP to connect to the database.

S-TAP Configuration Change History - This report is displayed only when an inspection engine is added or changed. Lists S-TAP configuration changes – each inspection engine change is displayed on a separate row. Each row lists the S-TAP
Host, DB Server Type, DB Port From, DB Port To, DB Client IP, DB Client Mask,
and Timestamp for the change.

Primary Guardium Host Change Log - Log of primary host changes for S-TAPs.
The primary host is the Guardium system to which the S-TAP sends data. Each
line of the report lists the S-TAP Host, Guardium Host Name, Period Start, and
Period End.

S-TAP Status - Displays status information about each inspection engine that is
defined on each S-TAP Host. This report has no From and To date parameters,
since it is reporting current status. Each row of the report lists the S-TAP Host, DB
Server Type, Status, Last Response, Primary Host Name, Yes/No indicators for the
following attributes: K-TAP Installed, TEE Installed, Shared Memory Driver
Installed, DB2 Shared Memory Driver Installed, LHMON Driver Installed, Named
Pipes Driver Installed, and App Server Installed. In addition, it lists the Hunter
DBS.

Inactive S-TAPs Since - Lists all inactive S-TAPs that are defined on the system. It
has a single runtime parameter: QUERY_FROM_DATE, which is set to now -1
hour by default. Use this parameter to control how you want to define inactive.
This report contains the same columns of data as the S-TAP Status report, with the
addition of a count for each row of the report.

S-TAP error messages


The following list describes the error messages produced by S-TAP, in alphabetical
sequence.

Message Description
Cant read inifile .../guard_tap.ini: The S-TAP configuration file (guard_tap.ini) has errors, which is most likely to
Cannot resolve hostname xxx for happen when it has been edited manually. When this happens, S-TAP attempts
the IP address parameter to restart from the last known good backup file (if one is available).
sqlguard_ip in section
SQLGUARD_x. Reverting to
.../guard_tap.ini.bak
bind: Address already in use [DB A port that an S-TAP TEE is trying to use is already in use. For example, if you
server name or IP] Cant bind configure a TEE to listen on port 4100, and Sybase is already listening on that
listening socket for tee: Address port, you will receive this message.
already in use
connect: Network is unreachable The standard message received when trying to reach a host that is not accessible.
In most cases this means that the Guardium system is not answering ping
requests.
Delayed server connection error: The Guardium system is refusing a connection request from this S-TAP. That
Connection refused Guardium system either has no inspection engine running (not likely), or it is
not configured to accept S-TAP connections (check the unit_type setting for that
Guardium system).
Deleting connection on unknown Not an error message; disregard.
pid:n
Got a connection from a remote S-TAP has received a connection request (to a TEE port) from an application at a
machine, ignoring remote host, and is ignoring that request. The Tee should be used only for local
connections.
Got new configuration The Guardium administrator has updated the configuration while logged into
the Guardium system, and the updated configuration file has been received by
the S-TAP.

Guard Tee is accepting Normal TEE process start-up message (appears only when the TEE is installed).
connections on port 12346
Guardium TAP starting Normal S-TAP process start-up message.
read from socket: Connection The database server or database client is down. For example, someone ran an
reset by peer Oracle sqlplus session and used ctrl-C to exit. This message does not indicate a
problem.
Server wasn’t heard from for 180 S-TAP has not received a heartbeat signal from the Guardium system for 180
sec, closing and re-opening seconds. It will attempt to reconnect with the server. No data is lost since it is
cached in the buffer file .
SQLguard socket read: The Guardium system closed the connection to the S-TAP. This happens when
Connection reset by peer the Guardium system restarts, or when the Guardium system inspection engine
automatically goes down and comes up again (in which case, it does not indicate
a problem).
waitpid: No child processes Not an error message; disregard.
killed n The S-TAP hunter process has killed an unauthorized connection identified by n.

S-TAP appendix
This section details moving from one Informix version to another.

Transition Procedure for Moving from One Informix Version to Another (SUSE 32-bit)

For SUSE 32-bit Linux, when there are multiple Informix versions installed, the following step-by-step procedure can be used to move from one Informix version to another, cleaning up semaphores and shared memory segments to help ensure a clean start of the Informix database. The following steps assume an A-TAP activation.
1. Using ipcs command, get the list of all semaphores and shared memory
segments that are created by Informix database. This might be tricky (since
they may show up belonging to user root), but the rule of thumb is that there
should be three shared memory segments with permissions 0660 and one with
0666 and four semaphore arrays with permissions 0660 and one with 0666, all
belonging to root.
2. Stop Informix database.
3. Make sure that all semaphores and shared memory segments created by
Informix database are gone.
4. (On Linux only) If the Informix instance was activated in A-TAP, deactivate it.
5. Make the changes to /etc/passwd (point the user Informix home directory to
the correct location).
6. Make sure that the installation directory is correct in the S-TAP inspection
engine for the new instance of Informix. Make the changes if needed, restart
S-TAP.
7. (On Linux only) Activate Informix in A-TAP (make sure to specify the correct
version).
8. Start the new instance.



IMS Definitions
An IMS definition establishes a connection from your Guardium system to the IMS
environment that you want to audit.

To create and modify IMS definitions using the Guardium system interface, an
S-TAP must already be installed on the IMS system and the agent address space
(AUIASTC) must have a preestablished connection to the Guardium system. If the
agent has not successfully connected and you need help establishing a connection,
refer to “Installing IBM Guardium S-TAP for IMS on z/OS” in the IBM Guardium
S-TAP for z/OS User's Guide.

Once defined, IMS Definitions are sent to the S-TAP along with any additional
policies according to the agent's policy pushdown settings. Policies for IMS on
z/OS S-TAPS must be associated with an IMS Definition in order to be included
during policy pushdown. For more information about configuring pushdown
events, refer to the "Policy pushdown” topic in the IBM Guardium S-TAP for z/OS
User's Guide.

For step-by step support while configuring IMS Definitions, refer to “Creating and
modifying IMS definitions” in the IBM Guardium S-TAP for z/OS User's Guide.

DB2 for i S-TAP


You can use the Guardium DB2 for i S-TAP to monitor and report on any database
access on IBM i. This includes any programs, such as RPG, that use native
database I/O operations or SQL access.

You can use information gathered by the Guardium DB2 for i S-TAP to create
activity reports, help you meet auditing requirements, and generate alerts of
unauthorized activity. Detailed auditing information includes:
v Session start and end times
v TCP/IP address and port
v Object names (for example, tables or views)
v Users
v SQLSTATEs
v Job and Job numbers
v SQL statements and variables
v Client special register values
v Interface information, such as ODBC, ToolboxJDBC, Native JDBC, .NET, and so
on

The S-TAP receives data from two sources:


v SQL Performance Monitor (otherwise known as database monitor) data for SQL
applications
v Audit entries from the QSYS/QAUDJRN audit journal for applications using
non-SQL interfaces
Data from these sources includes:
v Any SQL access whether it is initiated on the IBM i server or from a client
v Any native access that is captured in the audit journal
The S-TAP sends this data to the Guardium system in real time.



For more information about the DB2 for i S-TAP and related topics, refer to these
sources:
v Using IBM Security Guardium for monitoring and auditing IBM DB2 for i
database activity: this developerWorks article introduces IBM Guardium, the
DB2 for i S-TAP, and key related details.
v IBM i on IBM Knowledge Center: look here for information about IBM i, audit
journaling, and other related topics.
v Target DB2 for i as a data source: more information on related topics.

i S-TAP for encryption, load balancing, and failover


The IBM i S-TAP supports TLS encryption and S-TAP session load
balancing/failover.

Note: i S-TAP TLS support and load balancing is supported only for IBM i 7.1
and 7.2.

Similar to UNIX S-TAPs, i S-TAP configuration parameters are saved in a guard_tap.ini file in the /usr/local/guardium directory on the IBM i server.

Administrators configure the S-TAP by using the same APIs and UI (S-TAP Control) as other UNIX S-TAPs. When the GUI or API is used to change the S-TAP configuration, the Guardium sniffer sends a message to the S-TAP, which backs up the old .ini file, saves the configuration to the new .ini file, and then restarts itself.

Administrators can set up encrypted communication between the S-TAP and the
appliance using the S-TAP configuration controls as well as set up various load
balancing options.
Using S-TAP failover and load balancing
The failover and load balancing options for the i S-TAP are similar to what
exists for UNIX S-TAPs. Use the participate_in_load_balancing parameter
to determine whether to use failover or load balancing behavior, and use
the SQLGuard sections of your S-TAP to set up primary, secondary, and
tertiary Guardium hosts.
One difference is that there is no need for participate_in_load_balancing=3; because of the way the i S-TAP communication is architected, complete session information is available on each message. This means that even before the enhancements delivered in this patch, you could have used hardware load balancing (such as F5) with participate_in_load_balancing=1 and a virtual IP address in the primary SQLGuard section of the configuration file.
In a failover configuration, the S-TAP is configured to register with
multiple collectors, but only send traffic to one collector at a time
(participate_in_load_balancing=0). The S-TAP in this configuration sends
all its traffic to one collector unless it encounters connectivity issues to that
collector that triggers a failover to a secondary collector.
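
As a sketch, a failover configuration in /usr/local/guardium/guard_tap.ini might register two collectors but send traffic to only one at a time; the IP addresses are placeholders:

[TAP]
participate_in_load_balancing=0

[SQLGUARD_0]
sqlguard_ip=10.10.9.248
primary=1

[SQLGUARD_1]
sqlguard_ip=10.10.9.249
primary=2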

Monitoring strategy
Make your monitoring and auditing effective and efficient by developing a
strategy that recognizes and fulfills your regulatory and other requirements.



After you know what data you need, develop a strategy for collecting it with as
little extraneous data as possible. Monitoring and logging data that you do not
need uses up disk space and processing power, and generates extra network traffic.
There are several areas where you can implement your strategy:
Database monitoring
The global SQL monitor captures SQL information and puts it into a queue
for the S-TAP. You can use the filtering capabilities of the monitor to
control which types of users and objects are queued. By default, these
types of entries are not forwarded from the S-TAP to the Guardium
system:

SQL Abbreviation Meaning


AD ALLOCATE DESCRIPTOR
CL CLOSE
DA DEALLOCATE DESCRIPTOR
DE DESCRIBE
EX EXECUTE (the SQL statement executed is
audited)
FE FETCH
FL FREE LOCATOR
GD GET DIAGNOSTICS
GS GET DESCRIPTOR
HL HOLD LOCATOR
PR PREPARE (except authorization errors are
captured)
RE RELEASE
RG RESIGNAL
SC SET CONNECTION
SD SET DESCRIPTOR
SG SIGNAL

Audit journal
You can configure the system audit journal to capture only those entries
that concern objects of interest or users of interest. By default, entries of
these types are sent from the S-TAP to the Guardium system:

Entry Type Meaning


ZR Read object
ZC Change object
CA Authority change
AD Auditing change
AF Authority failure
CO Create object
DO Delete object
SV System Value change
GR General purpose audit record
OM Object moved or renamed

PG Primary group change
PW Invalid password or user ID
OW Change owner
OR Object restored
RA Restore authority change
RO Restore owner change
RZ Restore primary group change

Only those entries that relate to database objects are forwarded:


v *FILE (a table, view, index, logical file, alias, or device file)
v *SQLUDT (an SQL user-defined type)
v *SQLPKG (an SQL package)
v *PGM (a procedure, function, or program)
v *SRVPGM (a procedure, function, global variable, or service program)
v *DTAARA (an SQL sequence)
On the Guardium system
You can define policies that control which information that is received
from the S-TAP is ignored, and what actions to take based on other items.

Ignoring data after it has been sent over the network is inefficient. Wherever
possible, filter out information that you do not need before it is queued for the
S-TAP.

Installing the S-TAP for IBM i


Follow these steps to install or uninstall the S-TAP.

Before you begin

The DB2 for i S-TAP requires Portable Application Solutions Environment (PASE),
which is automatically started and stopped as needed when a user starts and stops
the DB2 for i S-TAP from the IBM Guardium user interface.

You must know the IP address of the Guardium system to which this S-TAP will
connect.

When you download the S-TAP, be sure to filter for the IBM i platform, to ensure
that you download the correct package.

About this task


The Guardium Installation Manager (GIM) is not supported on IBM i.

You can use 5250 emulator software to connect to the IBM i system remotely.

Procedure
1. On the IBM i server, enter this command to open the PASE shell: call qp2term.
2. In the PASE shell environment, create a temporary directory to hold the S-TAP
installation script, such as /tmp.



3. Use FTP to move the following S-TAP installation shell script to that temporary
directory: guard-itap-9.0.0_rnnnnn-aix-5.3-aix-powerpc.sh
4. In the same directory, run this command:
guard-itap-9.0.0_rnnnnn-aix-5.3-aix-powerpc.sh guardium_host_IP

where guardium_host_IP is the IP address of the Guardium system.

Results

The S-TAP is installed in /usr/local/guardium. After the installation is complete, the S-TAP attempts to start the processes that enable activity monitoring and to connect to the Guardium system by using the IP address that was specified with the installation command.

What to do next

To validate the successful installation and start of the audit process, log in to the
IBM Guardium web console as an administrator, navigate to the System View tab,
and check the status of the S-TAP.

Uninstalling the S-TAP


Procedure

To stop and uninstall the S-TAP, issue these commands:


RUNSQL SQL(’call SYSPROC/SYSAUDIT_End’) COMMIT(*NONE)
RMVDIR DIR(’/usr/local/guardium’) SUBTREE(*ALL)

Defining the S-TAP for IBM i


After you install the S-TAP, ensure that it can communicate with the Guardium
system.

Before you begin

You must know the log-in credentials for the IBM i system.

About this task


The high-level steps to configure the S-TAP are:
1. Define DB2 for i as a recognized data source to IBM Guardium and test the
connection.
2. Populate the Guardium system with information from the configuration file on
IBM i that was created when you installed the DB2 for i S-TAP, using the
Custom Table Builder process.
3. Create a DB2 for i configuration report. It is from this report interface that you
can invoke the Guardium APIs that enable you to start and stop the monitoring
process, get status information, and update configuration parameters, including
filtering values.

Procedure
1. Click Setup > Tools and Views > Datasource Definitions to open the
Datasource Builder. Select Custom Domain from the Application Selection box.
Click Next.
2. In the Datasource Finder, click New, which opens the Datasource Builder.



3. Select DB2 for i as the Database Type and then add the appropriate
information for the host, service name, and credentials. Click Apply.
4. Click Test Connection to ensure that the configuration succeeded.
5. Click Tools > Report Building.
6. Click Custom Table Builder. Select DB2 for i S-TAP Configuration and then
click Upload Data. The Datasource Finder displays a list of DB2 for i S-TAPs.
7. Select your DB2 for i data source from the list. Click Add.
8. On the Import Data screen, ensure the DB2 for i data source appears. Click
Apply and then click Run Once Now. You should see a message that the
operation ended successfully with one row inserted.
9. Click Customize in the Guardium title bar. Then click Add Pane.
10. Give the pane a new name, such as My New Reports, and then click Apply.
11. My New Reports appears in the Customize pane. Click the icon next to the
name. In the Layout dropdown list, choose Menu Pane. Click Save. Your new
pane appears as a tab.
12. Click Report Building in the navigation pane.
13. From the query dropdown list, click DB2 for i S-TAP configuration, then
click Search.
14. Select the DB2 for i S-TAP configuration and then click Add to My New
Reports (or the name that you specified in step 10).
15. Open the My New Reports tab, which now displays the IBM i report row.
Double-click a row in the report and select Invoke. A list of IBM Guardium
APIs that you can select is displayed.
16. Select update_istap_config.
17. When you select a Guardium API, the parameters for that API are displayed.
You can change any values that you need to. Change the value of the
start_monitor parameter to 1. Click Invoke Now.

Results

Using the data that you have entered, the update_istap_config API performs these
tasks:
v Creates the message queue that will be used to send entries from the S-TAP to
the Guardium system and starts a global database monitor using a view with an
INSTEAD OF trigger, which sends the entries to the message queue.
v Starts PASE and the S-TAP.
v Receives journal entries from QAUDJRN and adds them to the message queue.

IBM Security Guardium S-TAP for z/OS


The IBM Security Guardium S-TAP for z/OS® solution is a tool that collects and
correlates data access information for DB2 on z/OS, VSAM on z/OS or IMS™ on
z/OS to produce a comprehensive view of business activity for auditors.

IBM Security Guardium S-TAP for DB2 on z/OS


S-TAP for DB2 on z/OS collects and correlates data access information from a variety of DB2 resources to produce a comprehensive view of business activity for auditors. The S-TAP captures DB2 on z/OS database traffic and forwards it directly to a Guardium system, where the standard real-time policies can be applied. Guardium provides the following features and functions:



v Data collection—Guardium can collect and correlate many different types of
information into an administration repository:
– Modifications to an object (SQL UPDATE, INSERT, DELETE)
– Reads of an object (SQL SELECT)
– Explicit GRANT and REVOKE operations (to capture events where users may
be attempting to modify authorization levels)
– Assignment or modification of an authorization ID
– Authorization attempts that are denied due to inadequate authorization
– CREATE, ALTER, and DROP operations against an object (such as a table)
– Utility access to an object (IBM utilities only)
– DB2 commands entered (including the ability to determine which users are
issuing specific commands)
v Administration user interface—Enables product administrators to facilitate
configuration of data queries.

Note: For DB2 on z/OS, within reports, the source program as defined within the
Client/Server Entity will be the concatenation of requestor server name and
correlation id.

Note: For DB2 on z/OS, to use DB2 Unicode Database and show multi-byte
characters properly within reports, the user should change the DB2 parameter
UIFCIDS from No to Yes.

IBM Security Guardium S-TAP for VSAM on z/OS

S-TAP for VSAM on z/OS is a tool that collects and correlates data access
information from records to produce a comprehensive view of business activity for
auditors. S-TAP provides the following features and functions:
v Data collection - S-TAP can collect and correlate many different types of
information:
– Access to VSAM data sets and security violations as recorded by SMF.
– Data set operations performed against VSAM data sets such as deletes or
renames.

IBM Security Guardium S-TAP for IMS on z/OS


S-TAP for IMS on z/OS is a tool that collects and correlates data access
information from IMS Online regions, IMS batch jobs, IMS archived log data sets,
and SMF records to produce a comprehensive view of business activity for
auditors. IBM Guardium S-TAP provides the following features and functions:
v Data collection - S-TAP can collect and correlate many different types of
information:
– Access to databases and segments from IMS Online regions.
– Access to databases and segments from IMS DLI/DBB batch jobs.
– Access to database, image copy and RECON data sets and security violations
as recorded by SMF.
– IMS Online region START and STOP, database and PSB change of state
activity and USER sign-on and sign-off as recorded in the IMS Archived Log
data sets.
v Administration user interface—Provides auditors with flexible options for user
management and auditing profiles.



Note: No blocking actions or extrusion rules are supported.

Note: Contact Guardium Support if additional reports are needed.

IBM Security Guardium S-TAP for z/OS

It is assumed the S-TAP for z/OS client is installed and configured to capture
traffic.

For additional information, see the following User Guides. Copies are available
through the IBM Information Management Software for z/OS Solutions
Information Center (http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/
index.jsp?topic=/com.ibm.db2tools.adhz.doc.ug/adhucon_bkoverview.htm). These
User Guides contain information about Guardium S-TAP, providing an overview of the product and its functions, as well as tasks for planning, installing, configuring, and using Guardium.

The User Guides available are:


v IBM Security Guardium S-TAP for DB2 on z/OS v9.0
v IBM Security Guardium S-TAP for VSAM on z/OS v9.0
v IBM Security Guardium S-TAP for IMS on z/OS v9.0

Note: The current versions of these guides can be found in the Guardium
Infocenter, http://publib.boulder.ibm.com/infocenter/igsec/v1/index.jsp

Policy push down (only for use with DB2 and VSAM)

When the DB2 S-TAP for z/OS connects to the Guardium system or when the
policy is installed, the installed policy will be sent from the Guardium system to
the Mainframe S-TAP.

Once the user installs the policy, a policy push down of the collection profile is triggered to the agent on the mainframe. This applies only to access rules with a DB Type of DB2 Collection Profile or VSAM Collection Profile.

The fields used by the DB2 S-TAP for z/OS policy pushdown are: DB Type, Service
Name, Server IP (S-TAP IP for z/OS), DB User, OS User, Net Protocol, App. User,
(DB2) Client Info, Command, and Object. All other policy fields are ignored.

The fields used by the VSAM S-TAP for z/OS policy pushdown are: DB Type and
Object. DB Type must be set to VSAM COLLECTION PROFILE for the policy to be
picked up by the VSAM S-TAP. When setting up the Access Rule Definition, enter the value in uppercase for every field so that the VSAM S-TAP processes the rule properly.

The Object is the data set name to act upon. If NOT is selected, all actions matching the data set name are excluded; otherwise, all actions matching the data set name are included. Wildcards are accepted for the Object field: ? matches a single character, while % matches any number of characters.
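For example (the data set names here are purely illustrative): an Object value of PAYROLL.PROD.% would match PAYROLL.PROD.MASTER and PAYROLL.PROD.HISTORY, while PAYROLL.PROD.MASTER? would match PAYROLL.PROD.MASTER1 but not PAYROLL.PROD.MASTER12.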

The underlying protocol for the connection between the z/OS Mainframe and the
Guardium UI is protobuf.

Note: Regular expressions cannot be used with DB2 collection profiles.



Follow these steps:
1. Configure a policy – add an access rule and select DB2 Collection Profile or
VSAM Collection Profile from DB_Type field. The different selection uses
different fields.
A set of predefined groups that can be used to build groups for policy push down has been created. Each of these, as well as others, can be viewed or modified through the Group Builder:
v DB2/Z Connection Types
– BATCH
– BMP
– CALL
– CICS®
– CTL
– DRDA®
– MPP
– PRIV
– RRSAF
v DB2/Z General Audit Types
– All Failed Authorizations
– Set Current® Sqlid
– Failed AuthId Changes
– Grant and Revokes
– IBMDB2 Utilities
– DB2 Commands
2. Install the policy.
3. Run the policy.
4. When S-TAP for DB2 for z/OS or S-TAP for VSAM for z/OS connects to a
Guardium system or when a policy is installed, the policy results will be sent
from the Guardium system to the agent on the Mainframe S-TAP (“push
down”).
According to rules with DB_Type equal to DB2 Collection Profile or VSAM
Collection Profile, the active collection profile on the Mainframe S-TAP will be
overridden.
A rule with a collection profile is different than a normal access rule, so the
fields for Collection profile cannot be mixed with original access rule fields.

Fields Used
v DB Type: DB2 Collection Profile
v Service Name: DB2 sub-system ID, for the DB2 sub-system this rule applies to
v DB User: AuthID
v OS User: original auth id
v Net Protocol - connection types available for this feature:
– TSO:1 = TSO FOREGROUND AND BACKGROUND
– CALL:2 = DB2 CALL ATTACH
– BATCH:3 = DL/I BATCH
– CICS:4 = CICS ATTACH
– BMP:5 = IMS ATTACH BMP



– MPP:6 = IMS ATTACH MPP
– PRIV:7 = DB2 PRIVATE PROTOCOL
– DRDA:8 = DRDA PROTOCOL
– CTL:9 = IMS CONTROL REGION
– TRAN:A = IMS TRANSACTION BMP
– UTIL:B = DB2 UTILITIES
– RRSAF:C = RRSAF
v APP.USER, CLIENT INFO
Format:
PLAN=x; PROG=z

where x is the name of the PLAN and z is the name of the PROGRAM.


However, use of a single item,
PLAN=<plan name>

or
PROG=<prog name>

is accepted.
Also accepted is use of NOT, for example,
PLAN= not <plan name>
PROG= not <prog name>
v Object: The object name.
v Command: The command name.
v DB2 Client Info: For access rules only. For z/OS only, a CLIENT INFO field (and
CLIENT_INFO_GROUP_ID) will be visible if DB_TYPE is DB2 COLLECTION
Profile. The type of information that can be placed in this field is USER=x;
WKSTN=y; APPL=z.
v Time Period: FROM_HOUR, TO_HOUR: Hours and minutes are valid values.
v All other fields are ignored.

Note: There can be multiple values for each field by selection and group. Clicking the Group Builder icon opens the Group Builder.

Note: If a field is empty, it means ALL. Exceptions: for the Object_name field, empty means nothing will be audited. For the Command field, empty means do not collect Failed Authorization/Set Current Sqlid/Failed AuthId Change/Grant Revoke/DB2 Utilities/DB2 Commands. If there is a rule with Command empty, the S-TAP will collect the correlated event to Failed Authorization/Set Current Sqlid/Failed AuthID Change/Grant Revoke/DB2 Utilities/DB2 Commands.

Note: The relationship between fields in a rule is AND. For example, if NET_PROTOCOL is TSO and OS_USER is User1, this means only TSO connection type and User1 original Auth ID are going to be collected.

Note: The relationship between rules is OR. For example, Rule1 with
NET_PROTOCOL of TSO and OS_USER of User1 and Rule2 with
NET_PROTOCOL of CICS and OS_USER of User2 means TSO connection type and
User1 original Auth ID OR CICS connection type and User2 original Auth ID are
going to be collected.



Note:

If a report has an S-TAP HOST column, double-clicking a row produces a Collection Profile menu item. Clicking the menu item opens a Collection Profile Summary window that shows the POLICY field of the SOFTWARE_TAP_PROPERTY record corresponding to that row.

If the row is not a policy client (the S-TAP value does not end with :POLICY), a warning message is shown instead of the popup window.
Further Mapping Information
Target: Concatenation of schema and table with a period, in the format x.y, where x is the value for the schema and y is the value for the table. If there is no period, the value is the schema. The target is combined with the Read/Change event type (read, change, or %) and a slash, for example, read/x.y. The concatenation of event type and target is put into the OBJECT field. For example, if the Value_group field of OBJECT has the members read/%.%, change/%.%, and %/%.%, then reads of all targets, changes of all targets, and reads and changes of all targets are turned on.
General - Failed AuthID changes, All failed Authorizations, Successful
AuthID changes, Grant and Revokes, DB2 Utilities, DB2 Commands.
COMMAND group with members like: All Failed Authorizations/Set
Current Sqlid/Failed AuthId Changes/Grant and Revokes/IBM DB2
Utilities/DB2 Commands/
APP_USER_NAME and DB2_CLIENT_INFO contain concatenated fields. The NOT checkbox is hidden for these two fields in the GUI, so inverted is always false for these rule fields. To express inversion, explicitly put a NOT ahead of the value for the field to be inverted, for example,
USER=NOT X; WKSTN=y; APPL=NOT z

meaning User is not X and workstation is y and application is not z;


USER=; WKSTN=NOT y; APPL=

means User and APPL can be any value and workstation is not y.

Guardium for z/OS Interface Definition

Use the Definitions screen to maintain one or more servers where the z/OS files
should be retrieved from.
1. Select Guardium for z/OS from the Configuration panel in the Administration
Console.
2. Click the New button to create a new Guardium for z/OS Interface, or select an existing interface from the Guardium for z/OS Interface Definition Finder to delete, modify, or comment on the selected interface.
3. In the Server IP box, enter the IP address from which the Guardium for z/OS interface will retrieve files.
4. In the Server Name box, enter the server name from which the Guardium z/OS interface will retrieve files.
5. In the Directory box, enter the directory where the z/OS audit files are
located.



6. In the User and Password boxes, enter a name and password to be used to
access the FTP/SFTP servers to pull (and delete) the z/OS files.
7. In the SSID box, enter the SSID of the database server.
8. In the File suffixes box, enter the appropriate file extension(s) separated by
commas. See Load Balancing for additional information.
9. In the Transfer Method box, choose FTP or SFTP.
10. Check Use END_USER_ID for Application User Name if the application
provided app_user information; otherwise the application-to-database traffic
will not contain this information. In other words, this tells the parser what to
expect in the app_user information fields (end_user_identity mapping).
11. Check or uncheck the Active box to activate or deactivate this interface.
12. Check the Remove Files box to remove files from the server after each file is transferred. It is strongly recommended that Remove Files be selected, to avoid accumulating too many files and experiencing disk space issues.
13. Click the Apply button to save this configuration.

Note: This Guardium for z/OS Interface Definition menu screen has tool tips for
certain menu choices. Move the cursor over a menu choice (such as Directory), and
a short description will appear.

Note: When editing an existing definition, if the password field is left empty, the old password is retained. When adding a new record, the user name and password must be specified.

Database Entitlement Reports for DB2 on z/OS

Along with authenticating users and restricting role-based access privileges to data,
even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have
the privileges required to perform their duties. This is also known as database user
rights attestation reporting.

Use Guardium’s predefined database entitlement (privilege) reports, for example, to see who has system privileges and who has granted these privileges to other users and roles. Database entitlement reports are important for auditors tracking changes to database access and for ensuring that security holes do not exist from lingering accounts or ill-granted privileges.

Custom database entitlement reports have been created to save configuration time
and facilitate the uploading and reporting of data from DB2 on z/OS.

Follow these steps to use Guardium’s predefined database entitlement (privilege) reports with up-to-date snapshots of database users and access privileges:
1. Add datasources/databases to the appliance (Tools, Config & Control, Datasource Definitions, Custom Domain from the Application Selection).
2. Assign datasources to entitlements (Tools, Report Building, Custom Table Builder). Select the custom table listing of your entitlement, click Upload Data, and assign datasources to the entitlement report at the Import Data menu screen. When done, click Run Once Now.
3. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.



DB Entitlement Reports use the Custom Domain feature of Guardium to create
links between the external data on the selected database with the internal data of
the predefined entitlement reports.

Note: DB Entitlement Reports are optional components enabled by product key. If these components have not been enabled, the choices listed will not appear in the Custom Domain Builder/Custom Domain Query/Custom Table Builder selections.

The predefined entitlement reports for DB2 on z/OS are listed as follows. They
appear as domain names in the Custom Domain Builder/Custom Domain Query/
Custom Table Builder selections:
v DB2 zOS Object Privileges Granted To GRANTEE
v DB2 zOS Database Resource Granted To GRANTEE
v DB2 zOS Schema Privileges Granted To GRANTEE
v DB2 zOS Database Privileges Granted To GRANTEE
v DB2 zOS System Privileges Granted To GRANTEE
v DB2 zOS Object Privileges Granted To PUBLIC On Object Type Table View Package Routine Sequence And Plan
v DB2 zOS Executable Object Privileges Granted To PUBLIC (Object type: Package, Routine and Plan)
v DB2 zOS Database Resource Granted To PUBLIC
v DB2 zOS Schema Privileges Granted To PUBLIC
v DB2 zOS Database Privileges Granted To PUBLIC
v DB2 zOS System Privileges Granted To PUBLIC
v DB2 zOS Object Privileges Granted To GRANTEE With GRANT OPTION (Object type: Table, View, Package, Routine, Sequence and Plan)
v DB2 zOS Database Resource Granted To GRANTEE With GRANT OPTION
v DB2 zOS Schema Privileges Granted To GRANTEE With GRANT OPTION
v DB2 zOS Database Privileges Granted To GRANTEE With GRANT OPTION
v DB2 zOS System Privileges Granted To GRANTEE With GRANT OPTION



Using the following custom tables:
v DB2 zOS Object Privs Granted To GRANTEE
v DB2 zOS Database Resource Granted To GRANTEE
v DB2 zOS Schema Privs Granted To GRANTEE
v DB2 zOS Database Privs Granted To GRANTEE
v DB2 zOS System Privs Granted To GRANTEE
v DB2 zOS Object Privs Granted To PUBLIC
v DB2 zOS Executable Object Privs Granted To PUBLIC
v DB2 zOS Database Resource Granted To PUBLIC
v DB2 zOS Schema Privs Granted To PUBLIC
v DB2 zOS Database Privs Granted To PUBLIC
v DB2 zOS System Privs Granted To PUBLIC
v DB2 zOS Object Privs Granted With GRANT
v DB2 zOS Database Resource Granted With GRANT
v DB2 zOS Schema Privs Granted With GRANT
v DB2 zOS Database Privs Granted With GRANT
v DB2 zOS System Privs Granted With GRANT

GUI handling of negative SQLCODES

The processing of SQLCODE lists is quite different from how collection rules are processed by the Guardium S-TAP.

For collection rules, all rules are evaluated until any rule determines that the event can be collected.

With Negative SQLCODE support, only a single list of SQLCODEs is considered by the Guardium S-TAP; the other lists are essentially ignored. Additionally, no further filtering is done for negative SQLCODE events. The negative SQLCODE list therefore operates at the complete policy level, not at the rule level.



Character Set Support
For host variable values, different Coded Character Set Identifiers (CCSIDs) may be in use, some supported and some unsupported.
v With supported CCSIDs, variables will be properly converted.
v With unsupported CCSIDs, variables will be represented by a concatenation of the CCSID and the unconverted value, such as: CCSID [number_of_CCSID] : xxxxxxxxxxxxxx, where the x's are the unconverted value as it was received from the file.

Unsupported character sets


v MacCE
v Cp853

Supported character sets


v ISO8859_6
v Cp285
v MacRoman
v Cp297
v Cp852
v Cp858
v Cp860
v Cp500
v Cp1351
v Cp1114
v Cp874
v Cp1363
v Cp1124
v Cp1370
v Cp921
v Cp1253
v Cp1255
v Cp1258
v MacCyrillic
v Cp1381
v Cp1140
v Cp1142
v Cp1144
v Cp1146
v Cp1147
v Cp939
v Unicode
v MacIceland
v Cp943
v Cp949
v Cp278
v Cp1041
v Cp834
v Cp1047
v ISO8859_1
v ISO8859_4
v Cp284
v ISO8859_9
v MacGreek
v Cp970
v Cp855
v Cp737
v Cp861
v Cp864
v Cp868
v Cp1115
v Cp875
v Cp1122
v Cp1006
v Cp1250
v Cp1252
v Cp923
v Cp1256
v UTF8
v Cp33722C
v Cp775
v Cp1382
v Cp933
v Cp1386
v Cp1025
v Cp1026
v MacCroatian
v Cp420
v Cp301
v Cp947
v Cp943C
v Cp950
v Cp1043
v Cp835
v Cp837
v ISO8859_2
v ISO8859_5
v ISO8859_7
v Cp964
v ISCII91
v Cp971
v Cp856
v Cp859
v Cp862
v Cp865
v Cp869
v Cp870
v Cp1362
v Cp1364
v Cp918
v Cp1371
v Cp1098
v Cp1254
v Cp1257
v Cp949C
v MacRomania
v Cp930
v Cp1141
v Cp1143
v Cp935
v Cp1388
v Cp1148
v Cp1149
v Cp300
v Cp424
v Cp1399
v Cp273
v Cp037
v Cp437
v Cp1046
v Cp838
v ISO8859_3
v ISO8859_8
v GB18030
v Cp290
v Cp850
v Cp857
v ASCII
v Cp863
v Cp866
v Cp1112
v Cp871
v Cp1088
v Cp1123
v KOI8_R
v Cp1251
v Cp922
v Cp924
v Cp927
v MacTurkish
v Cp1380
v Cp897
v Cp1383
v Cp1385
v Cp1145
v Cp937
v Cp1027
v Cp942C
v Cp1390
v Cp942
v Cp948
v Cp277
v Cp951
v Cp833
v Cp836
v MS932
v Cp280



Chapter 2. Guardium Installation Manager
You can use the Guardium Installation Manager (GIM) to install and maintain Guardium components on managed servers.

The GIM component includes a GIM server, which is installed as part of the
Guardium system, and a GIM client, which must be installed on servers that host
databases or file systems that you want to monitor. The GIM client is a set of Perl
scripts that run on each managed server. After you install the GIM client, it works
with the GIM server to perform these tasks:
v Check for updates to installed software
v Transfer and install new software
v Uninstall software
v Update software parameters
v Monitor and stop processes that run on the database server
For example, you can use GIM to install your S-TAP modules and keep them
up-to-date.

The GIM client uses port 8444 to communicate with the GIM server.

You can use the GIM server through the Guardium user interface or through the
command-line interface (CLI).

The software modules that you can deploy by using GIM are packaged as GIM
bundles. A bundle is a file of type gim that contains software that can be deployed
by using GIM.

If your environment includes a Guardium system that is configured as a central manager, you must decide which Guardium systems you want to use as GIM servers. You can either manage all of your GIM clients, up to 4000, from a single Guardium system, such as the central manager, or you can manage them in groups from the different Guardium systems. If you manage all of your GIM clients from a single Guardium system, then you can view the status of all the GIM clients and perform related tasks from that one UI. If you choose to manage your GIM clients in groups from separate Guardium systems, then you can use each UI to work with the GIM clients that it manages; no overall view is available.

If you upgrade to Version 10.0 from V9.0 GPU patch 50 or later, there is no change
in how you can view information about GIM clients. If you upgrade from an older
version, these restrictions apply: After you upgrade your Central Manager, you can
still view information about GIM clients that are assigned to other Guardium
systems, but you can no longer do provisioning to those GIM clients from the
Central Manager. After you upgrade all your Guardium systems, you can view
each GIM client only from the Guardium system that is its GIM server.

To manage large numbers of GIM installations, you can create groups of GIM
clients. Then, you can use the groups to install, update, and manage software
bundles.

The GIM client monitors the processes that you install by using GIM. It checks the
heartbeat of each process once each minute, and passes status changes for the

processes to the GIM server. The status of each process is displayed on the Process
Monitoring panel. Changes are reflected within three minutes. Changes to the
status of the GIM client itself are reflected according to the interval at which the
client polls the server and delivers its “alive message”.

Note: When performing a system backup and restore from one server that has GIM defined to another server, you must configure a GIM failover to the restore server. This GIM configuration applies to a Backup Central Manager or to a system backup and restore.

GIM Server Allocation


Remotely connect to a pre-installed and inactive (not connected to any collector) GIM agent and connect it to a collector without the need to access the database server.

Overview

The following process (also called GIM Auto-Discovery) allows you to remotely
connect to a pre-installed and inactive GIM agent and make it connect to a
collector without accessing the database server.
1. An inactive GIM client runs in listener mode and waits for a connection from
any collector.
2. From the collector's graphic user interface (GUI) or the GuardAPI, you can
send the IP address of any collector to the inactive GIM client.
3. The inactive GIM client accepts the collector's IP address and connects to it.

If GIM is installed without specifying a collector's IP address (--sqlguardip), it runs in server mode. When the GIM agent is running in server mode, it accepts messages only from verified collectors over SSL that have certificate authentication and shared secret verification. If there are 30 or more consecutive authentication failures, the GIM agent stops listening for requests. This action prevents denial of service (DoS) attacks.

You can define your own certificates, shared secret, and port number. To use other
certificates, specify the certificate/key full path name in the installation parameters:
--key_file and --cert_file. Load the certificates to the collector key store with the CLI command store certificate gim.

To set a shared secret other than the default one, use the GuardAPI command
grdapi gim_set_global_param paramName=gim_listener_default_shared_secret
paramValue=<password>. The format should be a string. The shared secret must be
identical on the database server and collector.

Note: Do not specify the unencrypted shared secret in the command line.

To use a port other than the default one, specify the port in the installation
parameter --listener_port. Set the GIM global parameter
gim_listener_default_port with the new port in the GIM Global Parameters.

Note: The default or user defined port must be enabled in the firewall.
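As an illustration only, on a Linux database server that uses firewalld, opening the default listener port might look like the following; the port shown is the product default of 8445 (see GIM Remote Activation), and must be adjusted if you changed gim_listener_default_port:

firewall-cmd --permanent --add-port=8445/tcp
firewall-cmd --reload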

Parameters

The following list describes the GIM installation parameters:



v --sqlguardip - Sets the collector IP address/hostname that the GIM client is
connecting to. If it is not specified, the GIM client will work in “Listener mode".
v --ca_file - Full file name path to the Certificate Authority PEM file.
v --key_file - Full file name path to the private key PEM file.
v --cert_file - Full file name path to the certificate PEM file.
v --shared_secret - Specifies a shared secret to verify collectors.
v --listener_port - Specifies a port number that is different from the default.
v --no_listener - Disables GIM from running in "Listener mode" even if --sqlguardip is not specified.
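The following is a minimal sketch of a listener-mode installation that uses these parameters; the installer file name, installation directory, certificate paths, shared secret, and port are placeholders, not values documented here:

./<GIM_installer_name>.sh -- --dir /usr/local/guardium --ca_file /path/to/ca.pem --key_file /path/to/gim-key.pem --cert_file /path/to/gim-cert.pem --shared_secret <shared_secret> --listener_port 8445

Because --sqlguardip is omitted, the GIM client starts in listener (server) mode and waits for a collector to contact it.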

Any attempt to:


v update parameters
v install modules
v uninstall GIM directly on the database server
causes the GIM agent to exit server mode and process the request. If the GIM client cannot connect to the designated collector, it returns to server mode. After the GIM agent is assigned to a valid collector's IP address or host name, you cannot set the GIM agent to run in server mode again. All new GIM agent server mode parameters appear as READ-ONLY.

Note: The following parameters must exist in the file system or the installation
fails:
v ca_file
v key_file
v cert_file

Setting GIM in Server Mode Global Parameters


You can set up the server mode GIM parameters by using the following GuardAPI
command:
grdapi gim_set_global_param
paramName=gim_listener_default_shared_secret
paramValue=<password>

This value is encrypted and stored in the database. It must be identical to the unencrypted shared secret that you specify when you install the GIM agent on the database server.

To set up a new default server mode GIM port, use the following GuardAPI
command:
grdapi gim_set_global_param paramName=gim_listener_default_port paramValue=<port number>

This value must be identical to the --listener_port value that you specify when you install the GIM agent on the database server.

Note: If you use a different port or shared secret, you must specify the shared
secret or port every time you connect the collector IP/hostname to the server mode
GIM agent.



GIM Remote Activation
With GIM Remote Activation, you can remotely connect to a pre-installed GIM agent and connect it to a collector without accessing the database server.
1. Click Manage > Module Installation > GIM Remote Activation.
2. Type in the IP address or host name where GIM is running in listener mode in
the IP / hostname field. Otherwise, select a server group from the following
list.
3. Type in a numerical value in the GIM Listener Port if it is different from the
GIM Global setting. The default value is 8445.
4. Enter the shared secret in the GIM Listener Password field if it is different
from the GIM Global setting.
5. Click Submit to process the information or Reset to clear the information.

Note: You must enter an IP address / host name or select a server group, but the
GIM listener port and GIM listener password are optional. When you install the
GIM client in listener mode, the settings of the shared secret and certificates cannot
be changed unless you reinstall the GIM client.

Create a GIM Auto-discovery Process

Specify which host and ports the Auto-discovery process scans.


1. Configure Auto-discovery by clicking Discover > Database Discovery > GIM
Auto-discovery Configuration.
2. Click New to create a new process and open the Auto-discovery Process
Builder.
3. Enter a Process name that is unique on your Guardium system.
4. To run a probe job immediately after the scan job completes, check the Run
probe after scan check box.
5. For each host or subnet to be scanned, enter the host and port, and click Add
scan. Each time that you add a scan, it is added to the task list.

Note:
v Wildcard characters are enabled. For example: to select all addresses
beginning with 192.168.2, use 192.168.2.*.
v Specify a range of ports by putting a dash between the first and last port
numbers in the range. For example: 4100-4102.
v After you add a scan, modify the host or port by typing over it. Click Apply
to save the modification.
v If you have a dual stack configuration, you will need to set up a scan for
both the IPV4 and the IPV6 addresses.
v To remove a scan, click the Delete this task icon for the scan. If a task has
scan results dependent upon it, the scan cannot be deleted.
6. When finished adding scans, click Apply, and run the job or schedule the job
in the future.

GIM Global Parameters


Define your own shared secret or GIM listener port through the user interface.
1. To open the GIM Global Parameters, click Manage > Module Installation >
GIM Global Parameters.



2. Select gim_listener_default_shared_secret to set the shared secret or
gim_listener_default_port to set the port.
3. Click the icon to edit the selected parameter.
4. Change the value and click Save to change the parameter or Close to return to
the page.

Installing the GIM client on a Windows server


A wizard is provided to help you install the GIM client on each database server.

About this task

Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Run the setup.exe file to start the wizard that installs the GIM client. The
setup.exe file is located in the gim_client folder.
3. Follow and answer the questions in the installation wizard.

What to do next

You can view the results of the installation in the log file at c:\guardiumstaplog.txt.

Installing the GIM client by using silent installation


If you prefer, you can install the GIM client from the command line instead of
using the wizard.

About this task

Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Open a command prompt and navigate to the gim_client folder under the
folder where you placed the installer.
3. Enter this command:
setup.exe /s /z" <--host=>g10.guardium.com --path=c:\\program files\\guardium\\GIM"
The --host= parameter is optional. The GIM Listener is installed in listener
mode.

What to do next
You can view the results of the installation in the log file at c:\guardiumstaplog.txt.

Uninstalling the GIM client


Procedure
1. Open a command prompt and navigate to the gim_client folder under the
folder where you installed the client.
2. Enter this command:
"setup.exe" /s /z" <--host=>g10.guardium.com --remove=true"

The --host= parameter is optional.



Installing the GIM client on a UNIX server
Use this command to install the GIM client on each database server.

Before you begin

The GIM client requires Perl version 5.8.x or 5.10.x to be installed. Verify that the
following packages are installed:
v IPC-Run3
v Win32-DriveInfo

About this task

Beginning with Guardium 9.1, you can install and use the GIM client in a Solaris
slave zone or an AIX workload partition (WPAR). This enables you to use the GIM
client to install an S-TAP in a slave zone or WPAR. When you install an S-TAP in a
slave zone or WPAR, the K-TAP is disabled, regardless of the setting of the
ktap_enabled parameter. You can also use the GIM client to install the
Configuration Auditing System (CAS) agent in a slave zone or WPAR. You cannot
install the discovery bundle in a slave zone or WPAR; the discovery agent running
on the global zone can collect information from other zones. The process for
installing the GIM client in a Solaris slave zone or an AIX workload partition is the
same as the process for installing in the master zone. The installation can take a
few seconds longer than installing in the master zone. If you install the GIM client
on a Solaris system with master and slave zones, you must install the client in the
same location on the master and slave zones. This location cannot be a shared
directory.

On Solaris, the GIM client and supervisor in each slave zone are controlled by the
GIM supervisor process that runs in the master zone. If the supervisor process on
the master zone is shut down, all GIM processes on the slave zones are shut down
as well.

Procedure
1. Place the GIM client installer on the database server in any folder.
2. Run the installer:
./<installer_name> [-- --dir <install_dir> <--sqlguardip> <g-machine ip> --tapip <db server ip add

Where sqlguardip is optional. If you omit this parameter, the GIM client is
installed in listener mode.
3. On Red Hat Linux, version 6 or later, run these commands to verify that the
files have been added:
ls -la /etc/init/gim*
ls -la /etc/gsvr*
On Solaris, version 10 or later, run this command:
ls /lib/svc/method/guard_g*
On all other platforms, run these commands to verify that the following new
entries were added to /etc/inittab:
gim:2345:respawn:<perl dir>/perl <modules install dir>/GIM/<ver>/gim_client.pl
gsvr:2345:respawn:<modules install dir>/perl <modules install dir>/SUPERVISOR/<ver>/guard_supervis

Where modules install dir is the directory where all GIM modules are
installed, for example, /usr/local/guardium/modules.



4. Enter this command to verify that the GIM client, SUPERVISOR process, and
modules are running:
ps -afe | grep modules
5. Log in to the Guardium system and check the Process Monitoring status.

What to do next

Uninstalling the GIM client


Procedure
1. Run this command: <GIM installation directory>/GIM/current/uninstall.pl
2. Verify that <GIM installation directory> is removed.

Upgrading the GIM client


You can use GIM to upgrade the GIM client to a newer version.

Procedure
1. Upload the latest available BUNDLE-GIM.gim file to the Guardium system.
2. Use the GIM GUI to schedule the installation of the new BUNDLE-GIM.gim file.
3. Monitor the installation process by clicking the i icon and pressing Refresh. When the installation has successfully completed, the INSTALLED status is displayed.
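If you prefer to drive the same upgrade from GuardAPI, a minimal sketch that uses functions shown in the GIM - CLI topic might look like the following; the client IP is a placeholder, and BUNDLE-GIM must already be uploaded and imported on the GIM server:

grdapi gim_assign_latest_bundle_or_module_to_client clientIP=<GIM_client_IP> moduleName=BUNDLE-GIM
grdapi gim_schedule_install clientIP=<GIM_client_IP> date=now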

Using groups with GIM


You can use groups to make some GIM tasks easier.

Before you begin

About this task

You can create a group of GIM clients and use it to roll out updates to those managed servers.

Procedure
1. Click Setup > Tools and Views > Group Builder. In the Group Builder, create
a new group. For the Group Type Description choose Client Hostname. The
new group is added to the list of existing groups.
2. Choose the new group in the Modify Existing Groups list and add members to
the group. You can add them manually or populate the list from a query. To
populate the list from a query, click Populate from Query and note these
requirements:
a. For Query select a report name that begins with GIM.
b. For Fetch Member from Column, select GIM Client Name.
c. In each Enter (Like) field, enter a value to be matched, or % if this field is
not used to identify clients.
d. Save the group and run or schedule the query.

Results
You can use the group in the GIM Setup by Client screen to work with this set of
clients as a group rather than individually.



GIM - GUI
The purpose of GIM is to provide automatic installation capability for modules,
taking advantage of a GIM client and GIM server residing on each database server
and Guardium system respectively.

Users may also interact with GIM through the CLI. See “GIM - CLI” on page 179
for information on installing and upgrading modules with GIM using CLI.

You can use the GUI of the Guardium Installation Manager (GIM) for these tasks:
v Process Monitoring
v Upload Module Package
v Configure, Install, or Update Modules (by client)
v Configure, Install, or Update Modules (by module)
v Rollback Mechanism

Note: If A-TAP is being used, A-TAP must first be disabled on the database server
before performing a GIM-based S-TAP upgrade or uninstall.

Note: GIM does not support the installation of native S-TAP installers (rpm, deb, bff, etc.).

Note: Installation of modules on a specific client for the FIRST TIME using the
GIM utility must be in the form of a BUNDLE. Future upgrades of specific
modules which are part of the installed bundle can be either as single modules or
bundles.

Process Monitoring

Displays the status for GIM processes on servers.

Supervisor

The GIM Supervisor is a process whose main purpose is to supervise and monitor Guardium processes. Specifically, it is responsible for starting and stopping Guardium processes, making sure that they are running at all times, and restarting them if they fail.

Note: For Guardium V9.0, on Solaris 5.10/5.11, GIM and SUPERVISOR are now
SMF services. They are not inittab entries anymore.

To start/stop gim/supervisor use:

svcadm -v enable guard_gim

svcadm -v enable guard_gsvr

svcadm -v disable guard_gim

svcadm -v disable guard_gsvr

GIM

The GIM process is the GIM client process, which is responsible for such duties as registering with the GIM server, initiating requests to check for software updates, installing new software, updating module parameters, and uninstalling modules.

Upload Module Package


Loads the modules package file (a .gim file containing module(s) sub-packages) to
the database.
1. Click Manage > Install Management > Upload to open Upload.
2. Click Browse to browse where your package (.gim file) is on disk.
3. Click Upload to upload your package.
4. Click the Import icon of the uploaded package located under Import Uploaded
modules to load the package.

Configure, Install, or Update Modules (by client)

You can use this option to configure/install a module for any number of clients
from packages already loaded.

The simplest, safest, and quickest way to install or uninstall modules is by using
bundles. Using bundles guarantees automatic dependency and order resolution.

If you have already created groups of clients, you can use a group to specify the
clients to be the target for the specified action. Otherwise use these steps to select a
list of clients.
1. Click Manage > Install Management > Setup by Client to open the Client Search Criteria.
2. Click the Search button to perform filtered search and display the Clients
panel.
3. Select the clients that will be the target for the specified action.
v If there are more than 20 clients then the list of clients will be split onto
additional pages

Note: Clicking the Select All button will only select the clients on the
current page being viewed
4. From the Clients panel, the following actions can be taken:
v Configure/install common parameters
v Configure/install module
v Reset Clients - By clicking Reset Clients, you can disassociate modules from
selected clients and remove the client definition from the Guardium system
database. Note: Resetting a client does NOT trigger module removal on the
database server.
v View installation state of this client - By clicking on the information icon you
can open up the Installation Status panel and view the installation status of a
client. This panel displays all modules on the client which are installed or
scheduled for update or uninstall. From this panel, you can use the Edit this
module icon to configure parameters for each module individually.

Configure, install, or update modules (by module)

Starting from modules, enables users to configure and install a module for any
number of clients. Any required packages should have been loaded beforehand.



1. Click Manage > Install Management > Modules Search Criteria to open the
Modules Search Criteria.
2. Click Search to perform a filtered search and display the Modules panel, which lists all the available modules and bundles.
3. Make a selection and click Next to open the Clients panel.
4. Select the clients that will be the target for the specified action
v If there are more than 20 clients then the list of clients will be split onto
additional pages

Note: Clicking the Select All button will only select the clients on the
current page being viewed
5. From the Clients panel, the following actions can be taken:
v Configure/install common parameters
v Configure/install module
v Reset Clients - By clicking the Reset Clients button you can disassociate
modules from selected clients and remove the client definition from the
Guardium system database. Note: Resetting a client does NOT trigger
module removal on the database server.
v View installation state of this client - By clicking on the information icon you
can open up the Installation Status panel and view the installation status of a
client. This panel displays all modules on the client which are installed or
scheduled for update. From this panel, the Edit this module icon can be
used to configure parameters for each module individually.

Configure/install common parameters


1. Click Setup > Tools and Views > Parameter Configuration.
2. Select the clients in the Client Module Parameters section that you would like
to modify parameters for
3. Modify any of the listed module parameters within the Client Module
Parameters section

Note: Parameters can be entered in the Common Module Parameters section and then, by clicking Apply to Selected, applied to the selected clients in the Client Module Parameters section.
4. Click Apply to Clients, after entering values for all required parameters on all
selected clients, to save the configuration to the database. Before the save, a
validation is performed to make sure all required fields have values or their
values are in pre-defined range.
5. After Saving configurations, click Install/Update to schedule the module for
installation on the selected clients. In addition, from the Module Parameters
panel you may uninstall, cancel install/update, cancel uninstall, and revert
current changes. Note that the schedule date and time corresponds to the date
and time on the selected clients.

Note: The Generate Grdapi button at the front of the client line under the Client Module Parameters section enables you to view the list of grdapi commands that reflect the changes that you have made to the module, such as assigning, installing, uninstalling, scheduling, and updating of the module. These grdapi commands are provided so you can take the set of commands and apply them to other clients in a script if you would like to reproduce the changes.



Note: The open Property content button appears in front of every writable property and opens a window that simplifies the editing of a long field.

Note: The View installation state of this client button, also at the front of the
client line under the Client Module Parameter section provides a view into the
current installation status for the module.

Note: When installing K-TAP as part of BUNDLE-STAP, the K-TAP status will be set to INSTALLED even if the actual K-TAP module was missing for this specific platform. However, a message will be shown on the GIM-EVENTS report indicating that the K-TAP module was missing.

Note: You should check the GIM-EVENTS report after installing bundles on the
DB servers.
6. Click Back to go back to the Clients panel.

Windows S-TAP Parameters in GIM

The WINSTAP_CMD_LINE parameter can be used to specify additional install options, such as controlling the installation of certain features (DB2, Oracle, CAS, etc.).

If nothing is specified in addition to the default command line, features are installed according to the typical installation feature set.

Here is the list of options; each can be set either to 0 (not installed) or to 1 (installed).
v MSSQLSharedMemory
v DB2SharedMemory
v CAS
v NamedPipes
v Lhmon
v LhmonForNetwork
v START: this parameter controls whether S-TAP is started or not after installation.
v INSTALL_DIR: this specifies where to install the software.
v QUIET: controls the switches that are passed to the Windows installer. Do not make any changes; this option is used to debug installation issues.
v DBALIAS: an alias for the database server machine. You can use the machine host name; it is not related to the actual database installed on the server.

For example, the following command line options:
CAS=0 NamedPipes=0
will skip the installation of CAS and Named Pipes support.

If you are installing an S-TAP and you do not want it to automatically discover
MSSQL databases, type START=0 in the WINSTAP_CMD_LINE column to prevent
the S-TAP from starting when it is installed. You can also specify this parameter for
a single database server by using the GIM API:
grdapi gim_update_client_params clientIP=xx.xx.xx.xx paramName=WINSTAP_CMD_LINE paramValue="START=0"

The installation directory for the S-TAP must be empty or not exist. You cannot
install an S-TAP into a directory that already contains any files. For installation on
64-bit machines you must specify the 32-bit program files folder (for example,
C:/program files (x86)/guardium/stap and NOT C:/program files/guardium/stap). Otherwise the installation will fail because it cannot write to the 64-bit folders.

Configure/install module
1. If configuring, installing, or updating:
a. by client
1) Click Next to display the Common Modules panel, which lists all available common modules and bundles that can be installed on the selected clients.
2) Select a module or bundle to configure/install for the selected clients.

Note: The status of a module or bundle will be displayed only if its version matches either an installed version or a scheduled version.
3) Click Next after selecting a module or bundle from the list.
b. by module
1) Click Next after selecting the clients from the list
2. Depending on the module or bundle selected, and possible dependencies, you
will then see options based on the selection types:
v Bundle
Clicking Next for a bundle will take you to the Module Parameters panel
that will display all the parameters for all modules of the bundle. Modify
any of the listed module parameters within the Client Module Parameters
section.

Note: A bundle is treated as a regular module.


v module with no mandatory dependencies
Clicking Next for modules with no mandatory dependencies will take you to
the Module Parameters panel that will display the module's parameters.
Modify any of the listed module parameters within the Client Module
Parameters section.
v module with dependencies
Clicking Next for a module with dependencies displays that module and all
dependencies modules in the Dependent Modules screen. Click the Edit icon
for any of the modules to configure its parameters for all selected clients;
taking you to the Module Parameters panel that will display the module's
parameters. Change any parameters there and click the Accept button to
come back to the Dependent Modules screen.

Note: The configuration for a module and all of its dependencies can be saved to the database only all at once, and they can be installed only as a bundle. This means that they cannot be individually saved or scheduled for installation. For example, if, in the middle of scheduling an installation, the process fails for one of the modules on one of the clients, all installations before that failure are rolled back.

Note: Parameters can be entered in the Common Module Parameters section and then, by clicking Apply to Selected, applied to the selected clients in the Client Module Parameters section.
3. Click Apply to Clients, after entering values for all required parameters on all
selected clients, to save the configuration to the database. Before the save, a
validation is performed to make sure all required fields have values or their
values are in pre-defined range.



4. After Saving configurations, click Install/Update to schedule the module and
its dependencies for installation on the selected clients. In addition, from the
Dependent Module Parameters panel you may uninstall, cancel install/update,
cancel uninstall, and revert current changes. Note that the schedule date and
time corresponds to the date and time on the selected clients.

Note: The Generate Grdapi button at the front of the client line under the Client Module Parameters section allows you to view the list of grdapi commands that reflect the changes you have made to the module, such as assigning, installing, uninstalling, scheduling, and updating of the module. These grdapi commands are provided so you can take the set of commands and apply them to other clients in a script if you would like to reproduce the changes.

Note: The open Property content button appears in front of every writable property and opens a window that simplifies the editing of a long field.

Note: The View installation state of this client button, also at the front of the
client line under the Client Module Parameter section provides a view into the
current installation status for the module.

Note: When installing K-TAP as part of BUNDLE-STAP, the K-TAP status will be set to INSTALLED even if the actual K-TAP module was missing for this specific platform. However, a message will be shown on the GIM-EVENTS report indicating that the K-TAP module was missing.

Note: Always check the GIM-EVENTS report after installing bundles on the DB servers.

Note: When uninstalling modules, GIM uninstalls only the selected module and does not uninstall its dependencies.

Installing an S-TAP with Kerberos

When you install an S-TAP to monitor a datasource that uses Kerberos authentication, the S-TAP must be able to locate the Kerberos plugin and libraries. When you assign the S-TAP module to a client that uses Kerberos, through the GIM UI or the command line, you can specify two parameters:
STAP_KERBEROS_PLUGIN_DIR
Optional. Specifies the location of the Kerberos plugin. If this parameter is
not specified, the S-TAP does not attempt to work with Kerberos.
STAP_KERBEROS_LD_LIBRARY_PATH
Optional. Specifies the location of the Kerberos libraries. If this parameter
is not specified, and STAP_KERBEROS_PLUGIN_DIR is specified, the S-TAP
looks for the libraries in the standard system search path.
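As an illustration, a hypothetical GuardAPI sequence that sets these two parameters for one client might look like the following; the client IP and directory paths are placeholders, not documented defaults:

grdapi gim_update_client_params clientIP=xx.xx.xx.xx paramName=STAP_KERBEROS_PLUGIN_DIR paramValue=/opt/kerberos/plugins
grdapi gim_update_client_params clientIP=xx.xx.xx.xx paramName=STAP_KERBEROS_LD_LIBRARY_PATH paramValue=/opt/kerberos/lib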

Rollback Mechanism
The purpose of GIM's rollback mechanism is to handle errors during installation and to recover modules to their prior state. The rollback mechanism supports the following recovery scenarios:
1. Live Upgrade Recovery
For Bundles



v When bundles are installed, recovery will rollback the modules that have an
install failure within the bundle.
v Modules that are marked as NO_ROLLBACK (in the form of a read-only
parameter <MODULE>_NO_ROLLBACK=1) will not be rolled back in the event of a
failure. S-TAP/KTAP are two such modules that once successfully installed
will not be rolled back in the event of a failure of another module.
For non-Bundles
v Rollback entails the removal of the standalone module in the case of a
scratch install or reverting back to the previous version in case of an
upgrade.
2. Boot Time Installation Recovery
If installation occurs during a system reboot, a second system reboot will be
needed in order to complete recovery. Users will still see the status IP-PR after
reboot, and a GIM_EVENT entry that indicates a second reboot is needed to
complete the recovery process. The module/bundle state will then indicate a
“FAILED” status after the second reboot.

Note: When the status is 'IP-PR', the way to reboot the DB server differs per OS (any other way of rebooting the system will keep the pending modules in a pending state):
Linux : shutdown -r
SuSe : reboot
HP : shutdown -r
Solaris : shutdown -i [6|0] (Note : ’0’ can be used only if shutdown is done from the terminal s
AIX : reboot
Tru64 : reboot

Note: In addition, prior to reboot, A-TAP instances must be disabled/deactivated.

Changing the GIM server for a GIM client

You can change the GIM server that manages one or more GIM clients. You might
want to make this change in order to balance the load among your GIM servers, or
to make it easier to distribute GIM packages. To reassign a group of GIM clients to
a different GIM server, follow these steps:
1. Click Manage > Install Management > Setup by Module to change the GIM
server for a GIM client.
2. Select a GIM bundle that is installed on the clients that you want to reassign.
Click Next.
3. Select the clients to be changed. You can click Select All or select clients
individually. Click Next.
4. Click Select All.
5. For the GIM_URL parameter, enter the hostname or IP address of the GIM server
(Guardium system) to which you want to reassign the selected GIM clients.
Click Apply to Selected.
6. On the same panel click Apply to Clients, then click Install/Update and
schedule the update.
After the update has been processed, the GIM client will be managed by the new
GIM server.

GIM - CLI
You can use the CLI to install or upgrade modules on the database server.

The following examples cover only some of the more common scenarios. For
more information and a complete list of all supported CLI commands, refer to
GuardAPI GIM Functions.
v Loading module packages
v Upgrade or Scratch install using bundles
v Uninstall a module/bundle
v Installation Status
v Querying modules state

Loading module packages

Before modules can be installed on a DB server, they must be loaded onto the
Central Manager GIM database. If a Central Manager is not part of the
architecture, packages must be loaded onto each Guardium system. Use the Load
package option in the GIM UI to load the packages into the database.

Upgrade or Scratch install using bundles

Note: Scratch install also refers to the case where an old (pre-GIM) S-TAP is
installed on the database server.

A bundle is a list of modules grouped together to simplify the installation process.

Always use bundles to install or upgrade modules.
1. Get the list of registered clients (i.e. database servers installed with GIM client
that have registered with GIM server):
grdapi gim_list_registered_clients
ID=0
####### ENTRY 0 #######
CLIENT_ID: 1
IP: 192.168.2.204
OS: HP-UX
OS_RELEASE: B.11.00
OS_VENDOR: hp
OS_VENDOR_VERSION: B.11.00
OS_BITS: 64
PROCESSOR 9000
####### ENTRY 1 #######
CLIENT_ID: 2
IP: 192.168.2.210
OS: Linux
OS_RELEASE: 2.6.16.54-0.2.5-smp
OS_VENDOR: suse
OS_VENDOR_VERSION: 10.1
OS_BITS: 64
PROCESSOR x86_64
2. Assign (i.e. prepare to install; NOT a request to actually install it on the client)
the latest bundle available for a specific client:
grdapi gim_assign_latest_bundle_or_module_to_client clientIP=192.168.2.210 moduleName=BUNDLE-STAP

Note: In order to assign a specific bundle or module to a client, replace step 2
with the following sequence:
gim_get_available_modules clientIP="client ip"
gim_assign_bundle_or_module_to_client_by_version clientIP="client ip" modulesName="Bundle/Module name"

3. Schedule the installation.
grdapi gim_schedule_install clientIP=192.168.2.210 date=now

Note: For installation on multiple clients, repeat steps 2-3.

Note: For flexible GIM scheduling, use now + [1-9][0-9]* minute | hour | day
| week | month. Example: now + 1 day, now + 3 minutes
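
For example, to schedule the installation on the client two hours from now
(quotes are shown here because the value contains spaces):
grdapi gim_schedule_install clientIP=192.168.2.210 date="now + 2 hour"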

GIM scheduling

All time is relative to Guardium system time. Now means right now as specified
by the Guardium system. Now +30 minute is the current Guardium system time +
30 minutes. This can be seen when looking at the installation status by clicking on
the small "i" next to a client, for example in Manage > Module Installation >
Setup by clients. If the time on the database server has passed the time on the
Guardium system specified for install, then the install begins.

Example one: set up three clients, (a) with its clock set to Guardium system time
+ 1 hour, (b) set to Guardium system time, and (c) set to Guardium system time -
1 hour.

Set up an S-TAP installation via GIM for "now +30 minute".

Client (a), whose clock is already 30 minutes ahead of the time set for
installation, installs immediately.

Client (b) installs in 30 minutes.

Client (c) takes another hour after (b) to install.

Example two: the same setup as example one, but this time specify "now".

Installation status changes to IP immediately on all clients.

Uninstalling a module/bundle
grdapi gim_uninstall_module clientIP=192.168.2.210 module=BUNDLE-STAP date=now

You can specify date=now or use the format YYYY-MM-DD HH:mm. The
uninstallation takes place the next time the GIM client checks for updates
(GIM_INTERVAL).
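
For example, to schedule the uninstallation for a specific date and time by using
the YYYY-MM-DD HH:mm format (the date shown is illustrative only):
grdapi gim_uninstall_module clientIP=192.168.2.210 module=BUNDLE-STAP date="2016-01-31 02:00"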

Installation Status
Additional information about the latest status that the client has sent can be
retrieved by running the following CLI command. The status message also
appears as an entry in the GIM_EVENTS table, from which a report can be
generated:
grdapi gim_get_client_last_event clientIP="client ip"
grdapi gim_get_client_last_event clientIP=winx64
grdapi gim_get_client_last_event clientIP=9.70.144.73

Here is an example of the output from this command:


ID=0
OK
BUNDLE-STAP-8.0_r2609_1 INSTALLED

STAP-UTILS-8.0_r2609_1 INSTALLED
COMPONENTS-8.0_r2609_1 INSTALLED
KTAP-8.0_r2609_1 INSTALLED
STAP-8.0_r2609_1 INSTALLED
TEE-8.0_r2609_1 INSTALLED
ATAP-8.0_r2609_1 INSTALLED

Querying modules state


To query the state of the installed modules for a specific client, run the
following CLI command:
grdapi gim_list_client_modules clientIP="client ip"

The following states are possible:


INSTALLED
Module is installed.
PENDING-INSTALL
Module is pending to be scheduled for installation.
PENDING-UNINSTALL
Module is pending to be scheduled for uninstallation.
PENDING-UPDATE
Module is pending to be scheduled for update.
IP Module installation is in progress.
FAILED
Module's last operation failed.
IP-PR Module requires client reboot in order to complete the installation process.
Prior to rebooting, deactivate all A-TAP instances. Rebooting the database
server is different per OS (Any other way of rebooting the system will
keep the pending modules in a pending state).
v AIX: reboot
v Linux : shutdown -r
v SuSe: reboot
v HP-UX: shutdown -r
v Solaris: shutdown -i [6|0] (Note : '0' can be used only if shutdown is
done from the terminal server)
v Tru64: reboot

Output example
ID=0
####### ENTRY 0 #######
MODULE_ID: 11
NAME: INIT
INSTALLED_VERSION 8.0_r3852_1
SCHEDULED_VERSION 8.0_r3852_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 1 #######
MODULE_ID: -1
NAME: COMMON
INSTALLED_VERSION 8.0_r0_1
SCHEDULED_VERSION 8.0_r0_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 2 #######
MODULE_ID: 12
NAME: UTILS
INSTALLED_VERSION 8.0_r3852_1
SCHEDULED_VERSION 8.0_r3852_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 3 #######
MODULE_ID: 13
NAME: SUPERVISOR
INSTALLED_VERSION 8.0_r3852_1
SCHEDULED_VERSION 8.0_r3852_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 4 #######
MODULE_ID: 14
NAME: GIM
INSTALLED_VERSION 8.0_r3852_1
SCHEDULED_VERSION 8.0_r3852_1
STATE: INSTALLED
IS_SCHEDULED: N
####### ENTRY 5 #######
MODULE_ID: 15
NAME: BUNDLE-GIM
INSTALLED_VERSION 8.0_r3852_1
SCHEDULED_VERSION 8.0_r3852_1
STATE: INSTALLED
IS_SCHEDULED: N

Enabling K-TAP

If, during the installation process, K-TAP fails to load properly (possibly because
of a hardware or software incompatibility), Tee is installed as the default
collection mechanism. To switch back to K-TAP after the compatibility issues are
resolved, follow these steps.
1. Disable the S-TAP. See Stop UNIX S-TAP for more information.
2. Edit guard_tap.ini and change ktap_installed to 1 and tee_installed to 0 (see
the example after these steps).
3. Run the guard_ktap_loader install command.
example: /usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader install
4. Run the guard_ktap_loader start command.
example: /usr/local/guardium/guard_stap/ktap/current/guard_ktap_loader start
5. Re-enable S-TAP. See Restart UNIX S-TAP for more information.
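
As a minimal illustration of step 2, the relevant lines in guard_tap.ini should end
up looking like this, with all other parameters in the file left unchanged:
ktap_installed=1
tee_installed=0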

Copying a K-TAP module by using GIM


If you build a custom K-TAP module for a Linux database server, you can use GIM
to copy that module to other Linux database servers.

Before you begin

The custom K-TAP module is built when you install an S-TAP on a Linux server
for which there is no pre-built K-TAP for the current kernel. The custom K-TAP
module is built only if the kernel-devel package is installed. When you install the
S-TAP bundle, use the GIM UI to set the value of the GIM parameter
STAP_UPLOAD_FEATURE to 1. This tells the GIM client to upload the custom K-TAP
module to the Guardium system after it is built, and then automatically create a
custom S-TAP bundle.

Procedure
1. Use GIM to install the S-TAP on the Linux database server. The installer
determines that a custom K-TAP module is required and builds it.
2. The custom K-TAP module, along with its sha256sum value, is uploaded
automatically to the Guardium system for which the S-TAP is configured. Note
that this might not be the same Guardium system that you use as a GIM server.
3. On the Guardium system to which the K-TAP is uploaded, run this CLI
command: grdapi make_bundle_with_uploaded_kernel_module. This adds the
newly built K-TAP module to the corresponding S-TAP bundle. There must be
at least one S-TAP bundle whose build number and operating system attributes
match those of the uploaded K-TAP module. Loaded bundles are stored in
/var/gim_dist_packages. The script creates a new S-TAP bundle with _8XX
appended to the build number; the new bundle is located in /var/dump. If the
command is successful, it prints a message containing the name of the new
S-TAP bundle, for example: Created guard-bundle-STAP-9.0.0_r71327_v90_800-suse-
11-linux-x86_64.gim with kernel ktap-71327-suse-11-linux-x86_64-
xCUSTOMxeagle910-3.0.101-303.gefb7031-default-x86_64-SMP. You must then
load the new GIM bundle by running the GuardAPI command grdapi
gim_load_package and supplying the name of the new bundle printed in the
previous step; otherwise the bundle is not visible in the GIM GUI (see the
worked example after this procedure).
4. If the new bundle is on a Guardium system that is not your GIM server, copy
the new bundle to the GIM server.
5. Use the GIM GUI or CLI to distribute the new bundle to other database servers
that are running the same Linux distribution as the server where the custom
K-TAP was built.
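
As a worked illustration of step 3, the command sequence might look like the
following. The bundle name placeholder stands for the name printed by the first
command; the exact parameter syntax for gim_load_package is documented in
GuardAPI GIM Functions:
grdapi make_bundle_with_uploaded_kernel_module
grdapi gim_load_package <name of the new bundle printed by the previous command>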
“Building a K-TAP on Linux” on page 25
There are hundreds of Linux distributions available, and the list is growing.
This means that there might not be a K-TAP already available for your Linux
distribution. If the correct K-TAP is not available, the S-TAP installation process
can build it for you.
“Copying a new K-TAP module to other systems” on page 25
When you build a new K-TAP module for a Linux database server, you can
copy that module to other database servers that run the same Linux
distribution.

GIM dynamic updating


GIM clients check for updates from the GIM server at regular intervals. The GIM
server can calculate the best polling interval to use based on system conditions.

Each GIM client sends an “alive” message to its GIM server regularly, to check
whether any updates are ready to be processed. In prior releases, this message has
been sent at a fixed interval, regardless of system conditions. Now this polling
interval can be calculated and updated based on conditions at the GIM server. The
interval is calculated regularly, and the new value is passed to the GIM client in
response to its “alive” message. This feature is enabled by default, but you can
turn it off if you prefer a fixed interval.

The calculation begins with the number of GIM clients that are connected to the
GIM server. Two conditions on the GIM server are used to calibrate the polling
interval: the load on the CPU and the number of database connections in use on
the Guardium system. Thresholds are defined for each of these conditions, and the
update interval is adjusted based on those thresholds, and on the number of GIM
clients connected to the GIM server.

These parameters are used in the calculation. The default value for each parameter
is shown in parentheses.
dynamic_alive_enabled (1)
Dynamic alive feature control. 1 - enabled, 0 – disabled.
dynamic_alive_check_interval (5)
The interval, in minutes, at which the polling interval is recalculated
dynamic_alive_default_load_factor (3)
Dynamic alive load factor, calculated each interval
dynamic_alive_cpu_level1_threshold (65)
Dynamic alive CPU usage level 1 threshold (%)
dynamic_alive_cpu_level2_threshold (85)
Dynamic alive CPU usage level 2 threshold (%)
dynamic_alive_db_conn_level1_threshold (75)
Dynamic alive DB connections usage level 1 threshold (%)
dynamic_alive_db_conn_level2_threshold (90)
Dynamic alive DB connections usage level 2 threshold (%)
dynamic_alive_cpu_load_sample_time
Dynamic alive cpu load sample time in seconds

A new Guardium API command, gim_set_global_param, is provided to modify
these parameters. Modify the value of dynamic_alive_enabled to turn the dynamic
calculation on and off. The thresholds and other parameters have been set based
on extensive testing; you should not need to modify them. The command takes a
parameter name and value as its arguments. For example:
grdapi gim_set_global_param paramName="dynamic_alive_enabled" paramValue="0"

The new polling interval is calculated according to the
dynamic_alive_check_interval parameter, which is a number of minutes and
defaults to five. When each GIM client sends its alive message to the server, the
server responds with the new polling interval as well as any other updates that
have been scheduled for that client.

The polling interval is calculated by dividing the number of GIM clients by a load
factor. The load factor defaults to three, so that by default the polling interval in
seconds for each GIM client is the number of GIM clients connected to that GIM
server divided by three. For example, if you have 150 GIM clients attached to a
GIM server, the default polling interval is 50 seconds.

The load factor is adjusted according to whether either the CPU load or the
number of database connections passes its thresholds. If either of these conditions
passes its first threshold, the load factor is adjusted to two. In the example of 150
clients, each client is told to poll the server every 75 seconds instead of every 50.
This adjustment is the same whether either or both conditions pass their first
threshold.

If either condition passes its second threshold, the load factor is adjusted to one. In
the example of 150 clients, each client is told to poll the server every 150 seconds.
This prevents frequent polling from contributing to a problem with CPU load or
network traffic.
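
To summarize the example of 150 GIM clients attached to one GIM server, the
calculated polling interval at each load factor is:
v Load factor 3 (neither threshold passed): 150 / 3 = 50 seconds
v Load factor 2 (a first threshold passed): 150 / 2 = 75 seconds
v Load factor 1 (a second threshold passed): 150 / 1 = 150 seconds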

When a condition returns to a value smaller than the current threshold, the next
calculation adjusts the load factor accordingly so that the calculated interval
reflects the conditions in effect at the GIM server.

When you upgrade your database server operating system


When you upgrade the operating system on your database server, you can allow
the GIM client to make the required changes in itself and your GIM-installed
modules.

Before you begin

Review the information at
http://www-01.ibm.com/support/docview.wss?uid=swg21679002 to see the
options that are available based on the level of your GIM client.

About this task

It is best to update all your GIM-installed modules as soon as possible after the
upgrade, whether manually or automatically. By default, the option to update these
modules automatically is disabled. If you want to use automatic updating, you
must configure the Guardium system that acts as your GIM server to support this
option, and you must make the required bundles available on this server.

Procedure
1. For each module that you have installed on your database server, locate the
GIM bundle containing the latest version of this module that supports the new
operating-system version. The build number of each bundle must be the same
or greater than the bundle that is currently installed. Load each bundle onto the
GIM server.
2. Use the gim_set_global_param command to set the value of the global
parameter auto_install_on_db_server_os_upgrade to 1. This enables the
automatic update option on the GIM server.
grdapi gim_set_global_param paramName="auto_install_on_db_server_os_upgrade" paramValue="1"

By default this parameter is set to 0, which means the option is disabled.


3. After completing all your other preparations, upgrade the operating system on
your database server.

Results

At first boot after OS upgrade, the GIM client recognizes that the operating system
has been upgraded and because the automatic update option is enabled, the client
takes these steps:
1. Changes the configuration files for all GIM-installed modules to support the
new operating system attributes.

2. Re-registers all the modules to the GIM server with the updated attributes.
3. Records an alert in the GIM_EVENTS report saying that an OS upgrade has
occurred and listing actions that should be taken.

When the modules are re-registered, the GIM server looks first for a bundle that
has the same build number as the previously installed bundle, but is compatible
with the upgraded OS. If it does not find such a bundle, it looks for the latest
bundles that support the new OS attributes. If the server cannot find appropriate
bundles, it issues an error message. If the server finds appropriate bundles, it
schedules them for upgrade and runs the upgrade process immediately.

What to do next

Review the messages in the GIM_EVENTS report. If the GIM server reports that
the modules have been upgraded successfully, verify the proper operation of the
modules as you would do after any update.

If error messages have been written to the GIM_EVENTS report, indicating that the
upgrade was not successful, review the error messages for guidance.

After completing your planned OS upgrade, disable the automatic update option
on the GIM server. This prevents a GIM client from erroneously starting an update
process.
grdapi gim_set_global_param paramName="auto_install_on_db_server_os_upgrade" paramValue="0"

You can re-enable the automatic update option when you perform another OS
upgrade.

Distributing GIM bundles to managed units


You can distribute GIM bundles to managed units in order to deploy them on the
GIM clients managed by those managed units.

About this task

If you manage all your GIM clients from your Central Manager, you can deploy
bundles to all your GIM clients directly from the Central Manager. If you manage
groups of clients from several managed units, you can distribute GIM bundles
from your central manager to those managed units.

The time required for distribution depends on the size of the bundles and network
conditions. In a network with substantial latency, transfers can take several hours.

Procedure
1. Copy the bundles that you want to distribute into the /var/gim/dist_packages
directory on your Central Manager. All files in this directory will be
distributed; you cannot select which bundles you want to distribute.
2. Choose the managed units to which you want to distribute the bundles.
3. Click Distribute GIM bundles. The bundles are copied to the selected
managed units.

Results
You can install the bundles from each managed unit to the GIM clients that it
manages.

Removing unused GIM bundles


You can remove GIM bundles from your GIM server if they are no longer used on
any database server.

About this task

This function enables you to maintain your inventory of GIM bundles and prevent
it from using disk space unnecessarily.

You can use two new Guardium API commands to identify and remove unused
GIM bundles. Perform this procedure on each Guardium system that acts as a GIM
server.

Procedure
1. Run the gim_list_unused_bundles command to identify unused bundles. Use
the includeLatest parameter to indicate whether you want the list that is
returned by the command to include the latest version of each GIM bundle.
You might have some bundles that you have not yet distributed, or you might
want to keep one older version so that you can reinstall it if needed. Set
includeLatest to 0 to exclude the latest unused version of each bundle from
the command results. Set it to 1 to include all unused versions. This parameter
is required and no default value is provided. For example:
gim_list_unused_bundles includeLatest=0

The command returns a list of GIM bundles that are found on the GIM server
but are not installed on any database server whose GIM client works with this
GIM server.
2. If step 1 identifies some unused bundles, use the gim_remove_bundle command
to remove each unwanted bundle. This command takes a single parameter,
bundlePackageName, which identifies the bundle to be removed. This parameter
is required and no default value is provided. Use names that are returned by
the gim_list_unused_bundles command.
The named bundle is removed only if:
v The name specified in bundlePackageName matches the name of one and only
one specific GIM bundle.
v There is no GIM bundle whose name matches bundlePackageName installed
on any database server whose GIM client works with this GIM server.
For example:
gim_remove_bundle bundlePackageName=name

where name is a bundle name that was returned by the
gim_list_unused_bundles command.
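
For example, a complete cleanup sequence might look like the following. The
bundle name shown is hypothetical; use a name returned by the first command.
From the CLI, prefix the commands with grdapi as in the other examples in this
chapter:
grdapi gim_list_unused_bundles includeLatest=0
grdapi gim_remove_bundle bundlePackageName=guard-bundle-STAP-9.0.0_r00000_1-rhel-5-linux-i686.gim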

Results

GIM bundles that are not needed are removed from your GIM server.

Running GIM diagnostics
You can run diagnostics on GIM clients to verify that the GIM server has accurate
data about each client.

About this task

If you experience trouble with a GIM client, your first step should be to verify that
the GIM server has accurate data about that client. Running GIM diagnostics
verifies that the modules listed for that client on the GIM server match the
modules installed on that client, and that the parameters stored on the GIM client
match those stored on the GIM server.

You can run GIM diagnostics either from the Guardium user interface or from the
command line. To run from the command line, use this command:
grdapi gim_run_diagnostics clientIP=xx.xx.xx.xx

The value of clientIP can be either an IP address or a hostname. You must run the
command on the Guardium system that is the GIM server for this client.

To run GIM diagnostics from the GUI, use this procedure:

Procedure
1. Use the check boxes next to each client to choose the clients for which you
want to run GIM diagnostics.
2. Click Run diagnostics. The next time that each client polls the GIM server for
updates, it will receive the diagnostic command and run it immediately.

Results

You can review the results in the GIM_EVENTS report.

Debugging GIM operations


You might need to turn on debugging in order to troubleshoot a problem.

About this task

Use these steps to turn on GIM debugging on the GIM server (Guardium system).

Procedure
1. Edit the GIM properties file: /usr/local/jakarta-tomcat-4.1.30/webapps-http/
ROOT/WEB-INF/conf/gimserver.log4j.properties.
2. Change the value ERROR to DEBUG.
3. Save the file.

Results

Debugging will be turned on in a few seconds and debug messages will be written
to the daily debug log file in /var/log/guard/debug-logs/.

What to do next
When you have finished debugging, edit the file again and change DEBUG back to
ERROR.

Enabling GIM client debugging


About this task
To enable debugging on the GIM client, change the parameter module_DEBUG to 1,
where module is the name of the installed module whose operation you want to
debug. You can modify the parameter by using the CLI or the user interface. Set
the value to 0 when you complete your debugging.
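
For example, to debug the S-TAP module on a client, set the following parameter
for that client through the GIM UI or the CLI (the name follows the module_DEBUG
pattern described above):
STAP_DEBUG=1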

Restarting the supervisor for Solaris with SMF support


Use a set of CLI commands to restart the supervisor on Solaris servers with SMF
support.

About this task

To restart the supervisor, complete the following procedure. Only use this
procedure on Solaris servers with SMF support.

Procedure
1. Stop the supervisor by running the command svcadm -v disable guard_gsvr.
2. Run the command svccfg delete -f guard_gsvr.
3. Restart the supervisor with the command svccfg import <gim install
dir>/SUPERVISOR/current/guard_gsvr.xml where <gim install dir> is the file
path to the GIM installation directory.

Results

The supervisor is restarted for Solaris with SMF support.


Install and Upgrade


Chapter 1. Installing your Guardium system
This document details the steps necessary to install and configure your IBM®
Security Guardium for Applications system. The system is referred to as “your
Guardium system” throughout these instructions.

This document also provides information on how to customize the partitioning on
the appliance and how to install on a remote drive (SAN).

The steps are:


1. Assemble configuration information and the hardware required before you
begin.
2. Set up the physical appliance or the virtual appliance.
3. Install the Guardium® image.
4. Set up initial and basic configurations.
5. Verify successful installation.

The IBM Security Guardium for Applications solution is available as:


v Hardware offering – a fully configured software solution delivered on physical
appliances provided by IBM.
v Software offering – the solution delivered as software images to be deployed by
the customers on their own hardware either directly or as virtual appliances.

The requirements listed in this document apply to the installation of both the
physical appliance and the virtual appliance unless specified otherwise.

Operating modes
You can deploy a Guardium system in any of several operating modes.

As you plan your Guardium environment, you might deploy systems in any or all
of these operating modes:
Collector
A collector receives data about database activities or file activities from
agents that are deployed on database servers and file servers. The collector
processes this data and responds according to policies that are installed on
the collector. A collector can export data to an aggregator.
Aggregator
An aggregator collects data from several collectors, to provide an
aggregated view of the data. The aggregator is not connected directly to
database servers and file servers. You can allocate collectors to aggregators
according to location or function. For example, you might want to connect
the collectors that monitor your human resources database servers to a
single aggregator, so that you can view data that is related to all those
servers in one location. If you want, you can implement a second tier of
aggregation by deploying an aggregator that collects data from all your
other aggregators, rather than from collectors.
Central manager
There is only one central manager in a Guardium environment, although
you can designate another Guardium system as a backup central manager.
You can use the central manager to define policies and distribute them to
all collectors, to perform other configuration tasks that affect all your
Guardium systems, and to perform various other administrative tasks from
a single console. Your central manager can also function as an aggregator,
collecting data from collectors or from other aggregators. This model
provides an enterprise-wide view of activities and enables you to view
reports that are based on data that is aggregated from all your Guardium
systems.
Vulnerability assessment
If you are using the Guardium Vulnerability Assessment component, you
must decide where to run assessment tests. Some customers dedicate a
separate Guardium system for this function. You can also run tests from
any Guardium system that is deployed as a collector, an aggregator, or a
central manager.

The number of monitored database servers and file servers that you assign to a
collector depends on the amount of data that flows from the servers to the
collector. For information about how many collectors and aggregators your
environment requires, and how to locate your Guardium systems for best results,
refer to the Deployment Guide for IBM Guardium.

Hardware Requirements
Detailed hardware requirements and sizing recommendations are posted on the
Web.

For detailed hardware specifications and sizing recommendations, refer to the
Guardium for Applications Release Notes at
http://www.ibm.com/support/docview.wss?uid=swg27044001.

Guardium port requirements


Each Guardium system must have ports available for several types of
communication. This table lists these connections and the default port numbers
that are assigned to them.

Ports for connections to UNIX database servers


Port Protocol Purpose
16016 TCP Clear UNIX S-TAP
16017 TCP Clear UNIX CAS
16018 TLS Encrypted UNIX S-TAP
(optional)
16019 TLS Encrypted UNIX CAS
(optional)

Ports for connections to Windows database servers
Port Protocol Purpose
8075 UDP Windows S-TAP heartbeat
signal (two-way traffic).
Note: The UNIX S-TAP agent
does not use UDP for
heartbeat signals, so there is
no corresponding UNIX port
for this function.
9500 TCP Clear Windows S-TAP
9501 TLS Encrypted Windows S-TAP
(optional)
16017 TCP Clear Windows CAS
16019 TLS Encrypted Windows CAS
(optional)

Default Ports Used for Guardium Application Access


Port Protocol Purpose
8443 TCP Web browser access (https)
to the Guardium user
interface. Note: This port can
be changed by the Guardium
administrator, and is also
used to register a managed
unit to the Central Manager.
22 TCP SSH access from clients to
manage the Guardium
appliance
80 HTTP traffic
443 HTTPS traffic
3306 TCP Communication between
central manager and
managed units

Ports for connections to z/OS database servers


Port Protocol Purpose
16022 TCP Connects to S-TAP for DB2
z/OS, S-TAP for IMS, S-TAP
for Data Sets
41500 TCP Default starting port for
internal message logging
communications –
LOG_PORT_SCAN_START
39987 TCP Default agent-specific
communications port
between the agent and the
agent secondary address
spaces –
ADS_LISTENER_PORT

Default ports used for other features
Port Protocol Purpose
20, 21 TCP FTP Server for
backups/archiving (optional)
22 TCP SCP for backups/archiving,
patch distributions, and
file-transfers
25 TCP SMTP (email server) for
alerts and other notification
53 TCP DNS Servers
123 TCP, UDP NTP (Time Server) for time
synchronization
161 TCP, UDP SNMP Polling (optional)
162 TCP, UDP SNMP Traps (optional)
389 TCP LDAP, for example, Active
Directory or Sun One
Directory
514 TCP Syslog Server (optional)
636 TCP LDAP, for example, Active
Directory or Sun One
Directory over SSL (optional)
1500 TCP Tivoli Storage Manager
backup hosts (optional)
3218 TCP, UDP EMC Centera backup hosts
(optional)
user-defined TCP Database Server listener
ports, for example, 1521 for
Oracle or 1433 for MS-SQL,
for Guardium datasource
access (optional)

Step 1. Assemble the following before you begin


To prepare for the deployment of the Guardium system, the network administrator
needs to supply the following information.
v IP address for the interface card (eth0)
v For transparent proxy, IP Address for the application mask network interface
(eth1)
v Subnet mask for primary IP address
v Default router IP address.
v Hostname and domain name to assign to system
v DNS server IP addresses (up to three addresses), and add the new Guardium
system to your DNS domain
v (optional) IP address for secondary management interface
v (optional) Mask for secondary IP management interface
v (optional) Gateway for secondary IP management interface
v (optional) NTP server hostname

v (optional) SMTP configuration information (for email alerts): IP address, port,
and if authentication is used, an SMTP user name and password
v (optional) SNMP configuration information (for SNMP alerts) the IP address of
the SNMP server and the trap community name to use.

SAN storage devices


If the installation is to be deployed on a Storage Area Network (SAN), all
configuration information needed by the SAN must be prepared before
deployment. Also, there are additional installation steps required to partition the
SAN storage device and install the Guardium OS.

Note: Installation on a SAN is supported, installation on a NAS is not supported.

Step 2. Set up the physical or virtual appliance


The setup instructions in this section are different when installing to a physical
appliance or a virtual appliance.

Physical Appliance
After the appliance has been loaded into the customer's rack, connect the appliance
to the network in the following manner:
1. Find the power connections. Plug the appropriate power cord(s) into these
connections.
2. Connect the network cable to the eth0 network port. Connect any optional
secondary network cables.
3. Connect a Keyboard, Video and Mouse directly or through a KVM connection
(either serial or through the USB port) to the system.
4. Power up the system.

How to identify eth0 and other network ports


Use the following CLI commands to map the network ports.

show network interface inventory


Use this CLI command to display the port names and MAC addresses of all
installed network interfaces.
show network interface inventory
eth0 00:13:72:50:CF:40
eth1 00:13:72:50:CF:41
eth2 00:04:23:CB:11:84
eth3 00:04:23:CB:11:85
eth4 00:04:23:CB:11:96
eth5 00:04:23:CB:11:97

show network interface port

Use this CLI command to locate a physical connector on the back of the appliance.
After using the show network interface inventory command to display all port
names, use this command to blink the light on the physical port specified by n (the
digit following eth - eth0, eth1, eth2, eth3, etc.), 20 times.
show network interface port 1

The light on port eth1 will now blink 20 times.

Install the software directly on a dedicated computer
When installing the Guardium software directly to disk on a dedicated computer,
use the Physical appliance instructions.

Default passwords for physical appliances


Default passwords are supplied for predefined users.

When you receive a physical appliance from IBM, use these passwords for your
initial configuration.

Note: Be sure to change all default passwords when you complete the installation.
Table 1. Default passwords for predefined users
User Default password
accessmgr guard1accessmgr
admin guard1admin
cli guard1cli

Virtual appliance
The IBM Security Guardium Virtual Machine (VM) is a software-only solution
licensed and installed on a guest virtual machine such as VMware ESX Server.

To install the Guardium VM, follow the steps in Creating the Virtual Image. The
steps are:
v Verify system compatibility
v Install VMware ESX Server
v Connect network cables
v Configure the VM Management Portal
v Create a new Virtual Machine
v Install the IBM Security Guardium virtual appliance

After installing the VM, return to Step 4, Setup Initial and Basic Configuration, for
further instructions on how to configure your Guardium system.

Step 3. Install the Guardium image


This section explains how to install the image and partition the disk.
1. Make sure your UEFI/BIOS “boot sequence” settings are set to attempt startup
from the removable media (the CD/DVD drive) before using the hard drive.

Note: Installation can take place from DVD. If needed, get the UEFI/BIOS
password from Technical Support.
2. Load the Guardium image from the installation DVD.
3. The following two options appear:
Standard Installation: this is the default. Use this choice in most cases when
partitioning the disk.
Custom Partition Installation: allows more customization of all partitions
(locally or on a SAN disk). See Custom partitioning for further information on
how to implement this option.

Note:
v The Standard Installation wipes the disk, repartitions and reformats the disk,
and installs a new operating system.
v On the first boot after installation, the user is asked to accept a Licensing
Agreement. They can use PgDn to read through the agreement or Q to skip
to the end. To accept the terms of the agreement, enter q to exit and then
type yes. The user must enter yes to the agreement or the machine will not
boot up.
4. The system boots up from DVD. The installation takes about 12 minutes. The
installation process then asks you to choose a collector or aggregator (it is set
to "Collector" automatically after 10 seconds if no input is provided). See the
Product Overview for an explanation of Collector and Aggregator. If you
wanted to choose aggregator and did not choose it within 10 seconds, you
must reinstall in order to get back to this choice.
5. The system automatically reboots at this point to complete the installation. The
first login after a reboot requires changing the passwords.

Step 4. Set up initial and basic configuration


The initial step should be the network configuration, which must be done locally
through the Command Line Interface (CLI) accessible through the serial port or the
system console.

Enter the temporary cli password that you supplied previously.

In the following steps, you will supply various network parameters to integrate the
Guardium system into your environment, using CLI commands.

In the CLI syntax, variables are indicated by angled brackets, for example:
<ip_address>

Replace each variable with the appropriate value for your network and installation.
Do not include the brackets.

Set the primary system IP address


The primary IP address is for the eth0 connection, and is defined by using the
following two commands:
store network interface ip <ip_address>
store network interface mask <subnet_mask>

The default network interface mask is 255.255.255.0. If this value is the correct
mask for your network, you can skip the second command.

To assign a secondary IP address, use the CLI command store network interface
secondary [on <interface> <ip> <mask> <gw> | off], which enables or disables the
secondary interface.

Next, you must restart the network by using the CLI command restart network.
Assigning a secondary IP address cannot be done by using the GUI, only through
the CLI.

The remaining network interface cards on the appliance can be used to monitor
database traffic, and do not have an assigned IP address.

Set the Default Router IP Address
Use the following CLI command:
store network routes defaultroute <default_router_ip>

Set DNS Server IP Address


Set the IP address of one or more DNS servers to be used by the appliance to
resolve host names and IP addresses. The first resolver is required, the others are
optional.
store network resolver 1 <resolver_1_ip>
store network resolver 2 <resolver_2_ip>
store network resolver 3 <resolver_3_ip>

SMTP Server
An SMTP server is required to send system alerts. Enter the following commands
to set your SMTP server IP address, set a return address for messages, and enable
SMTP alerts on startup.
store alerter smtp relay <smtp_server_ip>
store alerter smtp returnaddr <first.last@company.com>
store alerter state startup on

Note: You can also configure the SMTP server by using the user interface.
Click Setup > Alerter.

Set Host and Domain Names


Configure the hostname and domain name of the appliance. This name should
match the hostname registered for the appliance in the DNS server.
store system hostname <host_name>
store system domain <domain_name>
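
As an illustration, a minimal initial configuration might look like the following.
All addresses and names are examples only; substitute the values for your own
network:
store network interface ip 192.0.2.15
store network interface mask 255.255.255.0
store network routes defaultroute 192.0.2.1
store network resolver 1 192.0.2.53
store system hostname guard01
store system domain example.com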

Set the Time Zone, Date and Time


There are two options for setting the date and time for the appliance. Do one of
the following:
Date/Time Option 1: Network Time Protocol
Provide the details of an accessible NTP server and enable its use.
store system ntp server
store system ntp state on
Date/Time Option 2: Set the time zone, date and time
Use the following command to display a list of valid time zones:
store system clock timezone list

Choose the appropriate time zone from the list and use the same command
to set it.
store system clock timezone <selected time zone>

Note: When setting up a new timezone, internal services will restart and
data monitoring will be disabled for a few minutes during this restart.
Store the date and time, in the format: YYYY-mm-dd hh:mm:ss
store system clock datetime <date_time>

Note: Do not change the hostname and the time zone in the same CLI
session.

Set the Initial Unit Type
An appliance can be a standalone unit, a manager, or a managed unit. In addition,
an appliance can be set to capture database activity via network inspection,
S-TAP, or both. The standard configuration is a standalone appliance (for all
appliances), and the most common setting uses S-TAP capturing (only for
collectors).

store unit type standalone - use this command for all appliances.

store unit type stap - use this command for collectors.

Unit type standalone and unit type stap are set by default. Unit type manager (if
needed) must be specified.

Note: Unit type settings can be done at a later stage, when the appliance is fully
operational.

Configuring the Squid proxy


IBM Security Guardium for Applications uses the Squid proxy server to filter
HTTP traffic and forward application server requests for processing. After you
install IBM Guardium for Applications, you must configure the Squid proxy to
route network traffic through the appliance.

About this task

You can choose to configure the Squid proxy either as a transparent proxy or as a
manual proxy.

Configuring Squid as a Fully Transparent Proxy


You can configure Squid as a fully transparent proxy for IBM Security Guardium
for Applications. When Squid is configured as a fully transparent proxy, Squid
invisibly intercepts and modifies all traffic that is sent to the application server.

About this task

To configure Squid as a fully transparent proxy:

Procedure
1. Connect the eth0 adapter to the external network, and connect the eth1 adapter
to the subnet of the application server.
2. If you configured Squid as a manual proxy and want to configure Squid as a
fully transparent proxy again, complete the following steps:
a. Enter the command store squid proxy default.
b. Restart Squid by entering the command restart squid.
3. Enter the following command, where XX.XX.XX.XX is the IP address to be
assigned to eth1 and MM.MM.MM.MM is the network mask. Set the IP address and
the network mask for eth1 so that eth1 is on the same subnet as the application
server.
store net int appmaskingnic on eth1 XX.XX.XX.XX MM.MM.MM.MM
4. Restart the network by entering the command restart network.
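
For example, if the application server subnet is 10.10.1.0/24, steps 3 and 4 might
look like this (the addresses are illustrative only):
store net int appmaskingnic on eth1 10.10.1.5 255.255.255.0
restart network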

What to do next

If you plan to use Secure Socket Layer (SSL) connections with Squid, you must
store the certificates and private key.

You can show whether Squid is configured as a fully transparent proxy by entering
the command show squid proxy.

Configuring Squid as a manual proxy


You can configure Squid as a manual proxy for IBM Security Guardium for
Applications. When Squid is configured as a manual proxy, each user must
manually configure browser proxy settings to connect to the application server
through the appliance.

About this task

To configure Squid as a manual proxy:

Procedure
1. Connect the eth0 adapter to the external network, and connect the eth1 adapter
to the subnet of the application server.
2. Enter the command store squid proxy manual.
3. Restart Squid by entering the command restart squid.

What to do next

After you configure Squid as a manual proxy, users must configure the proxy
manually on their browsers to connect to the application server through the
appliance. Users must specify the IP address or the host name and domain of eth0
as the HTTP proxy and 3128 as the port.

If you plan to use Secure Socket Layer (SSL) connections with Squid, you must
store the certificates and private key.

You can show whether Squid is configured as a manual proxy by entering the
command show squid proxy.

Configuring Secure Sockets Layer (SSL) on the Squid proxy


If you plan to use Secure Sockets Layer (SSL) connections with Squid, you must
store the certificates and the private key that are used by SSL.

About this task

You can use either a self-signed certificate or a certificate that has been signed by a
trusted certificate authority (CA).

Using an existing certificate:


Before you begin

You must have the private key, the certificate, and the CA root certificate if the
certificate was self-signed.

About this task

Use this procedure if you already have a signed certificate and a corresponding
private key.

Procedure

To enable SSL, store the private key and associated certificates by using the
appropriate command:
v store certificate squid default console: Use this command to paste PEM
data corresponding to the private key, the certificate, and the CA root certificate
(if applicable).
v store certificate squid default import: Use this command to import the files
corresponding to the private key, certificate, and CA root certificate from a
remote location. You can import the files from secure copy (SCP), file transfer
protocol (FTP), Tivoli Storage Manager (TSM), Centera, or Amazon S3. After you
enter this command, the console prompts you for connection information for the
remote location.

Creating a self-signed certificate:


About this task

Use this procedure if you want to create a self-signed certificate.

Procedure
1. Run create csr squid to generate a Certificate Signing Request (CSR).
2. To enable SSL, store the associated certificates by using the appropriate
command:
v store certificate squid selfsign console: Use this command to paste
PEM data corresponding to the certificate and CA root certificate.
v store certificate squid selfsign import: Use this command to import
files corresponding to the certificate and CA root certificate from a remote
location. You can import the files from secure copy (SCP), file transfer
protocol (FTP), Tivoli® Storage Manager (TSM), Centera, or Amazon S3. After
you enter this command, the console prompts you for connection
information for the remote location.

What to do next

To display the Squid certificate information, enter the command show certificate
squid.

To delete the Squid certificate, CA root certificate, and private key and turn off
SSL, enter the command delete certificate squid.

To restore the last certificate that was used to configure SSL for the squid proxy,
run the following command: restore certificate squid backup.

Configuring the Squid Proxy to Fail Open


By default, the Squid proxy is set to fail close, which means that users are not able
to access the application through the proxy if the masking engine is down. If you
want users to be able to access the application through the proxy when the
masking engine is down, set the Squid proxy to fail open. The application is not
masked when the masking engine is down.

About this task

To configure Squid to fail open, enter the command store squid bypass on.

What to do next

To configure Squid to fail close again, enter the command store squid bypass off.

To show whether Squid is configured to fail open or fail close, enter the command
show squid bypass.

Reset Root Password


Reset your root password on the appliance using your own private passkey by
executing the following CLI command (requires access key: "t0Tach"):
support reset-password root <random>

Save the passkey used in your documentation to allow future Technical Support
root accessibility. To see the current pass key use the following CLI command:
support show passkey root
Questions - How secure is the Guardium system root password? Who has access
to it?
Guardium appliances are "black box" environments with the end user only
having access to limited access Operating System accounts, such as:
cli; guardcli1; guardcli2; guardcli3; guardcli4; and, guardcli5.
The Graphical User Interface user accounts (for example admin and
accessmgr) are not defined by the Guardium system's operating system,
but are application IDs defined and managed via an application interface
(accessmgr).
Being a secured server, root access is not readily available to anyone, but
is often required by Guardium support to gain access to the Guardium
appliances to troubleshoot and resolve issues. Guardium support does not
use sudo, or any other userid other than root, to gain access to Guardium
appliances.
The root password is secured using a "joint password" mechanism. The
customer holds the keys to the appliance in the form of an eight-digit
numeric passkey. IBM holds the passkey decoder. Without having both the
passkey and the passkey decoder, neither IBM nor the customer can access
the appliance as root.
The passkey is managed by the customer via the CLI interface. The
customer can change the passkey at any time, without notifying IBM, by
using the following CLI command:
support reset-password root

Anyone with CLI access can retrieve the passkey for root by using the
following CLI command:
support show passkey root

When involving Guardium support, on a remote desktop sharing session,
the support analyst will request the root passkey for the Guardium
appliance in question. Once the passkey has been decoded, Guardium
support will use the root password to gain access to the appliance as root.

After the remote desktop sharing session terminates, the customer can
change the passkey using the above CLI command, thereby ensuring IBM
no longer has the root password for this appliance.
Being an eight-digit numeric key, the passkey has a range of 10000000 to
99999999. This range provides 90,000,000 possible passwords. All encoded
passwords are hardened: they do not contain any common passwords or
dictionary words, their length varies, and they contain national, special,
alphabetic (upper and lower case) and/or numeric characters.
Access to the passkey decoder is restricted to a select few IBM Guardium
employees, such as Guardium R&D, Guardium QA and Guardium support
staff members. It is not available to other IBM staff.
The CLI userids mentioned above (cli, guardcli1, guardcli2, guardcli3,
guardcli4, guardcli5) do not use the passkey mechanism and their
passwords are 100% governed by the customer with IBM having no access
to their passwords. For this reason, IBM recommends keeping the root
passkey in a password vault to ensure the appliance is accessible even if
the CLI account passwords have been forgotten or misplaced.

Validate All Settings


Before logging out of CLI and progressing to the next configuration step, review
and validate the configured settings using the following commands:
show network interface all
show network routes defaultroute
show network resolver all
show system hostname
show system domain
show system clock timezone
show system clock datetime
show system ntp all
show unit type

Reboot the System


If the system is not in its final location, now is a good time to shut down the
system, place it in its final network location, and start it up again.

Remove the installation DVD before you reboot the system.

To stop the system, enter the following command in the CLI:


stop system

The system shuts down. Move the system to its final location, re-cable the system,
and power the system back on. After the system is powered on, it is accessible
(using the CLI and GUI) through the network, using the provided IP address or
host name.

Step 5. What to do next


This section details the steps of verifying the installation, installing license keys,
and installing any available maintenance patches.

Verify Successful Installation


Verify the installation by following these steps:
1. Login to CLI - ssh cli@<ip of appliance>

2. Login to GUI - https://<hostname of appliance>.<full domain>:8443 (use
admin userid)

The first login after a reboot requires changing the passwords.

Login to the Guardium web-based interface and go to the embedded online help
for more information on any of the following tasks.

Set Unit Type


To set up a federated environment, configure one of the appliances as the central
manager and set all the other appliances to be managed by the central manager.

Use the store unit type command to set the type of each Guardium system.

Install License Keys


Specific product keys, which are based on the customer's entitlements, must be
installed through CLI or the GUI as described here.

Note: In federated environments, license keys are installed only on the central
manager.

From the GUI:


1. Log in as admin to the Guardium console.
2. Click Setup > Tools and Views > System.
3. Enter the license key in the System Configuration panel.
4. Click Apply. And accept the license agreement.

From the CLI:


1. Log in to the CLI.
2. Type the store license CLI command to store a new license.
3. Copy and paste the new license at the cursor location. Press Enter.

Note: The license agreement must be accepted. Do this task from the GUI.

Install maintenance patches (if available)


You can install patches by using the CLI or through the GUI.

Note: In federated environments, maintenance patches can be applied to all of the


appliances from the Central Manager.

There may not be any maintenance patches included with the installation
materials. If any are included, follow these steps to apply them:
1. Log in to the Guardium console, as the cli user, using the temporary cli
password you defined in the previous installation procedure. You can do this
by using an ssh client.
2. Do one of the following:
v If installing from a network location, enter the following command (selecting
either ftp or scp):
store system patch install [ftp | scp]
And respond to the following prompts (be sure to supply the full path name
to the patch file):



Host to import patch from:
User on <hostname>
Full path to patch, including name:
Password:
• If installing using the fileserver function, enter the following command:
store system patch install sys
You will be prompted to select the patch to apply. You can use wildcards in the
path name to select multiple patches, or separate multiple patch names with
commas. (An example session is shown after these steps.)
3. To install additional patches, repeat step 2.
4. To see if patches have been installed successfully, use the CLI command:
show system patch installed
Patches are installed by a background process that may take a few minutes to
complete.
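For example, applying a maintenance patch from a remote host over SCP might look
like the following. The host name, user, and path are placeholders; supply the
values for your environment.

store system patch install scp
Host to import patch from: filehost.example.com
User on filehost.example.com: guarduser
Full path to patch, including name: /patches/<patch_file_name>.tgz.enc
Password:

After the background installation completes, confirm the result:

show system patch installed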

Additional Steps (optional)


The following sections discuss changing the baseline English to another language,
installing S-TAP® agents, defining Inspection Engines and installing CAS agents.

Change the language

Installation of IBM Guardium is always in English. Use the CLI command store
language to change from the baseline English and convert the database to the
preferred language. A Guardium system can be changed only to Japanese or
Chinese (Traditional or Simplified) after an installation. The store language
command is considered a setup of the Guardium system and is intended to be run
during the initial setup of the system. Running this CLI command after
deployment of the appliance in a specific language can change the information
already captured, stored, customized, archived or exported. For example, the psmls
(the panes and portlets you have created) will be deleted, since they need to be
re-created in the new language.

Install S-TAP agents

Install S-TAP agents on the database servers and define their inspection engines.
S-TAP is a lightweight software agent installed on the database server, which
monitors local and network database traffic and sends the relevant information to
a Guardium system (the collector) for further analysis, reporting and alerting. To
install an S-TAP, refer to the S-TAP section of this information center. To verify that
the S-TAPs have been installed and are connected to the Guardium system:
1. Log in to the administrator portal.
2. Do one of the following:

Navigate to Manage > System View, and click S-TAP Status Monitor from the
menu. All active S-TAPs display with a green background. A red background
indicates that the S-TAP is not active.

Navigate to Manage > Activity Monitoring > S-TAP Control, and confirm that
there is a green status light for this S-TAP.

Define Inspection Engines


Define Inspection Engines for network-based activity monitoring.



Install CAS agents
Install Configuration Auditing System (CAS) agents on the database server.

Creating the Virtual Image


Use this section to install the virtual image.

VMware Infrastructure Overview


While you can install a Guardium VM on any VMware product, the VMware ESX
server is the recommended platform for a virtual solution and is presented here.

The VMware ESX Server on which you can install the Guardium VM is one
component of the VMware infrastructure. Although not all VMware Infrastructure
components are required to support the Guardium VM, you should be familiar
with all components that are in use at your installation.

ESX Server: This component is used to configure and control VMware virtual
machines on a physical host referred to as the ESX Server host. To install a
Guardium VM, you first define a virtual machine on an ESX Server host, and then
install and configure the Guardium VM image on that virtual machine. You can
create multiple Guardium VMs on a single ESX Server.

VI Client (Virtual Infrastructure Client): This component is used to connect to a
standalone ESX Server, or to a VirtualCenter Server. In the latter case, you can
administer multiple virtual machines created over multiple ESX Server hosts.

Web Browser: Use a Web browser to download and use the VI Client software
from an ESX Server host or the VirtualCenter server.

VirtualCenter Management Server (Optional): This component runs on a remote
Windows machine, and can be used to manage multiple virtual machines on
multiple ESX Server hosts. It offers a single point of control over all the ESX Server
hosts.

Database (Optional): The VirtualCenter Server uses a database to store
configuration information for the infrastructure. The database is not needed if the
VirtualCenter Server is not used.

License Server (Optional): Stores and manages the licenses needed to maintain a
VMware Infrastructure.

For more information, go to www.vmware.com and search for “ESX Quick Start”.

VM Installation Overview
To install the IBM Security Guardium VM, follow the steps that are described here.
After you install the VM, return to the earlier Step 3 (Install the IBM Security
Guardium image) and Step 4 (Initial Setup and Basic Configuration).

If you are installing multiple Guardium VM systems in a VMware VirtualCenter
Management Server environment, you can create a template system from the first
Guardium VM that you create, and then clone that template as necessary. Then, all
you need to do is set the IP address on each cloned system. For more information,
see the note following Step 7.



Step 1: Verify system compatibility
1. Verify that the host is compatible with VMware's ESX Server (ESX 4.0 Update 4
and higher is the bare minimum to run a Guardium system). See the VMware
document entitled Systems Compatibility Guide for ESX Server, which is
available online in PDF format.
2. Verify that a virtual machine installed on the host will be able to provide the
minimum recommended resources for a Guardium system, whether you plan
to use it as a collector, central manager, or aggregator. See the
Minimum/Recommended Resources in the Hardware Requirements section of
this document.
3. When you create a 64-bit VM for the first time or upgrade a 32-bit VM to
64-bit, ensure that the virtual hardware is correctly configured for 64-bit
operation. In some cases, you might need to perform an Upgrade Virtual
Hardware operation. For information, refer to your VMware documentation.

Step 2: Install VMware ESX Server

If it is not already installed, install VMware ESX Server. VMware provides
installation instructions on their website to help with installing and configuring the
VMware Infrastructure and ESX server.

Note: The ESX server is only supported on a specific set of hardware devices. For
more information, see the VMware Virtual Infrastructure documentation.

Step 3: Connect network cables

Before you define any virtual switches that will be used for the Guardium VM,
you must connect the appropriate NICs to the network. You cannot assign NICs to
virtual networks or switches until the NICs are physically connected.

The following table describes how the Guardium VM uses network interfaces.
Refer to this table to make the appropriate connections before you configure the
virtual switches for use by the Guardium VM.
Table 2. IBM Security Guardium VM Network Interface Use

Proxy interface (eth0)
    This interface is the main gateway to the appliance, and is used for these purposes:
    • Graphical web-based user interface (GUI) to manage, configure, and use the solution
    • Command line interface (CLI) for initial setup and basic configuration
    • Connections with external systems such as backup systems, database servers, and LDAP servers
    • Communication with other Guardium components, such as other appliances (aggregator,
      central manager) and agents that are installed on database or file servers, such as S-TAP
      or CAS clients

Application server interface (eth1)
    This interface is required if you configure your Guardium system as a transparent proxy. It
    connects to the application servers whose content your Guardium system is configured to mask.

Step 4: Configure the Guardium VM management portal

The default configuration for a new VMware ESX Server installation creates a
single port group for use by the VMware service console and all virtual machines.
For the Guardium VM, we strongly recommend that you do not share ports with



the VMware console or any other virtual machine. Follow these instructions to
create one or more virtual switches to be used by a Guardium VM.

1. Open the VMware VI Client, and log on to either a VirtualCenter Server, or
the ESX Server host on which you want to create a new virtual machine.
2. If you are logged in to a VirtualCenter Server, click Inventory in the
navigation bar, and expand the inventory as needed to display the managed
host or cluster on which you plan to install a Guardium VM.
3. In the inventory display, click the host or cluster on which you plan to install
a Guardium VM.
4. Click Configuration tab, click Networking in the Hardware box, and then
click Add Networking.

This opens the Add Network Wizard, which is used for various purposes.
Use the Add Network Wizard to define a new virtual switch for the
Guardium VM network interface. This is the connection over which you will
access the Guardium VM management console, and over which the Guardium
VM will communicate with other Guardium components (S-TAPs, for
example, which are software agents that you will install later on one or more
database servers).
5. In the Connection Types box, click Virtual Machine and click Next.
6. In the Network Access panel, click Create a virtual switch, and mark the
unclaimed network adapter that you will use for the Guardium VM network
interface:



7. Optionally mark a second unclaimed network adapter if you want to use the
VMware IP teaming capability to provide a secondary (failover) network
interface. Later, you will designate this second adapter as a Standby Adapter
(and of course, you must cable both NICs appropriately).
8. Click Next to continue to the Connection Settings page of the Add Network
Wizard.
9. In the Network Label box, enter a name for the virtual machine port group,
for example: GuardETH0, and click Next.

10. In the Summary page, click Finish. The new virtual switch is displayed in the
Configuration tab.
11. Optional. If you have defined a second adapter for failover purposes:
    a. Click the Properties link for the virtual switch that you just created to open
       the virtual switch Properties panel.
    b. Click the Ports tab, select the virtual port group that you just created
       (GuardETH0 in the example), and click Edit.
    c. In the virtual port group Properties panel, click the NIC Teaming tab, mark
       the Override vSwitch Failover box, and then move the second adapter to the
       Standby Adapters list.
    d. Click OK to close the virtual port group Properties box, and click Close to
       close the virtual switch Properties box.



Step 5: Create a new virtual machine

If you have not already done so, create a new virtual machine on which to install a
Guardium VM.

Perform this task by using the VMware VI Client.


1. Open the VMware VI Client, and log on to either a VirtualCenter Server, or
the ESX Server host on which you want to create a new virtual machine.
2. If you are logged in to a VirtualCenter Server, click Inventory in the
navigation bar, expand the inventory as needed, and select the managed host
or cluster to which you want to add the new virtual machine.
3. From the File menu, click New – Virtual Machine to open the configuration
Type panel of the New Virtual Machine wizard.
4. Click Typical as the configuration type, and click Next to continue with the
Name and Folder panel.
5. On the Name and Folder panel:
Enter a name for the new virtual machine in the Virtual Machine Name field.
This name appears in the VI Client inventory and is also used as the name of
the virtual machine's files.
To set the inventory location for the new virtual machine, select a folder or the
root location of a datacenter from the list under Virtual Machine Inventory
Location.
Click Next.
6. If your host or cluster contains resource pools, the Resource Pool panel is
displayed, and you must select the resource (host, cluster, or resource pool) in
which you want to run the virtual machine. Click Next.
7. On the Datastore panel, optionally select a datastore in which to store the new
virtual machine files, and click Next.
8. In the Choose the Guest Operating System panel, choose the operating system
that corresponds to the Guardium image that you are installing. Click Linux >
RedHat Enterprise Linux 6, 64-bit from the Version box, and click Next.
The operating system is not installed now, but the OS type is needed to set
appropriate default values for the virtual machine.
For VM minimum resources, refer to the Hardware Requirements in the
Before you begin section.
9. On the Virtual CPUs panel, select the number of CPUs recommended for the
type of Guardium VM being installed, and click Next.



10. On the Memory panel, select the amount of memory recommended for the
type of Guardium VM being installed, and click Next. Important: The initial
value must be at least 16 GB. If you want to work outside the required
range, consult Technical Support.
11. On the Network panel, click 1 as the number of ports that are required, and
click Next.
12. For the selected port, use the Network pull-down menu to choose a port
group configured for virtual network use. (You should have defined this port
group in the previous procedure.)
13. For the selected port group, mark the Connect at Power On check box (it
should be marked by default), and click Next.
14. On the Virtual Disk Capacity panel, enter the amount of disk space to reserve
for the new virtual machine in the Disk Size field.
15. On the Ready to Complete panel, verify your settings and click Finish.

This completes the definition of the new virtual machine. The operating system has
not yet been installed, so if you attempt to start the virtual machine, that activity
will fail.

Step 6: Install the Guardium system

Perform this task using the VMware Virtual Infrastructure Client.


1. Open the VMware VI Client, and log on to either a VirtualCenter Server, or
the ESX Server host on which you want to create a new virtual machine.
2. If logged into a VirtualCenter Server, click Inventory in the navigation bar,
expand the inventory as needed, and select the virtual machine on which you
want to install the Guardium VM.
3. On the Summary tab, click Edit Settings.
4. Click CD/DVD Drive 1.
5. Select one of the following options to determine from where the virtual
CD-ROM/DVD device will read the Guardium Installation program. We
strongly recommend the first option:
Datastore ISO File – Connect to the Guardium Installation ISO file on a
datastore. If you have not already done so, copy the Guardium ISO files to a
datastore accessible from the ESX Server host on which the virtual machine is
installed. Click Browse to select the file.
Caution: For the remaining options, you will place the Guardium Installation
CD/DVD in a CD-ROM/DVD drive. If you reboot any system with an
Guardium Installation CD/DVD in its CD-ROM/DVD drive, you will install
Guardium on that system, wiping out the host operating system and files.
Client Device – Connect to a CD-ROM/DVD device on the system on which
you are running the VI Client. If you select this option, insert the Guardium
CD/DVD in the CD-ROM/DVD drive of the system on which the VI Client is
running.
Host Device – Connect to a CD-ROM/DVD device on the ESX Server host
machine on which the virtual machine is installed. If you select this option,
choose the device from a drop-down menu, and insert the Guardium
CD/DVD in the CD-ROM/DVD drive of the ESX Server host machine.
6. Click OK.
7. Click Power On to start the virtual machine.



8. If you selected Client Device as your CD/DVD Drive option, click Virtual
CD-ROM (ide0:0) in the toolbar, and select the local CD-ROM device to
connect to.
9. Click Console tab to display the virtual machine console. You will need to
respond to several prompts during the installation process.
10. Skip this step if you are using the Guardium DVD.
When prompted for the second CD, depending on the option that you chose in step 5,
you need to either put the second CD in its drive or select the second CD ISO
image. Continue by pressing Enter. When prompted for the cli password,
enter a temporary password for use when logging in to the Guardium CLI,
which you will need to do to set the IP configuration parameters for the
appliance.
11. When you are prompted for the GUI admin password, enter a temporary
password for use when logging in to the Guardium user interface as the
admin user.
12. When asked whether you are building a collector or an aggregator, choose the appropriate type.
13. Click No to the Master Passkey prompt.
Caution: If a CD-ROM/DVD drive was used, the CD/DVD ejects when the
installation completes. Be sure to remove the installation CD/DVD from that
drive. If the ISO file was used, be sure to remove the ISO CD ROM by
changing the virtual CD/DVD back to a Client or Host Device. Otherwise, the
next time it is rebooted, you will install Guardium on the host machine,
wiping out the host machine operating system and all files.
The machine will reboot automatically, and you will be prompted to log in as
the CLI user.
14. At this point, return to Step 4, Set up Initial and Basic Configurations for
complete instructions on configuration of the Guardium system.

Step 7: Install Multiple VMs

(Optional) To install multiple Guardium VMs, you can repeat the procedures for
each appliance, or you can minimize your work by cloning the first Guardium VM
that you created, and following these steps:
1. Use the VMware virtual infrastructure server product to clone the first
Guardium VM that you configured to a template.
2. From the template, create a clone for each additional Guardium VM to be
configured.
3. For each clone, log in to the Guardium VM console as the cli user, by using the
temporary cli password, and reset any of the IP configuration parameters that
you set in the previous procedure. Mandatory tasks are: Reset the IP address,
Reset the host name (store system hostname) and Reset the GLOBAL_ID (store
product gid). However, review all of the IP configuration settings entered in the
previous procedure.
store network interface ip <ip_address>
store network interface mask <subnet_mask>
store system hostname <host_name>
store product gid
When you are done, enter the restart network command (a filled-in example follows the note below).
restart network

Note: The unique ID of the appliance is recalculated every time the hostname
changes, in order to avoid having multiple appliances with the same unique ID.
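For example, on a cloned appliance the sequence might look like the following.
The IP address, subnet mask, and host name are placeholder values; substitute
the settings for your own network.

store network interface ip 192.0.2.15
store network interface mask 255.255.255.0
store system hostname guard-collector2
store product gid
restart network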



Custom Partitioning
If you customize the partitioning of the hard drive, you must make several choices.
1. Choose Custom Partitioning Installation from the boot screen.
Choose Create custom layout and use the recommended partitioning scheme
listed here.

Note: The boot loader, a special program that loads the operating system into
memory, is part of any custom partitioning installation.
2. Create custom layout. In this case, there are existing partitions on the disk. Do
not delete any partitions. Choose the custom layout selection to add whatever
partitions you want to what is already on the disk. The following table specifies
recommended values for custom layout.
Table 3. Recommended values for custom layout
Partition    Value
/            10 GB
swap         Half of the RAM size
/boot        5 GB
/var         All remaining space

All the available drives are also displayed on this screen. Choose the drive for
partitioning and installation.

After the partitioning is finished, the Guardium system software is installed
automatically.

If values are created that exceed the space available on the disk, an error message
appears.

Click OK to reboot the system and return to the beginning of Custom Partitioning.

For more information on how the RedHat distribution handles partitioning, see
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/
html/Installation_Guide/s1-diskpartsetup-x86.html.

How to partition with an encrypted LVM


If you want to use an encrypted disk, follow these steps to create an encrypted
LVM volume that contains the / and /var logical volumes.

For the encrypted LVM installation, you are asked to enter an encryption key.
Then, on every reboot, you must enter this key to unlock the LVM volume. (This
means that you must have console access to the appliance, either physical or
remote.)

Important: The encryption key must be safeguarded and retained, as it is
impossible to replace if lost.

Note: The boot loader, a special program that loads the operating system into
memory, is part of a custom partitioning installation. An example of the password
entry screen is shown near the end of this topic.
1. Insert the IBM Guardium DVD and boot the machine.



2. Choose Custom Partition Installation from the boot screen.
3. Press Enter.
4. Click Remove all partitions and create default layout from the first RedHat
Enterprise Linux screen. Also, select the check boxes Encrypt system and
Review and modify partitioning layout.
5. Click Next.
6. A warning notice appears on the following screen, asking if you really want to
remove all partitions. Click Yes.
7. Click LogVol00 in the next screen and click Edit to bring up the Edit LVM
Volume Group dialog.
8. Click LogVol00 from the list in the previous screen and click Edit.
9. On the next screen, change the size to 10240 and click OK.
10. Click LogVol01 from the list on the next screen and click Edit.
11. Allocate a swap partition that is half as large as the memory that is installed
on the system. Specify the size of the swap partition and click OK.
12. Click Add. The Make Logical Volume dialog is displayed.
13. Specify /var as the mount point, and let the system pick the remaining size.
14. Review the sizes of your partitions. Then, click OK.
15. Then click OK from the Edit LVM Volume Group: VolGroup00 dialog.
16. Click Next in the next screen, which will take you to the passphrase dialog.
17. Enter the passphrase of your choice into the Enter passphrase field and enter
the identical passphrase into the Confirm passphrase field. Click OK.

Note: The passphrase must be entered each time that the system is booted.
There is no way to recover a lost LVM passphrase.
The Bootloader configuration dialog is displayed. When a computer with Red
Hat Enterprise Linux is turned on, the operating system is loaded into
memory by a special program that is called a boot loader. A boot loader
usually exists on the system's primary hard disk (or other media device) and
has the sole responsibility of loading the Linux kernel with its required files or
(in some cases) other operating systems into memory.
In most cases, the default options are acceptable, but depending on the
situation, changing the defaults options may be necessary.
18. At this screen, click Next. This starts the encrypted installation.
During the installation and further re-boots, you are asked to enter the LUKS
(Linux Unified Key Setup) passphrase for the LVM during boot. After you
enter the LUKS passphrase, the system completes the boot process.

Example of SAN Configuration


This appendix details the steps involved in moving to a command prompt in order
to pre-partition a hard drive (as is needed for SAN installation).

First partition space on the SAN storage device, and then install the IBM Security
Guardium OS. Choose one hard disk for this installation.

Note: Depending on what SAN hardware is used, specific instructions may be
different. Installation on a SAN is supported; installation on a NAS is not
supported.



Summary of steps
1. Enter system setup (press F1 on IBM servers during initial boot) and modify
the Start Options to select the appropriate PCI slot to boot from (where the
QLogic Card is).
2. Modify the BIOS for the QLogic card by pressing Ctrl-Q, when the QLogic
BIOS is loading, to enable it to be a boot device. Then select the LUN (logical
unit number) of the boot device.
3. Boot from the RedHat 5.8 DVD and enter Rescue mode in order to run fdisk
and create partitions on the SAN device using the specifications listed here:
Table 4. Partitions on SAN device
Partitions Space
1 500 MB for /boot
2 Amount of system memory + 4 GB
3 10 GB for /
4 All remaining space for /var

Note: While the RedHat installation process would allow you to create the
partitions and load the OS, the system does not boot properly after the
installation unless the partitions are pre-created with fdisk.
4. Proceed with the OS installation utilizing the previously defined partitions (use
only the /dev/sda device).
5. Reboot and finish the remaining installation steps (hostname, IP configuration,
and so on).

Note:

In the SAN environment, the single LUN is presented to RedHat 5.8 as multiple
devices due to redundant paths within the network switch(es) on the SAN. (The
SDD storage was eight devices.)

This is a function of the SAN storage brand/type and how it is configured at each
site.

It is very important to edit only the existing partitions that the IBM Guardium
installation sees (adding the mount point and setting the file system to ext3 or
swap) without changing other settings such as size, and to deselect all devices
other than /dev/sda when selecting the device on which to load the OS.

Instructions for running fdisk

Follow these instructions to run fdisk and pre-partition the SAN storage from
RedHat rescue mode (a condensed keystroke summary follows these steps):
1. Assuming SAN is the only storage attached to the server, type fdisk
/dev/sda. Type y if a warning appears regarding working on the whole
device.
2. Type n for a new partition.
3. Type p for a primary partition.
4. Type 1 for partition #1.
5. Press Enter to accept the default start location.



6. Type +512M to make partition 1 approximately 500 MB in size (this will be the
/boot partition).
7. Type n for a new partition.
8. Type p for a primary partition.
9. Type 2 for partition #2.
10. Press Enter to accept the default start location.
11. Type +12288M to make partition 2 12GB in size (this assumes 8GB of physical
RAM). The recommended size is physical RAM + 4GB (this will be the swap
partition).
12. Type n for a new partition.
13. Type p for a primary partition.
14. Type 3 for partition #3.
15. Press Enter to accept the default start location.
16. Type +10240M to make partition 3 10GB in size (this will be the / partition).
17. Type n for a new partition.
18. Type p for a primary partition (will default to partition #4).
19. Press Enter to accept the default start location.
20. Press Enter to fill to maximum size (this will be the /var partition).
21. Type w to write the partition table to the SAN.
22. Type exit to exit rescue mode and reboot to begin the Custom Partition
Installation (Step 3, Install the IBM Security Guardium image).
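As a quick reference, the keystroke sequence from the preceding steps is
summarized below. It assumes 8 GB of physical RAM; adjust the swap size in
step 11 to your system's RAM + 4 GB.

fdisk /dev/sda
n  p  1  <Enter>  +512M      (partition 1, /boot)
n  p  2  <Enter>  +12288M    (partition 2, swap)
n  p  3  <Enter>  +10240M    (partition 3, /)
n  p  <Enter>  <Enter>       (partition 4, /var, all remaining space)
w
exit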

Examples of screenshots for QLogic setup


The QLogic screens used here are representative of the steps needed. Other Fibre
Channel cards can be used.
1. Modify the BIOS for the QLogic card. This is the first screen presented after
pressing Ctrl-Q when prompted to enter the Configuration Setup Utility. This
is a two-port card; select the appropriate port and press Enter.

2. Press Enter to change Configuration Settings.



3. Press Enter to change Adapter Settings.

4. Use your arrow keys to select Host Adapter BIOS and press Enter to toggle to
Enabled.



5. Press Esc to back up to the previous screen and use the down-arrow to select
Selectable Boot Settings and press Enter.

6. Press Enter to change Selectable Boot to Enabled.



7. Select the first Boot Port Name, LUN and press Enter to display a list of LUNs.
If you are configuring the proper card/port, the LUN number(s) appear here.
Select the first one in the list.

8. Press Esc until you have backed out to the screen that says Reboot and select it
to reboot the system. You are now ready to proceed with the IBM Security
Guardium installation.



Upgrading your Guardium System
Use this information to upgrade your IBM® Security Guardium systems to V10.0.

Planning an upgrade
Learn about different upgrade scenarios and identify the correct approach for upgrading your
Guardium systems with minimal downtime.

• Identify the correct upgrade scenario
• Arranging upgrade resources
• Version mismatches during upgrade
• Upgrading environments with aggregators or Central Managers
• Using a backup Central Manager during upgrade

Identify the correct upgrade scenario
The best approach for upgrading to Guardium depends on multiple factors, including the
Guardium version you are upgrading from, the hardware of your system, and any special
partitioning requirements you may have.

Determine your current Guardium version and patch level by clicking the icon in the main
user interface and selecting About Guardium. Use the following table to identify the best
approach for upgrading your systems to Guardium V10.0.

Table 1. V10.0 Upgrade Scenarios

• Before V9 (any architecture, any partitioning, any system type), or V9 before GPU 200
  on 64-bit hardware (any partitioning, any system type):
  Upgrade to V9 GPU 500 or above and then revisit this topic to continue your upgrade to
  V10.0. Visit the Guardium Knowledge Center for information about upgrading to V9 GPU
  500 or above.

• V9 before GPU 200 on 32-bit hardware, V9 GPU 200 or later on 32-bit hardware, or V9
  GPU 200 or later on 64-bit hardware with nonstandard or GPT partitioning (any system
  type):
  Upgrade to V10.0 following a backup, rebuild, and restore upgrade procedure.

• V9 GPU 200 or later on 64-bit hardware with standard partitioning, Central Manager or
  standalone:
  Upgrade to V10.0 using the standard upgrade patch.

• V9 GPU 200 or later on 64-bit hardware with standard partitioning, managed unit:
  Upgrade to V10.0 using either the standard upgrade patch or the network upgrade patch.
Important:

• The standard upgrade patch for V10.0 is greater than 2 GB in size, while the network
upgrade patch for V10.0 is approximately 50 KB in size. For this reason, consider using
the network upgrade patch when upgrading managed units in an environment with a
Central Manager.

• SSLv3 must be disabled before upgrading to V10.0. To disable SSLv3 on systems at or
above V9 GPU 200 but below V9 GPU 500, download patches 9501 and 9502 from Fix
Central and follow the instructions in the patch release notes, or upgrade to V9 GPU 500.
All Guardium systems in your environment must have SSLv3 disabled before upgrading
to V10.0.

Arranging upgrade resources
Identify and understand the scope, timing, and resources required for your upgrade.

Before upgrading any Guardium systems, begin by arranging the required resources. This
includes:

Defining the scope of your upgrade

Typically, the upgrade process cannot be completed on all Guardium systems and all S-
TAPs simultaneously. It requires a multi-stage upgrade approach that creates temporary
version mismatches. During this transition period, the Guardium environment operates in
a hybrid mode with reduced functionality: plan your upgrade to minimize the time spent
operating in hybrid mode. For more information, see Version mismatches during
upgrade.

To minimize disruptions to your Guardium environment during upgrade, follow a
top-down upgrade approach. This means first upgrading one high-level system before
upgrading the systems or agents that report to it, then upgrading the next high-level
system and the systems or agents that report to it, and so on. For more information, see
Upgrading environments with aggregators or Central Managers.

A backup Central Manager can also be used to reduce downtime while upgrading your
Guardium environment. For more information, see Using a backup Central Manager
during upgrade.

Determine the timing of your upgrade

Planning your upgrade for off-peak or otherwise quiet periods will minimize the impact of
the upgrade on your other systems and users.

Typical Guardium upgrades may require two or more hours. During this time, your
Guardium systems may not be accessible or performing any data collection activity.
Factors contributing to the duration of the upgrade process include:

• Size, usage, and data distribution of internal database tables
• Capacity of the appliance, such as virtual memory and CPU

Purging unnecessary data from the appliance may significantly decrease the time
required for upgrade. For more information about purging data before an upgrade, see
Performing an upgrade.

Other planning considerations before beginning an upgrade include:

• Arranging for change control management within your organization
• Arranging for personnel availability during the upgrade
• Defining contingency or fallback plans

Version mismatches during upgrade
The upgrade process cannot be completed on all Guardium systems (Central Managers,
aggregators, and collectors) and all S-TAPs simultaneously. During the upgrade transition, you
will have an environment that includes systems running different versions of Guardium.

While this hybrid mode is supported by Guardium, many functions are limited until all
components are at the same version level. You should complete the upgrade in a timely manner
and have all components at the same version and patch level.

Data collection, data assessment, and policies (with some restrictions) will continue to work
while in the hybrid mode. Functions with new or enhanced capabilities will not work in a mixed
environment. While in the hybrid mode, it is recommended that you avoid making any
configuration changes.

The following limitations apply while operating a hybrid environment:

• You cannot install policies from an upgraded Central Manager to a managed unit that is
running an older version. Until the managed unit is upgraded, you can only install
policies locally on the managed units.
• Capture/Replay configurations created in a prior release must be re-created in the latest
version. After the upgrade, the Replay user should redo the staged configuration, stage
it again, and then export it again (assuming the data in the GDM tables is still available).

Attention: Before beginning any upgrade procedures, review and assess the following
restrictions that apply when operating a hybrid environment with Guardium V10 and V9
systems:

• Guardium V10.0 Central Managers can manage Guardium systems at or above V9 GPU
200 with limited functionality.
• Guardium V10.0 backup Central Managers can provide limited services to Guardium V9
Central Managers at or above V9 GPU 300 with patch 337 and all security patches
installed.
• Guardium V9 backup Central Managers cannot provide services to Guardium V10.0
Central Managers.

Upgrading environments with
aggregators or Central Managers
Minimize disruptions to your Guardium environment by following a top-down upgrade approach.

This means first upgrading one high-level system and then upgrading the systems or agents
that report to it, then upgrading the next high-level system and the systems or agents that report
to it, and so on. This approach minimizes the impact of operating a hybrid environment with
multiple Guardium versions.

A top-down approach is necessary because an upgraded aggregator can aggregate data from
older releases, but an older aggregator cannot aggregate data from newer releases. Similarly,
an upgraded Central Manager can manage units running older releases, but the managed units
will not have full functionality until they are upgraded to match the Central Manager.

To avoid these issues, upgrade a Central Manager before upgrading any of its managed units. If
you have multiple Central Managers, first upgrade one Central Manager and then upgrade its
managed units before going on to upgrade the next Central Manager and its managed units.

Similarly, upgrade an aggregator before upgrading any units that export data to it. If you have
several aggregators, first upgrade one aggregator and then upgrade the collectors that report to
it before going on to upgrade the next aggregator and its collectors.

Finally, upgrade a collector before upgrading the S-TAPs registered to it. Upgrade one collector
and all the S-TAPs registered to it before going on to upgrade the next collector and its S-TAPs.

This approach provides compatible systems (from Central Managers to aggregators,
collectors, and S-TAPs) in each branch of your environment more quickly than
upgrading all your Central Managers or aggregators before upgrading any collectors.

For example, considering the following environment with multiple Central Managers, a top-down
upgrade approach moves vertically through this list of systems:

• Central Manager
    o Aggregator
        - Collector
            - S-TAP
            - S-TAP
        - Collector
            - S-TAP
            - S-TAP
    o Aggregator
        - Collector
            - S-TAP
            - S-TAP
        - Collector
            - S-TAP
            - S-TAP
• Central Manager
    o Aggregator
        - Collector
            - S-TAP
            - S-TAP
        - Collector
            - S-TAP
            - S-TAP
    o Aggregator
        - Collector
            - S-TAP
            - S-TAP
        - Collector
            - S-TAP
            - S-TAP

Using a backup Central Manager during
upgrade
The availability of a backup Central Manager allows you to upgrade your Central Manager with
minimal disruption to your Guardium services.

Before you begin


To use this procedure, you must begin with a Central Manager and a backup Central Manager
operating at the same version level. You will then upgrade the backup Central Manager to the
latest version of Guardium. This creates a temporary version mismatch in your environment,
and the following restrictions apply:

• Guardium V10.0 Central Managers can manage Guardium systems at or above V9 GPU
200 with limited functionality.
• Guardium V10.0 backup Central Managers can provide limited services to Guardium V9
Central Managers at or above V9 GPU 300 with patch 337 and all security patches
installed.
• Guardium V9 backup Central Managers cannot provide services to Guardium V10.0
Central Managers.

For more information, see Version mismatches during upgrade.

About this task


This strategy involves upgrading the backup Central Manager to the latest version of Guardium
and then making that system the primary Central Manager. After upgrading and promoting the
backup Central Manager, upgrade all managed units and the original Central Manager to the
latest version of Guardium.

Procedure

1. On the backup Central Manager, view the Manage > Maintenance > General >
Aggregation/Archive Log and verify that backup Central Manager synchronization files
are being created successfully.
2. Upgrade your backup Central Manager to the latest version of Guardium.

After the upgrade is complete, wait approximately 30 minutes for the backup Central
Manager synchronization files to be created on the upgraded system.

3. Make the upgraded machine your primary Central Manager by navigating to Setup >
Make Primary Central Manager.

Managed units will now be assigned to the new primary Central Manager running the
latest version of Guardium.

4. Upgrade the managed units to the latest version of Guardium.

When you upgrade the system that had been your primary Central Manager (under the
previous version), you may choose to establish that system as a backup Central
Manager running the latest Guardium version. However, if you want to reestablish this
system as your primary Central Manager after the upgrade, navigate to Setup > Make
Primary Central Manager.

Performing an upgrade
Learn how to upgrade your Guardium system after identifying the appropriate upgrade scenario.

• Upgrade using an upgrade patch
• Upgrade using a backup, rebuild, and restore procedure

Upgrade using an upgrade patch


Upgrade your Guardium system using a V10.0 upgrade patch.

Before you begin


Before upgrading a Guardium system, review and verify the following information:

• Planning an upgrade, specifically the information in Identify the correct upgrade scenario
• System Requirements for Guardium V10.0

About this task

This sequence of tasks guides you through the processes of upgrading your Guardium system
using a V10.0 upgrade patch.

1. Purge system data
2. Create a system backup
3. Apply the health check patch
4. Enable a Central Manager as an upgrade server
5. Apply the upgrade patch
6. Verify and cleanup after the upgrade

Purge system data
Purging unnecessary data from the appliance can significantly decrease the time required to
upgrade.

About this task

For best performance and to minimize risks associated with upgrading large amounts of data,
try to achieve less than 20% internal database utilization by purging unnecessary system data.

Procedure

1. Open Manage > Data Management > Data Archive.


2. Click the Purge check box to define a purge operation.

Important: Changes made to the Data Archive purge configuration will also be applied
to the Data Export purge configuration.

3. Define a Purge data older than time period. All data older than the specified period of
days, weeks, or months will be purged from the system.
4. Click the Allow purge without archiving or exporting check box.
5. Click Apply to save the configuration changes.
6. Click Run Once Now to execute the purge operation and purge old system data.

What to do next
Open Manage > Reports > Activity Monitoring > Scheduled Jobs to monitor the status of the
data archive job.

Create a system backup
Performing a full system backup allows you to recover from a failed or interrupted upgrade
attempt, or can be used to upgrade your Guardium system using a backup, rebuild, and restore
procedure.

Before you begin

Before creating a backup, purge all unnecessary data from the system being upgraded. The
less data on the system, the more quickly the backup procedure runs.

About this task

The backup process copies data to a file using the following naming convention:
host_name.domain_name-yyyy-mm-dd.sqlguard.bak.

Procedure

1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Enter the following command to back up the Guardium system: backup system.

You will be prompted to enter host, directory, and password information for the system to
which the backup data will be sent. The backup utility will display status information
during the backup process. When the backup process is complete, the following
message will display:

Backup done
Keep the file /xxx/host_name.domain_name-yyyy-mm-dd.sqlguard.bak in a
safe place.

3. Press Enter to complete the backup process. A series of messages will display to
confirm the backup.

What to do next
Log in to the host machine that contains the backup file and verify that the file has been created.

Apply the health check patch
The health check patch performs preliminary tests that help prevent problems during an
upgrade.

Before you begin

Download the latest health check patch for your version of Guardium. For Guardium V10.0,
download the following package from Fix Central: SqlGuard-9.0p9997.tgz.enc.

About this task

You must apply the health check patch before upgrading. The patch prevents potential upgrade
issues by verifying the following:

 Hardware requirements
 System hostname
 Additional system configuration and status

For detailed information about the health check tests, review the release notes included with the
latest health check patch.

Apply the health check patch as you would apply any other patch to your Guardium system. It is
also possible to use a Central Manager to push the health check patch to managed units. For
more information, see Central Patch Management.

Procedure

1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Depending on the location of the patch file, perform the following steps:
o If you are installing from optical media, insert the patch media in the IBM®
Security Guardium optical drive, and enter the following command: store
system patch install cd
o If installing from the network, enter the following command: store system
patch install [ftp | scp]

You will be prompted to enter the host machine name, the path to the patch file,
and both the user name and password for the host machine.

3. When prompted, enter the number that identifies the patch in the patches directory.
Press Enter to apply the patch.

Results

The health check generates a log file using the following naming convention:
health_check.time_stamp.log.

To view the log file, perform the following actions:

1. From the CLI, enter the following command: fileserver.


2. Open the file server in a web browser using the URL http://hostname_of_system.
3. Navigate to the Sqlguard logs > diag > current folder and select the log file.

The log file will contain the status of each validation performed by the health check patch:

ERROR

If the patch finds an error, the message will contain an ERROR prefix.

WARNING

If the patch finds an error that may not prevent the upgrade, the message will contain a
WARNING prefix. Review the message details for more information about how to
proceed.

If the patch does not find any errors, the following message appears at the end of the log file:
Appliance is ready for GPU installation/upgrade.

Important:

 If the patch status is WARNING and a WARNING message appears in the log, the GPU
installation or upgrade may still be possible as some messages are version-specific.
Review the message details for more information about how to proceed.
 If the log file includes an ERROR or WARNING message that you cannot resolve, send the
log file to IBM Software Support to prevent potential issues during the upgrade.

Enable a Central Manager as an upgrade
server
Configure an existing Guardium V10.0 system to distribute packages to managed units being
upgraded using the network patch.

Before you begin

This task is only required if you are upgrading managed units using the network upgrade patch.
Review the topic, Identify the correct upgrade scenario, to determine if the network upgrade
patch can be used for your scenario.

To configure a Central Manager to distribute packages to the managed units in your


environment, you must have already upgraded the Central Manager to the latest Guardium
version.

About this task


This task guides you through the process of configuring a Central Manager to distribute
packages to the managed units you are upgrading.

Procedure

1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Enter the following command: upgradeserver on. The Central Manager will now be
available to distribute upgrade files to managed units.

What to do next
Upgrade the managed units in your environment using the network upgrade patch as described
in the topic, Apply the upgrade patch. When you have finished upgrading the managed units in
your environment, disable the upgrade server on your Central Manager by entering the following
command as the CLI user: upgradeserver off. (A brief example follows.)
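For example, the full sequence on the Central Manager is simply the following;
run the first command before upgrading the managed units and the second command
after all of them have been upgraded:

upgradeserver on
(upgrade the managed units using the network upgrade patch)
upgradeserver off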

Apply the upgrade patch
You can download patches in ISO format and create installation media or use SCP/FTP to apply
patches from a remote host on the network.

Before you begin

Download the upgrade patch required for your upgrade scenario:

• If you are using the standard upgrade patch, download
SqlGuard-9.0p10000_Upgrade_to_Version_10.0.tgz.enc.
• If you are using the network upgrade patch, download
SqlGuard-9.0p10001_Network_Upgrade_to_Version_10.0.tgz.enc.

For more information about which upgrade patch to use, see Identify the correct upgrade
scenario.

Procedure

1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Depending on the location of the patch file, perform the following steps:
o If you are installing from optical media, insert the patch media in the IBM®
Security Guardium optical drive, and enter the following command: store
system patch install cd
o If installing from the network, enter the following command: store system
patch install [ftp | scp]

You will be prompted to enter the host machine name, the path to the patch file,
and both the user name and password for the host machine.

3. When prompted, enter the number that identifies the patch in the patches directory.
Press Enter to apply the patch.

Results

The Guardium system may reboot several times after applying an upgrade patch, but the
process does not require any further action. An example session is shown below.
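For example, applying the standard upgrade patch from a remote host over SCP might
look like the following. The host name, user, and directory are placeholders; the
prompts follow the same pattern as for maintenance patches.

store system patch install scp
Host to import patch from: filehost.example.com
User on filehost.example.com: guarduser
Full path to patch, including name: /patches/SqlGuard-9.0p10000_Upgrade_to_Version_10.0.tgz.enc
Password:

When prompted, enter the number that identifies the patch and press Enter.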

Verify and cleanup after the upgrade
Verify that the upgrade completed successfully and perform post-upgrade maintenance.

Procedure

1. If you upgraded using an upgrade patch, log in as the CLI user and issue the following
command: show upgrade-status. The command will output detailed status
information from the upgrade process.
2. If you upgraded using a Central Manager to distribute upgrade packages to managed
units, disable the upgrade server on the Central Manager using the following CLI
command: upgradeserver off.
3. You may need to update the Guardium DPS file after upgrade or restore procedures.
Download the latest DPS file, then use the Harden > Vulnerability Assessment >
Customer Uploads tool to upload and import the new DPS file.

Attention: If you use add-on accelerators (for example, SOX or PCI), you may need to
reinstall the accelerator patches after importing a new DPS file.

4. Verify that custom reports created in previous versions of Guardium are available at
Reports > My Custom Reports.

My Custom Reports should contain any new reports that you created as well as any
predefined reports that you modified in a previous version of Guardium.

5. After completing upgrade or restore procedures, you may need to reload the open
source Microsoft SQL Server and Oracle JDBC drivers using the Harden > Vulnerability
Assessment > Customer Uploads tool. You may also need to update and save any
datasources that rely on these drivers.
6. Company logos uploaded before upgrade or restore procedures may need to be
reloaded. To reload a customer logo, follow these steps:
a. Log in as an admin user.
b. Navigate to Setup > Tools and Views > Global Profile.
c. Browse for the company logo file.
d. Upload the logo file.
7. If the upgrade or restore procedures disable the Database Discovery or auto-detect
functionality, you may need to download and install a separate patch to reinstall the
auto-detect component.
8. Verify the status of the Cross-Site Request Forgery (CSRF) and Cross-Site Scripting
(XSS) services using the CLI commands show gui csrf_status and show gui xss_status.

Upgrade using a backup, rebuild, and
restore procedure
Upgrade by restoring a system backup onto a newly rebuilt installation of Guardium V10.0.

Before you begin


Before upgrading a Guardium system, review and verify the following information:

• Planning an upgrade, specifically the information in Identify the correct upgrade scenario
• System Requirements for Guardium V10.0

About this task

This sequence of tasks guides you through the processes of upgrading your Guardium system
by restoring a system backup onto a newly rebuilt V10.0 system.

1. Purge system data
2. Create a system backup
3. Rebuild Guardium to the latest version
4. Restore from a system backup
5. Verify and cleanup after the upgrade

Purge system data
Purging unnecessary data from the appliance can significantly decrease the time required to
upgrade.

About this task

For best performance and to minimize risks associated with upgrading large amounts of data,
try to achieve less than 20% internal database utilization by purging unnecessary system data.

Procedure

1. Open Manage > Data Management > Data Archive.


2. Click the Purge check box to define a purge operation.

Important: Changes made to the Data Archive purge configuration will also be applied
to the Data Export purge configuration.

3. Define a Purge data older than time period. All data older than the specified period of
days, weeks, or months will be purged from the system.
4. Click the Allow purge without archiving or exporting check box.
5. Click Apply to save the configuration changes.
6. Click Run Once Now to execute the purge operation and purge old system data.

What to do next
Open Manage > Reports > Activity Monitoring > Scheduled Jobs to monitor the status of the
data archive job.

Create a system backup
Performing a full system backup allows you to recover from a failed or interrupted upgrade
attempt, or can be used to upgrade your Guardium system using a backup, rebuild, and restore
procedure.

Before you begin

Before creating a backup, purge all unnecessary data from the system being upgraded. The
less data on the system, the more quickly the backup procedure runs.

About this task

The backup process copies data to a file using the following naming convention:
host_name.domain_name-yyyy-mm-dd.sqlguard.bak.

Procedure

1. Using an SSH client, log in to the Guardium system as the CLI user.
2. Enter the following command to back up the Guardium system: backup system.

You will be prompted to enter host, directory, and password information for the system to
which the backup data will be sent. The backup utility will display status information
during the backup process. When the backup process is complete, the following
message will display:

Backup done
Keep the file /xxx/host_name.domain_name-yyyy-mm-dd.sqlguard.bak in a
safe place.

3. Press Enter to complete the backup process. A series of messages will display to
confirm the backup.

What to do next
Log in to the host machine that contains the backup file and verify that the file has been created.

Rebuild Guardium to the latest version
Rebuild Guardium to the latest version to provide a target system for restoring your backup.

About this task

At this stage of the backup, rebuild, and restore upgrade procedure, you must rebuild a new
installation of Guardium V10.0. For more information, see Installing your Guardium system.

Important: You must rebuild the Guardium system to match the system type you will be
restoring in the next step of the backup, rebuild, and restore upgrade procedure. This is
because you can only restore backups from the same system type as the rebuilt system, for
example a backup from a Central Manager must be restored to a system rebuilt as a Central
Manager.

What to do next
In the next step of the backup, rebuild, and restore upgrade procedure, you will restore your
backup data onto the newly rebuilt installation of Guardium V10.0.

Restore from a system backup


Complete the upgrade by restoring the system from a backup.

Before you begin

At this stage of the backup, rebuild, and restore upgrade procedure, you must have successfully
rebuilt the system to the latest version of Guardium. You can only restore backups from the
same system type as the rebuilt system, for example a backup from a Central Manager must be
restored to a system rebuilt as a Central Manager.

Procedure

1. Using an SSH client, log in to the Guardium system as the CLI user.
2. If the backup files are on a remote system, import the files by entering the following
command: import file.

You will be prompted to provide information for the system that contains the backup files
and the location of the files.

The import process copies the backup data files to the /var/dump directory.

3. Begin the restore process by entering the following command: restore
db-from-prev-version. (A brief example follows this procedure.)

When you receive prompts to Update portal layout (panes and menus
structure), responding y moves all customized reports (including modified predefined
reports) to Reports > My Custom Reports.
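For example, a minimal restore sequence on the rebuilt system looks like the
following. The prompts for the remote host, credentials, and file location are
omitted here; the backup file follows the naming convention described earlier.

import file
restore db-from-prev-version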

Verify and cleanup after the upgrade


Verify that the upgrade completed successfully and perform post-upgrade maintenance.

Procedure

1. If you upgraded using an upgrade patch, log in as the CLI user and issue the following
command: show upgrade-status. The command will output detailed status
information from the upgrade process.
2. If you upgraded using a Central Manager to distribute upgrade packages to managed
units, disable the upgrade server on the Central Manager using the following CLI
command: upgradeserver off.
3. You may need to update the Guardium DPS file after upgrade or restore procedures.
Download the latest DPS file, then use the Harden > Vulnerability Assessment >
Customer Uploads tool to upload and import the new DPS file.

Attention: If you use add-on accelerators (for example, SOX or PCI), you may need to
reinstall the accelerator patches after importing a new DPS file.

4. Verify that custom reports created in previous versions of Guardium are available at
Reports > My Custom Reports.

My Custom Reports should contain any new reports that you created as well as any
predefined reports that you modified in a previous version of Guardium.

5. After completing upgrade or restore procedures, you may need to reload the open
source Microsoft SQL Server and Oracle JDBC drivers using the Harden > Vulnerability
Assessment > Customer Uploads tool. You may also need to update and save any
datasources that rely on these drivers.
6. Company logos uploaded before upgrade or restore procedures may need to be
reloaded. To reload a customer logo, follow these steps:
a. Log in as an admin user.
b. Navigate to Setup > Tools and Views > Global Profile.
c. Browse for the company logo file.
d. Upload the logo file.
7. If the upgrade or restore procedures disable the Database Discovery or auto-detect
functionality, you may need to download and install a separate patch to reinstall the
auto-detect component.
8. Verify the status of the Cross-Site Request Forgery (CSRF) and Cross-Site Scripting
(XSS) services using the CLI commands show gui csrf_status and show gui xss_status.
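
For example, a minimal verification pass from the CLI might include the following
commands; which of them apply depends on how the upgrade was performed:

show upgrade-status
upgradeserver off
show gui csrf_status
show gui xss_status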

IBM

CLI and API


Version 11 Release 3
Contents

CLI and API . . . . .. 1
CLI Overview . . . . .. 1
Aggregator CLI Commands . . . . .. 3
Alerter CLI Commands . . . . .. 9
Certificate CLI Commands . . . . .. 13
Configuration and Control CLI Commands . . . . .. 21
diag CLI command . . . . .. 56
File Handling CLI Commands . . . . .. 72
Inspection Engine CLI Commands . . . . .. 83
Network Configuration CLI Commands . . . . .. 87
Support CLI Commands . . . . .. 96
System CLI Commands . . . . .. 106
User Account, Password and Authentication CLI Commands . . . . .. 115
Proxy CLI Functions . . . . .. 124
Quick Search for Enterprise CLI Commands . . . . .. 128
GuardAPI Reference . . . . .. 129
GuardAPI Archive and Restore Functions . . . . .. 135
GuardAPI Assessment Functions . . . . .. 138
GuardAPI Auto-discovery Functions . . . . .. 143
GuardAPI Capture Replay Functions . . . . .. 146
GuardAPI Catalog Entry Functions . . . . .. 155
GuardAPI Classification Functions . . . . .. 157
GuardAPI Database User Functions . . . . .. 168
GuardAPI Datasource Functions . . . . .. 171
GuardAPI Datasource Reference Functions . . . . .. 177
GuardAPI Data User Security Functions . . . . .. 180
GuardAPI Enterprise Load Balancing Functions . . . . .. 184
GuardAPI External Feed Functions . . . . .. 185
GuardAPI File Activity Monitor Functions . . . . .. 186
GuardAPI GIM Functions . . . . .. 189
GuardAPI Group Functions . . . . .. 199
GuardAPI Input Generation . . . . .. 207
GuardAPI Process Control Functions . . . . .. 218
GuardAPI Quick Search for Enterprise Functions . . . . .. 233
GuardAPI Query Rewrite Functions . . . . .. 236
GuardAPI Role Functions . . . . .. 251
GuardAPI S-TAP functions . . . . .. 257
Guardium for Applications JavaScript API . . . . .. 268
Guardium for Applications JavaScript API classes . . . . .. 272
Guardium for Applications JavaScript API objects . . . . .. 274
Index . . . . .. 277
CLI and API
The Guardium® command line interface (CLI) is an administrative tool that allows
for configuration, troubleshooting, and management of the Guardium system. The
Guardium application programming interface (API) provides access to many
Guardium functions from the command line.

CLI Overview
The Guardium command line interface (CLI) is an administrative tool that allows
for configuration, troubleshooting, and management of the Guardium system.

Documentation Conventions

All CLI command examples are written in courier text (for example, show system
clock).

To illustrate syntax rules, some command descriptions use dependency delimiters.


Such delimiters indicate which command arguments are mandatory, and in what
context. Each syntax description shows the dependencies between the command
arguments by using special characters:
v The < and > symbols denote a required argument.
v The [ and ] symbols denote an optional argument.
v The | (vertical bar) symbol separates alternative choices when only one can be
selected. For example:
store full-bypass <ON | OFF>
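
As a further illustration, in the following command (documented later in this
reference) the brackets indicate that the single argument is optional and the
vertical bars separate the alternative values:

restart gui [HHMM|HHMMW|clear]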

CLI Command Usage Notes


v Commands and keywords can be abbreviated by entering enough characters so
the commands are not ambiguous. For example, show can be abbreviated sho.
v Most Guardium CLI commands consist of a command word followed by one or
more arguments. The argument may be a keyword, or a keyword followed by a
variable value (for example, an IP address, subnet mask, or date).
v Commands and keywords are not case sensitive, but element names are.
v To display command syntax and usage options, enter a question mark (?) as an
argument following the command word.
v Use quotation marks around words or phrases to precisely define search terms.
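
For example, both of the following invocations are accepted; the first abbreviates
the show keyword and the second uses the question mark to display the available
arguments (output not shown here):

sho system clock
show ?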

Accessing the CLI


An administrator can access the CLI through:
v A physically connected PC console or serial terminal OR
v A network connection using an SSH client

Physical Console Access

Interactive access to the Guardium appliance is through the serial port or the
system console.

PC keyboard and monitor – A PC video monitor can be attached to either the front
panel video connector or the video connector on the back of the appliance.

A PC keyboard with a PS/2 style connector can be attached to the PS/2 connector
on the back of the appliance. Alternatively, a USB keyboard can be connected to
the USB connectors located at the front or back of the appliance.

Serial port access – Using a NULL modem cable, connect a terminal or another
computer to the 9-pin serial port at the back of the appliance. The terminal or a
terminal emulator on the attached computer should be set to communicate as
19200-N-1 (19200 baud, no parity, 1 stop bit).

A login prompt displays once the terminal is connected to the serial port, or the
keyboard and monitor are connected to the console. Enter cli as the user name, and
continue with CLI Login.

Network SSH Access

Remote access to the CLI is available on the management IP address or domain


name, using an SSH client. SSH clients are freely or commercially available for
most desktop and server platforms. A Unix SSH connect command to log in as the
cli user might look like this:
ssh –l cli 192.168.2.16

The SSH client may ask you to accept the cryptographic fingerprint of the
Guardium appliance. Accept the fingerprint to proceed to the password prompt.

Note: If, after the first connection, you are asked again for a fingerprint, someone
may be trying to induce you to log into the wrong machine.

CLI Login
Access to the CLI is either through the admin CLI account cli or one of the five
CLI accounts (guardcli1,...,guardcli5). The five CLI accounts (guardcli1,...,guardcli5)
exist to aid in the separation of administrative duties.

Access to the GuardAPI, a set of CLI commands that aid in the automation of
repetitive tasks, requires that the access manager create a user (GUI
username/guiuser) and grant that account either the admin or cli role. To use
GuardAPI, log in to the CLI with one of the five CLI accounts
(guardcli1,...,guardcli5) and then perform an additional login as the GUI user by
issuing the 'set guiuser' command. See GuardAPI Reference Overview or Set
guiuser Authentication for additional information.
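
For example, a GuardAPI session might begin as follows; the GUI user name
apiuser is a placeholder for an account that the access manager has created with
the admin or cli role:

ssh -l guardcli1 <guardium system>
set guiuser apiuser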

Password Hardening

To meet various auditing and compliance requirements, the following password
enforcements are in effect for CLI accounts:
v For the account cli either use the cli password supplied or be sure to set a strong
password to protect this account. If you have just rebuilt the system from an
installation DVD, the Guardium cli user has a default password of guardium.
You should change that password immediately.
v An expiration period is enforced for the cli account and the five CLI accounts;
the default is 90 days. When a password expires, you are required to change it
during the login process.



v Passwords must be a minimum of eight characters in length.
v Passwords must contain at least one character from three of the following four
classes:
– Any upper-case letter
– Any lower-case letter
– Any numeric (0,1,2,...)
– Any non-alphanumeric (special) character
v Once access is granted through the use of a separate GUI username (guiuser) the
CLI audit trail will show the CLI_USER+GUI_USER pair used for login.
v CLI users cannot be authenticated through LDAP because these are considered
administrative accounts and must be able to log in regardless of connectivity to
an LDAP server.

Limited CLI commands during maintenance of internal database

The CLI has three sets of commands: general commands, specialized support
commands, and recovery commands. Support commands are used by Technical
Support to analyze the system. Recovery commands are used to recover the system
when the database is down.

The initial CLI login is:


Welcome to CLI - your last login was <date>

The welcome message will add further information if the internal database is
down due to maintenance or during an upgrade.

If this is the case, the number of CLI commands available will be limited.
The internal database on the appliance is currently down and CLI will be working
in "recovery mode"; only a limited set of commands will be available.

The CLI commands that are available for use during recovery mode are as follows:
support reset-password root
restart mysql
restart stopped_services
restart system
restore pre-patch-backup
restore system

Aggregator CLI Commands


This section lists the Aggregator CLI commands.

aggregator backup keys file


Use this command to back up the shared secret keys file to the specified location.

Syntax

aggregator backup keys file <user@host:/path/filename>

Parameters

user@host:/path/filename For the file transfer operation, specifies a user, host, and
full path name for the backup keys file. The user you specify must have the
authority to write to the specified directory.
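
Example

For instance, the following sketch backs up the keys file to a remote host; the user,
host, and path are placeholders:

aggregator backup keys file admin@backuphost.example.com:/backups/shared_secret_keys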



Note: For more information about the shared secret use, see System Shared Secret.

aggregator clean shared-secret

Sets the system shared secret value to null. All files archived or exported from a
unit with a null shared secret can be restored or imported only on systems where
the shared secret is null.

Syntax

aggregator clean shared-secret

Note: For more information about the shared secret use, see System Shared Secret.

aggregator debug

Starts or stops writing debugging information relating to aggregation activities.


Use these commands only when directed to do so by Guardium Support, and be
sure to issue the stop command after you have gathered enough information.

Note: Debug mode will automatically expire after 7 days.

Syntax

aggregator debug <start | stop>

aggregator list failed imports


When an import operation fails because of a shared secret mismatch, the offending
file is moved from the /var/importdir directory to the /var/dump directory, and
it is renamed using the original file name plus the suffix .decrypt_failed. Use this
command to list all such files.

Syntax

aggregator list failed imports

aggregator recover failed import


Use this command to move and rename failed import files, prior to re-attempting
an import or restore operation. Failed import files are stored in the /var/dump
directory, with the suffix .decrypt_failed. Before re-attempting an import or restore
operation, those files must be renamed (by removing the .decrypt_failed suffix)
and moved to the /var/importdir directory.

Syntax

aggregator recover failed import <all | filename>

Parameters

Use the all option to move all files from the /var/dump directory ending with the
suffix .decrypt_failed, or use the filename option to identify a single file to be
moved.
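
Example

For instance, to list the failed import files and then move all of them back to the
/var/importdir directory for another attempt:

aggregator list failed imports
aggregator recover failed import all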



Note: After moving the failed files, but before a restore or import operation runs,
be sure that the system shared secret matches the shared secret used to encrypt the
exported or archived file.

aggregator recover failed restore

Use this command to move and rename failed restore files, prior to re-attempting a
restore operation. Failed restore files are stored in the /var/dump directory, with
the suffix .decrypt_failed. Before re-attempting a restore operation, those files must
be renamed (by removing the .decrypt_failed suffix) and moved to the
/var/importdir directory.

Syntax

aggregator recover failed restore <all | filename>

Parameters

Use the all option to move all files from the /var/dump directory ending with the
suffix .decrypt_failed, or use the filename option to identify a single file to be
moved.

Note: After moving the failed files, but before a restore or import operation runs,
be sure that the system shared secret matches the shared secret used to encrypt the
exported or archived file.

aggregator restore keys file


Use this command to restore the shared secret keys file from the specified location.

Syntax

aggregator restore keys file <user@host:/path/filename>

Parameters

user@host:/path/filename For the file transfer operation, specifies a user, host, and
full path name for the backup keys file.

Note: For more information about the shared secret use, see System Shared Secret.

store aggregator drop_ad_hoc_audit_db


Audit process reports that run on an aggregator create ad-hoc databases for each
of their tasks that include only the relevant days for that task. These ad-hoc
databases can be kept for 14 days (for analysis) or deleted immediately after use.
This CLI command defines the purging policy for the ad-hoc databases: 0 keeps
them for 14 days, 1 deletes them after use.

Syntax

store aggregator drop_ad_hoc_audit_db [1|0]

Drop ad-hoc merge databases? 0

Show command

show aggregator drop_ad_hoc_audit_db



store aggregator orphan_cleanup_flag
Use this CLI command to regularly run static orphans cleanup on an aggregator.

The cleanup is scheduled to run on data older than 3 days and runs at the end of
a purge.

Because the process is started explicitly by the user with this CLI command, the
user is aware of how long the process takes, which matters for large databases.

The cleanup covers all of the data on the aggregator, but runs on a separate
temporary database.

Note: On a collector, orphans cleanup is not changed - it runs with the small
cleanup tactics and is invoked before export/archive.

show aggregator orphan_cleanup_flag displays OFF, small, large, or analyze.

store aggregator orphan_cleanup_flag <flag>, where flag is one of OFF, small,
large, or analyze.

These commands are applicable on aggregator only. By default static orphans


cleanup is disabled (off) on aggregator.

If set to one of small, large or analyze - orphans cleanup script is invoked after
each run of merge process.

The orphans cleanup on an aggregator does not remove orphan records of the last
3 days - it removes all orphans older than 3 days.

If small is specified, the process does not interfere with audit processes that can
start after the merge is completed.

If large is specified, the process runs faster where there is a large number of
orphans, but its run might interfere with audit processes - if large is specified,
audit processes will not start until orphans cleanup is complete.

If analyze is specified, the process first evaluates the number of orphans and uses
the large tactics if there are more than 20% orphans - if analyze is specified, audit
processes will not start until orphans cleanup is complete.

Syntax

store aggregator orphan_cleanup_flag [OFF | small | large | analyze]

Default is OFF.

Show command

show aggregator orphan_cleanup_flag
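
Example

For instance, to let the system choose the cleanup tactic based on the proportion of
orphans, and then confirm the setting:

store aggregator orphan_cleanup_flag analyze
show aggregator orphan_cleanup_flag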



store archive_static_table
Use this CLI command to turn the archiving of static tables on or off.

USAGE: store archive_static_table <state>,

where state is on/off.

Show command

show archive_static_table

store next_export_static

The aggregation software makes a distinction between two types of tables:


v static tables - grow slowly over time, data in these tables is not time dependent
(GDM_OBJECT, GDM_FIELD, GDM_SENTENCE, GDM_CONSTRUCT, etc.).
v dynamic tables- grow quickly with time, data is time dependent
(GDM_CONSTRUCT_INSTANCE, GDM_SESSION, GDM_CONSTRUCT_TEXT
etc.).

As stated previously, the data of static tables is not time dependent. The
time-dependent data of dynamic tables is linked to static data. Because static
tables can grow to be very large, the archive process does not archive the full
static data every day - it archives the full static data the first time it runs and then
on the first day of each month; on any other day it archives only the static data
that changed during that day. For this reason, when restoring data for any day,
the first of the month must also be restored - this ensures that the full static data
is present and references are not broken.

Use the CLI command, store next_export_static, to set a flag so that the next export
contains the full static data.

Syntax

store next_export_static [ON | OFF]

Show command

show next_export_static
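
Example

For instance, to force the next export to include the full static data and then verify
the flag:

store next_export_static ON
show next_export_static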

store last_used
Use this CLI command during purging and aggregation.

Syntax

store last_used [size | interval | logging]

Show command

show last_used [size | interval | logging]

LAST_USED SIZE - Integer, Default is 50



LAST_USED INTERVAL - Integer, default is 60 (minutes)

LAST_USED LOGGING - Integer

All Tables - 1

Only GDM_Object - 2

None - 0 (Default)
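
Example

For instance, to enable LAST_USED logging for all tables (value 1, per the list
above) and confirm the setting:

store last_used logging 1
show last_used logging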

store aggregator static_data


store aggregator static_data [TIMESTAMP | LAST_USED_FOR_OBJECT_ONLY |
LAST_USED ]

Note: Set the CLI command, last_used logging, prior to using this command.

When the LAST_USED column is updated by the Sniffer in Static tables, this
column can be referenced when purging data from these tables or when archiving
and exporting data from these tables.

The value of this column can also be updated when importing data to an
aggregator.

There are three options:


1. By default, the system behaves as it did in previous versions - the
LAST_USED column is not considered in purge, archive, and export, and is not
updated on import; archive and export are done by TIMESTAMP.
2. LAST_USED_FOR_OBJECT_ONLY is considered only for GDM_OBJECT table.
3. LAST_USED is considered for GDM_CONSTRUCT, GDM_SENTENCE,
GDM_OBJECT, GDM_FIELD, GDM_JOIN, GDM_JOIN_OBJECT

Note: Options 2 and 3 are only enabled when the sniffer is configured to collect
and update this data.

Note: Validations performed only on a collector - If


ADMINCONSOLE_PARAMETER.LAST_USED_LOGGING=0, then only
TIMESTAMP is allowed. If
ADMINCONSOLE_PARAMETER.LAST_USED_LOGGING=1 then all parameters
are allowed. If ADMINCONSOLE_PARAMETER.LAST_USED_LOGGING=2, then
TIMESTAMP and LAST_USED_FOR_OBJECT_ONLY are allowed. On an
aggregator, all parameters are allowed.

Syntax

store aggregator static_data <type>

where <type> is <TIMESTAMP | LAST_USED | LAST_USED_FOR_OBJECT_ONLY>;
the allowed values depend on the last_used logging flag.

Use the show/store last_used logging commands.

Show command

show aggregator static_data
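
Example

For instance, on an aggregator, to base purge, archive, and export on the
LAST_USED column for all supported tables after enabling the corresponding
logging level:

store last_used logging 1
store aggregator static_data LAST_USED
show aggregator static_data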



store archive_table_by_date
Use the CLI command, store archive_table_by_date, only on aggregators. Use this
CLI command to choose whether to archive all static tables on a daily basis, or to
archive static table data only the first time archiving runs and on the first day of
each month. By default, archive data on an aggregator runs with full static tables
on a daily basis. If this CLI command is set to ENABLE, static tables are archived
only on the first day of the month or the first time archive data runs.

store run_cleanup_orphans_daily

Use this CLI command to clean all the old construct records that are no longer in
use. This CLI command is relevant for aggregators only and by default is enabled.

store run_cleanup_orphans_daily

USAGE: store run_cleanup_orphans_daily [on|off]

Show command

show run_cleanup_orphans_daily

Alerter CLI Commands


This section lists the Alerter CLI commands.

The Alerter subsystem transmits messages that have been queued by other
components - correlation alerts that have been queued by the Anomaly Detection
subsystem, or run-time alerts that have been generated by security policies, for
example. The Alerter subsystem can be configured to send messages to both SMTP
and SNMP servers. Alerts can also be sent to syslog or custom alerting classes, but
no special configuration is required for those two options, beyond starting the
Alerter. There are four types of Alerter commands. Use the links in the lists, or
browse the commands, which are listed in alphabetical sequence following the
lists.

Alerter Start-up and Polling Commands


v stop alerter
v restart alerter
v store alerter state operational
v store alerter state startup
v store alerter poll
v store anomaly-detection poll
v store anomaly-detection state

SMTP Configuration Commands


v store alerter smtp authentication password
v store alerter smtp authentication type
v store alerter smtp authentication username
v store alerter smtp port
v store alerter smtp relay
v store alerter smtp returnaddr



SNMP Configuration Commands
v store alerter snmp community
v store alerter snmp traphost

restart alerter

Restarts the Alerter. You can perform the same function using the store alerter state
operational command to stop and then start the alerter:

store alerter state operational off

store alerter state operational on

Syntax

restart alerter

stop alerter

Stops the Alerter.

You can perform the same function using the store alerter state operational
command:

store alerter state operational off

Syntax

stop alerter

store alerter state operational

Starts (on) or stops (off) the Alerter. The default state at installation time is off. You
can also use the restart alerter or stop alerter commands to restart or stop the
Alerter subsystem.

Syntax

store alerter state operational <on | off>

Show Command

show alerter state operational

store alerter poll


Sets the number of seconds, n, that the Alerter waits before checking its outgoing
message queue to send SNMP traps or transmit email using SMTP. The default is
30.

Syntax

store alerter poll <n>

Show Command



show alerter poll

store alerter state startup

Enables or disables the automatic start-up of the Alerter on system start-up. The
default state at installation time is off.

Syntax

store alerter state startup <on | off>

Show Command

show alerter state startup
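
Example

For instance, to start the Alerter now and have it start automatically on every
system start-up:

store alerter state operational on
store alerter state startup on
show alerter state startup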

store anomaly-detection poll


Sets the Anomaly Detection polling interval, in minutes (n). This controls the
frequency with which Guardium checks log data for anomalies.

Syntax

store anomaly-detection poll <n>

Show Command

show anomaly-detection poll

store anomaly-detection state

Enables or disables the Anomaly Detection subsystem, which executes all active
statistical alerts, checks the logs for anomalies, and queues alerts as necessary for
the Alerter subsystem.

Syntax

store anomaly-detection state <on | off>

Show Command

show anomaly-detection state

store alerter smtp authentication password

Sets the alerter SMTP authentication password to the specified value. There is no
corresponding show command.

Syntax

store alerter smtp authentication password <value>

store alerter smtp authentication type


Sets the authentication type required by the SMTP server to the one of the
following values:



none: Send without authentication.

auth: Username/password authentication. When used, set the user account and
password using the following commands:

store alerter smtp authentication username

store alerter smtp authentication password

Syntax

store alerter smtp authentication type <none | auth>

Show Command

show alerter smtp authentication type

store alerter smtp authentication username

Sets the alerter SMTP email authentication username to the specified name.

Syntax

store alerter smtp authentication username <name>

Show Command

show alerter smtp authentication username

store alerter smtp port

Sets the port number on which the SMTP server listens, to the value specified by
n. The default is 25 (the standard SMTP port).

Syntax

store alerter smtp port <n>

Show Command

show alerter smtp port

store alerter smtp relay

Sets the IP address of the SMTP server to be used by the Guardium appliance.

Syntax

store alerter smtp relay <ip address>

Show Command

show alerter smtp relay



store alerter smtp returnaddr
Sets the return email address for email alerts. Any bounced messages or email
failures will be returned to this address.

Syntax

store alerter smtp returnaddr <email address>

Show Command

show alerter smtp returnaddr
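
Example

For instance, the following sketch configures a typical authenticated SMTP setup
and then restarts the Alerter; the relay address, account name, and return address
are placeholders:

store alerter smtp relay 192.0.2.25
store alerter smtp port 25
store alerter smtp authentication type auth
store alerter smtp authentication username alertsender
store alerter smtp authentication password <password>
store alerter smtp returnaddr guardium-alerts@example.com
restart alerter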

store alerter snmp community

Sets the SNMP trap community used by the Alerter, to the name specified. There is
no corresponding show command.

Syntax

store alerter snmp community <name>

store alerter snmp traphost

Sets the Alerter SNMP trap server to receive alerts, to the specified IP address or
DNS host name.

Syntax

store alerter snmp traphost <snmp host>

Show Command

show alerter snmp traphost

store syslog-trap

Usage: store syslog-trap ON | OFF

Certificate CLI Commands


Use the certificate commands to create a certificate signing request (CSR), and to
install server, CA (certificate authority), or trusted path certificates on the
Guardium system.

Note: Guardium does not provide certificate authority (CA) services and does not
ship systems with different certificates than the one installed by default. A
customer that wants their own certificate must contact a third-party CA (such as
VeriSign or Entrust).

Certificate Expiration

Expired certificates will result in a loss of function. Run the show certificate
warn_expire command periodically to check for expired certificates. The command
displays certificates that will expire within six months and certificates that have
already expired. The user interface will also inform you of certificates that will



expire. To see a summary of all certificates, run the command show certificate
summary.

New Certificates
To obtain a new certificate, generate a certificate signing request (CSR) and contact
a third-party certificate authority (CA) such as VeriSign or Entrust. Guardium does
not provide CA services and will not ship systems with different certificates than
the ones that are installed by default. The certificate format must be in PEM and
include BEGIN and END delimiters. The certificate can either be pasted from the
console or imported through one of the standard import protocols.

Note: Do not perform this action until after the system network configuration
parameters have been set.

create csr
Creates a certificate signing request (CSR) for the Guardium system. Do not
perform this action until after the system network configuration parameters are set.
Within the generated CSR, the common name (CN) is created automatically from
the host and domain names assigned.

create csr alias creates a certificate request with an alias.

create csr gim creates a certificate request for gim (GIM Listener).

create csr gui creates a certificate request for the Tomcat web interface.

create csr sniffer creates a certificate request for the sniffer.

create csr squid creates a certificate signing request and associated key, which
must be signed by a certificate authority. A matching certificate must then be
supplied by using the store certificate squid selfsign command.

Syntax

create csr <alias | gui | sniffer | squid>

create csr <alias | gim | gui | sniffer>
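
Example

For instance, to request and install a new certificate for the web interface: generate
the CSR, have it signed by your certificate authority, and then store the signed
certificate in PEM format (including the BEGIN and END lines) when prompted:

create csr gui
store certificate gui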

delete certificate squid


Backs up and then deletes the most recent squid certificate that is used to
configure the SSL connection.

Syntax

delete certificate squid

restore certificate gim

Restores the certificate gim to the last certificate gim on record or the default
certificate gim that was originally provided.

restore certificate gim backup restores the gim certificate to the last saved gim
certificate.



restore certificate gim default restores the gim certificate to the default gim
certificate that was supplied with the system.

Syntax

restore certificate gim <backup | default>

restore certificate keystore

Restores the certificate keystore to the last certificate keystore on record or the
default certificate keystore that was originally provided.

restore certificate keystore backup restores the certificate keystore to the last
saved certificate keystore.

restore certificate keystore default restores the certificate keystore to the


default value that was supplied with the system.

Syntax

restore certificate keystore <backup | default>

restore certificate mysql

Restores the client certificate to the last certificate on record.

restore certificate mysql backup restores the last saved mysql certificate.

Syntax

restore certificate mysql <backup>

restore certificate mysql backup client

Restores the client certificate to the last certificate on record.

restore certificate mysql backup client ca restores the last saved client
certificate authority (CA) certificate.

restore certificate mysql backup client cert restores the last saved client
certificate.

Syntax

restore certificate mysql backup client <ca | cert>

restore certificate mysql backup server

Restores the server certificate to the last certificate on record.

restore certificate mysql backup server ca restores the last saved server
certificate authority (CA) certificate.

restore certificate mysql backup server cert restores the last saved server
certificate.



Syntax

restore certificate mysql backup server <ca | cert>

restore certificate mysql default client

Restores the mysql client certificate to the default version that was supplied with
the system.

restore certificate mysql default client ca restores the mysql client ca


certificate to the default version that was supplied with the system.

restore certificate mysql default client cert restores the mysql client
certificate to the default version that was supplied with the system.

Syntax

restore certificate mysql default client <ca | cert>

restore certificate mysql default server

Restores the mysql server certificate to the default version that was supplied with
the system.

restore certificate mysql default server ca restores the mysql server ca


certificate to the default version that was supplied with the system.

restore certificate mysql default server cert restores the mysql server
certificate to the default version that was supplied with the system.

Syntax

restore certificate mysql default server <ca | cert>

restore certificate sniffer

Restores the certificate to the last certificate on record.

restore certificate sniffer backup restores the sniffer certificate to the last
saved sniffer certificate.

restore certificate sniffer default restores the sniffer certificate to the default
sniffer certificate.

Syntax

restore certificate sniffer <backup | default>

restore certificate squid backup


Restores the last saved squid backup. If no backup exists, the following message is
displayed:
Backup squid certificate key not found.
Backup squid certificate file not found.
err



restore cert_key mysql backup
Restores the mysql client or server certificate key to the last saved value.

restore cert_key mysql backup client restores the last saved mysql client cert
key.

restore cert_key mysql backup server restores the last saved mysql server cert
key.

Syntax

restore cert_key mysql backup <client | server>

restore cert_key mysql default

Restores the mysql client or server certificate key to the default version that was
supplied with the system.

restore cert_key mysql default client restores the default mysql client cert key
that was supplied with the system.

restore cert_key mysql default server restores the default mysql server cert key
that was supplied with the system.

Syntax

restore cert_key mysql default <client | server>

show certificate

Displays the summary of all certificates, certificate information, alias list,


certificates in the keystore, and expired or soon-to-expire certificates.

The authenticity of this certificate can be verified by a Guardium CA public key
(contained in the CA certificate that is distributed with the client software). This
certificate has either a customer company-unique CN (Common Name - for
example, acme.com) or a machine-specific CN (for example, x4.acme.com). This
permits any client to establish not only that the Guardium system has a valid
certificate (it is a real Guardium system), but also that it is a specific Guardium
system (or a set of Guardium systems) that the client is supposed to connect to.

show certificate all displays a summary of all certificates.

show certificate alias displays an alias list.

show certificate gim displays all GIM certificate information (GIM Listener).

show certificate gui displays all tomcat certificate information.

show certificate keystore displays all certificates in the keystore and an alias list
for you to select which certificate to show.

show certificate mysql displays client and server mysql certificate information.



show certificate sniffer displays all sniffer certificate information.

show certificate stap displays all S-TAP certificate information in the keystore.

show certificate squid displays all proxy server certificate information.

show certificate summary displays a summary of all certification information.

show certificate trusted displays all trusted certificate information.

show certificate warn_expired displays all expired certificates or certificates that


expire in 6 months.

Syntax

show certificate <alias | all | gui | keystore | mysql | sniffer | stap | squid |
summary | trusted | warn_expired>

show certificate <alias | all | gim | gui | keystore | mysql | sniffer | stap |
summary | trusted | warn_expired >

show certificate keystore

Displays certificate information in the keystore.

show certificate keystore all displays all certificates in the keystore.

show certificate keystore alias displays an alias list for you to select which
certificate to show.

Syntax

show certificate keystore <all | alias>

show certificate mysql

Displays mysql certificate information.

Parameters

show certificate mysql client shows client mysql information.

show certificate mysql server shows server mysql information.

Syntax

show certificate mysql <client | server>

store certificate
Stores a certificate. Paste your certificate in PEM format and include the BEGIN
and END lines.

Parameter



store certificate alias stores a certificate in the keystore after a CSR has been
generated. This CLI command supports the CLI command, create csr alias, which
allows the user to create an intermediate trusted certificate from scratch. Use both
of these commands to create intermediate trusted certificates. These intermediate
trusted certificates can then be used to sign other certificates, if required.

store certificate gim will allow the custom gim certificate to be stored in
keystore by prompting for certificate, key (optional) and CA certificate (GIM
Listener).

store certificate gui stores the tomcat certificate in the keystore after a CSR has
been generated.

store certificate keystore asks for a one-word alias to uniquely identify the
trusted certificate and store it in the keystore.

store certificate mysql stores mysql client and server certificates.

store certificate sniffer stores sniffer certificates.

store certificate squid stores squid certificate.

store certificate stap stores S-TAP certificates.

Syntax

store certificate <gui | keystore | mysql | sniffer | squid | stap >

store certificate <gim | gui | keystore | mysql | sniffer | stap >

store certificate mysql client

Stores a mysql client certificate.

store certificate mysql client ca stores client certificate authority (CA)


certificates.

store certificate mysql client cert stores client certificates.

Syntax

store certificate mysql client <ca | cert>

store certificate mysql server


Stores a mysql server certificate.

store certificate mysql server ca stores server certificate authority (CA)


certificates.

store certificate mysql server cert stores server certificates.

Syntax

store certificate mysql server <ca | cert>



store certificate squid
Stores the proxy server certificate.

store certificate squid caroot Stores a ca root certificate onto the Guardium
system and configures SSL proxy settings.

store certificate squid default stores a signed key/certificate pair. If the


certificate is self-signed, the ca root must also be provided to validate connections
from applications that are signed by a trusted certificate authority. If the certificate
is signed by a trusted certificate authority, providing the ca root is not mandatory.

store certificate squid selfsign stores a matching self-signed certificate and ca


root to validate connections from applications that are signed by a trusted
certificate authority. This command can be used only after you generate a csr and
key by using the create csr squid command.

Syntax

store certificate squid <caroot | default | selfsign>

store cert_key

Stores the system certificate key and the certificate key of a mysql client and
server.

store cert_key mysql stores the certificate key of a mysql client and server.

store cert_key sniffer stores the sniffer certificate key.

Syntax

store cert_key <mysql | sniffer>

store cert_key mysql

Stores the certificate key of a mysql client or server.

store cert_key mysql client stores the certificate key of a mysql client.

store cert_key mysql server stores the certificate key of a mysql server.

Syntax

store cert_key mysql <client | server>

store cert_key sniffer

Stores the system certificate key. This command enables a user to set the system
certificate that is used by the Guardium system (in communication with S-TAP®).
The certificate can either be pasted from the console or imported via one of the
standard import protocols. The certificate format should be PEM and should
include the BEGIN and END delimiters. This certificate needs to be signed by a
CA whose self-signed certificate is available to S-TAP software through the
guardium_ca_path.



store cert_key sniffer console stores the sniffer certificate key by pasting the
key into the console.

store cert_key sniffer import stores the sniffer certificate key by importing the
key file.

Syntax

store cert_key sniffer <console | import>

store sign certificate squid


Stores the proxy server certificate and the self-signed ca root certificate.

store sign certificate squid console stores the proxy server certificate and the
self-signed ca root certificate by pasting the data into the console.

store sign certificate squid import stores the proxy server certificate and the
self-signed ca root certificate by importing the associated files.

Syntax

store sign certificate squid <console | import>

Backup and Default Options

You can choose to restore certificates and certificate keys with the backup or
default parameter. Use the backup parameter to restore a certificate to the last
saved certificate. Use the default parameter to restore a certificate to the original
certificate that Guardium supplied.

Certificate Expiration Dates and Summary Commands

Run the show certificate warn_expire command periodically. This command


warns you of certificates that will expire in six months and displays a list of
expired certificates. For more information, see the show certificate CLI command.
To show a summary of all certificates, run the CLI command show certificate
summary. Run the commands periodically to review certificate expiration dates.

Configuration and Control CLI Commands


Use the following CLI commands for configuration and control.

? (question mark)

When entering a command, enter a question mark at any point to display the
arguments.

Syntax

<partial_command> ?

Example

CLI> show account strike ?



USAGE: show account strike <arg>, where arg is:

?, count, interval, max

ok

CLI>

delete unit type

Use this command to clear one or more unit type attributes. Note that not all unit
type attributes can be cleared using this command. See the table, located after the
store unit type command, for more information.

Syntax

delete unit type [manager | standalone] [aggregated] [netinsp] [network routes


static] [stap] [mainframe]

commands

Displays an alphabetical listing of all CLI commands.

Syntax

commands

debug

Enables or disables debug mode. Without an argument, the command toggles the
debug state. Optionally, a state argument can be passed.

Syntax

debug [on | off]

eject

This command dismounts and ejects the CD ROM, which is useful after upgrading
or re-installing the system, or installing patches that were distributed via CD ROM.

Syntax

eject

delete scheduled-patch

To delete a patch install request, use the CLI command delete scheduled-patch

See the CLI command, store system patch install for further information on
patch installation.

forward support email

When the support-state option is enabled (which it is by default), this command


sets the email address to receive system alerts.



Syntax

forward support email to <email address>

Show Command

show support-email

generate-keys

Use this command to generate PGP keys for cli, tomcat and grdapi. Use the show
command to display the key (which you can then copy and paste, as appropriate
for your needs).

Syntax

generate-keys

Show Command

show system public key [ cli | tomcat | grdapi ]

iptraf

IPTraf is a network statistics utility distributed with the underlying operating


system. It gathers a variety of information such as TCP connection packet and byte
counts, interface statistics and activity indicators, TCP/UDP traffic breakdowns,
and LAN station packet and byte counts. The IPTraf User Manual is available on
the internet at the following location (it may be available at other locations if this
link does not work):

http://iptraf.seul.org/2.7/manual.html

Syntax

iptraf

license check
Indicates whether the installed license is valid. Use this command after installing
a new product key.

Syntax

license check

ping

Sends ICMP ping packets to a remote host. This command is useful for checking
network connectivity. The value of host can be an IP address or host name.

Syntax

ping <host>



quit
Exits the command line interface.

Syntax

quit

recover failed

Command to restore failed CSV/CEF/PDF transfer files, placing the files back into
the export folder for another export attempt.

Syntax

recover failed [csv|cef|pdf]

register management

Registers the Guardium system for management by the specified Central Manager.
The pre-registration configuration of this Guardium system is saved, and that
configuration will be restored later if the unit is unregistered.

Syntax

register management <manager ip> <port>

Parameters

manager ip is the IP address of the Central Manager.

port is the port number used by the Central Manager (usually 8443).
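
Example

For instance, to register this system with a Central Manager at a placeholder
address on the default port:

register management 192.0.2.100 8443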

restart gui
Restarts the IBM® Guardium Web interface. To optionally schedule a restart of the
GUI once a day or once a week, use additional parameters. HH is hours 01-24.
MM is minutes 01-60. W is the day of the week, 0-6, Sunday is 0. If HHMM is
listed twice, only the last entry is used. The parameter clear deletes the scheduled
time.

In order to restart the Classifier and Security Assessments processes, run the
restart gui command from the CLI (not from the GUI).

Running restart GUI from the GUI only restarts the web services. It is necessary
to run the restart GUI command from the CLI to fully restart all processes,
including Classifier and Security Assessments processes. It is necessary to run the
restart GUI command from the CLI for each managed unit to restart the Classifier
listener.

Syntax

restart gui [HHMM|HHMMW|clear]
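
Example

For instance, to schedule an automatic GUI restart every day at 01:30, or to
remove the schedule:

restart gui 0130
restart gui clear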



restart stopped_services
Use this CLI command to restart services previously stopped with the store
auto_stop_services_when_full CLI command.

Syntax

restart stopped_services

restart system

Reboots the Guardium system. The system will completely shut down and restart,
which means that the cli session will be terminated.

Syntax

restart system

show buffer

This command displays a report of buffer use for the inspection engine process. If
you are experiencing load problems, IBM Technical Support may ask you to run
this command.

Syntax

show buffer <log | snif>

show buffer log

Use this CLI command to display the buffer usage of the inspection engine
process.

show buffer snif


Use this CLI command to display the buffer usage of the sniffer.

show build
Displays build information for the installed software (build, release, snif version).

Syntax

show build

show defrag

Identify fragmented packets and attempt to reconstruct the packets before they get
to the network sniffing process. The defrag is relevant only for network sniffing
through SPAN or a TAP device.

Syntax

show defrag
Parameters



Packet size - The packet size in bytes, up to a maximum of 2^17 (131072)
Time interval - The time interval
Trigger level - The trigger level
Release level - The release level specified as a number of seconds, up to a
maximum of 2^31 (2147483648).

show network routes static

Permit the user to have only one IP address per appliance (through eth0) and
direct traffic through different routers using static routing tables. List the current
static routes, with IDs.

Syntax

show network routes static

Delete command

delete network routes static

show password

This CLI command displays password settings. Password disable [0|1] removes
the use of a password by storing the value 1. Password expiration [CLI|GUI]
[number of days] displays the number of days between required password
changes; the default is 90 days. Password validation [ON|OFF] determines
whether strong password validation is enforced.

Syntax

show password disable [0|1]

show password expiration [CLI|GUI] 90

show password validation [ON|OFF]

show security policies


Displays the list of security policies.

Syntax

show security policies

show system patch available

Displays the patches that are available to be installed, showing date/time and the
install status.

Syntax

show system patch available



show system patch installed
Displays the already installed patches and patches scheduled to be
installed--showing date/time and the install status.

Syntax

show system patch installed

show system public key

Displays the public key for cli, tomcat, or grdapi. If none exists, this command creates one.

Note: See show system key, store system key in Certificate CLI commands.

Syntax

show system public key <cli | tomcat | grdapi>

stop gui

Stops the Web user interface.

Syntax

stop gui

stop system

Stops and powers down the appliance.

Syntax

stop system

store apply_user_hierarchy

Use this CLI command to apply user hierarchy to audit receiver.

If ON, a non-audit-group receiver (a receiver other than the audit group receiver,
normal or role) will see only audit results with a group IP beneath the receiver's
hierarchy, including the receiver itself.

Syntax

store apply_user_hierarchy [ON | OFF]

Show command

show apply_user_hierarchy

store allow_simulation
Enables (on) or disables (off) the ability to run the Policy Simulation on the
appliance.



To run the simulation, the original traffic must be replayed through the rules
engine with the policy that needs to be tested. This requires some of the original
SQL to be saved on the appliance along with its values. Enabling or disabling
allow_simulation instructs IBM Guardium whether to save any SQL or values.

Syntax

store allow_simulation [on|off]

Show command

show allow_simulation

store alp_throttle

Use this CLI to regulate the amount of data that will be logged.

Usage: store alp_throttle <num>

where <num> is the number in range of -2147483647 and 2147483647.

Default is 0.

0 - do not log into GDM_FLAT_LOG and do not create tapks files

>0 - log into GDM_FLAT_LOG and do not create tapks files

<0 - log into GDM_FLAT_LOG and create tapks files

99999 - do not log into GDM_FLAT_LOG, but create tapks files.

Example

10 - log 10% of statements into GDM_FLAT_LOG and do not create tapks files

-10 - log 10% of statements into GDM_FLAT_LOG and create tapks files

store analyzer
Ignore session: The current request and the remainder of the session will be
ignored. This action does log a policy violation, but it stops the logging of
constructs and will not test for policy violations of any type for the remainder of
the session. This action might be useful if, for example, the database includes a test
region, and there is no need to apply policy rules against that region of the
database.

This command sets the value of the timeout of the ignore session and sets the
duration of the ignore session.

Syntax

store analyzer [ignore_sess_timeout | max_open_sess]

Show command



show analyzer

store auto_stop_services_when_full

When ON, internal services are stopped if the database exceeds the 90% full
threshold.

Inspection Engine, Classification, and other collection-related services will stop.
Also, aggregation import/restore will not process any new files.

To remediate, use the various support commands (support clean audit_task,
support clean log_files, support clean DAM_data, support show large_files) to
analyze and manually purge large tables.

Syntax

store auto_stop_services_when_full [ON | OFF]

Show command

show auto_stop_services_when_full

store connect oracle_parser

Use this command to connect and disconnect the Oracle parser from the DB2
parser. The default is OFF (disconnect).

Syntax

store connect oracle_parser [ON | OFF]

Usage: store connect oracle_parser [state], where state is ON/OFF. ON is connect
and OFF is disconnect.

Show command

show connect oracle_parser

store default_queue_size
Use this CLI command to control the configuration parameter
ADMINCONSOLE_PARAMETER.DEFAULT_QUEUE_SIZE. The default is 25. The
range is 25-300.

The sniffer must be restarted after a change in value.

Syntax

store default_queue_size <N>, where N is a number in the range of 25 to 300

Show command

show default_queue_size 25



store defrag
Use this command to restore defragmentation defaults, or to set the
defragmentation size. After entering this command, you must issue the restart
inspection-core command for the changes to take effect. The defrag is relevant
only for network sniffing through SPAN or a TAP device.

Syntax

store defrag [default | size <s> interval <i> trigger <t> release <r>]

Show command

show defrag
Parameters
default - Restore the default size.
s - The packet size in bytes, up to a maximum of 2^17 (131072)
i - The time interval
t - The trigger level
r - The release level specified as a number of seconds, up to a maximum of
2^31 (2147483648).
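
Example

For instance, to restore the defragmentation defaults and then restart the
inspection core so that the change takes effect:

store defrag default
restart inspection-core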

store delayed_firewall_correlation

Use this CLI command to hold a user connection until the decryption correlation
has taken place.

Syntax

store delayed_firewall_correlation [on | off]

Show command

show delayed_firewall_correlation

store full-bypass
This command is intended for emergency use only, when traffic is being
unexpectedly blocked by the Guardium system. When on, all network traffic
passes directly through the system, and is not seen by the Guardium system.

When using this command, you will be prompted for the admin user password.

Syntax

store full-bypass <on | off>

store gdm_analyzer_rule
Analyzer rules - certain rules can be applied at the analyzer level. Examples of
analyzer rules are: user-defined character sets, source program changes, and
firewall watch or firewall unwatch modes. In previous releases, policies and rules
were applied at the end of request processing, in the logging stage. In some cases,
this meant a delay in decisions based on these rules. Applying rules at the
analyzer level means that decisions can be made at an earlier stage.

Note: When applying analyzer rules on source program changes, if the source
program is not matching the exact pattern, add a .* at the end of the pattern to
deal with the possibility that the source program has a trailing space (unseen by
user).

Syntax

store gdm_analyzer_rule [active_flag | new ]

store gdm_analyzer_rule active_flag

Usage: store gdm_analyzer_rule active_flag <id> <on|off>

where <id> is the rule ID.

Use the CLI command, show gdm_analyzer_rule, to see a list of GDM analyzer
rules.

store gdm_analyzer_rule new

Enter rule description (optional):

Enter rule type (required):

Show command

show gdm_analyzer_rule

store gdm_http_session_template

Use this CLI command to set the template for the HTTP session.

Usage
store gdm_http_session_template [activate] [add] [deactivate] [remove]

Show command
show gdm_http_session_template

Attempting to retrieve the template information. It may take time. Please wait.
The output is a table (Table 1. store gdm_http_session_template) with the
following columns: ID#, Active, URL Regex, Session Regex, Username Regex,
Login_Session Regex, Comment, Logout_Session_ID, and Logout_URL_Regex. The
sample output shows three example HTTP session templates that match
PHPSESSID, PSJSESSIONID, and JSESSIONID session cookies.



store log external
Use this command to set file size, flush period, gdm error and state of the log
external.

Usage
store log external [file_size] [flush_period] [gdm_error] [state]

Usage: store log external gdm_error <state>

where state is on/off. 'on' is to enable and 'off' is to disable.

Usage: store log external file_size <num>

where <num> is the size of the file.

Default is 4096 bytes.

Usage: store log external flush_period <num>

where <num> is the flush period.

Default is 60 seconds.

Usage: store log external state <state>

where state is on/off. 'on' is to enable and 'off' is to disable.

Show command
show log external [file_size] [flush_period] [gdm_error] [state]

store monitor gdm_statistics

Use this CLI command to get information about the Unit Utilization. Default is 1
(run the script every hour).

Syntax
CLI> store monitor gdm_statistics
USAGE: store monitor gdm_statistics <hour>, where hour is value from 0 to 24.
Default value is 1, means to run the script every hour.
Value 0, means not to run the script.

Show command

CLI> show monitor gdm_statistics

To disable the gdm_statistics monitor, store the value 0.

store gui
store gui [port | session_timeout | csrf_status]

Sets the TCP/IP port number on which the IBM Guardium appliance management
interface accepts connections. The default is 8443. n must be a value in the range of
1024 to 65535. Be sure to avoid the use of any port that is required or in use for
another purpose.



Set timeout of session - Sets the length of time (in seconds) with no activity before
timeout. After the no-activity-timeout has been reached, it is necessary to log on
again to IBM Guardium. The default length is 900 seconds (15-minutes).

Set Cross-Site Request Forgery (CSRF) (ON | OFF) - See the section CSRF and 403
Permission Errors in the Getting Started with GUI help topic. The default value is
enabled on an upgraded system. Trying to use certain web browser functions (for
example, F5/CTRL-R/Refresh/Reload, Back/Forward) will result in a 403
Permission Error message.

The new session timeout value will take effect only after the next GUI restart.

Syntax

store gui port <n>

store gui session_timeout <n>

store gui csrf_status [on | off]

Show command

Displays the GUI port number, state, session timeout (in seconds) and/or CSRF
status.

Syntax

show gui [port | state | all | session_timeout | csrf_status ]
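
Example

For instance, to keep the default management port, set a 30-minute session
timeout (which takes effect after the next GUI restart), and display the current
settings:

store gui port 8443
store gui session_timeout 1800
show gui all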

store gui cache

Use this CLI command to turn web browser caching ON or OFF (Enable or
Disable).

The response is

The parameter has been changed.

Restarting gui

Changing to port 8443

Stopping.......

Safekeeping xregs

ok

The default setting for browser caching is enabled.

The act of changing the cache setting will automatically restart the Guardium web
server.

For Firefox, in order for the setting to take effect, the cache on the respective
browser has to be cleared.



Syntax

store gui cache [ON | OFF]

Show command

show gui cache

store gui session_timeout

Sets the length of time (in seconds) with no activity before timeout. After the no
activity timeout has been reached, it is necessary to log on again to IBM
Guardium. The default length is 900 seconds (15-minutes).

Syntax

store gui session_timeout

Show command

show gui session_timeout

store gui csrf_status

Use this CLI command to enable or disable the Cross-site Request Forgery (CSRF)
status.

Syntax

store gui csrf_status [ on | off ]

Show command

show gui csrf_status

store gui xss_status

Use this CLI command to enable or disable the Cross-Site Scripting (XSS) status.
This option is enabled by default on upgraded systems.

Syntax

store gui xss_status [ on | off ]

Show command

show gui xss_status

store installed security policy


Sets the security policy named policy-name as the installed security policy.

Syntax

store installed security policy <policy-name>



Show Command

show installed security policy

store keep_psmls

Use this CLI command to retain the current layouts/profiles/portlets (psmls) created
by the users of the Guardium application. Set this CLI command to ON before an
upgrade, and the psmls from the previous version will be retained.

Syntax

store keep_psmls [ON | OFF]

show keep_psmls

store ldap-mapping

Store LDAP mapping parameters - allow a custom mapping for the LDAP server
schema. This command permits customized mapping to the LDAP server schema
for email, firstname and lastname attributes. The paging parameter is used to
facilitate transfer between any LDAP server type (Active Directory, Novell
Directory, Open LDAP, Sun One Directory, Tivoli® Directory). If the paging
parameter is set to on, but paging is not supported by the server, the search is
performed without paging.

Example for paging: if the CLI command, ldap-mapping paging, is set to ON, then
Microsoft Active Directory will download up to the maximum number of users defined
by the limit value on the LDAP Import configuration screen. If ldap-mapping paging
is set to OFF, then Active Directory will download only up to 1000 users, no matter
what the limit value is set to. All other LDAP server configurations must use the
CLI command, ldap-mapping paging off, in order to download users up to the set
limit value.

Note: Each time you change the CLI ldap-mapping attributes you also need to
select Override Existing Changes on the LDAP Import configuration screen in IBM
Guardium GUI before updating. This action must occur each time you change the
CLI ldap-mapping email, firstname or lastname attributes and import LDAP users.

Show commands

show ldap-mapping [email] [firstname][lastname] <name>

show ldap-mapping paging ON|OFF

A restart of the GUI is required for the new parameters to take effect.

Examples

Some examples are shown.

store ldap-mapping firstname name

store ldap-mapping lastname sn

store ldap-mapping email mail



store ldap-mapping paging on

If the attributes are written as follows, the mapping process will use the first
attribute it finds. If this is not what you want, use one of the examples to map to
specific attributes.

Values for firstname attribute: gn,givenName,name

Values for lastname attribute: sn,surname,name

Values for email attribute: userPrincipalName,mail,email,emailAddress,pkcs9email,rfc822Mailbox

Values for paging: on, off

store license

This command applies a new license key to the appliance.

A license key may be one of two kinds: override type or append type. An
override-type license replaces the currently installed license, while an append-type
license is appended to the currently installed license. Append-type licenses can only
add functionality: new functions may be enabled, expiration dates are updated where
relevant, the remaining number of scans and datasources is increased, and certain
numeric fields in the license, such as the number of managed units, are replaced.

Syntax

store license

Show Command

show license

Example

When using the store license command, you will be prompted to paste the new
product key:

CLI> store license

Paste the string received from IBM Guardium and then press Enter.

Copy and paste the new product key at the cursor location, and then press Enter.
The product key contains no line breaks or white space characters, and it always
ends with (and includes) a trailing equal sign. A series of messages will display,
ending with:

We recommend that the machine be rebooted at the earliest opportunity in order to
complete the license updating process.

ok



CLI>

Run the restart gui command at this time.


store log classifier level


Sets the debugging level for the classifier, to one of the values shown.

Syntax

store log classifier level DEBUG|INFO|WARN|ERROR|FATAL

Show command

show log classifier level

store log sql parser_errors

Sets the logging of syntactically wrong SQL commands.

Syntax

store log sql parser_errors [on|off]

Note: A restart of the inspection engine is required after the store command is
issued to apply the change.

Show command

show log sql parser_errors

store log object_join_info


Sets the logging of object_join.

A join table is a way of implementing many-to-many relationships. Use join entity
to join tables in a SELECT SQL statement.

Syntax

store log object_join_info [ on | off]

Show command

show log object_join_info

store log session_info


Sniffer-related. Enables or disables the logging of session information.

Syntax

store log session_info [ on | off]



Show command

show log session_info

store log exception sql

When on, logs the entire SQL command when logging exceptions.

Syntax

store log exception sql <on | off>

Show command

show log exception sql

store logging granularity

Sets the logging granularity to the specified number of minutes. You must use one
of the minute values shown in the syntax. The default is 60.

Syntax

store logging granularity <1, 2, 5, 10, 15, 30 or 60>

Show command

show logging granularity

store max_audit_reporting

Sets the audit report threshold. The default is 32. When defining reports in an
Audit Process, the number of days covered by the report (defined by the FROM-TO
fields) should not exceed a certain threshold (one month by default). See the
Workflow Process, Central Management and Aggregation section of the Compliance
Workflow Automation help topic for further information on using this CLI command.

Syntax

store max_audit_reporting

Show command

show max_audit_reporting

store max_result_set_size

Stores the max_result_set_size value. The default value is 100 (the size must be
between 1 and 65535). This setting aids in tuning the inspection engine when
observing returned data. The command sets the limit on the total result set size,
and the parameter works for any type of database. If the result set exceeds the
defined threshold, the analyzer will not retrieve the data to calculate the records
affected value.

Syntax



store max_result_set_size <size>

Show command

show max_result_set_size

store max_result_set_packet_size
Stores the max_result_set_packet_size value. The default value is 32 (the size must
be between 1 and 65535). This setting aids in tuning the inspection engine when
observing returned data. The command sets the limit on the packet size in a
response, and the parameter works for any type of database. If the packet size
exceeds the defined threshold, the analyzer will not retrieve the data to calculate
the records affected value.

Syntax

store max_result_set_packet_size <size>

Show command

show max_result_set_packet_size

store max_tds_response_packets

Stores the max_tds_response_packets value. The default value is 5 (the size must be
between 1 and 65535). This setting aids in tuning the inspection engine when
observing returned data. The command sets the limit on the number of packets in a
response. This parameter applies only to TDS traffic (see the Note). If the number
of packets exceeds the defined threshold, the analyzer will not retrieve the data
to calculate the records affected value.

Syntax

store max_tds_response_packets <size>

Note: max_tds_response_packets (Tabular Data Stream) is only applicable for MS
SQL Server and Sybase.

Show command

show max_tds_response_packets
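
As an illustration, a hypothetical tuning session that raises these limits and then verifies one of them might look like the following (the values are examples only; appropriate settings depend on the observed traffic):

CLI> store max_result_set_size 200
CLI> store max_result_set_packet_size 64
CLI> store max_tds_response_packets 10
CLI> show max_tds_response_packets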

store maximum query duration


Sets the maximum number of seconds for a query to the value specified by n. The
default is 180. We recommend that you do not set this value greater than the
default, because doing so increases the chances of overloading the system with
query processing. This value can also be set from the Running Status Monitor
panel on the administrator portal.

Syntax

store maximum query duration <n>

Show Command

show maximum query duration



store monitor [ buffer | custom_db_usage | gdm_statistics ]
Use the CLI command, store monitor buffer, to set how often to run the script that
retrieves the information shown in the Buffer Usage Monitor report of the IBM
Guardium Monitor tab.

Syntax: store monitor buffer

Use the CLI command, store monitor custom_db_usage to set the state to on and
to specify a time to run this job.

Syntax
CLI> store monitor custom_db_usage
USAGE: store monitor custom_db_usage <state> <hour>
where state is on/off.
If state is on, specify the hour to run.
Valid value is number from 0 to 23

Use the CLI command, store monitor gdm_statistics to get information about the
Unit Utilization. Default is 1 (run the script every hour).

Syntax
CLI> store monitor gdm_statistics
USAGE: store monitor gdm_statistics <hour>, where hour is value from 0 to 24.
Default value is 1, means to run the script every hour.
Value 0, means not to run the script.

Show Commands

show monitor buffer

show monitor custom_db_usage

show monitor gdm_statistics
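
For example, a hypothetical session that schedules the custom database usage job for 02:00 and sets the gdm_statistics interval to 4 might look like this (the hour values are illustrative only):

CLI> store monitor custom_db_usage on 2
CLI> store monitor gdm_statistics 4
CLI> show monitor custom_db_usage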

store packet max-size

Limit the maximum size of packets from the sniffer.

Syntax

store packet max-size 1536

Show Command

show packet max-size

store pdf-config
Use this command to change the pdf font size and pdf orientation of the PDF
image body content (excluding header/footer).

Size unit ranges from 1 (smallest) to 10 (largest) with default value of 6.

Orientation unit is 1 (for landscape orientation) or 2 (for portrait). The default
value is 1.



The change takes effect immediately after typing the CLI command and pressing
the Enter key.

Syntax

store pdf-config [ orientation | size ]

Show Command

show pdf-config [ orientation | size ]

store pdf-config multilanguage_support

There are different static PDF generator configuration files for English (used for
the English version) and for language C/J (used for Chinese/Japanese). Use this CLI
command to define the fonts used by the PDF generator. The default is English;
Multi-language is language C/J.

Syntax
CLI> store pdf-config multilanguage_support
Current setting is Default

1 Default
2 Multi-language
Please select the option (1,2, or q to quit)

Show command

show pdf-config multilanguage_support

store populate_from_query_maxrecs

Sets the maximum number of records that can be used to populate groups and
aliases from a query.

Use caution when setting a maximum records value via this CLI command. Setting
it too high may result in incomplete populate group from query processes. The
maximum threshold is dynamic and dependent on the system load and memory
utilization. This CLI command is limited to a high value of 200000.

Syntax

store populate_from_query_maxrecs 100000

Show command

show populate_from_query_maxrecs

store product gid


Sets the stored unique product GID value to <n>.

Syntax

store product gid <n>

Show Command



show product gid

store purge object

Sets the age (in days) at which non-essential objects will be purged. Use the show
purge objects age command to display a table showing the index, object name,
and age for each object type for which a purge age is maintained. Then use the
appropriate index from that table in the command to set the purge age.

Note: The value of number of days will be set to the default (90 days) when the
unit type changes between managed unit/Manager/standalone unit.

Syntax

store purge object age <index> <days>

Show Command

show purge object age

Example

Assume you want to keep an Event Log for 30 days. First issue the show purge
objects age command to determine the index (do not rely on the table shown here;
your list may be different). Then enter the store purge object command.
CLI>show purge objects age
Index Name, Age
1. Central Management Persistent Operations, 7
2. S-TAP Event Log, 14
4. Assessment Tests, 7
5. Central Management Temporary Policies, 7
6. S-TAP Change History, 14
7. Kerberos Authentication Information, 1
8. Comment History, 60
9. Comment Local History, 60
10. Call Graph History, 90
11. CAS Host Event History, 7
12. Unused CAS Access Names, 7
13. Unused CAS Access Name Templates, 7
14. Custom Table Operations Log, 7
15. table in custom db without def, 7
16. Custom Table Upload Log, 7
17. Baseline entries referred to user, 30
18. Classification Process Results, 7
19. Sniffer Buffer Usage, 14
20. Secure Map, 1
21. GDM Access, 30
22. Audit Process Log, 14
23. Sessions Live, 7
24. GDM Errors, 30
25. CAS Audit State and State Datum, 60
26. GDM Uid Chain, 1
27. STAP/Z Files Purge, 20
28. Default Custom Table Purge Job, 60
40. STAP/Z Files Purge, 30
ok
CLI> store purge object age 2 30
ok

store quartz_thread_run

This CLI command is for use by Technical Support.

The Java™ Virtual Machine allows the application to have multiple threads. A thread
is a piece of the program execution.

Use the store quartz_thread_num CLI command to set the number of threads that
can run at the same time.

Use this command to ease conflict between too many threads running at the same
time.

The show quartz_thread_num CLI command displays the number of Quartz
scheduler threads that run at the same time.

Syntax

store quartz_thread_run <number>

USAGE: store quartz_thread_num <number>, where number is in range 3 to 15
with default value = 5.

Show command

show quartz_thread_num

org.quartz.threadPool.threadCount= 5

store remotelog

Controls the use of remote logging. In addition to system messages, statistical
alerts and policy rule violation messages can optionally be written to syslog. For
each facility.priority combination, messages can be directed to a specific host. This
command can also control the use of remote logging through an optional port
number and can designate a mandatory protocol (UDP or TCP). This command
works with any syslog implementation that supports TCP.

If you enable remote logging, be sure that the receiving host has enabled this
capability (see the note).

Syntax

store remotelog [help|add|clear] facility.priority host [optional port number:mandatory protocol (UDP or TCP)]
Table 2. Store remotelog parameters

help - Displays supported facilities and priorities.

add - Adds the specified facility.priority combination to the list of messages to be
sent to the specified remote host.

clear - Clears the specified facility.priority combination from the list of messages
being sent to the specified host.

facility - Use daemon. The majority of messages issued by the IBM Guardium
appliance will be from the daemon facility.

priority - May be one of the following: alert, all, crit, debug, emerg, err, info,
notice, warning. The standard IBM Guardium severity codes for alerts and violations
map to syslog priorities as follows: INFO / info, LOW / warning, MED / err, HIGH / alert.

host - Identifies the host to receive this facility.priority combination.

optional port number - An optional port number on the receiving host.

mandatory protocol - UDP or TCP
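
For example, a hypothetical entry that forwards daemon.err messages to a remote syslog host over TCP, assuming the same <facility.priority> <host[:port]> <protocol> argument form that is shown for the encrypted variant later in this section, might look like the following (the host name and port are placeholders):

CLI> store remotelog add daemon.err syslog1.example.com:514 tcp
CLI> show remotelog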

Note:

To configure the receiving system to accept remote logging, edit
/etc/sysconfig/syslog on that system to include the -r option. For example:
SYSLOGD_OPTIONS=-r -m 0

Then restart the syslog daemon:
/etc/init.d/syslog restart

The standard syslog file in Linux is named:
/var/log/messages



Common Criteria requires that all communications from the Guardium system to a
remote syslog server be encrypted. Communications to the remote syslog server
cannot be in clear text.
CLI commands
show remotelog
store remotelog ?
store remotelog add ?
store remotelog add encrypted
USAGE: store remotelog add encrypted <facility.priority> <host[:port]>
<tcp|udp>
Possible facilities: all auth authpriv cron daemon ftp kern local0 local1
local2 local3 local4 local5 local6 local7 lpr mail mark news security syslog
user uucp
Possible priorities: alert all crit debug emerg err info notice warning

Note:
If you want to send encrypted remote log messages to the server, the rsyslog
configuration on the server needs to accept encrypted messages.
The encrypted setting on client and server only works in TCP mode.
When switching from one mode to the other on the same remote server, the
configuration file must be modified to match the designated mode, and the
remote service must be restarted.
Example
store remotelog add non_encrypted
store remotelog clear
g32.guard.swg.usma.ibm.com> show remotelog
*.* @9.70.148.175:10514

Use the following example to store the certificate as ca.pem in /etc/pki/rsyslog/.
The command opens a prompt that asks the user to paste the certificate.
store remotelog add encrypted all.all <IP address>:<port number> tcp
Encrypting syslog
Alerts and other messages can be forwarded to a remote syslog receiver,
such as a SIEM system. This message traffic can be encrypted from the
collector or aggregator to the remote syslog receiver.

Note: Encryption only works in TCP mode. By default, syslog forwarding
uses UDP, so if encryption is required, specify TCP for the CLI command,
store remotelog.
Before you begin:
The procedure documented here must be repeated on every collector or
aggregator that is sending traffic to the encrypted host.
The certificate used by the remote syslog receiver is needed. Store that
certificate on the Guardium system.
1. Have available the public certificate from the CA (Certificate Authority)
from Verisign, Thwate, Geotrust, GoDaddy, Comodo, in-house, etc.



2. Log into the CLI on the individual Guardium system from which to
send the encrypted syslog. Before executing the command, obtain the
appropriate certificate (in PEM format) from the CA, and copy the
certificate, including the Begin and End lines, to your clipboard.
3. Enter the following CLI command: store remotelog add encrypted
daemon.all <IP address of encrypted remote host>:<port number of
remote host> tcp

Note: This example uses daemon because Guardium sends its
application events using daemon.
4. The following instructions will be displayed:
Please paste your CA certificate, in PEM format. Include the BEGIN and END lines, and then
Paste the PEM-format certificate to the command line, then press CTRL-D.
Guardium takes this input and stores it as /etc/pki/rsyslog/ca.pem.
A message follows informing you of the success or failure of the store operation.
When successful, Guardium can send encrypted traffic to the remote
system with the correct key.
5. Repeat the procedure for each collector and aggregator that is sending
syslog traffic to the encrypted host.

store replay

This feature is used for performance and capacity testing. Use the CLI commands
to set configuration values.

See the Replay Configuration help topic for examples on how to use this feature.

Note: The Replay feature will work only on sniffed data captured with a Log Full
Details policy.

Syntax

store replay active_thread

store replay keep_active

store replay max_queue_size

Show command

show replay active_thread

show replay keep_active

show replay max_queue_size

store replay active_thread

USAGE: store replay active_thread <N>



where <N> is number from 50 to 2000.

This command will update the number of replay active threads.

show replay active_thread

50 (default value)

store replay keep_active

USAGE: store replay keep_active <N>

where <N> is number of minutes from 60 to 525600.

The command will update the number of minutes for the replay keep_active
parameter.

The default value is 525,600 minutes (365 days).

Show command

show replay keep_active

store replay max_queue_size

USAGE: store replay maximum_queue_size <N>

where <N> is number from 250 to 10000.

This command will update the replay maximum queue size.

show replay max_queue_size

300 (default value)
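
As an illustration, a hypothetical configuration that increases the replay capacity before a large test might look like the following (the values are examples only and must stay within the documented ranges):

CLI> store replay active_thread 100
CLI> store replay keep_active 1440
CLI> store replay max_queue_size 500
CLI> show replay max_queue_size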

store s2c
Sets several configurable parameters for ADMINCONSOLE. These parameters are
used for throttling server-to-client (S2C) traffic.

Note: Use this CLI command only when directed by IBM Guardium Technical
Services.

Minimum and maximum values:

ANALYZER_S2C_IGNORE = {0,1,2,3}

MAX_S2C_VELOCITY (K bytes/sec) - number >=0 and <= 2147483647



MAX_S2C_INTERVAL (sec) - number >=1 and <= 2147483647

See also the CLI command Store Throttle.

Syntax

store s2c

USAGE: store s2c ignore I maxrate M maxinterval T

where 0<=I<=3 (level), 0<=M<=2147483647 (K/sec), and 1<=T<=2147483647 (seconds) OR store throttle default

store s2c ignore 3 maxrate 300 maxinterval 5007

The new configuration will be effective once the CLI command, restart
inspection-core, is executed.

Show command

show s2c

Throttle S2C parameters (defaults):

Ignore: 0

Max rate: 999999

Max interval: 30

-------------------

ANALYZER_S2C_IGNORE (0,1,2,3) - Switch s2c throttling mechanisms on/off
based on scenarios. This flag is based on bits. 0 = the s2c throttling mechanism is
OFF. 1 = turns on the function described in scenario 1, 2 = turns on the function
described by scenario 2. 3 = turns both on.

MAX_S2C_VELOCITY - maximal rate (K bytes/sec). If this rate is exceeded, then the
analyzer sends an ignore session or ignore session reply request to S-TAP or the
sniffer.

MAX_S2C_INTERVAL - time interval in seconds (default 30 sec.) between possible
ignore session or ignore session reply requests.

Scenario 1

The sniffer starts to receive traffic from S-TAP or the network in the middle of a
large query. Since all incoming packets are DB server responses, no new session will
be created by the analyzer, and therefore no information will be sent to the logger
and rules engine. This type of traffic is useless for the sniffer. On the other hand,
this type of traffic can create additional S-TAP and sniffer load. A throttling
mechanism helps to decrease S-TAP and network sniffer load by sending an ignore
session message from the analyzer if the S2C velocity is greater than
MAX_S2C_VELOCITY. If for some reason S-TAP or the network sniffer was not
affected, then the analyzer will send the ignore session request again after
MAX_S2C_INTERVAL seconds. In order to switch this throttling mechanism on, set
the ANALYZER_S2C_IGNORE flag to 1.

Scenario 2

If the incoming traffic has a high S2C rate (greater than MAX_S2C_VELOCITY), then
the throttling mechanism sends an ignore session reply request to S-TAP for local
database connections. If for some reason S-TAP was not affected, then the analyzer
will send the ignore session reply request again after MAX_S2C_INTERVAL seconds.
In order to switch this throttling mechanism on, set the ANALYZER_S2C_IGNORE
flag to 2.

store sender_encoding

Use this CLI command to encode outgoing messages (email and SNMP traps) in
different encoding schemes; previously, everything was encoded in UTF8.

For example, a Guardium customer wanted to encode all of the outgoing SNMP
messages in SJIS - an alternative Japanese encoding.

Note: If the conversion fails for either reason, (a) the encoding scheme specified is
invalid, or (b) the characters to be encoded cannot be represented in the requested
encoding scheme, then the message will be sent using UTF8, which is the default
encoding scheme.

Syntax

store sender_encoding <str>,

where str is the encoding with maximum length 16

Show command

show sender_encoding
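
For example, to switch outgoing messages to SJIS (the encoding named in the scenario above) and then verify the setting, a session might look like this:

CLI> store sender_encoding SJIS
CLI> show sender_encoding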

store serial
Enable/disable a console or other terminal connection via serial port.

Syntax

store serial ON|OFF

store stap approval


Use this function to block unauthorized STAPs from connecting to the Guardium
appliance.

If ON, then STAPs cannot connect until they are specifically approved.

If an unapproved STAP connects, it is immediately disconnected until the IP address
of that STAP is specifically authorized.

There is a pre-defined report for approved clients, Approved TAP clients, which is
available on the Daily Monitor tab.

Note:

A valid IP address is required, not the host name.

The CLI command, store stap approval, does not work within an environment
where there is an IP load balancer.

Within a centrally managed environment, after adding the IPs to approved STAPs,
there is a wait time associated with synchronization that might take up to an hour.
After synchronization is complete, the approved STAPs status will appear green in
the GUI.

Syntax

store stap approval ON | OFF

Show command

show stap approval

GuardAPI command

grdapi store_stap_approval

The new configuration will be effective after running the CLI command, restart
inspection-core.
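
For example, a hypothetical sequence that turns on S-TAP approval, activates the change, and then checks the state might look like this:

CLI> store stap approval ON
CLI> restart inspection-core
CLI> show stap approval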

store stap certificate

Stores a certificate from the S-TAP host (usually a database server), on the IBM
Guardium appliance. This command functions exactly like the store certificate
console command, described later.

Syntax

store stap certificate

You will be prompted as follows:

Please paste your new server certificate, in PEM format.

Include the BEGIN and END lines, then press CTRL-D.

If you have not done so already, copy the server certificate to your clipboard. Paste
the PEM-format certificate to the command line, then press CTRL-D. You will be
informed of the success or failure of the store operation.



When you are done, use the restart gui command to restart the IBM Guardium
GUI.

store stap network_latency


S-TAP verification is a feature by which customers can verify whether an S-TAP is
monitoring database traffic. The verification feature is affected by the customer's
network traffic and latency. Since latency is different for each customer, there is a
need for a way to list and change the default value that the verification feature
uses.

Syntax

store stap network_latency

USAGE: store stap network_latency <N>

where N is a number of seconds greater than 0.

The default value is 5 seconds.

Higher values make the S-TAP verification process slower.

Show command

show stap network_latency

store storage-system


Adds or deletes a storage system type for archiving or system backup.

Syntax

store storage-system <Centera | TSM> <backup | archive> <on | off>

Show Command

show storage-system

Example

Assume you are currently using Centera for system backups, but want to switch to
a TSM system. You must turn off the Centera backup option (unless you want to
leave that as another option), and turn on the TSM backup option. The commands
to do this are shown in the example. The show commands are not necessary, but
are included for illustration only.

CLI> show storage-system

NETWORK :

CENTERA : backing-up

TSM :

SCP : archiving and backing-up

FTP : archiving and backing-up

ok

CLI>store storage centera backup off

ok

CLI> store storage tsm backup on

ok

CLI> show storage-system

NETWORK :

CENTERA :

TSM : backing-up

SCP : archiving and backing-up

FTP : archiving and backing-up

ok

CLI>

store support state

Enables (on) or disables (off) the sending of email alerts to the support email
address, which can be configured using the forward support email command. By
default, the support state is enabled (on), and the default support email address is
support@guardium.com.

Syntax

store support state <on | off>

Show Command

show support state

store throttle

This CLI command stores the throttle parameters. After entering this command,
you must issue the CLI command, restart inspection-core for the changes to take
effect.

This command is used to filter out (ignore) large packets. Throttling has two
modes: Thresholds, per session - ignore sessions when identifying a long enough
burst (duration configurable) of large packets (size configurable), and stop ignoring
the session when traffic goes under a certain threshold (also configurable); and
Overall - ignore all packets larger than a certain size (configurable) in all sessions.

This throttling mode completely ignores long and excessive non-database packets
smaller than a predefined size (useful for VNC clients and other types of
white-noise traffic). Use it for network traffic through a SPAN port or hardware TAP.
For S-TAP traffic, it applies only to network TCP traffic picked up by PCAP. See also
the CLI command, store s2c.

Syntax

store throttle [default | size <s> interval <i> trigger <t> release <r>]

USAGE: store throttle size S interval I trigger T release R

where 0<=S<=2^17 (bytes), 1<=I,T,R<=2^31 (seconds)

OR store throttle default

Show Command

show throttle

Throttle parameters:

Packet size: 228000

Time interval: 604800

Trigger level: 10000000

Release level: 10000000

Parameters

default - Enter the keyword default to restore the system defaults (no other
parameters are used). The default throttling parameters are never throttle.

s - The packet size in bytes, up to a maximum of 2^17 (131072).

The remaining parameters are in seconds, up to a maximum of 2^31 (2147483648):

i - The time interval

t - The trigger level

r - The release level

Note: To restore the throttle defaults, use the CLI command, store throttle default.
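
As an illustration, a hypothetical per-session configuration and a reset back to the defaults might look like the following (the numbers are examples only and must stay within the documented limits):

CLI> store throttle size 65536 interval 60 trigger 300 release 60
CLI> show throttle
CLI> store throttle default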

store timeout
Sets the timeout value of a CLI session and/or fileserver session. The default value
is 600 seconds. A timeout will also close the CLI session.

If the fileserver is stopped because of a timeout, a message will appear: Warning :
Fileserver stopped because of timeout. The file upload may not be complete.
Stopping the process.



Use the CLI commands, show timeout db_connection, to show the socketTimeout
value in the conf file, and store timeout db_connection <value>, to set the value of
the timeout. The value should be greater than 0. The default value is 25000
seconds. These CLI commands are used in managing the communications between
the Central Manager and the managed unit when DNS is not configured.

Syntax

store timeout cli_session <n>

store timeout fileserver_session <n>

store timeout db_connection <n>

Show command

show timeout cli_session 600

show timeout fileserver_session 600

show timeout db_connection 25000
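
For example, a hypothetical session that lengthens the CLI timeout to 30 minutes and the fileserver timeout to 20 minutes might look like this (the values are illustrative only):

CLI> store timeout cli_session 1800
CLI> store timeout fileserver_session 1200
CLI> show timeout cli_session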

store transfer-method

Sets the file transfer method used for CSV/CEF export. For export files, use the
CLI command, store transfer-method csv, to set the method of transfer. For
backup/archive, use the CLI command, store transfer-method backup, to set the
method of transfer.

Syntax

store transfer-method <FTP | SCP>

Show Command

show transfer-method

Note: Files sent from one IBM Guardium appliance to another (from a collector to
an aggregator, for example) are always sent using SCP.

store uid_chain_polling_interval
Set the interval for UID Chain polling with this CLI command. UID chain is a
mechanism which allows S-TAP (by way of K-Tap) to track the chain of users that
occurred prior to a database connection.

Set the interval to 0 to turn off the UID Chain processing, in order to improve
database performance. If the UID Chain processing is turned off, then calculating
the UID Chain and updating children sessions are skipped.

Note: For any database, the UID chain might not be logged for sessions that are
very short.

Syntax

store uid_chain_polling_interval <N>



where N is time in minutes (>= 1 minute; default is 2 minutes)

set N = 0, to turn off the UID Chain processing

Show command

show uid_chain_polling_interval
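
For example, to poll every 5 minutes, or to turn off UID Chain processing entirely, the commands might look like the following:

CLI> store uid_chain_polling_interval 5
CLI> store uid_chain_polling_interval 0
CLI> show uid_chain_polling_interval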

store upd_session_end

This CLI command adds an option to skip the update for the session_end time.

Syntax

store upd_session_end [enable | disable]

Show command

show upd_session_end

store unit type

Use this CLI command to set unit type attributes for the Guardium appliance. See
the Unit Type Attributes table for a description of all unit type attributes that can
be displayed by this command.

Syntax

store unit type [manager | standalone] [netinsp] [stap] [mainframe] [sink]

Collected DRDA traffic can be sent to Optim Query Capture Replay with a
microsecond timestamp, since OQCR requires a granularity of 1 microsecond. Use
the CLI command, store unit type sink, to switch from a granularity of 1
millisecond to 1 microsecond.

Show Command

show unit type

Note: Some attributes listed are set using the store unit type command, and
cleared using the delete unit type command. One attribute (aggregator) is set
only when the IBM Guardium software is installed, and cannot be modified except
by re-installing the IBM Guardium software.

Unit Type Attributes


The Guardium system unit type attributes that can be displayed by the show unit
type command are described in the table. Except where noted, these attributes can
be set using the store unit type command, and cleared using the delete unit
type command.
Table 3. Unit Type Attributes

mainframe - The unit is a mainframe (z/OS®) network inspection appliance.

manager - Central manager functions are enabled for this unit.

netinsp - Inspection of network traffic is enabled.

network route static - Removes one line off the static routing table.

standalone - Local management (independent of a central manager).

stap - The unit can receive data from and manage S-TAP and CAS agents.
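
As an illustration, a hypothetical sequence that enables S-TAP management on a standalone unit, later clears the network inspection attribute, and then displays the result might look like the following (the attribute combination is an example only, and the delete unit type form shown here is assumed to mirror the store form):

CLI> store unit type standalone stap
CLI> delete unit type netinsp
CLI> show unit type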

unregister management

The unregister command restores the configuration that was saved when the
appliance was registered for central management. If that happened under a
previous release of the IBM Guardium software, restoring that configuration
without first applying a patch to bring the saved configuration to the current
software release level will disable the appliance, potentially causing the loss of all
data stored there. Accordingly, do not unregister a unit until you have verified that
the pre-registration configuration is at the current software release level. If you are
unsure about how to verify this, contact Technical Support before unregistering the
unit.

Syntax

unregister management

Notes
v This command is intended for emergency use only, when the Central Manager is
not available.
v After unregistering using this command, you should also unregister from the
Central Manager (from the Administration Console), since that is the only way
the count of managed units will be reduced. The count of managed units is
authorized by the product key.

diag CLI command


Use this CLI command to access troubleshooting and maintenance utilities
through diag.

Use the diag command as directed by Technical Support.

There are no functions that you would perform with this command on a regular
basis. Each main menu entry is described in a separate topic (see Main Menu
Commands).

Troubleshooting and Maintenance Utilities through DIAG:


v Aggregator Fix Schema – brings all imported tables that have older schema than
that of the aggregator to the schema of the latest patch level of the aggregator
(runs in the background and may take several hours to complete). Note: There
may be scenarios in which (a) the aggregator will not have the latest patch level
or (b) some of the imported tables are of the latest patch level—resulting in not
all imported tables having the latest patch level.
v Aggregator Maintenance – full analysis and recovery of the Aggregator. This
utility collects AGG-related logs and places them in the diag export folder, calls
the Aggregator Fix Schema to sync the schema of all databases, cleans the AGG
workspace, and restarts the merge process to ensure full analysis of all imported
tables (runs in the background and may take several hours to complete).
v Clean Static Orphans on an Aggregator – This option should be used only by
Technical Support and only in those cases where static tables grow too much
and need to be cleaned. This utility cleans all the old construct records that
are no longer in use.

Opening the Diagnostics Main Menu

To use the diag command, follow the procedure outlined:


1. At the command line prompt, log into the Guardium appliance with CLI.
The Guardium user attempting to use the diag command must have an
assigned CLI or admin role. The only user who has a CLI role by default is
admin. The user with a CLI or admin role is permitted to enter the diag
command, use the unlock admin and unlock accessmgr CLI commands, and
use the export audit-data CLI command without restrictions. The user with a
CLI role does not have to enter user name and password required of a GUI
login and does not go through any further role check.
If the Guardium user attempting to use CLI does not have a CLI or admin role,
CLI will not start. The accessmgr assigns CLI and admin roles.
2. After starting CLI, enter the diag command (with no arguments) at the
command line prompt.
3. The Guardium user attempting to use the diag command must have an
assigned diag role on the Guardium system. By default, only admin has this
assigned role. Access to diag is allowed or disallowed based on the role
assignment of this user (access to diag is permitted only if this user has the
diag role). The accessmgr assigns diag roles.
4. You are presented with the main command menu. Do one of the following to
move the option selection cursor (which is selecting the first item in the
example):
v Type the desired entry number (the selection cursor moves to the selected
entry).
v Use the Up or Down arrow key to select the desired entry.
5. Press the Spacebar, the Left arrow key, or the Right arrow key to move the
command selection cursor in the display (which is selecting the OK command
in the example).
6. Perform an action by selecting the appropriate option in the display area and
then doing one of the following:
v Select the appropriate command with the command selection cursor, then
press the Enter key
v Click on the appropriate action command.

About the diag Output


The diag command creates output in two directories:
v .../guard/diag/current
v .../guard/diag/depot

This output is accessed through the fileserver CLI command. See fileserver for
further information.

Each directory is described in the following subsections.



.../guard/diag/current Directory
Most output from the diag commands is written in text format to the current
directory. For most commands, this directory contains a separate output file. Each
time you run the same command, output is appended to the single file for that
command. For a smaller number of commands, a separate file is created for each
execution, usually incorporating a date and time stamp in the filename.

We recommend that you “clean up” after each session, so in subsequent sessions
you are not looking at old information. When you pack files to a single
compressed file for exporting (see the following topic), all files in the current
directory are deleted. Alternatively, you can use the Delete recordings command of
the Output Management menu to delete individual files.

The files in the current directory are easy to identify since the names are created
from menu and command names. For example, after you use the File Summary
command from the System Interactive Queries menu, a file named
interactive_filesummary.txt is created in the current directory.

If you look at the current directory while in the process of using a command, you
may see a hidden temporary file with the same name as the one that will contain
the output for that command. The temporary file will be removed when the output
is appended to the command output file.

.../guard/diag/depot Directory

When you pack the diag output files in the current directory to a compressed file
(to send to Guardium Technical Support, for example), it is stored in the depot
directory. The filename is in the format diag_session_<dd_mm_hhmm>.tgz,
where the variable portion of the name indicates when the file was created. For
example, a file created at 12:15 PM on May 20th would be named as follows:
diag_session_20_5_1215.tgz.

After exporting files (see the Export recorded files topic), you can remove them
from the depot directory using the Delete recordings command of the Output
Management menu.

1 Output Management
The Output Management commands control what is done with the output
produced by the diag command. Each Output Management command is described
separately.

1.1 End and pack current session


Use this command to pack all diagnostic files in the current directory into a single
compressed file, and remove those files from the current directory. When you enter
this command, there is no feedback to indicate that the command has completed.
You can verify that the command has finished by displaying the directory of the
depot directory. When the command completes, there is a file named in the
following format: diag_session_<mm_dd_hhmm>.tgz, where the variable portion
of the name is a date and time stamp, as described previously. Use the Export
recorded files command of the Output Management menu to send the file to
another system.



1.2 Delete recordings
Use this command to delete files in the depot or current directory. (To delete only
the current session files, use the Delete current session files command.) When you
enter this command, the depot directory structure displays:

You can navigate the directories using the Up and Down arrow keys and pressing
Enter. For example, selecting ../ and pressing Enter moves the selection up one
level in the directory structure.

You could then select the current directory and press enter, to navigate down to
that folder and delete individual command output files. Note that you can
navigate to other directories, but you cannot delete files except from the current
and depot directories.

When you have selected the file you want to delete, press Enter.

Caution: You will not be prompted to confirm the delete action.

1.3 Export recorded files

Use this command to send a file from the depot directory to another site. To export
a file:
1. Select Export recorded files from the Output Management menu. The depot
directory displays.
2. Select the file to be sent or use the ../ and ./ entries to navigate up or down
in the directory structure. (However, keep in mind that you can only export
files from the depot directory.)
3. With the file to be transmitted selected, press Enter.
4. You are prompted to select FTP or exit. Select FTP and press Enter.
5. You are prompted to supply a host name. Enter the host name of the receiving
system (or its IP address), and press Enter.
6. You are prompted for a user name. Enter a user account name for the
receiving system, and press Enter.
7. You are prompted for a password. Enter the password for the user on the
receiving system.
8. You are prompted to identify a directory to receive the sent file on the
receiving system. Enter the path relative to the ftp root of the directory to
contain the file on the receiving system and press Enter.
9. You are prompted to confirm the details of the transfer (the file to be sent and
its destination). Press Enter to perform the transfer, or select Cancel and press
Enter to start over.
10. You are informed of the success (or failure) of the operation.

1.4 Delete current session files


Use this command to delete files created during the current session.

1.5 Exit
Use the Exit command to return to the main menu.



2 System Static Reports
Use the System Static Reports command of the Main Menu to produce an
extensive set of reports.
1. Select System Static Reports from the Main Menu. You are informed that the
process is running.
2. After the report has been created, it displays in the viewing area. Note that this
report is lengthy and may be easier to view using a text editor, after exporting
it to a desktop computer.
Use the Up and Down arrow keys to scroll up or down in the report. When
you are done viewing the report, press Enter to return to the Main Menu.

System Static Reports Overview

The following subtopics provide an outline of the major components of the System
Static Reports output. The fragments of output shown are intended to illustrate the
type and level of information contained in the report, rather than provide a
detailed description of the actual contents (that is beyond the scope of this
document).

System Configuration Information

The System Static Reports output describes the build version, the patches applied,
the current system up time, and name server information:
Build version: 34e1eb12eb68ba76cb49028251c9a0d6 /opt/IBM/guardium/etc/cvstag
Patches:
2009/02/22 16:16:50: START Installation of ’Update 5.0’
2009/02/22 16:18:04: Installation Done - Successfully Installed

< lines deleted... >

Current uptime:
09:03:43 up 6 days, 17:34, 1 user, load average: 0.44, 0.50, 0.41
System nameservers:
192.168.3.20
DB nameservers:
192.168.3.20
Gateway: 192.168.3.1 (system) 192.168.3.1 (def)

Next, the file system information displays (shown partially):


Filesystem Size Used Avail Use% Mounted on
/dev/hdc3 2.0G 1.1G 813M 58% /
/dev/hdc1 97M 9.2M 83M 10% /boot
none 504M 0 504M 0% /dev/shm
/dev/hdc2 71G 1.2G 66G 2% /var
total: used: free: shared: buffers: cached:
Mem: 1055199232 1041711104 13488128 0 63275008 186220544
Swap: 536698880 295432192 241266688
MemTotal: 1030468 kB
MemFree: 13172 kB

< lines deleted... >

This is followed by information about the mail and SNMP servers configured:
SMTP server: 192.168.1.7 on port 25 : REACHABLE
SMTP user: undef
SMTP password: undef
SMTP auth: NONE
SNMP trapsink: undef UNREACHABLE
SNMP trap community: undef
SNMP read community: undef

The final section of the system configuration section describes the network
configuration for the unit: IP address, host and domain names, etc:
eth0: 192.168.3.101 (system) 192.168.3.101 (def)
hostname: (system) g1 (def)
domain: (system) guardium.com (def)
mac address: 00:04:23:A7:77:F2 (MAC1) 00:04:23:A7:77:F2 (MAC2)
unit type: 548 Standalone STAP

Internal Database Information

The next major section of the System Static Reports output contains information
about the internal database status and threads (only the first few threads are
shown):
uptime 77097 seconds.
27 threads.
78545028 queries.
+------+------------+-----------------------------+---------+---------+------+-----------
| Id | User | Host | db | Command | Time | State | +---------
| 1137 | enchantedg | localhost | TURBINE | Sleep | 26 |

< lines deleted... >

The list of threads is followed by an analysis of table status.

Web Servlet Container Information

The next several sections of the System Static Reports output contain information
about the Web servlet container environment (Tomcat):
============================================================================
Currently defined Tomcat port is 8443.
The TOMCAT daemon is running and listening on port(s): 8005 8443.
Currently OPEN ports
java run by tomcat on port *:8443

< lines deleted... >


============================================================================

These are the nanny latest actions:


May 19 14:13:09 guard nanny:[5528]: Also checking tomcat.
May 19 14:13:09 guard nanny:[5528]: Going for my initial nap.

< lines deleted... >

This is the TOMCAT command line:


463 sh -c ps -o pid,cmd -e | grep Dcatalina.base
21917 grep Dcatalina.base.

Inspection Engine Information

The next major section of the System Static Reports output contains information
about the inspection engine:
============================================================================
This is the SNIF (pid: 13036) command line: 13036 /opt/IBM/guardium/bin/snif.
This is the SNIF status:
Name: snif
State: R (running)
Tgid: 13036

< lines deleted... >


============================================================================

Current timestamp is 2009-05-20 11:56:41


This is the last timestamp at GDM_CONSTRUCT_INSTANCE: 2009-05-20 11:56:41
This is the last timestamp at GDM_EXCEPTION: 2009-05-20 11:56:41
This is the last timestamp at GDM_POLICY_VIOLATIONS_LOG: 2009-05-20 11:56:41

============================================================================

Snif buf usage at Fri May 20 11:56:44 2009:


100 204800 buffers out of 204800
126 connection used, 32642 unused, 0 dropped (sniffer), 9 ignored (analyzer)
0 bytes lost, 60 connections ended, 601752099 bytes sent, 579063 request sent
Dropped Packets: 0 buffer full, 0 too short , 451 ignored
time now is 1116604603
Analyzer/Parser buffers size: 6 (66533) 0 (62902)
ms-tsql-logger 0 (11331)
syb-tsql-logger 0 (70)
ora-tsql-logger 79 (67803)
db2-sql-logger 0 (20544)

< lines deleted... >

IP Tables Information

The next major section contains information about the IP tables:


===========================================================================
IPTABLES:
-------------
tcp -- 192.168.2.0/24 192.168.1.0/24 tcp spts:1521:60000 set 0x23
tcp -- 192.168.1.0/24 192.168.2.0/24 tcp dpts:1521:60000 set 0x22
< lines deleted... >

S-TAP Information
The next major section contains S-TAP information:
============================================================================
STAP:
----
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:9500
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9500
2696 148K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:16016
2835 175K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:16016

< lines deleted... >

IP Traffic Information
The next major section contains IP traffic information:
IP traffic statistics.
OUTPUT OF ETH0
Fri May 20 11:57:04 2012; ******** Detailed interface statistics started ********

*** Detailed statistics for interface eth0, generated Fri May 20 11:58:04 2009

< lines deleted... >

OUTPUT OF ETH1
Fri May 20 11:57:04 2012; ******** Detailed interface statistics started ********

*** Detailed statistics for interface eth1, generated Fri May 20 11:58:04 2009

Total: 82440 packets, 53892382 bytes
(incoming: 82440 packets, 53892382 bytes; outgoing: 0 packets, 0 bytes)
IP: 82440 packets, 52632747 bytes
(incoming: 82440 packets, 52632747 bytes; outgoing: 0 packets, 0 bytes)

< lines deleted... >

Information Engine STDERR and STDOUT Information

The next section contains the last messages output by the sniffer:
Snif STDERR:

< lines deleted... >

Snif STDOUT:
Fri_20-May-2009_04:04:35 : Guardium Engine Monitor starting
Fri_20-May-2009_04:14:37 : Guardium Engine Monitor starting
Fri_20-May-2009_04:24:38 : Guardium Engine Monitor starting

< lines deleted... >

Import Directory Information

The next section lists the import directory contents:


These are the contents of the importdir directory:
total 0

Aggregator Activity Information


This section lists aggregator activities (there are none in the example):
============================================================================
This is the aggregator last activities:

Audit Report

This section lists the following summary information (see example):


============================================================================
Range of time in logs: 01/14/10 13:12:26.348 - 01/18/10 12:48:01.073
Selected time for report: 01/14/10 13:12:26 - 01/18/10 12:48:01.073
Number of changes in configuration: 4 - changes to the audit configuration
Number of changes to accounts, groups, or roles: 0
Number of logins: 22 - logins into the machine - ssh and console
Number of failed logins: 114
Number of authentications: 22 - "su", etc.
Number of failed authentications: 5
Number of users: 2
Number of terminals: 18
Number of host names: 9
Number of executables: 7
Number of files: 0
Number of AVC’s: 0
Number of MAC events: 0
Number of failed syscalls: 0
Number of anomaly events: 3
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 0
Number of process IDs: 9173
Number of events: 98669
============================================================================

Anomaly Report
This section lists the following (see example):
============================================================================
# Date Time Type Exe Term Host AUID Event
============================================================================
1. 01/14/10 13:16:02 ANOM_PROMISCUOUS /usr/sbin/brctl (none) ? -1 8 - this is expected
to appear - it means the bridge is listening to all traffic

Authentication Report

This section lists the following (see example):


============================================================================
# Date Time Type Exe Term Host AUID Event
============================================================================
1. 01/14/10 13:13:22 tomcat ? console /bin/su yes 4
2. 01/14/10 13:16:44 tomcat ? console /bin/su yes 11
3. 01/14/10 13:16:44 tomcat ? console /bin/su yes 17
4. 01/14/10 13:16:45 tomcat ? console /bin/su yes 23
5. 01/14/10 13:16:48 tomcat ? console /bin/su yes 29
6. 01/14/10 13:22:29 tomcat ? ? /bin/su yes 155
7. 01/14/10 13:28:10 ? ? tty1 /bin/login no 252
8. 01/14/10 13:28:20 ? ? tty1 /bin/login no 254

Login Report

This section lists the following (see example):


============================================================================
# Date Time Type Exe Term Host AUID Event
============================================================================
1. 01/14/10 13:22:15 root 192.168.2.9 sshd /usr/sbin/sshd no 142
2. 01/14/10 13:22:15 root 192.168.2.9 sshd /usr/sbin/sshd no 143
3. 01/14/10 13:22:17 root 192.168.2.9 sshd /usr/sbin/sshd no 144
4. 01/14/10 13:22:17 root 192.168.2.9 sshd /usr/sbin/sshd no 145
5. 01/14/10 13:22:20 root 192.168.2.9 sshd /usr/sbin/sshd no 146

3 Interactive Queries
Select System Interactive Queries from the main menu to open the Interactive
Queries menu. (Use the Down arrow key to scroll past the tenth item to see all
items on this menu.)

In addition to displaying the requested information, each interactive query
command creates output in a separate text file in the current directory. See the
Overview topic for more information about the files created.

Each command is described in the following sections.

3.1 Files Changed

Use the Files Changed command to display a list of files changed either before or
after a specified number of days.
1. Select Files Changed from the Interactive Queries menu. You are prompted to
enter a number of days. Type a number and press Enter.
2. You are asked if you are interested in the files changed before or after that
number of days. Select 1 or 2 and press Enter.
3. The full directory path for each changed file is displayed. Note that if not all
data fits in the display area, use the Up and Down arrow keys to scroll through
the data. The current position in the file is indicated by the number in the
display. The white bars in the display area indicate the presence of more data
with a plus sign.

3.2 List Folder

Use this command to list the contents of various directories.


1. Select List Folder from the Interactive Queries menu.
2. You are prompted to select a directory. Select a directory and press Enter. The
selected directory is displayed. Remember that if multiple commands of the
same type are issued, the data for each execution of the command is appended
to the single text file maintained for that command.
3. Press Enter or click Exit when you are done.

3.3 Summarize Folder

Use the Summarize Folder command to display the output of the du (Disk Usage)
command:
1. Select Summarize Folder from the Interactive Queries menu. There are no
prompts. You are presented with a display of disk use for various directories.
2. Use the Up and Down arrow keys to scroll through the directories.
3. Press Enter or click Exit when you are done.

3.4 File Summary and Export

Use this command to list all or some portion of a log file.


1. Select File Summary from the Interactive Queries menu.
2. You are prompted to select a file. Use the Up and Down arrow keys to scroll
the selection cursor to the file you want to view.
3. Press Enter or click OK.
4. You are prompted to select the number of lines to display. Make your selection
and press Enter.
5. You are prompted to enter an optional search string. Use this box if you are
searching for a particular log message (you can enter a regular expression).
Otherwise leave the box empty and press Enter.
6. Following the prompt, press Enter to answer yes, meaning that only unique
messages will be displayed. Otherwise select No and press Enter (all messages
will be displayed).
Be aware that when the Summary Style is used, variables are replaced by the
pound sign character (#). For some log data containing variables such as IP
addresses or dates, the replacements can be extensive.

3.5 Test Email


Use this command to send a test email using the configured SMTP server.
1. Select Test Email from the Interactive Queries menu.
2. You are prompted to select a recipient. Select Custom and press Enter.
3. You are prompted to supply an email address. Type an email address and press
Enter. You will be informed of the output of the operation. Note that on the
Administration Console, the Test Connection link in the SMTP pane of the
Alerter configuration panel only tests that an SMTP port is configured, not that
mail can actually be delivered via that server. You can use this command to test
email delivery without having to configure and trigger a statistical or real-time
alert, or an audit process notification.

3.6 Test SNMP

Use this command to send a test SNMP trap to the configured SNMP server.
1. Select Test SNMP from the Interactive Queries menu.
2. You are informed of the activity and the results. Note that on the Alerter
Configuration panel, the Test Connection link in the SNMP pane only tests that
an SNMP port is configured, not that a trap can actually be delivered via that
server. You can use this command to test trap delivery without having to
configure (and trigger) a statistical or real-time alert, or an audit process
notification.

3.7 Report Query Data

Use this command to display the actual select statement used for a report query.
This might be useful if a user-written report is producing unexpected output.
1. Select Report Query Data from the Interactive Queries menu.
2. You are prompted to make a selection from a list of report titles. Use the Up
and Down arrow keys to select an entry and press the Enter key. Each entry in
this list is a Report entity. All pre-defined reports are listed first. These are
numbered in the range 100-225 (for version 3.6.1 – the numbers will most likely
grow incrementally with each release, as more pre-defined reports are created).
User written reports are listed following the pre-defined reports, beginning
with number 20001 (for version 3.6.1).
The SELECT statement for the selected report is displayed.

3.8 GDM Queries

Use this command to display a count of observed SQL calls during a 100 second
interval.
1. Select GDM Queries from the Interactive Queries menu.
2. A message displays requesting your patience. Select yes to continue. The
CMD_CT column on the display lists the number of observed SQL calls from
the specified clients to the specified servers.
3. Press Enter when you are done viewing the report.

3.9 Generate TCP Dump


Use this command to create a TCP dump. For this command, output is written to a
command file only and not to the screen. Unlike most other commands, a separate
file is created in the current directory for each execution of this command. The file
name is in the format: tcpdump_<mmyyyy-hhmmss>, where the variable portion
is a date and time stamp: mmyyyy is the month and year, and hhmmss is the
hours, minutes, and seconds.
1. Select Generate TCP dump from the Interactive Queries menu.
2. You are prompted to select an interface. Select a port and press Enter.
3. You are prompted for an optional filter IP address. If you are interested in
traffic from only a specific address, enter that IP address and press Enter.
Otherwise, just press Enter.

4. You are prompted for an optional port number. If you are interested in traffic
from only a specific port, enter that port number and press Enter. Otherwise,
just press Enter.
5. You are prompted to select how many seconds of traffic to capture. Select a
number of seconds and press Enter.
6. You are prompted to press Enter to start collecting data. Press Enter. You are
returned to the menu after (approximately) the specified number of seconds.
7. To view the TCP dump data, select the Read TCP dumps command or export
the file (see Export Reported Files on the Output Management menu, described
previously).

3.10 Read TCP Dumps

Use this command to display a TCP dump file created previously.


1. Select Read TCP dumps from the Interactive Queries menu.
2. You are prompted to select a file. The TCP dump files are listed from oldest to
newest. The file name is in the format: tcpdump_<mmddyy-hhmmss>, where
the variable portion is a date and time stamp: mmddyy is the month, day, and
year; and hhmmss is the hours, minutes, and seconds. Select the file you want
to view and press Enter.
3. The selected file displays. Use the Up and Down arrow keys to scroll through
the display and press Enter when you are done.

3.11 Watch Buffer

Use this command to watch activity in the Guardium buffers:


1. Select Watch Buffer from the Interactive Queries menu. The display is updated
every second.
2. Press Ctrl-C to close the display.

3.12 SLON Utility

Use this command to run the slon utility, which tracks packets. Typically, you
would only run this command as directed by Technical Support. For this
command, output is not written to the screen. Output is written to one of two
command files in the current directory, for each execution of the command:
apks.txt.<day_dd-mmm-yyyy_hh.mm.ss.ttt> OR requests.txt.<day_dd-mmm-yyyy_hh.mm.ss.ttt>

The variable portions of the file names are date and time stamps. For example,
apks.txt.Fri_20-May-2011_08.52.00.789.
1. Select Slon Utility from the Interactive Queries menu.
2. Select the action to be performed and click OK. The choices are:
(a) to dump Analyzer rules info
(f) to filter Analyzer packets based on IP and/or mask
(p) to dump packets to apks.txt
(l) to dump logger requests to requests.txt
(m) to dump STAP packets (Select how long to run. Wait for completion and
then check the msg-dump file under /var/log/guard/diag/current/tap/ )
(r) to record IPQ traffic
(s) to dump State machine info
(t) to configure throttle parameters
3. Regardless of your selection, you will be prompted to select the time period for
the activity. Select a time period and press Enter.
4. You are notified that the program will run for the specified time and prompted
to press Enter. Press Enter and wait.
5. When processing completes, a message will be displayed. You can use the File
Summary command to display the output of this command. Because this
command can produce a large amount of data, you will probably want to
export the file to another system, where you can view the contents using a text
editor. (Pack the current session data, and export the recordings as described
earlier in this section.)

3.13 Show Indexes

Use this command to show indexes for various internal tables:


1. Select Show Indexes from the Interactive Queries menu.
2. You are prompted to select a table. Select a table and press Enter to display the
indexes for that table.
3. Use the Up and Down arrow keys to scroll through the display. Press Enter
when you are done.

3.14 S-TAP Check


Use this command to display S-TAP definitions and traffic information:
1. Select S-TAP Check from the Interactive Queries menu.
2. The system’s unit type displays in numeric format. Press Enter.
3. You are prompted to select the number of seconds to monitor the S-TAP traffic.
Use the Up and Down arrow keys to make a selection and press Enter.
4. You are informed of approximately how long to wait for output, and prompted
to press Enter. Press Enter.
5. The S-TAP Definitions and Server Traffic reports display. Press Enter when you
are done viewing the report.

3.15 Interface Link Status


Use this command to display interface link status.
1. Select Interface link status from the Interactive Queries menu.
2. The status of all interfaces displays. Use the Up and Down arrows to scroll
through the display.
3. Press Enter when you are done. Note that this command displays the link
status only. To display interface configuration information, use the show
network interface all CLI command.

3.16 Show Throttle Data


Use this command to display throttle data.
1. Select Show Throttle data from the Interactive Queries menu.
2. Press Enter and wait 3 seconds for throttle statistics.
3. Use the Up and Down arrows to scroll through the display, and press Exit
when you are done.

3.17 Generate TCP dump and slon
Use this command to create a TCP dump and run the slon utility, which tracks
packets. Typically, you would only run this command as directed by Technical
Support. See the individual topics, Generate TCP dump, and Slon Utility.

3.18 Generate SSL dump


Use this command to create an SSL dump.
1. Select Generate SSL dump from the Interactive Queries menu.
2. Select an interface and press OK. Enter filter IP address and press OK. Enter
filter port number and press OK.
3. Select how long to run and press OK. Press OK and wait the specified time in
order to gather TCP dumps.
4. If you wish to view SSL dumps, press OK.
5. Press Exit when you are done.

3.19 View bash history

Use this command to display bash history.


1. Select View Bash History from the Interactive Queries menu.
2. Press OK.
3. Use the Up and Down arrows to scroll through the display, and press Exit
when you are done.

3.20 Generate GDM_Error dump

Use this command to create GDM_ERROR dumps.


1. Select Generate GDM_ERROR dump from the Interactive Queries menu.
2. Press OK and then enter the password. Press Enter.
3. Use the Up and Down arrows to scroll through the display, and press Exit
when you are done.

3.21 Prepare Tomcat Memory dump


When Tomcat encounters its first OutOfMemory error, it writes a memory dump to
/var/tmp/tomcat/tomcat.dmp. Use this command to compress, encrypt, and move
this file to /var/log/guard/diag/tomcat/ so that fileserv can retrieve it.
1. Select Prepare Tomcat Memory dump from the Interactive Queries menu.
2. Press OK.
3. Use the Up and Down arrows to scroll through the display, and press Exit
when you are done.

3.22 Extended Network Information


Click the Extended Network Information option under System interactive query to
display the network diagnostics information.

Example

SQLGuard Diagnostics

Network Parameters from ADMINCONSOLE_PARAMETER:

SYSTEM_NETMASK1: 255.255.255.0

SYSTEM_DOMAIN:

SYSTEM_DEFAULT_ROUTE:

SYSTEM_DNS1:

SYSTEM_DNS2:

SYSTEM_DNS3:

TOMCAT_IP:

MANAGER_IP:

HOST_MAC_ADDRESS:

SECOND_DEVICE:

3.23 Generate TCP dump in rotation

This selection is different from the other diag selections described in the sections
Generate TCP Dump and Generate TCP dump and slon.

For Generate TCP dump in rotation, enter a filter IP address (leave blank for all IPs).
Enter a filter port number. For the question, How long to run?, if the TCP dump in
rotation is already running, choose the option “Rotation OFF” or “Rotation” (ON).
If Rotation is selected, add a file size.

The TCP dump will be output to /var/log/guard/tcp.bin1 and
/var/log/guard.bin2 in rotation.

Select TCP dump in rotation again to stop the process loop_tcpdump.sh.

4 Perform Maintenance Actions


Select the Perform Maintenance Actions option from the Main Menu to open the
Maintenance menu. Use these commands only under the direction of Technical
Support. These do not need to be run on a regular basis.

4.1 TURBINE analysis (update index cardinality)


Use this command to optimize index cardinality on Guardium’s internal database.
A progress bar displays while the operation is running. When the operation
completes, you are returned to the Maintenance menu.

4.2 TURBINE optimize (rebuild indexes, takes longer)

Use this command to analyze and re-index Guardium’s internal database.


1. Select TURBINE optimize from the Maintenance menu. A
progress bar displays while the operation is running. When the operation
completes, you are returned to the Maintenance menu.

4.3 Clean disk space
Use this command to clean unused disk space. You are returned to the
Maintenance menu when the procedure completes.
1. Select Clean disk space from the Maintenance menu. You will be prompted to
select a directory.
2. Select the directory from which you want to remove files. The contents of the
directory will be listed, and you will be prompted to confirm that you want to
remove all files.
3. When the operation completes, you are returned to the Maintenance menu.

4.4 RAID maintenance

Use this command only under the direction of Technical Support. This command
provides access to the Management Menu of the RAID controller utility program,
which can be used to display the status of the RAID drives. If your system does
not have a RAID controller, an error message displays if you select this command.
You must be extremely careful when using the RAID controller utility program,
since several of the functions provided will erase all information on the disk.

4.5 Application Debugging Utility

Use this command to turn debugging on or off. You are prompted to enable or
disable logging, or to reset the system defaults.

4.6 Modify TURBINE watchdog threshold

Use this option to change the timeout limit for long queries.

4.7 Force unrecoverable MySQL to start

Use this option only when directed to do so by Technical Support.

4.8 Transfer backups and system recovery

Use this command to restore a backed up version of the internal database. You will
be prompted to confirm the operation.

4.9 Tomcat Logging Level


Use this command to select the component debug level. Choose one of the
following options:

Classifier, Data Level Security, Workflow, or Other.

Choose Classifier to select debug level options: ERROR, WARN, INFO, DEBUG,
ALL.

Choose DLS (data level security), Workflow, or Other (text input) to select debug
level options: ERROR, WARN, INFO, DEBUG, ALL.

If Other is chosen (text input separated by ','), enter valid components (dls,
workflow, audit, customtable, gui, other, job).

4.10 Aggregator Maintenance
Full analysis and recovery of the Aggregator. This utility collects AGG-related
logs and places them in the diag export folder, calls the Aggregator Fix Schema to
sync the schema of all databases, cleans the AGG workspace, and restarts the merge
process to ensure full analysis of all imported tables (runs in the background and
may take several hours to complete).

4.11 Aggregator Fix Schema

Brings all imported tables to the schema of the latest patch level (runs in the
background and may take several hours to complete).

4.12 Clean Static Orphans

This option should be used only by Technical Support and only in those cases
where static tables grow too large and need to be cleaned. This utility cleans all
the old construct records that do not have any instances associated with them. A
progress message displays during the Clean Static Orphans operation (for use on a
collector or aggregator).

5 Exit to CLI

Select Exit to CLI on the Main Menu. Press Enter to close the diag command and
return to the command line interface.

File Handling CLI Commands


Use these commands to back up and restore system information. Many of these
tasks can be performed from the Guardium user interface.

About Archived Data File Names

When Guardium data is archived (or exported to an aggregator), there is a
separate file for each day of data. Depending on how your export/purge or
archive/purge operation is configured, you may have multiple copies of data
exported for the same day. Archive and export data file names have the same
format:

<daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc

daysequence is a number representing the date of the archived data, expressed as
the number of days since year 0. The same date appears in yyyy-mm-dd format in
the data_date portion of the name.

hostname.domain is the host name of the Guardium appliance on which the
archive was created, followed by a dot character and the domain name.

run_datestamp is the date that the data was archived or exported, in
yyyymmdd.hhmmss format.

data_date is the date of the archived data, in yyyy-mm-dd format.

For example: 732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc

backup config
These commands back up and restore configuration information from the internal
administration tables. The backup config command stores data in the
/media/backup directory. The backup config command removes license and other
machine-specific information. The backup system command provides a more
comprehensive backup of the configuration and the entire system.

Syntax

backup config

restore config

backup system

This topic applies to backup and restore operations for the Guardium internal
database. You can back up or restore either configuration information only, or the
entire system (data plus configuration information, except for the shared secret key
files, which are backed up and restored separately, see the aggregator backup keys
file and aggregator restore keys file commands). These commands stop all
inspection engines and web services and restart them after the operation
completes.

Before restoring a file, be sure that the appliance has the system shared secret of
the system that created that file (otherwise, it will not be able to decrypt the
information). See About the System Shared Secret in the Guardium Administrator
Guide.

Note: System restore must be done to the same patch level of the system backup.
For example, if a customer backed up the appliance when it was on Version 7.0,
Patch 7 and then wishes to restore this backup into a newly-built appliance, then
there is a need to first install Version 7.0, Patches 1 to 7 on the appliance and only
then to restore the file.

There are two commands involved in the restore process:


v import file, which returns an archived backup file to the system
v restore system, which restores the system from a backup file previously returned
by an import file operation.

For all backup, import and restore commands, you will receive a series of prompts
to supply some combination of the following items, depending on which storage
systems are configured, and the type of restore operation. Respond to each prompt
as appropriate for your operation. The following table describes the information
for which you may be prompted.

Note:

One copy of the SCP/FTP/TSM/Centera file transfer is saved, regardless of whether
the transfer was successful or failed. As certain files may take hours to regenerate
(for example, system backup), having a readily available copy (in particular if the
file transfer failed) is of value to the user. Only one copy of each type of file is
retained (archive/system backup/configuration backup/etc.).

Backup system will copy the current license, metering, and number of datasources,
and then back up the data. Restore system will restore the data and then restore the
license, metering, and number of datasources. This sequence applies to the regular
restore system. Restoring from a previous system will require re-configuring the
license, metering, and number of datasources.

When configuring backups, a port number of zero (0) indicates that the default port
is being used for that protocol and does not need to be changed.
Table 4. backup system

SCP, FTP, TSM, Centera, Snapshot
    Select the method to use to transfer the file. TSM and Centera are displayed
    only if those storage methods have been enabled (see the store storage-method
    command).

Data or Configuration
    Select Configuration to back up definitions and configuration information
    only, or select Data to back up data in addition to configuration information.

restore from archive or restore from backup
    Select restore from archive to restore archived data, or select restore from
    backup to restore configuration information.

normal or upgrade
    If restoring from the same software version of Guardium, select normal. If
    restoring configuration information following a software upgrade of the
    Guardium appliance, select upgrade.

host
    The remote host for the backup file.

remote directory
    The directory for the backup file. For FTP, the directory is relative to the
    FTP root directory for the FTP user account used. For SSH, the directory path
    is a full directory path. For Windows SSH servers, use Unix-style path names
    with forward slashes, rather than Windows-style backslashes.

username
    The user account name to use for the operation (for backup operations, this
    user must have write/execute permission for the directory specified).
    Note: For Windows, a domain user is accepted with the format of domain\user

password
    The password for the username.

file name
    The file name for the archive or backup file. See About Archived Data File
    Names.
    A user can select multiple files by using the wildcard character * in the file
    name. Support of the wildcard character * is permitted when using transfer
    methods FTP, SCP, and Snapshot. Support of the wildcard character * is not
    permitted on transfer methods TSM or Centera.

Centera server
    Enter the Centera server name. If using PEA files, use the following format:
    <Host name/IP>?<full PEA file name>, for example:
    128.221.200.56?/var/centera/us_profile_rwqe.pea.txt

Centera clipID
    For a Centera restore operation, the Content Address returned from the backup
    operation. For example:
    6M4B15U4JM4LBeDGKCPF9VQO3UA

After you have supplied all of the information required for the backup or restore
operation, a series of messages will be displayed informing you of the results of
the operation. For example, for a restore system operation the messages should
look something like this (depending on the type of restore and storage method
used):
gpg: Signature made Thu Feb 22 11:38:01 2009 EST using DSA key ID 2348FF9E gpg: Good signature fro

Prevent backup/archive scripts from filling up /var

The backup process will check for room in /var before running and will fail if
there is not enough space. This process will also warn the user if there is
insufficient space for backup.

The archive process will check the size of the static tables and make sure there is
room in /var to create the archive.

An error is logged in the log file and the GUI if /var backup space usage is over 50%.

Example:
CLI> backup system
ERROR: /var backup space is at 60% used. Insufficient disk space for backup.

backup profile

Use this command to maintain the backup profile data (patch mechanism).

The backup file will be copied to the destination according to the backup profile.
If the parameter indicating whether to keep the backup file is set to "1" AND there
is enough disk space, the backup file will be kept within the system; otherwise it
is removed.

All four fields must be filled in: backup destination host, backup destination
directory, backup destination user, and backup destination password.

Syntax

show backup profile

Example
patch backup flag is 1
patch backup automatic recovery flag is 1
patch backup dest host is

Syntax

store backup profile

Example
Do you want to set up for automatic recovery? (y/n)
Enter the patch backup destination host:

export audit-data
Exports audit data from the specified date (yyyy-mm-dd) from various internal
Guardium tables to a compressed archive file. The data from a specified date will
be stored in a compressed archive file, in the /var/dump directory. The file created
will be identified in the messages produced by the system. See the example. Use
this command only under the direction of Guardium Support.

Note: Only users with the admin role may run this command.

Syntax

export audit-data <yyyy-mm-dd>

Example
If you enter the export audit-data command for the date 2005-09-16, a set of messages similar to the following is displayed.

The data from each of the named internal database tables is written to a text file, in
CSV format. The name of the archive file ends with exp.tgz and the remainder of
the name is formed as described in About Archived Data File Names.

You can use the export file command to transfer this file to another system.

delete audit-data
Use this command only under the direction of Guardium Support. This command
is used to remove compressed audit data files. You will be prompted to enter an
index number to identify the file to be removed. See About Archived Data File Names
for information about how archived data file names are formed.

Syntax

delete audit-data

show audit-data
Use this command to display any files that were created by executing the CLI
command, export audit-data. For more information about audit data files, see
export audit-data.

Syntax

show audit-data <yyyy-mm-dd>

export file
This command exports a single file named filename from the /var/dump,
/var/log, or /var/importdir directory. Use this command only under the direction
of Guardium Support. To export Guardium data to an aggregator or to archive
data, use the appropriate menu commands on the Administration Console panel.

Syntax

export file </local_path/filename> <user@host:/path/filename>

local_path must be one of the following: /var/log, /var/dump, or /var/importdir.
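
For example, a hypothetical transfer of an export file to a remote host (the file name, user, host, and destination path are illustrative only):

export file /var/dump/audit_export.exp.tgz admin@192.0.2.25:/tmp/audit_export.exp.tgz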

fileserver

Use this command to start an HTTP-based (different from an HTTPS) file server
running on the Guardium appliance. This facility is intended to ease the task of
uploading patches to the unit or downloading debugging information from the
unit. Each time this facility starts, it deletes any files in the directory to which it
uploads patches.

Note: Any operation that generates a file that the fileserver will access should
finish before the fileserver is started (so that the file is available for the fileserver).

Syntax

fileserver [ip address] [duration]

ip address is an optional parameter that allows access to the fileserver from the
indicated IP address. By default (without the parameter), access is restricted to the
IP address of the SSH client that started the fileserver.

duration is an optional parameter that specifies the number of seconds that the
fileserver is active. After the specified number of seconds, the fileserver shuts
down automatically. The duration can be any number of seconds from 60 to 3600.

In case of a security setup where browser sessions are redirected through a proxy
server, the IP address of the fileserver client will not be the same as the SSH client
that started the fileserver. Instead, the fileserver client will have the IP address of
the proxy server, and this address must be passed in the optional ip address parameter.
To find the proxy IP address, check your browser settings or the client IP addresses
shown in the Logins to Guardium report in the Guardium Monitor interface.
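
For instance, a hypothetical invocation that allows a proxy client at 192.0.2.45 to
reach the file server for one hour (the address and duration are illustrative only)
would be:

CLI> fileserver 192.0.2.45 3600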

Example

To start the file server, enter the fileserver command:

CLI> fileserver <ip address> <duration>

Starting the file server. You can find it at http://(name of appliance)

Press ENTER to stop the file server.

Open the fileserver in a browser window, and do one of the following:


v To upload a patch, click Upload a patch and follow the directions.
v To download log data, click Sqlguard logs, navigate to the file you want and
download as you would any other file.

When you are done, return to the CLI session and press Enter to terminate the
session.

import file
See backup config and restore config.

In the import file CLI command, the user can use the wildcard character * for the
file name with the SCP, FTP, and Snapshot transfer methods.

Syntax

import file

import tsm config

Uploads a TSM client configuration file to the Guardium appliance. You must do
this before performing any archiving or backup operations using TSM. You will
always need to upload a dsm.sys file, and if that file includes multiple servername
sections, you will also need to upload a dsm.opt file. For information about how to
create these files, check with your company’s TSM administrator.

You will be prompted for a password for the user account on the specified host.

Syntax

import tsm config <user@host:/path/[ dsm.sys | dsm.opt ]>

Parameters

user@host - User account to access the file on the specified host.

/path/[ dsm.sys | dsm.opt ] - Full path filename of the file to import.
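
For example, a hypothetical upload of a dsm.sys file (the user, host, and path are illustrative only):

import tsm config tsmadmin@tsm.example.com:/home/tsmadmin/dsm.sys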

Note: In setting up TSM on each collector, if the initial configuration fails, a
notification error results which says the test file could not be sent. Log in to the
collector as root and run a dsmc archive command to the TSM server with the same
credentials; the transfer of the TSM file now succeeds. Return to the GUI and
configure with the same options used before; the configuration now succeeds as
well.

If the tsm config has passwordaccess=generate, the password stored in a local file
is sought. The root user needs to run the dsmc command once to create this local
password file.

After uploading the tsm config file, if the tsm config has a passwordaccess generate
prompt, passwordaccess is set to be generated. You are then prompted:
Would you like to run a dsmc command now to ensure password is set locally (y/n)? If the answer i

import tsm property


Use this CLI command to upload a file to
/opt/tivoli/tsm/client/ba/bin/guard_tsm.properties.

The file size should be 1K.

Syntax

import tsm property user@host:file

This command will upload the input file to
/opt/tivoli/tsm/client/ba/bin/guard_tsm.properties.
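
For example, a hypothetical upload (the user, host, and path are illustrative only):

import tsm property tsmadmin@tsm.example.com:/home/tsmadmin/guard_tsm.properties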

restore config
These commands back up and restore configuration information from the internal
administration tables. The backup config command stores data in the
/media/backup directory. The backup config command removes license and other
machine-specific information. The backup system command provides a more
comprehensive backup of the configuration and the entire system.

When restoring a configuration, you must restore a backup that is of the same
version and patch level as the original appliance where the backup was created.

Syntax

backup config

restore config

restore db-from-prev-version

This command takes a backup from the immediate past system (backup data must
be provided, configuration backup is optional) and performs a restore on a newer
system. It includes upgrading the data, portlets, etc.

Perform a full system backup prior to upgrading your Guardium system. If for
some reason the upgrade fails and leaves the machine in a state where it cannot be
used, instead of trying to fix and re-run the upgrade, rebuild the machine as the
latest system, setting up this latest system with only the basic network information
(IP, resolver, route, system hostname, and domain).

The result will be the latest system with the data and customization (if
configuration file is provided) from the previous system.

First, try a regular upgrade from the previous system to the latest system. If this is
not successful, then use the backup as an alternative way to upgrade from the
previous system to the latest system.

Note: Older data being restored to an aggregator (not to investigation center), and
outside the merge period, will not be visible until the merge period is changed and
the merge process rerun.

To run this command, back up the current server for both data and configuration.
Once the backup is complete, install the latest release onto the same server. Next,
import both the data and configuration file from CLI via the import file command.
Then after the two backup files are imported, run, again from CLI, the command
restore db-from-prev-version. This restores the backup files (data and
configuration) from the older version to the newly installed server.

Note: If you are using Guardium in a non-English language, the restore CLI
command sets some strings, including report headers, to English. To view these
strings in the non-English language, run the store language CLI command after
you run the restore CLI command.

Syntax
restore db-from-prev-version
This procedure will restore and upgrade a previous backup on a newly-installed latest system. If t

Note:

Answering Y (yes) to the following questions during the execution of the CLI
command restore db-from-prev-version will result in all non-canned/customized
reports and panes being compressed into one pane with the name v.x.0 Custom
Reports.

Answering N (no) to the same questions will result in all panes being restored to
what they were in the previous version.
Update portal layout (panes and menus structure) to the new v8 default (current instances of custom r

restore keystore

Use this command only under direction from Technical Support.

Use this command to restore certifications and private keys used by the Web
servlet container environment (Tomcat).

Syntax

restore keystore

restore pre-patch-backup

Use this command only under direction from Technical Support.

Use this command to recover the pre-patch-backup when the appliance database is
up or down.

Syntax
restore pre-patchbackup
Please enter the information to retrieve the file: Is the file in the local s

restore system

This topic applies to backup and restore operations for the Guardium internal
database. You can back up or restore either configuration information only, or the
entire system (data plus configuration information, except for the shared secret key
files, which are backed up and restored separately, see the aggregator backup keys
file and aggregator restore keys file commands). These commands stop all
inspection engines and web services and restart them after the operation
completes.

Before restoring a file, be sure that the appliance has the system shared secret of
the system that created that file (otherwise, it will not be able to decrypt the
information). See About the System Shared Secret in the Guardium Administrator
Guide.

Note: System restore must be done to the same patch level of the system backup.

There are two commands involved in the restore process:


v import file, which returns an archived backup file to the system
v restore system, which restores the system from a backup file previously returned
by an import file operation.

For all backup, import and restore commands, you will receive a series of prompts
to supply some combination of the following items, depending on which storage
systems are configured, and the type of restore operation. Respond to each prompt
as appropriate for your operation. The following table describes the information
for which you may be prompted.

Note:

One copy of the SCP/FTP/TSM/Centera file transfer is saved, regardless of whether
the transfer was successful or failed. As certain files may take hours to regenerate
(for example, system backup), having a readily available copy (in particular if the
file transfer failed) is of value to the user. Only one copy of each type of file is
retained (archive/system backup/configuration backup/etc.).

Backup system will copy the current license, metering, and number of datasources,
and then back up the data. Restore system will restore the data and then restore the
license, metering, and number of datasources. This sequence applies to the regular
restore system. Restoring from a previous system will require re-configuring the
license, metering, and number of datasources.
Table 5. restore system

SCP, FTP, TSM, Centera, Snapshot
    Select the method to use to transfer the file. TSM and Centera are displayed
    only if those storage methods have been enabled (see the store storage-method
    command).

Data or Configuration
    Select Configuration to back up definitions and configuration information
    only, or select Data to back up data in addition to configuration information.

restore from archive or restore from backup
    Select restore from archive to restore archived data, or select restore from
    backup to restore configuration information.

normal or upgrade
    If restoring from the same software version of Guardium, select normal. If
    restoring configuration information following a software upgrade of the
    Guardium appliance, select upgrade.

host
    The remote host for the backup file.

remote directory
    The directory for the backup file. For FTP, the directory is relative to the
    FTP root directory for the FTP user account used. For SSH, the directory path
    is a full directory path. For Windows SSH servers, use Unix-style path names
    with forward slashes, rather than Windows-style backslashes.

username
    The user account name to use for the operation (for backup operations, this
    user must have write/execute permission for the directory specified).
    Note: For Windows, a domain user is accepted with the format of domain\user

password
    The password for the username.

file name
    The file name for the archive or backup file. See About Archived Data File
    Names.
    A user can select multiple files by using the wildcard character * in the file
    name. Support of the wildcard character * is permitted when using transfer
    methods FTP, SCP, and Snapshot. Support of the wildcard character * is not
    permitted on transfer methods TSM or Centera.

Centera server
    Enter the Centera server name. If using PEA files, use the following format:
    <Host name/IP>?<full PEA file name>, for example:
    128.221.200.56?/var/centera/us_profile_rwqe.pea.txt
    Note the ? between the server IP and the PEA file name. This IP address and
    the .PEA file come from EMC Centera. The question mark is required when
    configuring the path. The .../var/centera/... path name is important, as the
    backup may fail if the path name is not followed. The .PEA file gives
    permissions, username, and password authentication per Centera backup request.

Centera clipID
    For a Centera restore operation, the Content Address returned from the backup
    operation. For example:
    6M4B15U4JM4LBeDGKCPF9VQO3UA

After you have supplied all of the information required for the backup or restore
operation, a series of messages will be displayed informing you of the results of
the operation. For example, for a restore system operation the messages should
look something like this (depending on the type of restore and storage method
used):
gpg: Signature made Thu Feb 22 11:38:01 2009 EST using DSA key ID 2348FF9E gpg: Good signature from "

set up help (secondary disk for backup)

Install a secondary disk for backup on R610 or R710 appliances. Place it in slot
number 2 and proceed with setup snapshotdisk to configure the partition, format
the drive, and mount it. The two CLI choices are setup help and setup
snapshotdisk.

Syntax

setup [help | snapshotdisk | vmware_tools]

store language

Use this CLI command to change from the baseline English and convert the
database to the desired language. Installation of Guardium is always in English. A
Guardium system can only be changed to Japanese or Chinese (Traditional or
Simplified) after an installation.

The CLI command store language is considered a setup of the appliance and is
intended to be run during the initial setup of the appliance.

Running this CLI command after deployment of the appliance in a specific
language can change the information already captured, stored, customized,
archived, or exported.

For example, the psmls (the panes and portlets you have created) will be deleted,
since they need to be recreated in the new language.

Note: After switching from English to a desired language, it is not possible to
revert back to English using this CLI command. The Guardium system must be
reinstalled in English.

Syntax

CLI> store language [English | Japanese | SimplifiedChinese | TraditionalChinese]

Show command

show language

set up vmware tools

Use this CLI command to install VMware Tools on a Guardium system that runs on the ESX infrastructure.

Syntax

setup vmware_tools [ install | uninstall ]

Step 1: Open the VM client/console and select the VM instance that contains the
IBM Guardium appliance. Right-click the instance, select (from the popup menu)
Guest => Install/upgrade VMware tools. This enables the instance to access the
VMware tools via a mount point.

Step 2: Run the CLI command (from within the VM client/console), setup
vmware_tools install, to install VM tools.

VMware kernel panic after a reboot

A VMware ESX 4.1 virtual machine running Guardium might get a kernel panic after
a reboot.

To correct this situation, VMware recommends installing update 2 on ESX 4.1, or
setting CPU/MMU virtualization to use software only for the instruction set and
MMU virtualization. This option is found under Settings/ Options/ CPU/MMU
Virtualization (Use software for instruction set and MMU Virtualization).

Inspection Engine CLI Commands


Use these CLI commands to configure the inspection engines.

An inspection engine monitors the traffic between a set of one or more servers and
a set of one or more clients using a specific database protocol (Oracle or Sybase,
for example). The inspection engine extracts SQL from network packets; compiles
parse trees that identify sentences, requests, commands, objects, and fields; and
logs detailed information about that traffic to an internal database.

add inspection-engines
Adds an inspection engine configuration to the end of the inspection engine list.
The parameters are described. You can re-order your list of inspection engines after
adding a new one by using the reorder inspection-engines command. Adding an
inspection engine does not start it running; to start it running, use the start
inspection-engines command.

Syntax

add inspection-engines <name> <protocol>

<fromIP/mask> <port> <toIP/mask>

<exclude client list> <active on startup>

Parameters

name - The new inspection engine name; must be unique on the unit.

protocol - The protocol monitored, which must be one of the following: Cassandra,
CouchDB, DB2, DB2 Exit, exclude IE, FTP, GreenPlumDB, Hadoop, HTTP, ISERIES,
Informix, KERBEROS, MongoDB, MS SQL, Mysql, Named Pipes, Netezza, Oracle,
PostgreSQL, SAP Hana, Sybase, Teradata, or Windows File Share.

fromIP/mask - A list of clients, identified by IP addresses and subnet masks.
Separate each IP address from its mask with a slash, and multiple entries by
commas. An address and mask of all zeroes is a wild card. If the exclude client list
option is Y, the inspection engine monitors traffic from all clients except for those
in this list. If the exclude client list option is N, the inspection engine monitors
traffic from only the clients in this list.

port - The port or range of ports over which traffic between the specified clients
and database servers will be monitored. To specify a range, separate the two
numbers with a hyphen.

toIP/mask - The list of database servers, identified by IP addresses and subnet
masks, whose traffic will be monitored. Separate each IP address from its mask
with a slash, and multiple entries by commas. An address and mask of all zeroes is
a wildcard.

exclude client list - A Y/N value; defaults to N. If Y, the inspection engine
monitors traffic from all clients except for those identified in the client list. If N,
the inspection engine monitors traffic from only the clients listed in the client list.

active on startup - A Y/N value; defaults to N. If Y, the inspection engine is
activated on system startup.
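
For example, a hypothetical definition of an Oracle inspection engine that monitors
port 1521 traffic from any client to a single database server and is active on
startup (the name and addresses are illustrative only):

add inspection-engines OracleProd Oracle 0.0.0.0/0.0.0.0 1521 192.0.2.15/255.255.255.255 N Y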

delete inspection-engines
Removes the single inspection engine identified by its name. The name can include
only letters, numbers and blanks. If the inspection engine name contains any
special characters, use the administrator portal GUI to remove it.

Syntax

delete inspection-engines <name>

reorder inspection-engines

Specifies a new order for the inspection engines, using index values from the list
produced by the list inspection-engines command.

Syntax

reorder inspection-engines <index>, <index>...

Example

If the displayed indices are 1, 2, 3, and 4, the following command will reverse
order of the engines:

reorder inspection-engines 4,3,2,1

restart inspection-core

Restarts the inspection-engine core, but not the inspection engines. The collection
of database traffic stops when this command is issued.

Syntax

restart inspection-core

Note: To restart the collection of traffic for one or more specific inspection engines,
follow this command with one or more start inspection engine commands.
Alternatively, to restart the collection of traffic for all inspection engines, use the
restart inspection-engines command.

restart inspection-engines

Restarts the database inspection engine core and all inspection engines. The
collection of database traffic stops temporarily while this occurs and restarts only
when database connections re-initiate.

Syntax

restart inspection-engines

show inspection-engines
Displays inspection engine configuration information, as follows:

all - All inspection engines.

configuration <index> - Only the inspection engine identified by the specified
index, which is from the list inspection-engines command.

type <db_type> - Displays configurations of a specific database type, which must
be one of the supported monitored protocol types: Cassandra, CouchDB, DB2, DB2
Exit, exclude IE, FTP, GreenPlumDB, Hadoop, HTTP, ISERIES, Informix,
KERBEROS, MongoDB, MS SQL, Mysql, Named Pipes, Netezza, Oracle,
PostgreSQL, SAP Hana, Sybase, Teradata, or Windows File Share.

Syntax

show inspection-engines <all | configuration <index> | log sqlstrings | type <type> >
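
For example, a hypothetical invocation that lists the configurations of all Oracle
inspection engines (illustrative only):

show inspection-engines type Oracle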

start inspection-core

Starts the inspection-engine core.

Syntax

start inspection-core

start inspection-engines

Starts one or more inspection engines identified using index values from the list
produced by the list inspection-engines command.

Syntax

start inspection-engines <all | id>

start inspection-engines all

Starts all the inspection engines.

Syntax

start inspection-engines all

start inspection-engines id
Usage: start inspection-engines id <n>, where n is a numeric sniffer id.

Syntax

start inspection-engines id <n>

stop inspection-engines id
Usage: stop inspection-engines id <n>, where n is a numeric sniffer id.

stop inspection-core

Stops the inspection-engine core.

Syntax

stop inspection-core

stop inspection-engines
Stops one or more inspection engines identified using index values from the list
produced by the list inspection-engines command. It can also stop all
inspection-engines.

Syntax

stop inspection-engines <all | id>

stop inspection-engines all


Stops all the inspection engines.

Syntax

stop inspection-engines all

stop inspection-engines id

Stops one or more inspection engines identified using index values from the list
produced by the list inspection-engines command.

Syntax

stop inspection-engines id <n>, where <n> is a numeric sniffer id

store ignored port list

Sets the complete set of port numbers to be ignored by all inspection engines. The
list you specify completely replaces the existing list. Each number is separated
from the next by a comma, and no blanks or other white-space characters are
allowed in the list. Use a hyphen to specify an inclusive range of numbers.

Syntax

store ignored port list <n>

Example

store ignored port list 33,60-70

Show Command

show ignored port list

Network Configuration CLI Commands


Use the network configuration CLI commands to set IP addresses, handle
bonding/failover, handle secondary functionality, and reset networking.

Use the network configuration CLI commands to:


v Identify a connector on the back of the machine (show network interface port)
v Reset networking after installing or moving a network card (store network
interface inventory)

v Set IP addresses (store network interface ip, store network interface mask, store
network resolver, store network routes defaultroute)
v Enable or disable high-availability (store network interface high-availability)
v Configure the network card if the switch it attaches to will not auto-negotiate
the settings (store network interface auto-negotiation, store network interface
speed, store network interface duplex)
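
For example, a hypothetical initial network setup (all addresses are illustrative
only) might use the following sequence of commands, finishing with restart network
to apply the changes:

store network interface ip 192.0.2.10
store network interface mask 255.255.255.0
store network resolver 1 192.0.2.53
store network routes defaultroute 192.0.2.1
restart network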

restart network

Restarts just the network configuration. For example, change the IP address, then
run this CLI command.

Syntax

restart network

show network interface all

This command shows settings for the network interface used to connect the
Guardium appliance to the desktop LAN. The IP address, mask, state (enabled or
disabled) and high availability status will be displayed. If IP high-availability is
enabled, the system will display two interfaces (ETH0 and ETH3). Otherwise, only
ETH0 will be displayed.

Syntax

show network interface all

show network routes operational

Display the IP routing configuration in use.

Syntax

show network routes operational

Example

CLI> show net rout ope

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 nic1

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 nic2

0.0.0.0 192.168.3.1 0.0.0.0 UG 0 0 0 nic1

ok

CLI>

show network verify
Display the current network configuration.

Syntax

show network verify


CLI> show network verify

Current Network Configuration


--------------------------------------------------------------------------------
Hostname =
--------------------------------------------------------------------------------
Device | Address | Netmask | Gateway | Member of
--------------------------------------------------------------------------------
eth0 |
--------------------------------------------------------------------------------
Ethtool Options
--------------------------------------------------------------------------------
Device | Options (speed,autoneg,duplex)
--------------------------------------------------------------------------------
eth0 |
--------------------------------------------------------------------------------
DNS Servers
--------------------------------------------------------------------------------
Index | DNS Server
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
Static Routes
--------------------------------------------------------------------------------
Device | Index | Address | Netmask | Gateway
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Basic Network Settings Verified

store network interface auto-negotiation

If auto-negotiation is available on the switch to which a Guardium port is
connected, auto-negotiation will be used, and only the restart option of this
command will have any effect. Use this command to enable, disable, or restart
auto-negotiation for the network interface named ethN. Use the show network
interface inventory command to display all port names.

Syntax

store network interface auto-negotiation <ethN> <on | off | restart>

Show Command

show network interface auto-negotiation

store network interface duplex

Use this command only when auto-negotiation is not available on the switch to
which the Guardium port is connected. This command configures duplex mode for
the port named ethn. Use the show network interface inventory command to
display all port names.

Syntax

store network interface duplex <ethn> <half | full>

Show Command

show network interface duplex <ethn>

store network interface high-availability

Enables or disables IP Teaming (also known as bonding), which provides a
fail-over capability for the Guardium system primary IP address.

The two ports used (ETH0 and a second interface) must be connected to the same
network. There is a slight delay, caused by the switch re-learning the port
configuration. The default setting is off.

The port used for the primary IP address is always ETH0. When the
high-availability option is enabled, the Guardium system automatically fails over,
as needed, to the specified second interface, in effect transferring the primary IP
address to the second interface.

Note: IP Teaming and Secondary Interface cannot be enabled at the same time.

Syntax:
store network interface high-availability [on <NIC> | off ]

There is no show network interface high-availability command.

store network interface inventory

Resets the network interface MAC addresses stored in the Guardium internal
tables. This command should only be used after replacing or moving a network
card.

Note: The store network interface inventory command will detect on-board NIC
cards within the Guardium appliance and assign these cards as eth0 and eth1. This
command should only be run if specifically instructed to by Guardium Support as
it can rearrange the NIC cards.

Syntax
CLI> > store network interface inventory
WARNING: Running this function will reorder your NICS and may make the machine unreachable.
WARNING: It is suggested to run this from the console or equivalent.
Are you SURE you want to continue? (y/n)

Use the show command to display the port names and MAC addresses of all
installed network interfaces.

Syntax

show network interface inventory

Example

CLI> show network interface inventory

Current network card configuration:

Device| Mac Address| Member of

eth0| 00:50:56:3b:c3:73|

eth1| 00:50:56:8a:0d:fa|

eth2| 00:50:56:8a:0d:fb|

eth3| 00:50:56:8a:00:c1|

Note: The “Member of” column shows which NICs are in the bond pair, if a bond
exists.

store network interface ip


Sets the primary IP address for the Guardium appliance. When changing the
network interface IP address, you may also need to change its subnet mask. See
store network interface mask. See store network interface secondary to create and
manage a secondary IP address. Bonding/failover is managed from the CLI
command, store network interface high-availability.

Syntax

store network interface ip <ip address>

Show Command

show network interface ip

store network interface ip6

Sets the primary IP V6 address for the Guardium appliance. When changing the
network interface IP address, you may also need to change its subnet mask. See
store network interface mask. See store network interface secondary to create and
manage a secondary IP address. Bonding/failover is managed from the CLI
command, store network interface high-availability.

Syntax

store network interface ip6 <ip address>

Show Command

show network interface ip6

store network interface map


Maps the Ethernet port identified by ethn to the MAC address mac.

Syntax

store network interface map <ethn> <mac>
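
For example, a hypothetical mapping that binds eth1 to the MAC address shown in
the inventory example (the values are illustrative only):

store network interface map eth1 00:50:56:8a:0d:fa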

store network interface mask
Sets the subnet mask for the primary IP address. When changing the network
interface mask, you may also need to change its IP address. See store network
interface ip. Note that the subnet mask for a secondary IP address can be assigned
only from the System Configuration panel on the Administration Console.

Syntax

store network interface mask <ip mask>

store network interface mtu

Use this CLI command to set the MTU (Maximum Transfer Unit).
CLI> store network interface mtu
Usage: store network interface mtu <interface> <mtu>
where <interface> is the interface name,
that is one of ( eth0 )
and <mtu> is a number between 1000 and 9000.

Show command

show network interface mtu

eth0 1500

show network interface port

Use this command to locate a physical connector on the back of the appliance.
After using the show network interface inventory command to display all port names,
use this command to blink the light on the physical port specified by n (the digit
following eth in the command - eth0, eth1, eth2, eth3, etc.), 20 times.

Syntax

show network interface port eth<n>

Example

CLI> show network interface port eth1

The orange light on port eth1 will now blink 20 times.

store network interface remap

Use this CLI command to remap the NIC.

Syntax

store network interface remap

store network interface reset


Use this CLI command to wipe the existing OS network configuration and reapply
the stored Guardium network settings.

Syntax
CLI> store network interface reset
WARNING: This command will reset the network configuration to the stored Guardium network settings
Are you SURE you want to continue? (y/n)

store network interface secondary

Use this command to configure a port on the Guardium system as a secondary
management interface with a different IP address, network mask, and gateway
from the primary.

Note: IP Teaming and Secondary Interface cannot be enabled at the same time.

Syntax:
store network interface secondary [on <NIC> <ip> <mask> <gateway> | off ]
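
For example, a hypothetical configuration of eth2 as the secondary management
interface (the interface name and addresses are illustrative only):

store network interface secondary on eth2 192.0.2.130 255.255.255.0 192.0.2.129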

Show command

show network interface secondary

store network interface speed

Use this command only when auto-negotiation is not available on the switch to
which the Guardium port is connected. This command configures the speed setting
for the port named ethn. Use the show network interface inventory command to
display all port names.

Syntax

store network interface speed <ethn> <10 | 100 | 1000>

Show Command

show network interface speed <ethn>

show network arp-table

Displays the address resolution protocol (ARP) table, which is an operational
system value. This command is provided for support purposes only.

Syntax

show network arp-table

Example

CLI> sho net arp

IP address HW type Flags HW address Mask Device

192.168.3.1 0x1 0x2 00:0E:D7:98:07:7F * nic1

192.168.3.20 0x1 0x2 00:C0:9F:40:33:30 * nic1

ok

CLI>

show network macs

Displays a list of MAC addresses (like the show network interface inventory
command).

Syntax

show network macs

Example

Current network card configuration:

Device| Mac Address| Member of

eth0| 00:50:56:3b:c3:73|

eth1| 00:50:56:8a:0d:fa|

eth2| 00:50:56:8a:0d:fb|

eth3| 00:50:56:8a:00:c1|

Note: The “Member of” column shows which NICs are in the bond pair, if a bond
exists.

ok

store network interface ip6

Usage: store network interface ip6 <ip>, where <ip> is a valid IPv6 address.

store network interface appmaskingnic

Sets the interface definition for the network interface card that connects to the
server that is to be proxied. Set on when in transparent proxy mode, off when in
manual proxy mode.

Syntax

store network interface appmaskingnic [on <interface> <ip> <mask> | off]

Where ip is an IP address in the same subnet as the application server to be
proxied, and mask is the mask of that same subnet.

Show Command

show network interface appmaskingnic
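
Example

For instance, to enable the masking interface on eth2 in transparent proxy mode (the interface name and addresses are illustrative):

CLI> store network interface appmaskingnic on eth2 10.10.9.15 255.255.255.0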

store network resolver


Sets the IP address for the first, second, or third DNS server to be used by the
Guardium appliance. Each resolver address must be unique. To remove a DNS
server, enter null instead of an IP address.



Syntax

store network resolver <1 | 2 | 3> <ip address | null>

Show Command

show network resolver <1 | 2 | 3>
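
Example

For instance, to set the first DNS server and remove the third one (the address is illustrative):

CLI> store network resolver 1 192.168.1.10

CLI> store network resolver 3 null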

store network routes defaultroute

Sets the IP address for the default router to the specified value.

Syntax

store network routes defaultroute <ip address>

Show Commands

show network routes defaultroute
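
Example

For instance (the address is illustrative):

CLI> store network routes defaultroute 192.168.1.1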

store network routes static

Permits the user to have only one IP address per appliance (through eth0) and to
direct traffic through different routers by using static routing tables. This command
adds a line to the static routing table.

Syntax

store network routes static

Show Command

Lists the current static routes, with IDs: Device, Index, Address, Netmask, Gateway

show network routes static

Delete command

delete network routes static

store system domain


Sets the system domain name to the specified value.

Syntax

store system domain <value>

Show Command

show system domain



store system hostname
Sets the system's host name to the specified value.

Syntax

store system hostname <value>

Show Command

show system hostname

Support CLI Commands


The following CLI commands are to be used only under the direction of Technical
Support.

These commands assist Technical Support in analyzing the status of the
machine, troubleshooting common issues, and correcting some common problems.
There are no functions that you would perform with these commands on a regular
basis.
support clean audit_task
Provides a way to manually purge audit results. This command should be used
only when absolutely necessary, to deal with audit tasks that produce a high
number of records and take up too much disk space.
It is strongly advised to consult with Technical Support before running this
command.
A warning message is presented and a confirmation step is required when
running this command.
This command lists the audit processes and task information.
It presents the number of rows, ordered from the largest result set to the
smallest. The number of report results is greater than or equal to the input
value.
After the report is presented, the user can select a line number to purge the
results of the audit process corresponding to that line number. Selecting that
line number deletes the audit data for the selected process name.
Syntax
support clean audit_tasks <rows>
Input parameters
rows - an integer, number of rows to show. Default 10.
Note: On a system with a great many audit tasks, the completion of this
command can take some time.
support clean log_files
This CLI command deletes the specified file after the user confirms the
deletion. If it cannot find the file, it lists the files larger than 10 MB in
/var/log and lets the user delete a large file from the list. A warning message
is presented and a confirmation step is included.
Syntax



support clean log_file <filename>
support clean DAM_data
Provides a way to manually purge database activity monitoring data. This
command should be used only when absolutely necessary.
It is strongly advised to consult with Technical Support before running this
command.
A Warning message and a confirmation step are included in the command.
Syntax
support clean DAM_data <purge_type> <start_date> <end_date>
Input parameters
purge_type options: agg, exceptions, full_details, msgs, constructs, access,
policy_violations, parser_errors, flat_log
start_date: YYYY-mm-dd
end_date: YYYY-mm-dd
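For example, to purge policy violation data for the first quarter of 2015 (the purge type and date range are illustrative):
support clean DAM_data policy_violations 2015-01-01 2015-03-31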
support clean centera_files
Guardium archives/backups stored within Centera have a deletion date
marker attached to them by Guardium; however, there is no subsequent
facility to invoke the deletion. Centera does not have a GUI to allow
maintenance of its own files; it relies on API invocations from client
applications.
Use the CLI command, support clean centera_files, to delete marked files
within Centera.
support clean InnoDB-dumps
Use this CLI command to purge InnoDB tables.
This is a password protected command (for Technical Support only)
support clean hosts
USAGE: support clean hosts <IP address> <fully qualified domain name>
support clean servlets
Deletes *jsp*.java and *jsp*.class files and restarts GUI.
Use this CLI command to delete generated Java servlets and their classes.
support execute
This utility is designed to provide Guardium Advanced Support with the
ability to assist with remote diagnostics and support when direct remote
access is not available or permitted.
Support Execute is not a replacement for direct remote connections, but
will allow Guardium Support at least some level of root access in a secure
way without direct access.
The commands provided by Guardium Advanced Support can be SQL
statements, O/S Commands, Shell Scripts or SQL scripts. These will then
be provided to the customer along with a Secure Key to allow the
command to run via CLI. The Secure key is tied to the system that
Guardium Support is working with the customer on, and is not valid for



any other system. The command can only be run a number of times
permitted by Guardium Support and is only valid for seven days from the
agreed date.
The feature is disabled by default. Enable via CLI command in both
normal and recovery mode:
support execute [enable | disable]
In order to permit the Guardium Advanced Support team to generate a
Secure Key, the MAC address of the system in question must be provided
for eth0. Here is an example of the interfaces and MAC addresses:
Customer usage / Logged in as CLI
support execute <CMD String> <PMR #> <KEY>
# main execute command provided by Guardium Advanced Support
support execute showlog [<Secure Key>|main|files]
# Show usage logs
#'<Secure Key>' for full details of single entry
# 'main' to display the main execute log
# 'files' to display log directory list
support execute mac
# Eth0 MAC address required by support to generate secure key
support execute info
# Show eth0 MAC address, root passkey & other system information
support execute version
# Display the "Support Execute" internal binary code version
support execute help
# Help details and purpose of utility information
Example of command provided by Guardium Advanced Support:
support execute "select * from GDM_ACCESS%5CG" 11111,111,111
6254130c0f0c3c504b33687c57f41363e4c00
support reset-password accessmgr
This command will reset the accessmgr account password.
Syntax
support reset-password accessmgr 10000000-99999999|random
Parameters
An 8-digit key number used to generate the new password. Keep this key
number to provide to Technical Support in order to receive the new
accessmgr account password. The selection random generates an 8-digit
random number.
Note: System will attempt to send notification to the accessmgr account
email, if it is setup.

support reset-password root



This command will reset root password on the IBM Guardium appliance.
Syntax
support reset-password root 10000000-99999999|random
Parameters
An 8-digit key number used to generate the new password. Keep this key
number to provide to Technical Support. The choice random generates an
8-digit random number.
This command also requires that the user provide a secret keyword in
order to change the root password. Contact Technical Support if there is a
need to change the root password.
Note: Do not reset root password unless absolutely required by business
rules.

support show audit_tasks


This command will list all the audit tasks.
Note: On a system with a great many audit tasks, the completion of this
command can take some time.
support show db-processlist
This command will list all the db processes sorted by running time.
Syntax
support show db-processlist all
support show db-processlist locked
support show db-processlist running
support show db-processlist full
Parameters:
support show db-processlist <running | all | locked> [full]
Where
running is the option to see all running SQL statements
all is the option to also include sleeping processes
locked displays all locked processes and the single oldest process
full [optional] displays the SQL queries in expanded format

support show db-struct-check


This command displays all the structure differences found during the
aggregation process.
Syntax
support show db-struct-check

support show db-top-tables



This command lists the 20 biggest database tables sorted by size, and a list of
tables sorted by used free table space in percent for those tables that use
more than 80% free space. It allows filtering by table name. All table sizes
are displayed in MB, and free space usage in percent.
Syntax
support show db-top-tables all
support show db-top-tables like
Parameters
support show db-top-tables all
lists the biggest tables out of the entire database, sorted by size
support show db-top-tables like
lists the biggest tables matching the criteria, where the criteria can be any
portion of the table name

support show db-status


This command will show database usage.
Selections are free, used, megabytes, percentage.
Syntax
support show db-status free %
support show db-status used %
support show db-status free m
support show db-status used m

support show hardware-info


This command uses a script to collect hardware information and places the
collected information in a directory for retrieval.
After running this CLI command, a message like the following appears:
Collected HW Info as /var/log/guard/Gather_hw_info-2012-06-25-17-43.tgz
Then run the CLI command, fileserver, to retrieve this .tgz file from the
server.
support show iptables
This command displays the output of the system iptables command.
Syntax
support show iptables diff
support show iptables list
Parameters
[diff | list] parameter controls the normal iptables output presentation
versus displaying only the differences/delta



[accept | full] parameter filters the output by accept rows versus an
unfiltered list

support show large_files


This command lists all the files larger than <size> MB and older than <age>
days in the /var, /tmp, and /root folders.
Input parameters:
* size - integer > 10 (in MB)
* age - integer >= 0 (in days)
Syntax:
support show large_files <size> <age>
Parameters
where <size> is the minimum size of the files to display (default 100M)
where <age> is the number of days since the last modification.
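For example, to list files larger than 200 MB that have not been modified for 7 days (the values are illustrative):
support show large_files 200 7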

support show netstat


This command displays the output of the system netstat command. It allows
filtering of the output by content using the grep parameter.
Syntax
support show netstat all
support show netstat grep
Parameters
support show netstat grep
where the argument following grep is an alphanumeric string to search for
support show netstat all

support show port open


This command is similar to using telnet to detect an open TCP port locally
or on a remote host.
If the connection succeeds, you will see a message like:
Connection to 127.0.0.1 8443 port [tcp/*] succeeded!
If you are unable to connect, you will see a message like: connect to
127.0.0.1 port 1 (tcp) failed: Connection refused
Syntax: support show port open <IP> <port>
where IP must be a valid IPv4 address like 127.0.0.1.



Port must be an integer with a value in 1-65535.
support show top
This command displays the output of the system top command sorted by
CPU, memory, or running time. It has a configurable number of iterations
(default 1) and a configurable number of displayed rows (default 10).
Syntax
support show top [ cpu | memory | time ]
Parameters
support show top cpu <N> <R>
where N is the number of iterations in the range 1 to 10 and R is the number
of rows to display (minimum 10)
support show top memory <N> <R>
where N is the number of iterations in the range 1 to 10 and R is the number
of rows to display (minimum 10)
support show top time <N> <R>
where N is the number of iterations in the range 1 to 10 and R is the number
of rows to display (minimum 10)
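For example, assuming the iteration and row counts follow the sort key as described in the parameters (the values are illustrative):
support show top cpu 2 20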

support check tables [DB name] [table name]


Invokes the mysqlcheck -c command on tables (checks tables for errors).
Without any parameters, this command checks all tables in the TURBINE
database with a 3-minute timeout for each check. Checks run in parallel, so
the overall time will vary. The command shows its progress in percent. If
any check runs for more than 3 minutes, it is terminated. All tables whose
checks were terminated by the timeout are listed on the screen after the
command completes. Any errors that occur during the command's operation
are reported to the log file /var/log/guard/<dbname>_check_tables/
errors.<date>.log, where <date> is the current date and <dbname> is the
name of the database.
Errors found for each table check operation are reported in
/var/log/guard/<dbname>_check_tables/
check_table_child.<tablename>.<date>.log files, where <date> is the current
date, <dbname> is the name of the database, and <tablename> is the name
of the table checked. Files for healthy tables are not created.
With dbname specified as the first parameter, the command checks all tables
in the specified database with the same timeout (3 minutes). With no
parameters specified, it checks all of TURBINE's tables.
With dbname and tablename specified as the parameters, the command
checks the specified table in the specified database without a timeout, until
the check operation is complete. This allows manually checking the tables
whose checks did not finish in 3 minutes. You can use masks in the
tablename parameter using the percent sign (%).

support shrink innodb-size


Use this CLI command to reduce size of ibdata1 file.



It performs the following steps:
v dumps all InnoDB tables
v stops mysql
v deletes ibdata1, ib_logfile0, ib_logfile1 files
v starts mysql
v restores dumped tables
This is a password protected command (for Technical Support only)

support show innodb-status


Use this CLI command to troubleshoot MySQL issues: to check what is
happening at runtime with MySQL tables, and to determine whether long
check times with MySQL tables are due to a record lock or a table lock.
support show innodb-status
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Main thread process no. 7959, id 139923805550336, state: sleeping
Number of rows inserted 6894, updated 6934, deleted 93, read 24787
0.33 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.67 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
support analyze static-table
Use this CLI command to analyze the content of static tables by sorting them
based on the largest group per value length and value occurrence.
support must_gather commands
There are some simple must_gather commands that can be run by the CLI
user to generate specific information about the state of any Guardium system.
This information can be uploaded from the appliance and sent to
Guardium Technical Support whenever a PMR (Problem Management
Record) is logged.
In order to run these commands, you will need to have the appropriate
must_gather patch installed.
Once the correct patch is installed, the must_gather commands can be run
at any time by the CLI user, as follows.
1. Open a PuTTY session (or similar) to the Guardium system of concern.
2. Log in as the CLI user.
3. Depending on the type of issue you are facing, paste the relevant
must_gather commands into the CLI prompt. More than one
must_gather command may be needed in order to diagnose the
problem.
support must_gather system_db_info
support must_gather purge_issues
support must_gather audit_issues
support must_gather agg_issues
support must_gather cm_issues



support must_gather alert_issues
support must_gather patch_install_issues
support must_gather app_masking_issues
support must_gather user_interface_issues

The following may take a few minutes to run to completion.


support must_gather miss_dbuser_prog_issues
support must_gather sniffer_issues

For the following commands, you will be prompted for a time in minutes
for how long you want the debugger running while you reproduce the
problem.
support must_gather backup_issues
support must_gather scheduler_issues

Output is written to the must_gather directory with filename(s) along the
lines of this example, must_gather/system_logs/.tgz
4. Send the resulting output to IBM Support.
By using fileserver, you can upload the .tgz files and send them to Support.
Send them via email or upload them to ECUREP using, for example, the
standard data upload, specifying the PMR number and the file to upload.

Guardium for z/OS traffic diagnostics commands


support store zdiag on [N]
Where the optional N is the number of minutes to run diagnostics, from 10
to 600; 60 by default.
Turns on Guardium for z/OS traffic diagnostics. This includes collection of
TCPDUMP and SLON; collection stops once the corresponding files reach
2 GB in size. Once completed, the results files tcpdump.tar.gz and
slon_all.tar.gz can be found via the fileserver command. The /var partition
must have at least 15 GB of free space.
support store zdiag off
Turns off Guardium for z/OS traffic diagnostics. Results files
tcpdump.tar.gz and slon_all.tar.gz can be downloaded using the CLI
command, fileserver.
support show zdiag
Shows Guardium for z/OS traffic diagnostics status.

SLON Collection Commands


support store slon on [parameter]
Turns on the SLON utility, which captures packets received by the sniffer for
debugging. The results files slon_packets.tar.gz, slon_messages.tar.gz, or
slon_all.tar.gz can be found via fileserver. The /var partition must have at
least 15 GB of free space.
Where the optional parameter is:
packets, dump analyzer packets (default)



snifsql, log sniffer SQL activities and dump analyzer packets
secparams, log secure parameters info and dump analyzer packets
sgate, log S-GATE debugging info and dump analyzer packets
messages, tap message data dump
support store slon off [parameter]
Turns off the SLON utility. The results files slon_packets.tar.gz,
slon_messages.tar.gz, or slon_all.tar.gz can be found via fileserver.
Where the optional parameter is:
packets, stop dumping packets, logging secure parameters, S-GATE debug
info and sniffer SQL activities (default)
messages, stop tapping message data dump
all, stop all activities
support show slon
Shows SLON utility status.

TCPDUMP Collection Command


support store snif_memory_max
Usage: support store snif_memory_max <num>, where num is one of 33, 50,
or 75.
This command only applies to 64-bit systems.
Show command
support show snif_memory_max
support store tcpdump on <type> <period> <loglimit> [interface] [IP] [port]
[protocol]
Turns on the TCPDUMP utility. After the period ends, the results file
tcpdump.tar.gz can be found via fileserver. The /var partition must have at
least 15 GB of free space.
Where:
<type> - dump type, 'headers' (only headers captured) or 'raw' (whole
packets captured)
<period> - dump period, NUMBER[SUFFIX], where optional SUFFIX may
be 's' for seconds, 'm' for minutes (default)
<loglimit> - dump logfile limit, from 1 to 6 gigabytes
Optional filter arguments:
[interface] - network interface name (default eth0)
[IP] - IP address
[port] - port
[protocol] - protocol, 'tcp', 'udp', 'ip', 'ip6', 'arp', 'rarp', 'icmp' or
'icmp6'



Example
support store tcpdump on headers 10m 1
This command will run TCPDUMP saving packets headers for 10 minutes
and 1GB log file size limit.
support show tcpdump
Shows TCPDUMP utility status.
support store tcpdump off
Turns off the TCPDUMP utility. After it stops, the results file tcpdump.tar.gz
can be found via fileserver.
support must_gather datamining_issues
Collects necessary diagnostic information for Outliers, Quick search and
Datamart functionality. Information includes dumps of corresponding
internal tables, necessary logs, state of corresponding processes and
standard must_gather diagnostics (general system and internal DB info).
support must_gather network_issues [--host=<HOST>], where the optional
parameter <HOST> is a hostname or IP address.
The command gathers all network information from the appliance and
polls hosts that Guardium interacts with by using ping, traceroute,
corresponding port probing and other measures. If the optional parameter
is specified, then it polls only the host that was specified (if Guardium is
configured to do any activity on this host).

System CLI Commands


Use these CLI commands to configure system settings.

store system apc

Use this command to configure automatic powering down options when a UPS is
attached. Note that the UPS must be attached to a USB connector (serial
connections for a UPS are not supported).

Sets the minimum charge percent (0-100) before powering down, or the number of
seconds to run on battery power before powering down. The defaults are 25 and
zero, respectively.

There are also commands to start and stop the apc process. The apc process is
disabled by default.

Syntax

store system apc [battery-level <percent> | timeout <seconds>]

store system apc start

store system apc stop

Show Command

show system apc [battery-level | timeout ]
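
Example

For instance, to power down when the UPS battery level drops to 50 percent and then start the apc process (the percentage is illustrative):

store system apc battery-level 50

store system apc start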



store system banner
store system banner [message | clear]

To create a banner (a warning about unauthorized access, a welcome message,
and so on) at the CLI login, use the CLI command, store system banner [message |
clear].

Syntax

store system banner clear - use this CLI command to remove an existing banner
message.

store system banner message - use this CLI command to create a banner message.
Enter the banner message and then press CTRL-D.

Show command

show system banner - use this CLI command to view an existing banner message.

store system clock datetime

Sets the system clock's date and time to the specified value, where YYYY is the
year, mm is the month, dd is the day, hh is the hour (in 24-hour format), mm is
the minutes, and ss is the seconds. The seconds portion is required, but will
always be set to 00.

Syntax

store system clock datetime YYYY-mm-dd hh:mm:ss

Show Command

show system clock <all |datetime |timezone>

Example

store system clock datetime 2008-10-03 12:24:00

store system clock timezone

Lists the allowable time zone values (list option), or sets the time zone for this
system to the specified timezone. Use the list option first to display all time zones,
and then enter the appropriate timezone from the list.

IBM Guardium also logs the local timezone in the standard audit trail, to address
cases where data is used in (or aggregated with) data collected in other time
zones.

Note: The timezone setting is not updated automatically when Daylight Saving
Time begins or ends. To update the machine, the user needs to reset the timezone:
first set a different timezone from the current one, and then set the correct
timezone again. Simply resetting the timezone to the same one will not work and
gives the message, No change for the timezone.

Syntax



store system clock timezone <list | timezone>

Show Command

show system clock <all | timezone | datetime>

Example

Use the command first with the list option to display all time zones. Then enter
the command a second time with the appropriate zone.

CLI> store system clock timezone list

Timezone: Description:

--------- -----------

Africa/Abidjan:

Africa/Accra:

Africa/Addis_Ababa:

...

...output deleted

...

CLI> store system clock timezone America/New_York

store system conntrack

Sets the current status of connection tracking subsystem of the Linux kernel. Status
can be ON|OFF.

Syntax

store system conntrack ON|OFF

Show command

show system conntrack

store system cpu profile


Allows configuration of CPU scaling from a CLI command on hardware that
supports CPU scaling.

Use this CLI command to set the appropriate CPU scaling policy for your needs:
v conservative = less power usage, conservative scaling
v balanced = medium power usage, fast scale up
v performance = runs the CPU(s) at maximum clock speed

Guardium software sets the scaling policy to Performance upon installation.



Syntax

store system cpu profile [min|perf|max]

Show command

show system cpu profile

store system custom_db_size

Use this CLI command to set the maximum size of the custom database table (in
MB). The default value is 4000 MB.

Syntax
CLI> store system custom_db_max_size
USAGE: store system custom_db_max_size <N>
where N is a number larger than 4000.

Show command

show system custom_db_size

store system domain

Sets the system domain name to the specified value.

Syntax

store system domain <value>

Show Command

show system domain

store system hostname

Sets the system's host name to the specified value.

Syntax

store system hostname <value>

Show Command

show system hostname

store system issue

store system issue [message | clear]

The CLI command, store system issue message, receives input from the console
until Ctrl-D and writes it to /etc/motd after removing from the input any $, \,
\ followed by a single letter, and ` characters. This is a way to enter messages that
make this system compliant with the security policies of customers.



The CLI command, store system issue clear, will restore /etc/motd to the default
version.

The version comes from /etc/guardium-release. For example, SG70 -> 7.0, SG80 ->
8.0. If the SG is not found in the /etc/guard-release, the default version is an
empty string.

store system netfilter-buffer-size

Set the size of the netfilter buffer.

Syntax

store system netfilter-buffer-size

Show command

Displays the S-TAP netfilter buffer size. The default is 65536.

show system netfilter-buffer-size

show system ntp diagnostics

Use this CLI command to run ntpq -p and ntptime and send the output directly to
the screen. The Guardium system queries ntpd from localhost via udp.

Syntax

show system ntp diagnostics

Example
CLI> show system ntp diagnostics
Output from ntpq -p :
localhost.localdomain:
-------------------------------------------------------------------
Output from ntptime :
(Note that if you have just started the ntp server, it may report an 'ERROR' until it has synchronized.)
-------------------------------------------------------------------
ntp_gettime() returns code 5 (ERROR)
time d3443c21.47a46000 Thu, Apr 26 2012 17:26:57.279, (.279852),
maximum error 16384000 us, estimated error 16384000 us
ntp_adjtime() returns code 5 (ERROR)
modes 0x0 (),
offset 0.000 us, frequency 0.000 ppm, interval 1 s,
maximum error 16384000 us, estimated error 16384000 us,
status 0x40 (UNSYNC),
time constant 2, precision 1.000 us, tolerance 512 ppm,

store system ntp [all | server | state]


store system ntp server

Sets the host name of up to three NTP (Network Time Protocol) servers. Note that
to enable the use of an NTP server, you must use the store system ntp state on
command. To define a single NTP server, enter its host name or IP address. To
define multiple NTP servers, enter the command with no arguments, and you will
be prompted to supply the NTP server host names.



Syntax

store system ntp server

USAGE: store system ntp server

For each server enter either ip or hostname

Enter up to 3 NTP servers to store:

Show Command

show system ntp <all |server>

Delete command

delete ntp-server

store system ntp state

Enables or disables use of an NTP (Network Time Protocol) server.

Syntax

store system ntp state <on | off>

Show Command

show system ntp <all |state>

store system patch install

Installs a single patch or multiple patches as a background process. The ftp and
scp options copy a compressed patch file from a network location to the IBM
Guardium appliance. Note that a compressed patch file may contain multiple
patches, but only one patch can be installed at a time. To install more than one
patch, choose all the patches that need to be installed, separated by commas.
Internally, the CLI submits requests for each patch on the list (in the order
specified by the user), with the first patch taking the request time provided by the
user and each subsequent patch scheduled three minutes after the previous one. In
addition, the CLI checks whether the specified patch(es) have already been
requested and does not allow duplicate requests.

The last option (sys) is for use when installing a second or subsequent patch from
a compressed file that has been copied to the IBM Guardium appliance using this
command previously.

To display a complete list of applied patches, see the Installed Patches report on
the IBM Guardium Monitor tab of the administrator portal.

In the store system patch install CLI command, the user can choose multiple
patches from the list.

Syntax

store system patch install <type> <date> <time>



<type> is the installation type, cd | ftp | scp | sys

<date> and <time> are the patch installation request time, date is formatted as
YYYY-mm-dd, and time is formatted as hh:mm:ss

If no date and time is entered or if NOW is entered, the installation request time is
NOW.

Parameters

Regardless of the option selected, you will be prompted to select a patch to apply:

Please choose one patch to apply (1-n,q to quit):

cd - To install a patch from a CD, insert the CD into the IBM Guardium CD ROM
drive before executing this command. A list of patches contained on the CD will be
displayed.

ftp or scp - To install a patch from a compressed patch file located somewhere on
the network, use the ftp or scp option, and respond to the prompts shown. Be sure
to supply the full path name for the patch, including the filename:

Host to import patch from:

User on hostname:

Full path to the patch, including name:

Password:

In the store system patch install scp CLI command, the user can use the wildcard
* for the patch file name.

The compressed patch file will be copied to the IBM Guardium appliance, and a
list of the patches contained in the file will be displayed.

sys - Use this option to apply a second or subsequent patch from a patch file that
has been copied to the IBM Guardium appliance by a previous store system patch
execution.

The store system patch install command does not delete the patch file from the IBM
Guardium appliance after the install. While there is no real need to remove the
patch file, as the same patches can be reinstalled over existing patches and keeping
patch files around can aid in analyzing various problems, a user may remove patch
files by hand or use the CLI command diag. (Note that the CLI command diag is
restricted to certain users and roles.)

To delete a patch install request, use the CLI command delete scheduled-patch

store system remote-root-login

Enable/disable SSH (root access). Secure Shell or SSH is a network protocol that
allows data to be exchanged using a secure channel between two networked
devices.

Syntax



store system remote-root-login ON|OFF

Show command

show system remote-root-login

store system scheduler


Scheduling is managed by a timing mechanism within the IBM Guardium
application. If the timing function is disrupted, it will restart after the restart
interval designated by this CLI command.

Use store system scheduler restart_interval [5 to 1440 or -1] to restart the timing
function after an interval of 5 to 1440 minutes. The default is -1, which means the
timing restart mechanism is not installed.

Use store system scheduler wait_for_shutdown [ON | OFF] to restart the
scheduler only after all currently running jobs finish. The parameters are ON or OFF.

Syntax

store system scheduler restart_interval [5 to 1440 or -1]

store system scheduler wait_for_shutdown [ON | OFF]

Show command

show system scheduler

store system shared secret

Sets the system's shared secret value to the specified value. This key must be the
same for a Central Manager and all of the appliances it will manage; or an
Aggregator, and all of the appliances from which it aggregates data. After an
appliance has registered for management by a Central Manager, the shared secret
on that unit is no longer used. (You cannot unregister a unit from Central
Management by changing this value.)

Dynamic password for aggregator OS user

The aggregator password will be the current password concatenated with the
shared secret, meaning: password=<current passwd><shared secret>

Users need to make sure that the collectors' shared secret and the aggregator's
shared secret are exactly the same; otherwise the SCP transfer from the collector to
the aggregator will fail. (This is a requirement for managed units and aggregators,
collectors and aggregators, and the export setup screen.) The shared secret
can be set both from CLI and from the System pane in the Admin Console tab.

Syntax

store system shared secret <key>

store system snif-buffers-reclaim

Use this CLI command only when directed by IBM Guardium Technical Services.



The new configuration will be effective once the CLI command, restart
inspection-core, is executed.

Syntax

store system snif-buffers-reclaim [ON | OFF]

Show command

show system snif-buffers-reclaim

store system snif-thread-number

Use this CLI command to specify how many threads are running.

The new configuration will be effective once the CLI command, restart
inspection-core, is executed.

Syntax

store system snif-thread-number [new | default]

Show command

show system snif-thread-number

Snif is running with 6 threads on the 32-bit system

store system snmp contact

Stores the email address for the snmp contact (syscontact) for the IBM Guardium
appliance. By default it is info@guardium.com.

Syntax

store system snmp contact <email-address>

Show Command

show system snmp contact

store system snmp location


Stores the snmp system location (syslocation) for the IBM Guardium appliance. By
default it is Unknown.

Syntax

store system snmp location <string>

Show Command

show system snmp location



store system snmp query community
Stores the snmp system query community for the IBM Guardium appliance. By
default it is guardiumsnmp.

Syntax

store system snmp query community <string>

Show Command

show system snmp query community
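
Example

For instance, to set a site-specific query community string (the value is illustrative):

store system snmp query community mycommunity123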

User Account, Password and Authentication CLI Commands


Use these CLI commands to configure user accounts, passwords and
authentication.

Set guiuser Authentication

When logging on via CLI with one of the default CLI accounts (guardcli1,
...guardcli5), it is required to run the CLI command, set guiuser, before any
GuardAPI commands will work. This authentication is required to prevent users
with limited roles in the GUI from gaining unauthorized access to GuardAPI
commands.

The use of the guardcli1 ... guardcli5 accounts requires the setting of a local
password. Use the CLI command, set guiuser, to reset the guardcli1 ...
guardcli5 accounts and then add a local password, as shown in the Syntax.

Certain CLI commands are dependent on the role of the guiuser. For example, the
role of the guiuser (marked when creating a new user from accessmgr view) must
be accessmgr in order to access grdapi create_user, grdapi set_user_roles, and
grdapi update_user

Syntax

set guiuser <gui_user> password <password>

Example

$ ssh guardcli1@a1.corp.com

IBM Security Guardium , Command Line Interface (CLI)

guardcli1@a1.corp.com's password:

Last login: Thu Nov 4 14:56:34 2012 from 123.a1.corp.com

================================================================

IBM Security Guardium

Unauthorized access is prohibited

================================================================



a1.corp.com> set guiuser johny_smith password 3wel9s887s

ok

a1.corp.com>

create_user
Examples

>grdapi create_user firstName=john lastName=smith

password=pASSW0rd confirmPassword=pASSW0rd email=jsmith@us.ibm.com

userName=john disabled=0

ID=20000
>grdapi set_user_roles userName="john"
roles="dba,diag,cas,user"

ID=20000

Added role (dba).

Failed to add role (diag). Diag must have one of these roles: cli or admin.

Added role (cas).

Added role (user).


> grdapi set_user_roles userName="john"
roles="dba,diag,cas,user,cli"

ID=20000

Added role (dba).

Added role (diag).

Added role (cas).

Added role (user).

Added role (cli).


> grdapi update_user userName="john"
email="john.smith@gmail.com"

ID=20000

> grdapi list_users

ID=0

####### User 3 #######

Username: accessmgr



First Name: accessmgr

Last Name: accessmgr

Email:

Disabled: false

####### User 1 #######

Username: admin

First Name: admin

Last Name: admin

Email:

Disabled: false

####### User 33 #######

Username: anon

First Name: anon

Last Name: anon

Email:

Disabled: false

####### User 20000 #######

Username: john

First Name: john

Last Name: smith

Email: john.smith@gmail.com

Disabled: false

####### User 2 #######

Username: bill

First Name: bill

Last Name: green

Email:

Disabled: true



set_user_roles

Each time that you execute set_user_roles, you reset the roles of a user. You do not
append to the roles; you reset them.

When you create a user using GrdAPI, it creates the user with the user role. When
you set the roles, you have to specify all of the user's roles. This is done to enable
deletion of existing roles and addition of new roles.

Similarly, the GUI displays all roles, in which you can either check or uncheck a
role, and when you save, it saves everything that you checked.

In the example that follows, GrdAPI is asked to give user kevin only the inv role,
but any user must have one of these roles: user, cli, admin, or accessmgr.

The correct way to call this GrdAPI is:


grdapi set_user_roles userName="kevin" roles="user,inv"

Example

> set guiuser accessmgr password ASDFasdf

ok

> grdapi create_user firstName=kevin

lastName=smith password=pASSW0rd confirmPassword=pASSW0rd

email=ksmith@company.com userName=kevin disabled=0

ID=20000

ok
> grdapi set_user_roles userName="kevin" roles="inv"

set_user_roles:

ERR=3700

User must have one of these roles: user, cli, admin, or accessmgr.

Error executing the command

ok
> grdapi set_user_roles userName="kevin"
roles="user,inv"

ID=20000

Added role (user).

Failed to add role (inv). Sorry, before assigning the inv role the user's Last Name
must be set to the name of one of the three investigation databases -



INV_1, INV_2, or INV_3 (case-sensitive)

ok
> grdapi set_user_roles userName="kevin"
roles="dba,diag,cas,user"

ID=20000

Added role (dba).

Failed to add role (diag). Diag must have one of these roles: cli or admin.

Added role (cas).

Added role (user).

ok

>

show guiuser

Displays the GUI user (by role) that is currently set for the CLI session.

Show command

show guiuser

Password Control Commands

Use the following commands to control user passwords, as follows:


v store password disable - Set the number of days after which an inactive account
will be disabled.
v store password expiration - Set the number of days after which a password will
expire.
v store password validation - Enable or disable the hardened password validation
rules.

Account Lockout Commands

Use the account lockout commands to disable a Guardium user account after one
or more failed login attempts. Use these commands to:
v Enable or disable the feature. See store account lockout.
v Set the maximum number of login failures allowed an account within a given
time interval. See store account strike count and store account strike interval.
v Set the maximum number of failures allowed an account for the life of the
Guardium appliance. See store account strike max.
v To unlock the admin user account in the event it becomes locked, see the unlock
admin command description.

After a Guardium user account has been disabled, it can be enabled from the
Guardium portal, and only by users with the accessmgr role, or the admin user.

Example



Enable account lockout, lock an account after 5 login failures within 10 minutes,
and set the maximum number of failures allowed to 999.

store account lockout on

store account strike count 5

store account strike interval 10

store account strike max 999

Note:

If the admin user account is locked, use the unlock admin command to unlock it.

If account lockout is enabled, setting the strike count or strike max to zero does
NOT disable that type of check. On the contrary, it means that after just one failure
the user account will be disabled!

store account lockout

Enables (on) or disables (off) the automatic account lockout feature, which disables
a user account after a specified number of login failures.

Syntax

store account lockout <on | off>

Show Command

show account lockout

store account strike count

Sets the number of failed login attempts (n) in the configured strike interval before
disabling the account.

Syntax

store account strike count <n>

Show Command

show account strike count

store account strike interval

Sets the number of seconds (n) during which the configured number of failed login
attempts must occur in order to disable the account.

Syntax

store account strike interval <n>

Show Command



show account strike interval

store account strike max

Sets the maximum number (n) of failed login attempts to be allowed for an
account over the life of the server, before the account is disabled.

Syntax

store account strike max <n>

Show Command

show account strike max

store password disable


Sets the number of days of inactivity, after which user accounts will be disabled.
When set to 0 (zero), no accounts will be disabled by inactivity. At installation, the
default value is zero. You must restart the GUI after changing this setting (see
restart gui).

Syntax

store password disable <days>

Show Command

show password disable
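
Example

For instance, to disable accounts after 60 days of inactivity (the value is illustrative; 0 turns the check off):

store password disable 60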

store password expiration

Sets the age (in days) for user password expiration. When set to 0 (zero), the
password never expires. For any other value, the account user must reset the
password the first time they log in after the current password has expired. The
default value is 90. You must restart the GUI after changing this setting.

Syntax

store password expiration <days>

Show Command

show password expiration

store password validation

Turns password validation on or off. The default value is on. You must restart the
GUI after changing this setting.

When password validation is enabled, the password must be eight or more
characters in length, and must include at least one uppercase alphabetic character
(A-Z), one lowercase alphabetic character (a-z), one digit (0-9), and one special
character from the table. When disabled (not recommended), any length or
combination of characters is allowed.



Syntax

store password validation <on | off>

Show Command

show password validation


Table 6. Special Characters for Guardium Passwords
Character Description

@ Commercial at sign

# Number sign

$ Dollar sign

% Percent sign

^ Circumflex accent (carat)

& Ampersand

. Full stop (Period)

; Semicolon

! Exclamation mark

- Hyphen (minus)

+ Plus sign

= Equals sign

_ Low line (underscore)

store user password


Use this command to reset the cli user password. To simplify the support process,
we suggest that you keep the cli user password assigned initially by Guardium.
There is no way to retrieve the cli user password once it is set. If you lose this
password, contact Guardium Support to have it reset.

Syntax

store user password

You will be prompted to enter the current password, and then the new password
(twice). None of the password values you enter on the keyboard will display on
the screen.



The cli user password requirements differ from the requirements for user
passwords. The cli user password must be at least six characters in length, and
must contain at least one each of the following types of characters:
v Digits (0-9)
v Lowercase alphabetic characters (a-z)
v Uppercase alphabetic characters (A-Z)

Running this CLI command will also update the change-time record in the
password expiration file.

unlock accessmgr

Use this command to enable the Guardium accessmgr user account after it has
been disabled. This command does not reset the accessmgr user account password.

Note: Only users with admin role are allowed to run this CLI command.

Syntax

unlock accessmgr

restart gui

unlock admin

Use this command to enable the Guardium admin user account after it has been
disabled. This command does not reset the admin user account password.

Note: Only users with admin role are allowed to run this CLI command.

Syntax

unlock admin

restart gui

Authentication commands

The following commands display or control the type of authentication used.

store auth

Use this command to reset the type of authentication used for login to the
Guardium appliance, to SQL_GUARD (i.e. Local Guardium authentication, the
default).

Optional authentication methods (LDAP or Radius, for example) can be configured
and enabled from the administrator portal, but not from the CLI. See Configure
Authentication for more information.

Syntax

store auth SQL_GUARD

Show Command



show auth

Generate a new layout for a role based on a user layout

The Guardium portal window contains one or more panes. Each pane defines the
layout of some portion of the window. Each pane may contain one or more other
panes. The default layout contains three different types of panes: tab panes, menu
panes, and portlet panes, each of which is described in the help topic, Portal
Customization.

The Guardium administrator or access manager can generate, via CLI, a default
layout for a role. After that, any new user who is assigned that role will have that
layout after logging in for the first time.

Note: Default .psml structures for user and role can be defined, via the GUI, by
the admin user. See Portlet Editor for further information.

Use the generate-role-layout CLI command to generate a new layout for an
existing role, based on the layout for the specified user. Once the new role layout
has been defined, any users who are assigned that role before they log in for the
first time will receive the layout for that role.

generate-role-layout

Syntax

generate-role-layout <user> <role>

Note: user (login name) and role are not case-sensitive.

Parameters

If either of the following parameters contains spaces (John Doe as the user, or DBA
Managers as the role), replace the space characters with underscore characters.

For example:

generate-role-layout John_Doe DBA_Managers

user - The name of the user whose layout will be used as a model for the role
layout.

role - The role to which the new layout will be attached.

Proxy CLI Functions


Use these commands to show, store, and restore proxy functions.

After you install the Guardium system, use the following commands to configure
the proxy server that checks if the ICAP server is available. The port number for
the proxy server is 3128, and the port number for the transparent proxy is 3129.
The port number for ICAP is 1344. You can upload a certificate and key that is
signed by an authorized company such as VeriSign. After the certificate has been
uploaded, a path to the proxy server is provided. The certificate for the proxy
server must be signed by an authorized company. If it is not, the certificate will be
denied.



The proxy server and ICAP server start automatically upon startup. You can restart
or stop the services. If the ICAP server is down, you have the option to configure
the proxy and have access to the Guardium system but the application will not
have masking. SSL configuration for the transparent proxy is accessible through the
CLI.

Note: Any configuration will require restarting the proxy server and ICAP.

restart icap

Restarts the icap process that handles HTTPS traffic. This command stops the icap
process with a time stamp and displays the message - stop icap. Another time
stamp appears with the message - start icap. Then, a third time stamp appears
with the message - start icap completed to confirm that the icap has restarted.

Syntax

restart icap

restart squid

Restarts the proxy server service. This command stops the service with a time
stamp and displays the message - stop squid. Another time stamp appears with
the message - start squid. Then, a third time stamp appears with the message -
start squid completed to confirm that the proxy server has restarted.

Syntax

restart squid

show squid

Shows the state of the proxy server bypass, proxy, or SSL (Secure Sockets Layer).
You cannot enable the proxy server bypass when it is already enabled. Also, you
cannot disable the proxy server bypass when it is already disabled.

Syntax

show squid <bypass | proxy | ssl>

show squid bypass


Shows the state of the proxy server bypass. The proxy server bypass configuration
displays: Enabled when the proxy server bypass is on, and Disabled when it is off.
If bypass is enabled, the application is available without masking when ICAP is
down. When bypass is disabled, the application is not available without masking
when ICAP is down. To change the setting, use the command store squid bypass
<on | off>.

Syntax

show squid bypass



show squid proxy
Shows the state of the proxy server. The proxy server configuration displays: proxy
default when the state is set to default and proxy manual when the state is set to
manual. If this setting is set to default, the default setting of the proxy is
transparent proxy, and the client does not need to configure the proxy in the web
browser. If the proxy is set to manual, the client must configure the proxy in the
browser. Use the store squid proxy <default | manual> command to change the
current state.

Syntax

show squid proxy

show squid ssl

Shows the state of the proxy server SSL connection. The proxy server SSL
configuration displays: enable when the SSL connection is on and disable when
the SSL connection is off. To change the setting, use the command store squid ssl
<on | off>. A certificate file must exist to enable the proxy server SSL connection.

Syntax

show squid ssl

start icap

Starts the icap process that handles Hypertext Transfer Protocol Secure (HTTPS)
traffic. It is a method that secures the transfer of information across a network. A
time stamp shows when the process has started with the following message: -
start icap. After the process is completed, a confirmation message states: - start
icap completed.

Syntax

start icap

start squid
Starts the proxy server service. A time stamp shows when the process is started
with the following message: - start squid. After the process is completed, a
confirmation message states: - start squid completed.

Syntax

start squid

stop icap
Stops the icap process that handles Hypertext Transfer Protocol Secure (HTTPS)
traffic. A time stamp indicates that the process to stop icap has started. It is
followed by the message: - stop icap. The process stops and sends back a time
stamp and the following message after it is completed: - stop icap completed.

Syntax



stop icap

stop squid

Stops the proxy server service. A time stamp indicates that the process to stop the
proxy server has started. It is followed by the message: - stop squid. The process
stops and sends back a time stamp and the following message after it is
completed: - stop squid completed.

Syntax

stop squid

store squid

Stores the proxy server bypass, proxy, or SSL configuration. The current state is
determined by the argument <state> where on is to enable and off is to disable.

Syntax

store squid <bypass | proxy | ssl>

store squid bypass

Stores the proxy server bypass. The following message appears:


Usage: store squid bypass <state>
where state is on/off. ’on’ is to enable and ’off’ is to disable.
ok

If bypass is enabled, the application is available without masking when ICAP is
down. When bypass is disabled, the application is not available without masking
when ICAP is down. If you attempt to disable the proxy server bypass when the
setting is already off, you will trigger the following error message: Invalid state.
You can see the current state by using the command show squid bypass.

Syntax

store squid bypass <on | off>

store squid certificate


Stores the certificate for the squid service.

store squid proxy

Stores the proxy configuration in the configuration file. You can set the state of the
proxy server to default or manual. Use the show squid proxy to view the current
status of the proxy server.

If this setting is set to default, the default setting of the proxy is transparent proxy,
and the client does not need to configure the proxy in the web browser. If the
proxy is set to manual, the client must configure the proxy in the browser.

Syntax

store squid proxy <default | manual>
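
Example

For instance, to require clients to configure the proxy manually in their browsers:

store squid proxy manual

As noted earlier, restart the proxy server and ICAP after changing the configuration.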



store squid ssl
Enables or disables a proxy server SSL connection. A certificate file must exist or
you will receive the following error message: No certificate files existed.
Cannot enable squid ssl connection. You cannot disable the ssl setting if it is
already disabled. Use show squid ssl to see the current setting.

Syntax

store squid ssl <on | off>

Quick Search for Enterprise CLI Commands


Use these CLI commands to configure Quick Search for Enterprise.

show solr connection_timeout

Use this command to show the current connection_timeout value.

show solr connection_timeout

show solr so_timeout

Use this command to show the current so_timeout value.

show solr so_timeout

show solr time_allowed

Use this command to show the current time_allowed value.

show solr time_allowed

store solr connection_timeout

Use this command to set the connection timeout. If Quick Search for Enterprise
cannot connect to the collector within the specified timeout period, no results from
that collector will be returned.

store solr connection_timeout [value]

Parameter: connection_timeout (integer)

Description: The timeout is expressed as a value of 0 to 2147483647 milliseconds.
The default value is 100000 milliseconds.
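
For example, to lower the connection timeout to 30 seconds (the value is illustrative):

store solr connection_timeout 30000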

store solr so_timeout


Use this command to set the socket timeout.

store solr so_timeout [value]



Parameter: so_timeout (integer)

Description: The timeout is expressed as a value of 0 to 2147483647 milliseconds.
The default value is 100000 milliseconds.

store solr time_allowed

Use this command to set the time_allowed value.

store solr time_allowed [value]

Parameter: time_allowed (integer)

Description: The timeout is expressed as a value of 0 to 2147483647 milliseconds.
The default value is 90000 milliseconds.
Note: Deep search uses 10x (ten times) the time_allowed value.

GuardAPI Reference
GuardAPI provides access to Guardium functionality from the command line.

This allows for the automation of repetitive tasks, which is especially valuable in
larger implementations. Calling these GuardAPI functions enables a user to quickly
perform operations such as creating datasources, maintaining user hierarchies, or
maintaining Guardium features such as S-TAP, to name a few.

Proper login to the CLI for the purpose of using GuardAPI requires the login with
one of the five CLI accounts (guardcli1,...,guardcli5) and an additional login
(issuing the 'set guiuser' command) with a user (GUI username/guiuser) that has
been created by access manager and given either the admin or cli role. See Set
guiuser Authentication for more information.

GuardAPI is a set of CLI commands, all of which begin with the keyword grdapi.
v To list all GuardAPI commands available, enter the grdapi command with no
arguments or use the 'grdapi commands' command with no search argument.
For example:
CLI> grdapi
or
CLI> grdapi commands
v To display the parameters for a particular command, enter the command
followed by '--help=true'. For example:
CLI> grdapi list_entry_location --help=true
ID=0
function parameters :



fileName
hostName - required
path - required
ok
v To search for GuardAPI commands given a search string use the CLI command,
grdapi commands <search-string>. For example:
CLI> grdapi commands user
ID=0
Matching API Function list :
create_db_user_mapping
create_user_hierarchy
delete_allowed_db_by_user
delete_db_user_mapping
delete_user_hierarchy_by_entry_id
delete_user_hierarchy_by_user
execute_appUserTranslation
execute_ldap_user_import
list_allowed_db_by_user
list_db_user_mapping
list_user_hierarchy_by_parent_user
update_user_db
v To display a values list for a parameter, enter the command followed by
'--get_param_values=<parameter>;'. For example:
CLI> grdapi create_group --get_param_values=appid
Value for parameter ’appid’ of function ’create_group’ must be one of:
Public
Baseline
Access_policy
Classifier
Db2_zos
ID=0
ok
Table 7. Current APIs that support the --get_param_values command structure
API Function Parameter
create_datasource application, type, severity, shared
create_group appid, type

Case Sensitivity

Both the keyword and value components of parameters are case sensitive.

Parameter Values with Spaces


If a parameter value contains one or more spaces, it must be enclosed in double
quote characters.

For example:
grdapi create_datasource type ="MS SQL SERVER" ...

NULL Values and Empty Strings

In general, when calling a GuardAPI function, if a value for a non-required
parameter is not specified or is set to an empty string (""), GuardAPI converts
that parameter to a NULL value when calling the function. This translates into
GuardAPI ignoring the parameter, just as if it were not specified.

If, for example, you want to clear out a group from a policy rule, you instead
set that group to a space (" ") and not an empty string (""). Using an
empty string ("") would signal GuardAPI to ignore that group and not change
that group selection.

Example for clearing out a group from a policy value


grdapi update_rule fromPolicy=V8 ruleDesc="LogFull Details" dbUserGroup=" " dbUser=" " objectGroup

Return Codes

Regardless of the outcome of the GuardAPI command, a return code is always
returned in the first line of output, in the following format:
Table 8. Return Codes
Return Code Description
ID=identifier Successful. The identifier is the ID of the object operated upon;
for example, the ID of a group that has just been defined.
ERR=error_code Error. The error_code identifies the error, and one or more
additional lines provide a text description of the error.

There is a table of common errors in the Overview and a complete listing of
error codes in GuardAPI Error Codes.

For example, if we use the create_group command to successfully define an objects
group named agroup, the ID of that group is returned:
CLI> grdapi create_group desc=agroup type=objects appid=Public
ID=20001
ok
CLI>

We could use that ID in the list_group_by_id command to display the group
definition:
CLI> grdapi list_group_by_id id=20001
ID=20001
Group GroupId=20001
Group GroupTypeId=3
Group ApplicationId=0
Group GroupDescription=agroup
Group GroupSubtype=null
Group CategoryName=null
Group ClassificationName=null
Group Timestamp=2008-05-10 07:34:11.0
Group type = OBJECTS
Application Type = Public
Touple Group
ok

For an unsuccessful execution, an error code is returned. For example, if we enter
the list_group_by_id command again with an invalid ID, we receive the following
message:
a1.corp.com> grdapi list_group_by_id id=20123
ERR=140
Could not retrieve Group - check Id.
ok

Common Error Codes
Error codes with a value less than 100 are for common error conditions. Error
codes greater than 100 apply to specific functions, and those are described
following each function.

To see a complete list of GuardAPI error codes, type grdapi-errors at the CLI
command prompt.
Table 9. Common Error Codes
Error Description
0 Missing parameters or unknown errors such as unexpected exceptions.
1 An Exception has occurred, please contact Guardium's support
2 Could not retrieve requested function - check function name. To list all
functions, type either the CLI command, grdapi, or grdapi commands, with no
arguments.

To search, by function name, given a search string, use the CLI command,
grdapi commands <search-string>
3 Too many arguments. To get the list of parameters for this function, call the
function with --help=true
4 Missing required parameter. To get the list of parameters for this function call
the function with --help=true
5 Could not decrypt parameter, check if encrypted with the correct shared secret.
6 Wrong parameter format, specify a function name followed by a list of
parameters using <name=value> format.
7 Wrong parameter value for parameter type.
8 Wrong parameter name, please note, parameters are case sensitive.
9 User has insufficient privileges for the requested API function
10 Parameter Encryption not enabled - shared secret not set.
11 Failed sending API call request to targetHost
12 Error Validating Parameter
13 Target host must be the ip address of the central manager
14 Target host is not managed by this manager
15 Target host is not online
16 Target host cannot be specified on a standalone unit
17 User is not allowed to operate on the specified object
18 Target host cannot be specified
19 Missing end quote
20 User is not allowed to run grdapi commands
21 --username and --source-host are grdapi reserved words and cannot be passed
on the command line.
22 A parameter name cannot be specified more than once, please check the
command line for duplicate parameters.
23 Value not in constant list.
24 Not a valid encrypted value.
25 Not a valid parameter format - parameters should be specified as
<name=value>, spaces are not allowed.

GuardAPI Activity Log
The Guardium Activity Log records all grdapi commands that are executed on the
system. To view the commands from the administrator portal, navigate to the User
Activity Audit Trail report on the Guardium Monitor tab.

All grdapi activity is attributed to the cli user. Double-click the cli row in
that report, and select the Detailed Guardium User Activity drill-down report.
Every command entered is listed, along with any changes that were made. In
addition, the IP address from which the command was issued is listed.

Encrypted Parameter

GuardAPI is intended to be invoked by scripts, which may contain sensitive
information, such as passwords for datasources. To ensure that sensitive
information is kept encrypted at all times, the grdapi command supports passing
of one encrypted parameter to an API function. This encryption is done using the
System Shared Secret, which is set by the administrator and can be shared by many
systems, and between all units of a central management and/or aggregation
cluster; this enables scripts with encrypted parameters to run on machines that
have the same shared secret.

Note: Trying to run an API call with an encrypted parameter on a system where the
shared secret was not set will result in the error message:
Parameter Encryption not enabled - shared secret not set

For Guard API scripts generated through the GUI, if encryption is required it is
done using the shared secret of the system where script generation is performed.

The optional parameter encryptedParam is available on every grdapi call. This
parameter can be used to pass an encrypted value for another parameter.

The procedure for manual encryption is as follows:
1. Use the Parameter Encryption API
The encrypt_value API accepts a value to encrypt and the target system's
shared secret (key) and then prints out the encrypted value. If the key is not
the system's shared secret it will print out a warning.
a1.corp.com> grdapi encrypt_value --help=true
ID=0
function parameters :
key - required
valueToEncrypt - required
api_target_host
ok
Table 10. Encrypted Parameter
Parameter Description
key The target system's shared secret
valueToEncrypt The value to be encrypted
api_target_host In a central management configuration only, allows the user to specify
a target host where the API will execute. On a Central Manager (CM)
the value is the host name or IP of any managed units. On a managed
unit it is the host name or IP of the CM.

Example

a1.corp.com> grdapi encrypt_value valueToEncrypt="some value" key=guard
ID=0
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.7 (GNU/Linux)
jA0EAgMCTEIUShudn0tgyTB9GL7wR79UL9X9DCAa6RkUQRbegG52olA4gwOzmpHF
0qEhsd6Uz7l8rUsheUyX9v4=
=c1Cq
-----END PGP MESSAGE-----
2. Copy the generated content and embed within your cli script.
example of cli.gsh code :
set guiuser johny_smith password 3wel9s887s
grdapi create_datasource type=oracle name=myOra host=somehost application=AuditTask owner=admin u
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.7 (GNU/Linux)
jA0EAgMCTEIUShudn0tgyTB9GL7wR79UL9X9DCAa6RkUQRbegG52olA4gwOzmpHF
0qEhsd6Uz7l8rUsheUyX9v4=
=c1Cq
-----END PGP MESSAGE-----
3. Run the script to invoke GrdApi:
user> ssh cli@a1.corp.com

Central Management Caution

When using GuardAPI in a Central Management environment, be sure that you
understand what components are defined on the Central Manager, and what
components are defined on managed units. For information on this topic, see
Central Management.

Display attributes for certain users in Query Builder

The admin user can see all query attributes in Query Builder, and non-admin users
can see all query attributes in Query Builder except those that are designated as
admin only (IDs, for example).

There are some entities (like FULL SQL) that have large numbers of attributes in
them.

By default, all attributes will show up for all users (admin and non-admin).

Two GuardAPI commands have been added to display or hide certain
attributes for certain users.

These GuardAPI commands enable or disable ONLY specific groups of
attributes in Full SQL: VSAM, ISAM, MapReduce, APEX, Hive and BigInsights.

The two new GuardAPI commands are named grdapi enable_special_attributes and
grdapi disable_special_attributes.

Both receive only one parameter: attributesGroup.

The valid values for this parameter are: VSAM, IMS, MapReduce, APEX, Hive, BI
(BigInsights), IMS/VSAM, DB2 i, F5 (Not case sensitive).
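For example, the following calls are an illustrative sketch (Hive is used here only as a sample attributesGroup value from the list above) that hides and then restores the Hive attributes for non-admin users:
grdapi disable_special_attributes attributesGroup=Hive
grdapi enable_special_attributes attributesGroup=Hive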

Each GuardAPI command will enable (disable) all the corresponding attributes for
the group. For example, VSAM will enable (disable) the following attributes:
v VSAM records
v VSAM records deleted
v VSAM records inserted
v VSAM records retrieved
v VSAM records updated
v VSAM User Group ID

Hive will enable (disable) the following attributes:
v Hive command
v Hive database
v Hive error
v Hive parsed SQL
v Hive table name
v Hive user

Note: The attributes will still be displayed if the user has the admin role; enabling
or disabling these attributes applies ONLY to non-admin users (users with no
admin role).

Note: The GUI does not have to be restarted for the change to take effect, with
this exception: if a report with the attributes of group F5 has been created and
added to My New Reports, then even though the attributes have been enabled, the
non-admin user does not have the privilege to view the report. The GUI needs to
be restarted to see the report fields.

GuardAPI Archive and Restore Functions


list_expiration_dates_for_restored_days

List the expiration dates for all restored days.


Table 11. list_expiration_dates_for_restored_days
Parameter Description
newExpDate Required. The new expiration date for the day restored.
restoredDay Required. Identifies the restore day for data.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi list_expiration_dates_for_restored_days

get_expiration_date_for_restored_day
Get the expiration date associated with a given restored day.
Table 12. get_expiration_date_for_restored_day
Parameter Description
newExpDate Required. The new expiration date for the day restored.
restoredDay Required. Identifies the restore day for data.

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example:
grdapi get_expiration_date_for_restored_day restoredDay=restoredDay

where restoredDay can be of the format of a real day yyyy-mm-dd hh:mi:ss or a
relative day such as NOW -10 day.

purge_results_by_id

Purge results stored in the Results archive by configuration ID.


Table 13. purge results by id
Parameter Description
configId Integer. Required. Configuration ID
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi purge_results_by_id <name=value>
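For example, the following invocation is an illustrative sketch; the configuration ID 3 is a sample value and must be replaced with a results archive configuration ID that exists on your system:
grdapi purge_results_by_id configId=3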

set_expiration_date_for_restored_day

Set the expiration date for a given restored day.


Table 14. set_expiration_date_for_restored_day
Parameter Description
newExpDate Required. The new expiration date for the day restored.
restoredDay Required. Identifies the restore day for data.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example:
grdapi set_expiration_date_for_restored_day newExpDate=newExpDate restoredDay=restoredDay

where newExpDate and restoredDay can be of the format of a real day yyyy-mm-dd
hh:mi:ss or relative day such as NOW -10 day.



set_import
Start or stop import of Aggregation data.
Table 15. set_import
Parameter Description
state Required. START or STOP
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi set_import [START]
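The following invocation is an illustrative sketch that uses the documented name=value convention:
grdapi set_import state=START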

configure_export

Configure the export of Aggregation data.


Table 16. configure_export
Parameter Description
aggHost Required. String. Host name of Aggregator.
aggSecHost String
exportOlderThan Required. Integer. Detail what data to export by time.
exportValues Required. Integer. 0 or 1
ignoreOlderThan Required. Integer. Detail what data to ignore by time.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi configure_export [aggHost] [aggSecHost] [exportOlderThan] [exportValues] [ignoreOlderThan]
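As an illustrative sketch only, the aggregator host name and the numeric values in the following invocation are sample assumptions and must be adjusted for your environment:
grdapi configure_export aggHost=agg1.corp.com exportOlderThan=1 exportValues=1 ignoreOlderThan=30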

configure_archive
Configure the archive of Aggregation data.
Table 17. configure_archive
Parameter Description
accessKey String. Shared secret key of Aggregator.
archiveOlderThan Required. Integer. Detail what data to archive by time.
archiveValues Required. Integer. 0 or 1
bucketName String
destHost String. Host name of archive destination.
ignoreOlderThan Required. Integer. Detail what data to ignore by time.
passwd String. Password.
passwdRetype String. Retype Password
port Integer. Port number

protocol Required. String. SCP, FTP, or AMAZON
retention Integer. How long to retain.
secretKey String
targetDir String
userName String. User name.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi configure_archive [accessKey] [archiveOlderThan] [archiveValues][bucketName][destHost][ignoreO
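As an illustrative sketch only, the following SCP example uses sample host, directory, credential, and retention values that must be adjusted for your environment:
grdapi configure_archive protocol=SCP destHost=archive.corp.com targetDir=/archive userName=guardium passwd=secret passwdRetype=secret archiveOlderThan=1 archiveValues=1 ignoreOlderThan=30 retention=90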

GuardAPI Assessment Functions


Use these CLI commands to add, delete and update Assessment Functions.

Use the following GuardAPI commands to:


v Add, delete, update the Security Assessment definition
v Add, delete a datasource from an existing Security Assessment
v Add, delete tests from an existing Security Assessment

create_assessment

Use this GuardAPI command to add a security assessment.


Table 18. create_assessment
Parameter Validation
assessmentDescription
Required. Free text – unique - must ensure there is no previous
assessment with the same description. If there is one, then ERROR.
fromDate Valid date or relative date not mandatory – default NOW -1 DAY
toDate Valid date or relative date not mandatory – default: NOW
FilterClientIP Valid IP address not mandatory – default null
FilterServerIP Valid IP address not mandatory – default null

Action: If all parameters are validated, a new record is created in the
SECURITY_ASSESSMENT table (MODIFIED_FLAG is left at the default, 0).

Example
grdapi create_assessment assessmentDescription=Assess1



add_assessment_datasource
Use this GuardAPI command to add a datasource to a security assessment.
Table 19. add_assessment_datasource
Parameter Validation
assessmentDescription
Required. Free text – unique - must ensure there is no previous
assessment with the same description. If there is one, then ERROR.
datasourceName Required. Free Text: Must be the Name of an existing datasource, if such
datasource not present, then ERROR

Action: If all parameters are validated, then a record is added to
ASSESSMENT_DATASOURCE, using the ASSESSMENT ID and DATASOURCE ID
for the assessment and datasource with the names provided.

Example
grdapi add_assessment_datasource assessmentDescription=Assess1 datasourceName=DS1

add_assessment_test

Use this GuardAPI command to add a test to an existing security assessment.


Table 20. add_assessment_test
Parameter Validation
assessmentDescription
Required - Free text – unique - must ensure there is no previous
assessment with the same description, if there is one, then ERROR
testDescription Required - Free Text: Must match the TEST_DESC of an existing test in
AVAILABLE_TEST , if such test not present, then ERROR
severity Validates against SEVERITY_DESC table (using DESCRIPTION) – Not
mandatory. The default value is INFO.
thresholdValue If the threshold value required in the available test = 0, then IGNORE this
parameter. Else (threshold value required in AVAILABLE_TEST = 1), the
parameter must be an integer. If the parameter is not provided, then
DEFAULT_THRESHOLD_VALUE from AVAILABLE_TEST is used.

exceptionsGroup Check the value CAN_HAVE_EXCEPTIONS_GROUP in
AVAILABLE_TEST. The parameter is NOT mandatory.

If 0 (exceptions group not supported for this test): if the parameter
is provided, then ERROR (cannot provide an exception group for this
test); if the parameter is NOT provided, then use -1 to populate.

Else (exception group supported for the test): if the parameter is NOT
provided, then use -1 to populate; if the parameter is provided,
validate the group and use the group ID.

To validate the group, select from GROUP_DESC where
GROUP_DESCRIPTION = the description provided, and check whether
the record exists and the GROUP_TYPE_ID.

If there is no such group, then ERROR: the exception group does not exist.

If there is such a group and the GROUP_TYPE_ID != 55, then ERROR:
the exception group must be of the type "VA Exceptions".

If the group is present and the type = 55, then use the GROUP_ID.

Additional Validation: Check whether there is already a record in
ASSESSMENT_TEST for the ASSESSMENT_ID and TEST_ID. If there is such a
record: ERROR, this test is already present in the assessment and cannot be added
again.

Action: If all parameters are validated, then a record is added to
ASSESSMENT_TEST (note that SEVERITY must be populated with the DESCRIPTION).

Example
grdapi add_assessment_test assessmentDescription=Assess1 testDescription="The first test"

delete_assessment
Use this GuardAPI command to delete a security assessment.
Table 21. delete_assessment
Parameter Validation
assessmentDescription
Required - Free text – unique - must ensure there is no previous
assessment with the same description, if there is one then ERROR

Additional Validation: Must ensure there are no results for the assessment to be
deleted, by:

Select count(*) from ASSESSMENT_RESULT_HEADER where ASSESSMENT_ID = TheIdToRemove

If the select returns > 0, then do not remove; ERROR.

Action: If the parameter is validated (it identifies the security assessment record,
and there are no results for the assessment), delete the SECURITY_ASSESSMENT
records, the ASSESSMENT_TEST records, and the ASSESSMENT_DATASOURCE
records (all three deletes use the ASSESSMENT_ID).

Example
grdapi delete_assessment assessmentDescription=Assess1

delete_assessment_datasource

Use this GuardAPI command to delete a datasource from a security assessment.


Table 22. delete_assessment_datasource
Parameter Validation
assessmentDescription
Required. Free text – unique - must ensure there is no previous
assessment with the same description. If there is one, then ERROR.
datasourceName Required. Free Text: Must be the Name of an existing data-source, if
such datasource not present, then ERROR

Action: If all parameters are validated, then check whether there is a record in
ASSESSMENT_DATASOURCE for the assessment and datasource provided. If there
is no such record, ERROR; otherwise, delete the record.

Example
grdapi delete_assessment_datasource assessmentDescription=Assess1 datasourceName=DS1

delete_assessment_test
Use this GuardAPI command to delete a test from an existing security assessment
Table 23. delete_assessment_test
Parameter Validation
assessmentDescription
Required. Free text – unique - must ensure there is no previous
assessment with the same description, if there is one then ERROR
testDescription Free Text: Must match the TEST_DESC of an existing test in
AVAILABLE_TEST , if such test not present, then ERROR

Additional Validation: Check whether there is a record in ASSESSMENT_TEST for
the ASSESSMENT_ID and TEST_ID. If there is no such record: ERROR, this test is
not present in the assessment.

Action: If all parameters are validated, then delete the record from
ASSESSMENT_TEST.

Example
grdapi delete_assessment_test assessmentDescription=Assess1



list_assessments
Use this GuardAPI command to list the security assessments.
Table 24. list_assessments
Parameter Validation
assessmentDescription
Required. Free text – unique - must ensure there is no previous
assessment with the same description, if there is one then ERROR

Example
grdapi list_assessments

list_assessment_tests

Use this GuardAPI command to show the list of tests for the security assessment.

The output of list_available_tests is in the following format: TEST=[<test
description>], DS_TYPE=[<datasource type>] (the actual values are encapsulated
within the brackets).

The output of list_assessment_tests is in the following format:
TEST_DESC=[<available test description>], DS_TYPE=[<datasource type>]

The parameters of the list_assessment_tests API command are non-mandatory and
support filtering.
Table 25. list_assessment_tests
Parameter Validation
assessmentDescription
The API will:
v Validate the description is ONE valid assessment description and will
retrieve the ID of the assessment. (if there is no assessment, then
error)
v Show the list of tests for the assessment (and the datasource type).

Select AVAILABLE_TEST.TEST_DESC, DATASOURCE_TYPE.NAME
from ASSESSMENT_TEST, DATASOURCE_TYPE, AVAILABLE_TEST,
SECURITY_ASSESSMENT where
AVAILABLE_TEST.DATASOURCE_TYPE_ID =
DATASOURCE_TYPE.DATASOURCE_TYPE_ID and
ASSESSMENT_TEST.ASSESSMENT_ID =
SECURITY_ASSESSMENT.ASSESSMENT_ID and
SECURITY_ASSESSMENT.ASSESSMENT_DESC like “Your Param”

Example
grdapi list_assessment_tests

update_assessment

Use this GuardAPI command to update the record of the security assessment.
Table 26. update_assessment
Parameter Validation
assessmentDescription
Must match an existing record in SECURITY_ASSESSMENT

newAssessmentDescription
Free text. If empty, the description is not updated (the value from the
previous parameter is used). Otherwise, it must be unique: ensure there
is no previous assessment with the same description; if there is one, then
ERROR.
fromDate Valid date or relative date
toDate Valid date or relative date
filterClientIP Valid IP address
filterServerIP Valid IP address

Action: If all parameters are validated (and a SECURITY_ASSESSMENT record with
the description provided is identified), then update the record with the values
provided.

Example
grdapi update_assessment assessmentDescription=Assess1 filterClientIP=192.168.1.1

GuardAPI Auto-discovery Functions


Use these CLI commands to create, modify, list and run Auto-discovery Functions.

add_autodetect_task

This command adds a task to the specified process.


Table 27. add_autodetect_task
Parameter Description
process_name Required. Name of process
hosts_list Required. Lists of hosts. Is a space separated list of IPs or IP ranges and
wild cards such as 192.168.0.1 192.168.1.*
ports_list Required. List of ports. Is a comma separated list of ports or port ranges
such as 22,23,1400-1600
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi add_autodetect_task process_name=myProcess hosts_list="192.168.1.1 192.168.1.3" ports_list=

create_autodetect_process

This command creates an autodetect process.


Table 28. create_autodetect_process
Parameter Description
check_ICMP_echo Required. PE parameter to nmap (*) . Values are 'true' or 'false'
host_timeout Required. Parameter to nmap (*) . Timeout value.
process_name Required. Name of process

run_probe_after_scan
Required. Values are 'true' or 'false'.
use_dns Required. Parameter to nmap (*). Values are 'R' or 'true' for always, 'n' or
'false' for never.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Note: * nmap options are accessible from API only and not from GUI. For details
of nmap parameters and their impact on scan performance see man nmap.

Example
grdapi create_autodetect_process process_name=myProcess
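Because several nmap-related parameters are listed as required, the following fuller invocation may be a useful illustrative sketch; the timeout and boolean values are sample assumptions only:
grdapi create_autodetect_process process_name=myProcess check_ICMP_echo=true host_timeout=5000 use_dns=false run_probe_after_scan=true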

modify_autodetect_process

This command modifies an autodetect process.


Table 29. modify_autodetect_process
Parameter Description
check_ICMP_echo Required. PE parameter to nmap (*) . Values are 'true' or 'false'
host_timeout Required. Parameter to nmap (*) . Timeout value.
process_name Required. Name of process
run_probe_after_scan
Required. Values are 'true' or 'false'.
use_dns Required. Parameter to nmap (*). Values are 'R' or 'true' for always, 'n' or
'false' for never.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Note: * nmap options are accessible from API only and not from GUI. For details
of nmap parameters and their impact on scan performance see man nmap.

Example
grdapi modify_autodetect_process process_name=myProcess

delete_autodetect_scans_for_process
This command removes all the tasks for a process, but cannot run if the process is
running, is scheduled, or has results.
Table 30. delete_autodetect_scans_for_process
Parameter Description
process_name Required. Name of process

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_autodetect_scans_for_process process_name=myProcess

list_autodetect_processes

This command lists all processes.


Table 31. list_autodetect_processes
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_autodetect_processes

list_autodetect_tasks_for_process
This command lists all tasks of a specified process.
Table 32. list_autodetect_tasks_for_process
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_autodetect_tasks_for_process process_name=myProcess

execute_autodetect_process
This command runs the specified process, but it cannot run if no tasks are defined
for the process or if the process is currently running.
Table 33. execute_autodetect_process
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.



Example
grdapi execute_autodetect_process process_name=myProcess

show_autodetect_process_status
This command shows process status and progress summary.
Table 34. show_autodetect_process_status
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi show_autodetect_process_status process_name=myProcess

stop_autodetect_process

This command stops the run of a specific process.


Table 35. stop_autodetect_process
Parameter Description
process_name Required. Name of process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi stop_autodetect_process process_name=myProcess

GuardAPI Capture Replay Functions


Use these GuardAPI commands to modify, purge, replay and queue Capture
Replay Functions.

execute_replay

Use this GuardAPI command to run the replay, equivalent to clicking Run Once
Now in the GUI.
Table 36. execute_replay
Parameter Description
setupName String - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
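Example

The following invocation is an illustrative sketch; myReplaySetup is a sample name and must match a replay setup defined on your system:
grdapi execute_replay setupName=myReplaySetup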



execute_staging_drop
Use this GuardAPI command to drop the staged data, equivalent to the DROP option
in the stage drop-down list in the GUI.
Table 37. execute_staging_drop
Parameter Description
replayConfigName String - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
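Example

The following invocation is an illustrative sketch; myReplayConfig is a sample name and must match a replay configuration defined on your system:
grdapi execute_staging_drop replayConfigName=myReplayConfig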

execute_staging_start
Use this GuardAPI command to start the staging process, equivalent to the START
option in the stage drop-down list in the GUI.
Table 38. execute_staging_start
Parameter Description
replayConfigName String - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
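Example

The following invocation is an illustrative sketch; myReplayConfig is a sample name and must match a replay configuration defined on your system:
grdapi execute_staging_start replayConfigName=myReplayConfig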

modify_staging_data

Modify the SQL within staging data


Table 39. modify_staging_data
Parameter Description
configId Integer - required
dbName String
dbProtocol String
fullSQLFilter String
sessionId String
sourceProgram String
statementType String
toSQL String - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi modify_staging_data configId=3 fullSQLFilter="select * from dual" statementType=0 sessionId

Note: Use the escape character “\” for sourceProgram values that might also
contain the special character “\” in their path.



populate_replay_to_replay_session_data
Populate the session data.
Table 40. populate_replay_to_replay_session_data
Parameter Description
rrhid1 Integer - required
rrhid2 Integer - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
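The following invocation is an illustrative sketch; the replay IDs 2 and 3 are sample values and must match replay IDs on your system:
grdapi populate_replay_to_replay_session_data rrhid1=2 rrhid2=3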

purge_staging_data

Purge the staging data for a replay configuration.


Table 41. purge_staging_data
Parameter Description
configId Integer - required
dbName String
dbProtocol String
fullSQLFilter String
sessionId String
statementType String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi purge_staging_data configId=3

queue_purge_agg_replay_match_by_id

Purge the data generated by the queue_replay_agg_match_by_id API.


Table 42. queue_purge_agg_replay_match_by_id
Parameter Description
configid Integer - required; is capture ID (ID-From in lists) unless
isCompareToCapture is 0 then it is a replay ID
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi queue_purge_agg_replay_match_by_id rrhid=2 configid=3 isCompareToCapture=1

queue_purge_replay_match_by_id

Purge the data generated by the queue_replay_match_by_id API.


Table 43. queue_purge_replay_match_by_id
Parameter Description
configid Integer - required; is capture ID (ID-From in lists) unless
isCompareToCapture is 0 then it is a replay ID
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi queue_purge_replay_match_by_id rrhid=2 configid=3 isCompareToCapture=1

queue_purge_replay_match_by_name

Purge the data generated by the queue_replay_match_by_name API.


Table 44. queue_purge_replay_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi queue_purge_replay_match_by_name capture_name= replay_header_name= runtime=

queue_purge_replay_to_replay_results_match_by_id
Purge the data generated by the queue_replay_match_by_id API for replay-replay
compare.
Table 45. queue_purge_replay_to_replay_results_match_by_id
Parameter Description
rrhid1 Integer - required
rrhid2 Integer - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
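The following invocation is an illustrative sketch; the replay IDs 2 and 3 are sample values and must match replay IDs on your system:
grdapi queue_purge_replay_to_replay_results_match_by_id rrhid1=2 rrhid2=3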

queue_replay_agg_match_by_id

Used to compare two workloads, typically for databases that are of the same type,
and populates the Workload Aggregate Match report.
Table 46. queue_replay_agg_match_by_id
Parameter Description
configid Integer - required; is capture ID (ID-From in lists) unless
isCompareToCapture is 0 then it is a replay ID
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi queue_replay_agg_match_by_id rrhid=2 configid=3 isCompareToCapture=1

queue_replay_agg_match_by_name
Purge the data generated by the queue_replay_agg_match_by_name API.
Table 47. queue_purge_replay_agg_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi queue_replay_agg_match_by_name capture_name= replay_header_name= runtime=

queue_replay_match_by_id

Used to compare SQL statements; populating the Workload Match report.


Table 48. queue_replay_match_by_id
Parameter Description
configid Integer - required; is capture ID (ID-From in lists) unless
isCompareToCapture is 0 then it is a replay ID
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_match_by_id rrhid=2 configid=3 isCompareToCapture=1 includeGroup="Replay - Inc

queue_replay_match_by_name

Purge the data generated by the queue_replay_match_by_name API.


Table 49. queue_replay_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_match_by_name capture_name= replay_header_name= runtime=



queue_replay_object_agg_match_by_id
Used to compare two workloads, typically for databases that are of differing types,
and populates the Workload Aggregate Match report.
Table 50. queue_replay_object_agg_match_by_id
Parameter Description
configid Integer - required; is capture ID (ID-From in lists) unless
isCompareToCapture is 0 then it is a replay ID
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_object_agg_match_by_id rrhid=2 configid=3 isCompareToCapture=1

queue_replay_object_agg_match_by_name

Purge the data generated by the queue_replay_object_agg_match_by_name API.


Table 51. queue_replay_object_agg_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_object_agg_match_by_name capture_name= replay_header_name= runtime=

queue_replay_resultsMatch_by_id
Table 52. queue_replay_resultsMatch_by_id
Parameter Description
configid Integer - required; is capture ID (ID-From in lists) unless
isCompareToCapture is 0 then it is a replay ID
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
isCompareToCapture
Integer - required; denotes the type of comparison performed between
rrhid and configid where: 0 - replay to replay, 1 - replay to capture, 2 -
capture to capture
rrhid Integer - required; is the replay ID (ID-To in lists)

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
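The following invocation is an illustrative sketch that follows the pattern of queue_replay_match_by_id; the IDs are sample values, and the include and exclude group values must be supplied from the constant values lists on your system:
grdapi queue_replay_resultsMatch_by_id rrhid=2 configid=3 isCompareToCapture=1 includeGroup= excludeGroup=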

queue_replay_results_match_by_name

Purge the data generated by the queue_replay_results_match_by_name API.


Table 53. queue_replay_results_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_results_match_by_name capture_name= replay_header_name= runtime=

queue_replay_to_replay_match_by_id
Table 54. queue_replay_to_replay_match_by_id
Parameter Description
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
rrhid1 Integer - required; is a replay ID
rrhid2 Integer - required; is a replay ID
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
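The following invocation is an illustrative sketch; the replay IDs are sample values, and the include and exclude group values must be supplied from the constant values lists on your system:
grdapi queue_replay_to_replay_match_by_id rrhid1=2 rrhid2=3 includeGroup= excludeGroup=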

queue_replay_to_replay_match_by_name

Purge the data generated by the queue_replay_to_replay_match_by_name API.


Table 55. queue_replay_results_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list

runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_to_replay_match_by_name capture_name= replay_header_name= runtime=

queue_replay_to_replay_results_match_by_id
Table 56. queue_replay_to_replay_results_match_by_id
Parameter Description
excludeGroup String - required - Constant values list
includeGroup String - required - Constant values list
rrhid1 Integer - required; is a replay ID
rrhid2 Integer - required; is a replay ID
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
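The following invocation is an illustrative sketch; the replay IDs are sample values, and the include and exclude group values must be supplied from the constant values lists on your system:
grdapi queue_replay_to_replay_results_match_by_id rrhid1=2 rrhid2=3 includeGroup= excludeGroup=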

queue_replay_to_replay_results_match_by_name
Purge the data generated by the queue_replay_to_replay_results_match_by_name
API.
Table 57. queue_replay_to_replay_results_match_by_name
Parameter Description
capture_name String- required. Constant values list
replay_header_name
String- required. Constant values list
runtime String- required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi queue_replay_to_replay_results_match_by_name capture_name= replay_header_name= runtime=

Not supported

clone_replay

clone_replay_schedule_setup



create_replay

create_replay_schedule_setup

delete_replay

delete_replay_schedule_setup

list_replay

list_replay_schedule_setup

update_replay

update_replay_schedule_setup

GuardAPI Catalog Entry Functions


Use these GuardAPI commands to create, list, delete, and update Catalog Entry
Functions.

create_entry_location
Adds a new archive entry to the internal catalog location table.
Table 58. create_entry_location
Parameter Description
entryType Required string. Must be one of the following:
v CollectorDataArchive
v AggDataArchive
v AggResultArchive
processDesc String. Used and required only when the entryType is
AggResultArchive.
fileName Required string. Identifies the file.
hostName Required string. Identifies the host.
path Required string. For FTP: specify the directory relative to the FTP
account home directory; for SCP: Specify the directory as an absolute
path.
user Required string. User account to access the host.
password Required string. Password for user.
retention Optional integer. The number of days this entry is to be kept in the
catalog (the default is 365).
storageSystem Required string. Must be one of the following: EMC CENTERA, FTP,
SCP, TSM.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi create_entry_location entryType=CollectorDataArchive fileName=733392-a1.corp.com-w2007122



list_entry_location
Lists one archive location if a fileName is specified, or lists multiple archive
locations when the fileName is omitted.
Table 59. list_entry_location
Parameter Description
fileName Optional string. Identifies the single file location to be listed. If
omitted, all file locations on the specified hostName and path will be
listed.
hostName Required string. Identifies the host.
path Required string. For FTP: specify the directory relative to the FTP
account home directory; for SCP: Specify the directory as an absolute
path.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi list_entry_location path=/mnt/nfs/ogazit/archive_results/ hostName=192.168.1.33

delete_entry_location

Removes one archive location if a fileName is specified, or removes multiple
archive locations when the fileName is omitted.
Table 60. delete_entry_location
Parameter Description
fileName Optional string. Identifies the single file location to be removed. If
omitted, all file locations on the specified hostName and path will be
removed.
hostName Required string. Identifies the host.
path Required string. For FTP: specify the directory relative to the FTP
account home directory; for SCP: Specify the directory as an absolute
path.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi delete_entry_location path=/var/dump/mojgan hostName=192.168.1.18

update_entry_location
Updates one archive locations if a fileName is specified, or updates multiple
archive locations when the fileName is omitted.



Table 61. update_entry_location
Parameter Description
fileName Optional string. Identifies the single file location to be updated. If
omitted, all file locations on the specified hostName and path will be
updated.
hostName Required string. Identifies the host.
path Required string. For FTP: specify the directory relative to the FTP
account home directory; for SCP: Specify the directory as an absolute
path.
newHostName Optional string. When used, specifies the new host name.
newPath Optional string. When used, specifies the new path.
user Required string. User account to access the host.
password Required string. Password for user.
retention Optional integer. The number of days this entry is to be kept in the
catalog (the default is 365).
storageSystem Optional string. Use one of the following: EMC CENTERA, FTP,
SCP, TSM.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi update_entry_location fileName=a1.corp.com-1_4_2008-01-10_10:27:24.res.70.tar.gz.enc path=

GuardAPI Classification Functions


Use the following GuardAPI commands for Classification policy configuration, for
test automation, and for scripting of prerequisite data preparation.

For instructions on how to use GuardAPI commands, see the GuardAPI Reference
Overview help topic.

create_classifier_action
Table 62. create_classifier_action
Parameter Description
actionName Required. String
actualMemberContent
Required. String

actionType Required. String

For reference, here is the list of action types with the associated required
parameters. The action type that the user selects determines which
parameters are required:

add_to_group_objects

actionName - String - required

actualMemberContent - String - required

objectGroup - String - required

policyName - String - required

ruleName - String - required

add_to_group_object_fields

actionName - String - required

objectFieldGroup - String - required

policyName - String - required

ruleName - String - required

create_access_rule

accessPolicy - String - required

accessRuleAction - String - required

actionName - String - required

ruleName - String - required

create_privacy_set

actionName - String - required

policyName - String - required

privacySet - String - required

ruleName - String - required

log_policy_violation

actionName - String - required

policyName - String - required

ruleName - String - required

action_send_alert

actionName - String - required

policyName - String - required

receiver - String - required

ruleName - String - required


description String
objectGroup Required. String
policyName Required. String
ruleName Required. String
replaceGroupContent
Boolean
objectFieldGroup Required. String
accessPolicy Required. String
accessRuleAction Required. String
commandsGroup String
includeField Boolean
includeServerIP Boolean
receiver String
privacySet Required. String
severity String
notificationType String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples
grdapi create_classifier_action actionType=add_to_group_objects policyName=-policy1 ruleName=-rule
grdapi create_classifier_action actionType=add_to_group_object_fields policyName=-policy1 ruleName
grdapi create_classifier_action actionType=create_access_rule policyName=-policy1 ruleName=-rule1
grdapi create_classifier_action actionType=create_privacy_set policyName=-policy1 ruleName=-rule1
grdapi create_classifier_action actionType=log_policy_violation policyName=-policy1 ruleName=-rule
grdapi create_classifier_action actionType=send_alert policyName=-policy1 ruleName=-rule1 actionNa
GuardAPI command values
See the table for a list of GuardAPI command values for the command
grdapi create_classifier_action that are used in the GUI. Use these values
when creating groups.
Table 63. GrdAPI create_classifier_action
GUI values GrdAPI values
%/%.Name %/NAME
%/Full %/FULL
Change/%.Name CHANGE/NAME
Change/Full CHANGE/FULL
Fully Qualified Name (Schema.Object) FULLNAME
Like %Full %FULLLIKE
Like %Full% %FULLLIKE%

Like %Name %NAMELIKE
Like %Name% %NAMELIKE%
Like Full% FULLLIKE%
Like Name% NAMELIKE%
Object Name Only NAMEONLY
Read/%.Name READ/NAME
Read/Full READ/FULL

Example
grdapi create_classifier_action actionName=classgrpobjectseach1 actionType=ADD_TO_GROUP_OBJECTS polic

Examples of group object types


grdapi create_group appid=Classifier type=OBJECTS desc="Classifier Group of Each Objects" owner=admin
grdapi create_datasource type="Oracle (DataDirect)" user=scott password=tiger host="swan.guard.swg.us
grdapi create_classifier_policy policyName="A Group Object Each Type Policy" category="Object Each Pr
grdapi create_classifier_rule policyName="A Group Object Each Type Policy" category="Object Each Proc
grdapi create_classifier_action actionName=classgrpobjectseach1 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach2 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach3 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach4 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach5 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach6 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach7 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach8 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach9 actionType=ADD_TO_GROUP_OBJECTS polic
grdapi create_classifier_action actionName=classgrpobjectseach10 actionType=ADD_TO_GROUP_OBJECTS poli
grdapi create_classifier_action actionName=classgrpobjectseach11 actionType=ADD_TO_GROUP_OBJECTS poli
grdapi create_classifier_action actionName=classgrpobjectseach12 actionType=ADD_TO_GROUP_OBJECTS poli
grdapi create_classifier_action actionName=classgrpobjectseach13 actionType=ADD_TO_GROUP_OBJECTS poli
grdapi create_classifier_action actionName=classgrpobjectseach14 actionType=ADD_TO_GROUP_OBJECTS poli
grdapi create_classifier_process policyName="A Group Object Each Type Policy" processName="A Group Ob
grdapi create_classifier_action actionName=classgrpobjectseach10 actionType=ADD_TO_GROUP_OBJECTS poli

create_classifier_policy
Table 64. create_classifier_policy
Parameter Description
policyName Required. String
category Required. String
classification Required. String
description String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi create_classifier_policy policyName=-policy1 classification=class1 description=desc1 catego

create_classifier_process

Note: Create a classification policy and datasource before calling this GuardAPI.
Table 65. create_classifier_process
Parameter Description
comprehensive Boolean
datasourceNames Required. String
policyName Required. String
processName Required. String
sampleSize Integer
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi create_classifier_process datasourceNames=sample_cls_0001 policyName=APITEST_Cls_Ply_10001

create_classifier_rule
Table 66. create_classifier_rule
Parameter Description
policyName Required. String
ruleName Required. String
ruleType Required. String

For reference, here is the list of valid rule types with their associated
required parameters. The rule type that the user selects determines which
parameters are required.

catalog_search_add

policyName - String - required

ruleName - String - required

search_by_permissions_add

policyName - String - required

ruleName - String - required

grantTypes - String - required

search_for_data_add

policyName - String - required

ruleName - String - required

search_for_unstructured_data_add

policyName - String - required

ruleName - String - required


category String
classification String
continueOnMatch Boolean
description String
columnNameLike String
fireOnlyWithMarker String
tableNameLike String
tableTypeSynonym Boolean
tableTypeSystemTable Boolean
tableTypeTable Boolean
tableTypeView Boolean
grantTypes String
role String
roleGroup String
user String
userGroup String
withAdminOption Boolean
compareToValuesInGroup String
compareToValuesInSQL String
dataTypes String
evaluationName String
hitPercentage Integer
maxLength Integer
minLength Integer
searchExpression String
searchLike String
grantTypes String
showUniqueValues True or False
uniqueValueMask Value
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-c
grdapi create_datasource type="Oracle (DataDirect)" user=scott password=tiger host="swan.guard.swg
grdapi create_group appid=Classifier type=OBJECTS desc="AA Classifier ALL Values" owner=admin cate
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=ACCOUNTING
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=ACCOUNTTING
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=ACCOUNTTING
grdapi create_member_to_group_by_desc desc="AA Classifier ALL Values" member=AG
grdapi create_classifier_policy policyName="Search ALL DATA SEARCH smoke values" category="ALL" cl
grdapi create_classifier_rule policyName="Search ALL DATA SEARCH smoke values" category="ALL" clas
grdapi create_classifier_process policyName="Search ALL DATA SEARCH smoke values" processName="Sea

delete_classifier_action
Table 67. delete_classifier_action
Parameter Description
actionName Required. String
policyName Required. String

Example
grdapi delete_classifier_action policyName=-policy1 ruleName=-rule1 actionName=-action1

delete_classifier_policy
Table 68. delete_classifier_policy
Parameter Description
policyName Required. String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_classifier_policy policyName=-policy1

delete_classifier_process
Table 69. delete_classifier_process
Parameter Description
processName String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_classifier_process processName=APITEST_Clps_10001_1

delete_classifier_rule
Table 70. delete_classifier_rule
Parameter Description
policyName Required. String
ruleName Required. String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_classifier_rule policyName=-policy1 ruleName=-rule1

execute_cls_process
Execute (submit) a classification process

Runs a classification process. It is the equivalent of executing Run Once Now from the
Classification Process Builder. It submits the job, which places the process on the
Guardium Job Queue, from which the appliance runs a single job at a time.
Administrators can view the job status by selecting Guardium Monitor >
Guardium Job Queue.

Note: Create a classification process before calling this API.

Table 71. execute_cls_process
Parameter Description
processName Name of the classification process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_cls_process processName="classPolicy1"

The following are the classifier functions and the parameters for each. Where a
parameter has a set list of valid entries, that list is supplied.

list_classifier_policies
Table 72. list_classifier_policies
Parameter Description
policyName Optional. String
ruleName Optional. String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_classifier_policy policyName=-policy1 ruleName=-rule1 actionName=-action1 recursive=1

Note: Executing this function with no arguments will list all policies. Passing an
argument for the policy will list all rules and actions for the policy. Passing a
policy and rule will list all of the actions for the rule.

list_classifier_process
Table 73. list_classifier_process
Parameter Description
processName String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example:

grdapi list_classifier_process processName=APITEST_CLPS_30001

update_classifier_action
Table 74. update_classifier_action
Parameter Description
actionName Required. String
actualMemberContent Required. String
description String
objectGroup Required. String
policyName Required. String
ruleName Required. String
replaceGroupContent Boolean
objectFieldGroup Required. String
accessPolicy Required. String
accessRuleAction Required. String
commandsGroup String
includeField Boolean
includeServerIP Boolean
receiver String
privacySet Required. String
severity String
notificationType String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi update_classifier_action actionType=add_to_group_objects policyName=-policy1 ruleName=-rule1 a
grdapi update_classifier_action actionType=add_to_group_object_fields policyName=-policy1 ruleName=-r
grdapi update_classifier_action actionType=update_access_rule policyName=-policy1 ruleName=-rule1 act
grdapi update_classifier_action actionType=update_privacy_set policyName=-policy1 ruleName=-rule1 act
grdapi update_classifier_action actionType=log_policy_violation policyName=-policy1 ruleName=-rule1 a
grdapi update_classifier_action actionType=send_alert policyName=-policy1 ruleName=-rule1 actionName=

update_classifier_policy
Table 75. update_classifier_policy
Parameter Description
policyName Required. String
category Required. String
classification Required. String
description String

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi update_classifier_policy policyName=-policy1 classification=class1 description=desc1 catego

update_classifier_process
Table 76. update_classifier_process
Parameter Description
comprehensive Boolean
datasourceNames Required. String
newName String
policyName Required. String
processName Required. String
sampleSize Integer
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example:

grdapi update_classifier_process
datasourceNames=sample_cls_0001,sample_cls_0002
policyName=APITEST_Cls_Ply_10001_1 processName=APITEST_Clps_10001_1
comprehensive=0 sampleSize=3000

update_classifier_rule
Table 77. update_classifier_rule
Parameter Description
policyName Required. String
ruleName Required. String
ruleType Required. String

Valid values: catalog_search, search_by_permissions, search_for_data,
search_for_unstructured_data
category String
classification String
continueOnMatch Boolean
description String
columnNameLike String
fireOnlyWithMarker String
tableNameLike String
tableTypeSynonym Boolean
tableTypeSystemTable Boolean
tableTypeTable Boolean
tableTypeView Boolean
grantTypes String
role String
roleGroup String
user String
userGroup String
withAdminOption Boolean
compareToValuesInGroup String
compareToValuesInSQL String
dataTypes String
evaluationName String
hitPercentage Integer
maxLength Integer
minLength Integer
searchExpression String
searchLike String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas
grdapi update_classifier_rule policyName=-policy1 ruleName=-rule1 category=-cat1 classification=-clas

GuardAPI Database User Functions


Use these GuardAPI commands to maintain database user mapping, run a
non-credential scan, and set the debug level.

non_credential_scan
This API submits jobs that scan databases within the serversGroup for enabled
default users in the usersGroup. Submitted jobs run under the Classifier Listener
and can be tracked by using the Classifier/Assessment Job Queue report. A
submitted job can be canceled from the Classifier/Assessment Job Queue report by
double-clicking the job and choosing Stop Job.

Note: If a server within the serversGroup cannot be reached, an exception of type
Scheduled Job Exception is added and the server is not scanned.

Table 78. non_credential_scan
Parameter Description
databaseType Required. Must be one of the following: ORACLE, DB2®, SYBASE,
MS SQL SERVER, MYSQL, TERADATA, POSTGRESQL, NETEZZA,
IBM ISERIES, INFORMIX
serversGroup Required. Must be a valid group of servers (Server IP/Instance
Name/Port) as defined with Group Builder.
usersGroup Required. Must be a valid group of users (DB User/DB Password)
as defined with Group Builder. Default groups exist within Group
Builder.

Example
grdapi non_credential_scan databaseType=ORACLE serversGroup=oracleServers usersGroup="ORACLE Defau

Maintain Database Mapping

These APIs help maintain the mapping between database users (Invokers of SQL
that caused a violation) and email addresses for real time alerts. See Alerting
Actions for more information on Invokers.
v create_db_user_mapping
v delete_db_user_mapping
v list_db_user_mapping

create_db_user_mapping

Use of wildcards:
v In the 'delete' and the 'list' commands, all 4 parameters accept wildcards ('%')
v 'create' command:
– serverIp - wildcard is valid, '%' can be placed instead of the number in the
ip_address format
– 192.168.2.% - valid
– 192.%.2.% - valid
– 192.% - invalid
v serviceName - wildcards (%) are allowed
v dbUserName - no wildcards, '%' is valid, but will be considered as the symbol
'%'
v emailAddress - no wildcards, '%' is valid, but will be considered as the symbol
'%'
Table 79. create_db_user_mapping
Parameter Description
serverIp Required (IP Address). Needs to be in the format of an IP address
A.B.C.D
serviceName Required (any string). Identifies the service name.
dbUserName Required (any string). Identifies the database user name.
emailAddress Required (any string and requires an '@' sign). Identifies the email
address.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi create_db_user_mapping serverIp=192.168.1.104 serviceName=ora1 dbUserName=scott emailAddress=s

delete_db_user_mapping

Use of wildcards:
v In the 'delete' and the 'list' commands, all 4 parameters accept wildcards ('%')
v 'create' command:
– serverIp - wildcard is valid, '%' can be placed instead of the number in the
ip_address format
– 192.168.2.% - valid
– 192.%.2.% - valid
– 192.% - invalid
v serviceName - wildcards (%) are allowed
v dbUserName - no wildcards, '%' is valid, but will be considered as the symbol
'%'
v emailAddress - no wildcards, '%' is valid, but will be considered as the symbol
'%'
Table 80. delete_db_user_mapping
Parameter Description
serverIp Required (IP Address). Needs to be in the format of an IP address
A.B.C.D
serviceName Required (any string). Identifies the service name.
dbUserName Required (any string). Identifies the database user name.
emailAddress Required (any string and requires an '@' sign). Identifies the email
address.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi delete_db_user_mapping serverIp=192.168.1.104 serviceName=ora1 dbUserName=scott emailAddress=s

list_db_user_mapping
Use of wildcards:
v In the 'delete' and the 'list' commands, all 4 parameters accept wildcards ('%')
v 'create' command:
– serverIp - wildcard is valid, '%' can be placed instead of the number in the
ip_address format
– 192.168.2.% - valid
– 192.%.2.% - valid
– 192.% - invalid
v serviceName - wildcards (%) are allowed
v dbUserName - no wildcards, '%' is valid, but will be considered as the symbol
'%'
v emailAddress - no wildcards, '%' is valid, but will be considered as the symbol
'%'
Table 81. list_db_user_mapping
Parameter Description
serverIp Required (IP Address). Needs to be in the format of an IP address
A.B.C.D
serviceName Required (any string). Identifies the service name.
dbUserName Required (any string). Identifies the database user name.
emailAddress Required (any string and requires an '@' sign). Identifies the email
address.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi list_db_user_mapping serverIp=192.168.1.104 serviceName=ora1 dbUserName=scott emailAddres

get_debug_level

Use this GuardAPI command to view the debug level for IMS™ output.
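
Example (a minimal sketch; this command is assumed to take no arguments):
grdapi get_debug_level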

set_debug_level
Use this GuardAPI command to control IMS output.

If the IMS debug_level = 1, IMS debug fields like mvs_is_plex, mvs_ipaddr,
mvs_dlta_sign, and mvs_dlta_val are written to the internal database tables
GDM_CONSTRUCT_TEXT.FULL_SQL or GDM_EXCEPTION.FULL_SQL. If the
IMS debug level is 0, the IMS debug fields are not distributed.

GuardAPI Datasource Functions


Use these GuardAPI commands to create, list, delete, and update datasources.

create_datasource

Use this command to define a new datasource.

Note: In a Central Manager environment, datasources are defined on the Central
Manager. GuardAPI will allow you to create datasources on a managed unit, but
those datasources cannot be seen or used.

Table 82. create_datasource
Parameter Description
application Required. Identifies the application for which the datasource is being
defined. It must be one of the following:

ChangeAuditSystem

Access_policy

MonitorValues

DatabaseAnalyzer

AuditDatabase

CustomDomain

Classifier

AuditTask

SecurityAssessment

Replay

Stap_Verification
compatibilityMode Compatibility Mode: Choices are Default or MSSQL 2000. The
processor is told what compatibility mode to use when monitoring a
table.
conProperty Optional. Use only if additional connection properties must be
included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.

For a Sybase database with a default character set of Roman8, enter
the following property: charSet=utf8
customURL Optional. Connection string to the datasource; otherwise connection
is made using host, port, instance, properties, etc. of the previously
entered fields. As an example this is useful for creating Oracle
Internet Directory (OID) connections.
dbInstanceAccount Optional. Database Account Login Name that will be used by CAS
dbInstanceDirectory Optional. Directory where database software was installed that will
be used by CAS
dbName Optional. For a DB2 or Oracle datasource, enter the schema name.
For others, enter the database name.
description Optional. Longer description of the datasource.
host Required. Can be the host name or the IP address.
name Required. Provides a unique name for the datasource on the system.
password Optional. Password for user.
port Optional (integer). Port number.
serviceName Required for Oracle, Informix®, DB2, and IBM ISeries. For a DB2
datasource enter the database name, for others enter the service
name.
severity Optional. Severity Classification (or impact level) for the datasource.
shared Optional (boolean). Set to true to share with other applications. To
share the datasource with other users, you will have to assign roles
from the GUI.
type Required. Identifies the datasource type; it must be one of the
following:

DB2

DB2 for i

DB2 for z/OS

Informix

MS SQL Server

MS SQL Server (DataDirect)

MySQL

NA

Netezza

Oracle (DataDirect)

Oracle (Service Name)

Oracle (SID)

PostgreSQL

Sybase

Sybase IQ

Teradata

The following can be used when the application is CustomDomain
or Classifier:

TEXT

TEXT:FTP

TEXT:HTTP

TEXT:HTTPS

TEXT:SAMBA
user Optional. User for the datasource. If used, password must also be
used.

Example
grdapi create_datasource type=DB2 name=chickenDB2 password=guardium user=db2inst1 dbName=dn0chick

create_test_exception
Use this command to add records to the Tests Exceptions. This affects the behavior
of vulnerability assessments: if a test on a specific datasource fails, the assessment
checks the last record of the test exceptions table for that test/datasource. If the
execution date falls within the from and to dates of that record, the test is set to
PASS, the recommendation is set to the explanation (from the exceptions record),
and the result text is set to:
Test passed, based on exception approved by: .... effective from date to date.

Note: The API only adds records. To remove an exception, create a new record
with new dates according to the needs.
Table 83. create_test_exception
Parameter Description
datasourceName Required. Valid name of a defined datasource.
testDescription Required. A valid test name within Security Assessments.
fromDate Required. Beginning date for when the exception is valid.
toDate Required. Ending date for when the exception is valid.
explanation Required. A recommendation as to why the test will pass.

Example
grdapi create_test_exception datasourceName=ORAPROD5 testDescription="CVE-2009-0997" fromDate="2012-0

list_datasource_by_name

Displays a datasource definition identified by a name.


Table 84. list_datasource_by_name
Parameter Description
name Required. The datasource name.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
CLI> grdapi list_datasource_by_name name=chickenDB2
ID=20000
Datasource DatasourceId=20000
Datasource DatasourceTypeId=2
Datasource Name=chickenDB2
Datasource Description=null
Datasource Host=chicken.corp.com
Datasource Port=50000
Datasource ServiceName=
Datasource UserName=db2inst1
Datasource Password=[B@1415de6
Datasource PasswordStored=true
Datasource DbName=dn0chick
Datasource LastConnect=null
Datasource Timestamp=2008-04-18 15:40:58.0
Datasource ApplicationId=2
Datasource Shared=true
Datasource ConProperty=null
Datasource type =DB2
Application Type = Access_policy
ok

list_datasource_by_id

Displays a datasource definition identified by an ID key.


Table 85. list_datasource_by_id
Parameter Description
id Required (integer). Enter the ID number of the datasource to be
listed.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi list_datasource_by_id id=2

delete_datasource_by_name
Deletes the specified datasource definition, unless that datasource is being used by
an application. This function removes the datasource, regardless of who created it.
Table 86. delete_datasource_by_name
Parameter Description
name Required. The datasource name.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi delete_datasource_by_name name=swanSybase

delete_datasource_by_id
Deletes the specified datasource definition, unless that datasource is being used by
an application. This function removes the datasource, regardless of who created it.
Table 87. delete_datasource_by_id
Parameter Description
id Required (integer). Enter the ID number of the datasource to be
listed.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi delete_datasource_by_id id=2

update_datasource_by_name
Updates a datasource definition.
Table 88. update_datasource_by_name
Parameter Description
name Required. Identifies the datasource to be updated.
newName Optional. Provides a new name, which must be unique for a
datasource on the system.
description Optional. Longer description of the datasource.
host Optional. Can be the host name or the IP address.
port Optional (integer). Port number.
serviceName Optional. For an Oracle datasource, enter the service name.
user Optional. User for the datasource. If used, password must also be
used.
password Optional. Password for user. If used, user must also be used.
dbName Optional. For DB2 datasources, enter the database name.
conProperty Optional. Use only if additional connection properties must be
included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.

For a Sybase database with a default character set of Roman8, enter
the following property: CHARSET=utf8
dbInstanceAccount Optional. Database Account Login Name that will be used by CAS
dbInstanceDirectory Optional. Directory where database software was installed that will
be used by CAS
shared Optional (boolean). Set to true to share with other applications. To
share the datasource with other users, you will have to assign roles
from the GUI.
customURL Optional. Connection string to the datasource; otherwise connection
is made using host, port, instance, properties, etc. of the previously
entered fields. As an example this is useful for creating Oracle
Internet Directory (OID) connections.
severity Optional. Severity Classification (or impact level) for the datasource.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi update_datasource_by_name name=chickenDB2 newName="chicken DB2" user=" " password=" "

update_datasource_by_id
Updates a datasource definition.
Table 89. update_datasource_by_id
Parameter Description
id Required (integer). Identifies the datasource.
newName Optional. Provides a new name, which must be unique for a
datasource on the system.
description Optional. Longer description of the datasource.
host Optional. Can be the host name or the IP address.
port Optional (integer). Port number.
serviceName Optional. For an Oracle datasource, enter the service name.
user Optional. User for the datasource. If used, password must also be
used.
password Optional. Password for user. If used, user must also be used.
dbName Optional. For DB2 datasources, enter the database name.
conProperty Optional. Use only if additional connection properties must be
included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each
property and value pair is separated from the next by a comma.

For a Sybase database with a default character set of Roman8, enter
the following property: CHARSET=utf8
dbInstanceAccount Optional. Database Account Login Name that will be used by CAS
dbInstanceDirectory Optional. Directory where database software was installed that will
be used by CAS
shared Optional (boolean). Set to true to share with other applications. To
share the datasource with other users, you will have to assign roles
from the GUI.
customURL Optional. Connection string to the datasource; otherwise connection
is made using host, port, instance, properties, etc. of the previously
entered fields. As an example this is useful for creating Oracle
Internet Directory (OID) connections.
severity Optional. Severity Classification (or impact level) for the datasource.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi update_datasource_by_id id=20000 user=" " password=" " newName="chickenDB2hooo"

list_db_drivers
Lists only the names of the database drivers. Oracle (DataDirect) and MS SQL SERVER
(DataDirect) are now supported as datasource types.
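
Example (a minimal sketch; this command is assumed to take no arguments):
grdapi list_db_drivers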

list_db_drivers_by_details

Lists each database driver in more detail (name, class, driver class, URL, and
datasource type ID).
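
Example (a minimal sketch; this command is assumed to take no arguments):
grdapi list_db_drivers_by_details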

GuardAPI Datasource Reference Functions


Use these GuardAPI commands to create, list, and delete datasource references.
create_datasourceRef_by_id
For a specific object of a specific application type (for example, a specific
Classification process), creates a reference to a datasource.
Table 90. create_datasourceRef_by_id
Parameter Description
appId Required (integer). Identifies the application. Must be from this list:
v 8 = SecurityAssessment
v 47 = CustomTables
v 51 = Classifier
datasourceId Required (integer). Identifies the datasource (from the datasource
definition).
objId Required (integer). Identifies an instance of the appID type specified.
For example, if appId=51, this would be the ID of a classification
process.

Example
grdapi create_datasourceRef_by_id appId=51 datasourceId=20000 objId=2

create_datasourceRef_by_name

For a specific object of a specific application type (for example, a specific
Classification process), creates a reference to a datasource.
Table 91. create_datasourceRef_by_name
Parameter Description
application Required. Identifies the application. Must be from this list:

SecurityAssessment

CustomTables

Classifier
datasourceName Required. Identifies the datasource (from the datasource definition).
objName Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the name of a
specific classification process.

Example
grdapi create_datasourceRef_by_name application=Classifier datasourceName=swanSybase objName=”c

list_datasourceRef_by_id
For a specific object of a specific application type (for example, a specific
Classification process), lists all datasources referenced.

Table 92. list_datasourceRef_by_id
Parameter Description
appID Required (integer). Identifies the application. Must be from this list:

8 = SecurityAssessment

47 = CustomTables

51 = Classifier
objID Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the ID of a
specific classification process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi list_datasourceRef_by_id appId=13 objId=1

list_datasourceRef_by_name

For a specific object of a specific application type (for example, a specific
Classification process), lists all datasources referenced.
Table 93. list_datasourceRef_by_name
Parameter Description
application Required. Identifies the application. Must be from this list:

SecurityAssessment

CustomTables

Classifier
objName Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the name of a
specific classification process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi list_datasourceRef_by_name application=Classifier objName="class process1"

delete_datasourceRef_by_id

For a specific object of a specific application type (for example, a specific
Classification process), removes a datasource reference.

Table 94. delete_datasourceRef_by_id
Parameter Description
appId Required (integer). Identifies the application. Must be from this list:

8 = SecurityAssessment

47 = CustomTables

51 = Classifier
datasourceId Required (integer). Identifies the datasource (from the datasource
definition).
objId Required (integer). Identifies an instance of the appID type specified.
For example, if appId=51, this would be the ID of a classification
process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi delete_datasourceRef_by_id appId=51 datasourceId=2 objId=1

delete_datasourceRef_by_name

For a specific object of a specific application type (for example, a specific
Classification process), removes a datasource reference.
Table 95. delete_datasourceRef_by_name
Parameter Description
application Required. Identifies the application. Must be from this list:

SecurityAssessment

CustomTables

Classifier
datasourceName Required. Identifies the datasource (from the datasource definition).
objName Required. Identifies an instance of the application type specified. For
example, if the application is Classifier, this would be the name of a
specific classification process.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example
grdapi delete_datasourceRef_by_name application=Classifier datasourceName=swanSybase objName=”class p

GuardAPI Data User Security Functions


Use these GuardAPI commands to create, list, delete, and update the user data
security hierarchy and User-DB associations.

create_user_hierarchy
Add a relationship between a user and a parent in the user data security hierarchy.
Table 96. create_user_hierarchy
Parameter Description
userName Required. The name of the user
parentUserName Required. The name of the parent user.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi create_user_hierarchy userName=admin parentUserName=accessmgr

Note: An error will occur if the insert is cyclic (a parent reports to a child)

list_user_hierarchy_by_parent_user

List relationships in the user data security hierarchy


Table 97. list_user_hierarchy_by_parent_user
Parameter Description
userName Required. The name of the user
create If set to true, generates create statements for create_user_hierarchy API
calls; if set to false, no create statements are generated.

Use this parameter to get all the commands necessary to generate a
batch file. This batch file can be used to move each parent and child
pairing to another Guardium system.
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi list_user_hierarchy_by_parent_user userName=admin create=true

Note: Only lists the immediate parent-child relationship; it will not display
"grandchildren".

delete_user_hierarchy_by_entry_id

Deletes a relationship in the user data security hierarchy by entry id


Table 98. delete_user_hierarchy_by_entry_id
Parameter Description
id Required (integer). Identifies the entry
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi delete_user_hierarchy_by_entry_id id=1

Note: There is no failure condition if the entry doesn't exist

delete_user_hierarchy_by_user

Deletes a relationship in the user data security hierarchy by user


Table 99. delete_user_hierarchy_by_user
Parameter Description
userName Required. The name of the user
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi delete_user_hierarchy_by_user userName=admin

Note:

There is no failure condition if the user doesn't exist.

Multiple deletes occur if the user has multiple parents.

create_allowed_db
Create a User-DB association
Table 100. create_allowed_db
Parameter Description
userName Required. The name of the user
serverIp Required. The server IP
instanceName Required. The instance name
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi create_allowed_db userName=admin serverIp=192.168.1.1
instanceName=abcd

list_allowed_db_by_user
List User-DB associations by user
Table 101. list_allowed_db_by_user
Parameter Description
userName Required. The name of the user
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi list_allowed_db_by_user userName=admin

delete_allowed_db_by_entry_id

Delete a User-DB association by entry id


Table 102. delete_allowed_db_by_entry_id
Parameter Description
id Required (integer). Identifies the entry
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi delete_allowed_db_by_entry_id id=1

delete_allowed_db_by_user

Delete a User-DB association by user


Table 103. delete_allowed_db_by_user
Parameter Description
userName Required. The name of the user
serverIp The server IP.
instanceName The instance name.
Note: For "blank" instance names, enter instanceName=[blank] (not
instanceName=blank)
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi delete_allowed_db_by_user userName=scott

update_user_db

Fully apply all recent changes to the active User-DB association map
Table 104. update_user_db
Parameter Description
api_target_host In a central management configuration only, allows the user to
specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed
units. On a managed unit it is the host name or IP of the CM.

Example

grdapi update_user_db

Note: In a Central Management configuration, this command should be run on a
Central Manager.

GuardAPI Enterprise Load Balancing Functions


Use these GuardAPI commands to view and set load balancing parameters, view
the current load map, and manage S-TAP and managed unit group associations.

get_load_balancer_load_map

View the current load map.

grdapi get_load_balancer_load_map

get_load_balancer_params

View the current load balancer configuration parameters.

grdapi get_load_balancer_params

set_load_balancer_param

Set load balancer configuration parameters.

grdapi set_load_balancer_params [paramName=value]

See Enterprise load balancing configuration parameters for a list of available
parameters and allowed values.

Multiple parameter and value pairs can be specified on a single command line.
For example, grdapi set_load_balancer_params LOAD_BALANCER_ENABLED=1
STATIC_LOAD_COLLECTION_INTERVAL=360.

assign_load_balancer_groups

Assign a managed unit group to an application or S-TAP group.

grdapi assign_load_balancer_groups muGroupName=[value] appGroupName=[value]

Parameter Value Description
muGroupName Managed unit group name For example, muGroupName=mu_group_NA.
appGroupName Application or S-TAP group name For example, appGroupName=app_group_NA.
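
Example (a sketch using the group names shown in the table; substitute your own group names):
grdapi assign_load_balancer_groups muGroupName=mu_group_NA appGroupName=app_group_NA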

unassign_load_balancer_groups
Unassign a managed unit group from an application or S-TAP group.

grdapi unassign_load_balancer_groups muGroupName=[value] appGroupName=[value]

Parameter Value Description
muGroupName Managed unit group name For example, muGroupName=mu_group_NA.
appGroupName Application or S-TAP group name For example, appGroupName=app_group_NA.
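
Example (a sketch using the group names shown in the table; substitute your own group names):
grdapi unassign_load_balancer_groups muGroupName=mu_group_NA appGroupName=app_group_NA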

GuardAPI External Feed Functions


Use these GuardAPI functions to create mappings for external feeds.

create_ef_mapping

This function creates a mapping and populates tables based on the name of the
report specified by the reportName parameter. Each mapping has a name stored in
EF_MAP_TYPE_HDR.EF_TYPE_DESC, and that name will be identical to the value
of reportName. The target table name will also be based on the reportName
parameter, with underscores added between the words. For example, "My Report"
becomes MY_REPORT.
Table 105.
Parameter Description
reportName Name of the report to use for external feed
mapping. This parameter also determines
the name of the mapping and the target
table name.
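
Example (a minimal sketch; the report name is hypothetical):
grdapi create_ef_mapping reportName="My Report"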

modify_ef_mapping

Sometimes the names generated by create_ef_mapping are not suitable for a
particular database, and modify_ef_mapping can be used to adjust the names to fit
database requirements. Only mappings with ID >= 20000 may be modified, in order
to protect predefined Guardium mappings.
Table 106.
Parameter Description
reportName Name of the mapping to modify.
modifyObj Specifies the database object to modify,
either table or column. Existing values can be
retrieved using the list_ef_mapping
function.
oldName Specifies the old table name to remove.
newName Specifies the new table name to use.
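
Example (a minimal sketch; the mapping and table names are hypothetical):
grdapi modify_ef_mapping reportName="My Report" modifyObj=table oldName=MY_REPORT newName=MY_REPORT_V2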

delete_ef_mapping

This function allows you to delete existing mappings. Only mappings with ID >=
20000 may be deleted in order to protect predefined Guardium mappings.
Table 107.
Parameter Description
reportName Name of the mapping to delete.
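
Example (a minimal sketch; the mapping name is hypothetical):
grdapi delete_ef_mapping reportName="My Report"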

list_ef_mapping

If run without any parameters, this function returns a list of all customer-created
mappings. If run with the reportName parameter, this function returns details of the
specified mapping (such as the table and column names used by the external feed).
Table 108.
Parameter Description
reportName Optional. Name of the mapping for which to
return details.
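
Examples (minimal sketches; the mapping name is hypothetical):
grdapi list_ef_mapping
grdapi list_ef_mapping reportName="My Report"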

GuardAPI File Activity Monitor Functions


Use the following GuardAPI commands to enable and disable the file activity
monitor, configure the file quick search activity and entitlement extractions
schedule, and get information on the file activity monitor.

Use the GuardAPI command, grdapi create_policy, to create a FAM policy. After
the policy is created, use FAM-specific GuardAPI commands.

For example:

grdapi create_policy ruleSetDesc='TEST'

grdapi create_fam_rule policyName='TEST' ruleName=r-test-sles11
actionName="Log As Violation and Audit" serverHost="9.70.144.98:FAM"
filePath="/famtest/*"

For instructions on how to use GuardAPI commands, see the GuardAPI Reference
Overview help topic.

enable_fam_crawler
Sets the Guardium system to process crawler results and file activity data. The
results will be added automatically to quick search index files. Use the parameters
to schedule file quick search activity, entitlement extractions, and remote group
population.

Note: Quick Search must also be enabled with the command grdapi
enable_quick_search schedule_interval=1.
Table 109. enable_fam_crawler
Parameter Description
extraction_start Initial date/time from which data is extracted to file quick search. It is
limited to 2 days in the past. The default is current time. If the unit is
set to HOUR, then it is rounded to an hour. If it is set to DAY, then it is
rounded to a day.
schedule_start The default is current time.
activity_schedule_interval Required. Sets the activity schedule interval. The
recommended interval is 2 with the unit set to MINUTE.
activity_schedule_units Required. Sets the unit of the activity schedule interval. The
values are either MINUTE or HOUR. The recommended unit is MINUTE.
entitlement_schedule_interval Required. Sets the entitlement schedule interval. The
recommended interval is 1 with the unit set to DAY.
entitlement_schedule_units Required. Sets the unit of the entitlement schedule. The
possible values are MINUTE, HOUR, and DAY. The recommended unit
is DAY.

Example
grdapi enable_fam_crawler extraction_start=< > schedule_start=< >
activity_schedule_interval=2 activity_schedule_units=MINUTE
entitlement_schedule_interval=10 entitlement_schedule_units=MINUTE

disable_fam_crawler

Disables the file activity monitor. The file quick search activity and entitlement
extractions scheduler are removed. This function also disables remote group
population.

Example
grdapi disable_fam_crawler

get_fam_crawler_info
Shows the status of the file activity monitor. If it is enabled, the command shows
the settings for the entitlement extraction and file quick search activity schedule.
Sample output:
FAM Crawler (server side) is disabled.
FAM Crawler (server side) is enabled. Entitlement(1 DAY) Activity(2 MINUTE)

Example
grdapi get_fam_crawler_info

list_policy_fam_rule
Lists all the rules in a FAM policy.

Parameter Description
policyName Required. String. Policy name
ruleName Optional. String. If no ruleName is provided,
all policy rules with details will be shown. If
a ruleName is provided, details will be
listed for that rule.
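
Example (a minimal sketch, reusing the policy name from the earlier create_policy example):
grdapi list_policy_fam_rule policyName='TEST'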

create_fam_rule

Creates a new FAM rule.

Parameter Description
policyName Required. String. Policy name.
ruleName Required. String. Rule name.
filePath String. File path to be monitored. Either
filePath or filePathGroup must be specified.
notfilePath String. Must be yes or no. Yes means apply
this rule to all files except those in the
specified path.
filePathGroup String. Group of file paths. Either filePath or
filePathGroup must be specified.
includeSubDirectory String. Must be yes or no. Yes means include
files in all subdirectories.
removableMedia String. Must be yes or no.
osUser String. OS user name.
osUserGroup String. Group of OS users.
notOSUser String. Must be yes or no. Yes means use all
users except the specified osUser.
serverHost String. Host name.
serverHostGroup String. Group of hostnames.
command String. The command name to be included
in the rule.
commandGroup String. Group of commands.
notCommand String. Must be yes or no. Yes means use all
commands except the specified command.
actionName Required. String. The name of the FAM
action.
messageTemplate String. Message template name.
notificationType String. Notification type.
userLoginName String. User login name.
classDestination String. Name of custom class to be invoked.

policy_fam_rule_delete
Deletes a rule from a FAM policy.

Parameter Description
policyName Required. String. Policy name
ruleName Required. String. Name of the rule to be
deleted.
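
Example (a minimal sketch, reusing the policy and rule names from the earlier create_fam_rule example):
grdapi policy_fam_rule_delete policyName='TEST' ruleName=r-test-sles11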

GuardAPI GIM Functions


Use these GuardAPI commands to list, update, assign, remove, and cancel GIM modules and installations.

gim_list_registered_clients

Lists all the registered clients.


Table 110. gim_list_registered_clients
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_list_registered_clients

gim_list_client_params

Lists all the (module) parameters assigned to a specific client.


Table 111. gim_list_client_params
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_list_client_params clientIP=192.168.12.210

gim_update_client_params

Updates a single module parameter on a specific client.


Table 112. gim_update_client_params
Parameter Description
clientIP Required - Client IP Address
paramName Required - Parameter Name
paramValue Required - Parameter Value
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_update_client_params clientIP=192.168.1.100 paramName=STAP_TAP_IP paramValue=192.168.1.100

gim_list_client_modules

Lists all the modules assigned to a specific client and their state
Table 113. gim_list_client_modules
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_list_client_modules clientIP=192.168.2.210

gim_load_package

Loads all the modules within 'filename'.

Note: This command loads a file that resides on the local file system; therefore, the
procedure (cmd='fileserver') of loading the file to the CM/Guardium appliance
must precede this command.
Table 114. gim_load_package
Parameter Description
filename Required - Name of the package file to load.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_load_package filename=*.gim

Note: The wildcard “*” can be used within filename.

gim_assign_bundle_or_module_to_client_by_version
Assigns a bundle/module to a client.
Table 115. gim_assign_bundle_or_module_to_client_by_version
Parameter Description
clientIP Required - Client IP Address
module Required - Module
moduleVersion Required - Module Version
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_assign_bundle_or_module_to_client_by_version clientIP=192.168.1.100 module=BUNDLE-STAP

gim_schedule_install

Schedules for installation all the modules/bundles that were assigned to a client
and haven't been installed yet (that is, PENDING). If the parameter module is
specified, only the requested module will be scheduled.
Table 116. gim_schedule_install
Parameter Description
clientIP Required - Client IP Address
module Optional - Module. If module is not specified in the command, all the
modules for the specified clientIP will be scheduled for install.
date Required - Date; Format: 'now' or 'yyyy-MM-dd HH:mm'
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_schedule_install clientIP=192.168.1.100 module=BUNDLE-STAP date="2008-07-02 14:50"
grdapi gim_schedule_install clientIP=192.168.1.100 date="2008-07-02 14:50"

Note: A date in the past may be used to run the installation immediately.

gim_list_client_status
Displays the status of the latest operation executed for a specific client.
Table 117. gim_list_client_status
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_list_client_status clientIP=192.168.1.100

gim_uninstall_module
Uninstalls a module/bundle on a specific client.
Table 118. gim_uninstall_module
Parameter Description
clientIP Required - Client IP Address
module Required - Module.
date Required - Date; Format: 'now' or 'yyyy-MM-dd HH:mm'
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_uninstall_module clientIP=192.168.1.100 module=BUNDLE-STAP

gim_cancel_install

Cancels installation of a bundle/module on a specific client. Canceling installation
is possible only if a module/bundle is not already in the process of being installed
by a client (STATE=IP or IP-PR).
Table 119. gim_cancel_install
Parameter Description
clientIP Required - Client IP Address
module Required- Module.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_cancel_install clientIP=192.168.1.100 module=BUNDLE-STAP

gim_list_bundles
Lists all the available bundles. A bundle is a group of modules that can be
installed on a client.
Table 120. gim_list_bundles
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example



grdapi gim_list_bundles

gim_list_mandatory_params
Lists the mandatory parameters for a single module.
Table 121. gim_list_mandatory_params
Parameter Description
module The name of the GIM module for which to display the mandatory
parameters
version The version of the GIM module for which to display the mandatory
parameters

Example
grdapi gim_list_mandatory_params module=name version=number

gim_assign_latest_bundle_or_module_to_client

Assigns the latest (i.e. the highest version) available bundle or module for a
specific client.
Table 122. gim_assign_latest_bundle_or_module_to_client
Parameter Description
clientIP Required - Client IP Address
module Required- Module.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_assign_latest_bundle_or_module_to_client clientIP=192.168.1.100 module=BUNDLE_STAP

gim_schedule_uninstall

Schedules uninstallation of all the modules/bundles that were assigned to a client
and have not been uninstalled yet (that is, are in the PENDING state). If the 'module'
parameter is specified, only the requested module is scheduled.
Table 123. gim_schedule_uninstall
Parameter Description
clientIP Required - Client IP Address
module Optional - Module. If module is not specified in the command, all the
modules for the specified clientIP will be scheduled for uninstall.
date Required - Date; Format: 'now' or 'yyyy-MM-dd HH:mm'
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example



grdapi gim_schedule_uninstall clientIP=192.168.1.100 module=BUNDLE-STAP date="2008-07-02 14:50"

gim_cancel_uninstall
Cancels uninstallation of a bundle/module on a specific client. Canceling
uninstallation is possible only if a module/bundle is not already in the process of
being installed by a client (STATE=IP or IP-PR)
Table 124. gim_cancel_uninstall
Parameter Description
clientIP Required - Client IP Address
module Required- Module.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_cancel_uninstall clientIP=192.168.1.100 module=BUNDLE-STAP

gim_remove_bundle

The command will delete bundlePackageName from the database as well as from
the file system (from /var/log/guard/gim_packages, and also
from /var/gim_dist_packages if the Guardium system is a Central Manager).

parameters (required):

bundlePackageName

The parameter value takes the bundle package name as specified in the output of
gim_list_unused_bundles. The command will be successful only if:

1. The value of bundlePackageName refers to a BUNDLE

2. The value of bundlePackageName is not assigned to any client

3. The value of bundlePackageName exists

4. There is one and only one bundle that refers to the value of
bundlePackageName

ALL of these conditions (1 to 4) must be true in order to delete a bundle from the
database/file system. Otherwise an error will be generated.

Example
grdapi gim_remove_bundle bundlePackageName=<bundlePackageName>

gim_unassign_client_module
Unassigns a module from a client. Unlike 'gim_remove_module', this command
unties the connection between a module and a specific client on the
CM/Guardium appliance. This command will NOT uninstall or remove the
module on the actual DB-server machine. It is to be used only in cases of
synchronization problems between the DB-server (that is, client) information and the
CM/Guardium appliance information regarding the current state of the modules.
Table 125. gim_unassign_client_module
Parameter Description
clientIP Required - Client IP Address
module Optional - Module. If module is not specified in the command, all the
modules for the specified clientIP will be unassigned.
date Required - Date; Format: 'now' or 'yyyy-MM-dd HH:mm'
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_unassign_client_module clientIP=192.168.1.100 module=STAP

gim_get_purge_list

List old software packages (GIM files) that have previously been uploaded to the
Guardium appliance or CM.
Table 126. gim_get_purge_list
Parameter Description
olderThan Required - Number of days. Files older than the number of days
specified will be purged. Valid value is any number greater than or equal to 0.
excludeLatest Optional - true or false (default value is true).

true – Avoid purging the latest version per OS per module.

false – Purge the latest version per OS per module.


api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_get_purge_list olderThan=30 excludeLatest=true

gim_purge
Remove old software packages (GIM files) that have previously been uploaded to
the Guardium appliance or CM.
Table 127. gim_purge
Parameter Description
olderThan Required - Number of days. Files older than the number of days
specified will be purged. Valid value is any number greater than or equal to 0.



Table 127. gim_purge (continued)
Parameter Description
excludeLatest Optional - true or false (default value is true).

true – Avoid purging the latest version per OS per module.

false – Purge the latest version per OS per module.


filename Optional - A specific file that is to be removed. If the file specified is a
bundle (for example, starts with 'guard-bundle'), the content of this
bundle will be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_purge olderThan=30

Note:

Either the 'filename' parameter or (olderThan and/or excludeLatest) can be
specified in the command.

GIM purge will not purge files that are currently scheduled for installation.

GIM purge will not allow the removal of any file (that is, the filename parameter)
that includes a '/' character.

gim_get_available_modules

Lists the modules/bundles that are available to install on a specific server.


Table 128. gim_get_available_modules
Parameter Description
clientIP Required - Client IP Address

Example
grdapi gim_get_available_modules clientIP=192.168.1.100

gim_get_client_last_event
List the latest operation executed for a specific client.
Table 129. gim_get_client_last_event
Parameter Description
clientIP Required - Client IP Address

Example

grdapi gim_get_client_last_event clientIP=192.168.1.100

grdapi gim_get_client_last_event clientIP=winx64



grdapi gim_get_client_last_event clientIP=9.70.144.73

gim_get_modules_running_status

Lists the modules/bundles currently running on a specific server.


Table 130. gim_get_modules_running_status
Parameter Description
clientIP Required - Client IP Address
process Name of the process
status ON or OFF
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_get_modules_running_status clientIP=192.168.1.100 process= status=

gim_list_unused_bundles
The command returns a list of unused bundles (bundles that are not installed on
any database server).

parameters (required):

includeLatest (valid values: 0 or 1)

If set to 1, the returned list of unused bundles includes the latest unused
bundle.

Example
grdapi gim_list_unused_bundles includeLatest=1

gim_reset_client
Disassociates modules from the selected client.
Table 131. gim_reset_client
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_reset_client clientIP=192.168.1.100



gim_set_diagnostics
Set diagnostics collection within GIM.
Table 132. gim_set_diagnostics
Parameter Description
clientIP Required - Client IP Address
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_set_diagnostics clientIP=192.168.1.100

gim_set_global_param

Set global parameters within GIM.


Table 133. gim_set_global_param
Parameter Description
clientIP Required - Client IP Address
paramName Required - Name of the parameter within the API function to be
mapped
paramValue Required - Value of the parameter within the API function to be
mapped
sqlguardip Optional - IP address /host name of the collector this GIM agent will
connect to.
ca_file Optional - Full file name path to the certificate authority PEM file.
key_file Optional - Full file name path to the private key PEM file.
cert_file Optional - Full file name path to the certificate PEM file.
gim_listener_default_port
Optional - Set a different port for the GIM agent server mode.
gim_listener_default_shared_secret
Optional - Set a shared secret to verify collectors that are sending
requests to the new server mode GIM agent.
no_listener Optional - Disable the GIM agent in server mode.
api_target_host In a central management configuration only, allows the user to specify
a target host where the API will execute. On a Central Manager (CM)
the value is the host name or IP of any managed units. On a managed
unit it is the host name or IP of the CM.

Example
grdapi gim_set_global_param clientIP=192.168.1.100 paramName=gim_listener_default_port paramValue=844

gim_remote_activation
Connects the collector's IP address to a server mode GIM agent or group of GIM
agents.



Table 134. gim_remote_activation
Parameter Description
targetGroup Optional - The group name of all the database servers that the collector
connects to. It cannot be specified with the targetHost parameter.
sharedSecret Optional - The shared secret that was configured during installation.
targetPort Optional - The port of the server mode GIM agent.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi gim_remote_activation targetGroup=<someGroup> sharedSecret=<password> targetPort=8445

GuardAPI Group Functions


Use these GuardAPI commands to create, list, update, and delete groups and
group members.

Note: In a Central Management environment, all groups are defined on the
Central Manager and sent to the managed units on a scheduled basis.

Group Functions

create_group

list_group_by_id

list_group_by_desc

delete_group_by_id

delete_group_by_desc

update_group_by_id

update_group_by_desc

flatten_hierarchical_groups

Member Functions

create_member_to_group_by_id

create_member_to_group_by_desc

list_group_members_by_id



list_group_members_by_desc

delete_member_from_group_by_id

delete_member_from_group_by_desc

create_group

Create a group definition.


Table 135. create_group
Parameter Description
desc Required. Enter a unique description for the new group.



Table 135. create_group (continued)
Parameter Description
type Required. Must be one of the following:

Application Event Value Number

Application Event Value String

Application Event Value Type

Application Item Name

Application Module

Application System ID

Application Transaction Code

APPLICATION USER

Audit Task Type

Client Hostname

Client IP

Client IP/DB User

Client IP/Src App./DB User

Client IP/Src App./DB User/Server IP/Svc. Name

Client MAC Address

Client OS

COMMANDS

CVE Pre-defined Tests

Database Name

DB Error Codes

DB PROTOCOL

DB PROTOCOL VERSION

DB Role

DB User/Object/Privilege

DB Ver./Patches

EXCEPTION TYPE

FIELDS

Files Permissions

Global ID

Guardium Audit Categories

Guardium Role

Guardium Users

Login Succeeded Code

NET PROTOCOL
Table 135. create_group (continued)
Parameter Description
appID Required. Identifies the application for the group. It must be one of the
following values:

Public

Audit Process Builder

Baseline Builder

Classifier

DB2_zOS groups

Express Security

IMS zOS groups

Policy Builder

Security Assessment Builder

subtype Optional. A sub type is used to collect multiple groups of the same
group type, where the membership of each group is exclusive. For
example, assume that you have database servers located in three
datacenters, and that you want to group the servers by location. You
would define a separate group of database servers for each location, and
define all three groups with the same sub type (datacenter, for example).
category Optional. A category is an optional label that is used to group policy
violations and groups for reporting.
classification Optional. A classification is another optional label that is used to group
policy violations and groups for reporting.
owner Required. The owner of the group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples (follow case exactly; use upper-case and lower-case letters as indicated)


grdapi create_group desc=agroup type=OBJECTS appid=Public owner=admin
grdapi create_group appid=Access_policy owner=admin type="OBJECTS" desc=groupName1

list_group_by_id
Display the properties of a specific group.
Table 136. list_group_by_id
Parameter Description
id Required (integer). Identifies the group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.



Example
grdapi list_group_by_id id=100003

list_group_by_desc
Display the properties of a specific group.
Table 137. list_group_by_desc
Parameter Description
desc Required. The name of the group to be displayed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_group_by_desc desc=agroup

delete_group_by_id
Delete the group identified by the specified group ID.
Table 138. delete_group_by_id
Parameter Description
id Required (integer). Identifies the group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_group_by_id id=100005

delete_group_by_desc
Delete the group identified by the specified group name.
Table 139. delete_group_by_desc
Parameter Description
desc Required. The name of the group to be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_group_by_desc desc=agroup

update_group_by_id

Update properties of the specified group.


Table 140. update_group_by_id
Parameter Description
id Required (integer). Identifies the group to be updated.



Table 140. update_group_by_id (continued)
Parameter Description
newDesc Optional. Enter a unique new description for the group.
subtype Optional. A sub type is used to collect multiple groups of the same
group type, where the membership of each group is exclusive. For
example, assume that you have database servers located in three
datacenters, and that you want to group the servers by location. You
would define a separate group of database servers for each location, and
define all three groups with the same sub type (datacenter, for example).
category Optional. A category is an optional label that is used to group policy
violations and groups for reporting.
classification Optional. A classification is another optional label that is used to group
policy violations and groups for reporting.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi update_group_by_id id=100002 newDesc=beegroup subtype=bee category=be classification=bea

update_group_by_desc

Update properties of the specified group.


Table 141. update_group_by_desc
Parameter Description
desc Required. The name of the group to be updated.
newDesc Optional. Enter a unique description for the group.
subtype Optional. A sub type is used to collect multiple groups of the same
group type, where the membership of each group is exclusive. For
example, assume that you have database servers located in three
datacenters, and that you want to group the servers by location. You
would define a separate group of database servers for each location, and
define all three groups with the same sub type (datacenter, for example).
category Optional. A category is an optional label that is used to group policy
violations and groups for reporting.
classification Optional. A classification is another optional label that is used to group
policy violations and groups for reporting.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi update_group_by_desc desc=beegroup newDesc=beegroupee category=bebebe classification=bebebebe



flatten_hierarchical_groups
Update ALL hierarchical groups that exist in Group Builder.
Table 142. flatten_hierarchical_groups
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi flatten_hierarchical_groups

create_member_to_group_by_id

Add a member to a group specified by the group ID.


Table 143. create_member_to_group_by_id
Parameter Description
id Required (integer). Identifies the group to which the member is to be
added.
member Required. The new member name, which must be unique within the
group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi create_member_to_group_by_id id=100005 member=turkey

create_member_to_group_by_desc
Add a member to the named group.
Table 144. create_member_to_group_by_desc
Parameter Description
desc Required. The name of the group to which the member is to be added.
member Required. The new member name, which must be unique within the
group.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi create_member_to_group_by_desc desc=bgroup member=turkey

Use these commands to add members to the group:


grdapi create_member_to_group_by_desc desc=groupName1 member=member_1



grdapi create_member_to_group_by_desc desc=groupName1 member=member_2
grdapi create_member_to_group_by_desc desc=groupName1 member=member_3
grdapi create_member_to_group_by_desc desc=groupName1 member=member_4
grdapi create_member_to_group_by_desc desc=groupName1 member=member_5

list_group_members_by_id
List the members of the specified group.
Table 145. list_group_members_by_id
Parameter Description
id Required (integer). Identifies the group whose members are to be listed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_group_members_by_id id=100001

list_group_members_by_desc

List the members of the specified group.


Table 146. list_group_members_by_desc
Parameter Description
desc Required. The name of the group whose members are to be listed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_group_members_by_desc desc=bgroup

delete_member_from_group_by_id
Remove a member from a specified group.
Table 147. delete_member_from_group_by_id
Parameter Description
id Required (integer). Identifies the group from which the member is to be
removed.
member Required. The name of the member to be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example



grdapi delete_member_from_group_by_id id=100005 member=turkey

delete_member_from_group_by_desc
Remove a member from a specified group.
Table 148. delete_member_from_group_by_desc
Parameter Description
desc Required. The name of the group from which the member is to be
removed.
member Required. The name of the member to be removed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_member_from_group_by_desc desc=bgroup member=boston

GuardAPI Input Generation


GuardAPI Input Generation allows the user to take the output of one Guardium
report and feed it as the input for another Guardium entity, allowing users to use
prepared calls to quickly call API functionality.

Generate Input for Guard API Calls

The generation of Guard API calls from reports can be invoked in one of two
ways: either from a single row within a report, or from multiple rows based on the
whole report (what is seen on the screen). See the how-to topic, Generate API Call
From Reports, for an example.

When a report is displayed:

For Single Row:


1. Double-clicking on a row for drill-down displays an option to Invoke... Click
the Invoke... option to display a list of APIs that are mapped to this report.

For Multi Row:


1. Click the Invoke... icon (within the report status line) to display a list of APIs
that are mapped to this report.
Continue the steps for both Single and Multi Row
2. Click the API you would like to invoke; bringing up the API Call Form for the
Report and Invoked API Function. Invoking an API call from a report for
multiple rows produces an API Call Form that displays and enables the editing
of all records that are displayed on the screen (dependent on the fetch size) to a
maximum of 20 records.
3. Fill in the Required Parameters and any non-Required Parameters for the
selected API call. Many of the parameters are pre-filled from the report but
might be changed to build a unique API call. For specific help in filling out
required or non-required parameters, see the individual API function calls
within the GuardAPI Reference guide.



For multi row, use the set of parameters for the API (those with a button for
each parameter) to enter a value for a parameter and then click the down
arrow button to populate that parameter for all records. Also, use the check boxes
for each row to select or deselect a row from being included in the API call.

Note: Parameters with the name of 'password' are masked.


4. Use the drop-down list to select the Log level, where Log level represents the
following (0 - returns ID=identifier and ERR=error_code as defined in Return
Codes; 1 - displays additional information to the screen; 2 - puts information into
the Guardium application debug logs; 3 - does both 1 and 2).
5. Use the drop-down list to select a Parameter to encrypt.

Note: Parameter Encryption is enabled by setting the Shared Secret and is
relevant only for invoking the API function through script generation.
6. Choose to Invoke Now or Generate Script.
a. If Invoke Now is selected, the API call runs immediately and display an
API Call Output screen showing the status of the API call.
b. If Generate Script is selected
1) Open the generated script with your favorite editor or optionally save to
disk to edit and execute later.
Example Script
# A template script for invoking Sqlguard API function
delete_datasource_by_name seven times:
# Usage: ssh cli@a1.corp.com<delete_datasource_by_name_api_call.txt
# replace any < > with the required value
#
set guiuser <username> password <password>
grdapi delete_datasource_by_name name=192.168.2.91
grdapi delete_datasource_by_name name=egret-oracle
grdapi delete_datasource_by_name name=egret-oracle3
2) Modify the script; replacing any of the empty parameter values (denoted
by '< >')

Note: Empty parameters might remain in the script as the API call
ignores them
Example Modified Script
# A template script for invoking Sqlguard API function
delete_datasource_by_name seven times:
# Usage: ssh cli@a1.corp.com<delete_datasource_by_name_api_call.txt
# replace any < > with the required value
#
set guiuser <username> password <password>
grdapi delete_datasource_by_name name=egret-oracle3
3) Execute the CLI function call
Example Call
$ ssh
cli@a1.corp.com<c:/download/delete_datasource_by_name_api_call.txt



Mapping GuardAPI to Report Results
Guardium comes with a battery of predefined reports and many of them have
already been mapped to GuardAPI functions to ease configuration. In addition,
Guardium offers users the capability to define additional reports, even their own
custom made reports, and map them to GuardAPI functions per report.
1. Go to any predefined report in the Daily Monitor tab, Guardium Monitor tab,
or Tap Monitor tab.
2. Click the Invoke ... button.
3. Choose the Add API mapping selection.
4. At the new window, Add API mapping shows the name of the report, for
example, Guardium Logins; a search/filter mechanism to find the appropriate
GuardAPI command; and, selection choices for API functions available under
the Predefined Report. Choose the API function, and then click Map Report
Attributes.
5. At the new window, API-Report Parameter Mapping, map the parameter name
to the Report field. Sometimes there might be data that is not supplied with a
Guardium report. For these instances, a constant can be created, added to the
report and used within the API parameter mappings.

Note: Save overrides the current mapping.

Note: If the Guardium report, with a constant added, is exported, the constant
will not be exported.

To simplify the mapping between the GuardAPI parameters and Guardium
attributes, Guardium created the predefined report Query Entities & Attributes that
lists all the Guardium attributes, giving users a GUI interface and allowing them to
easily drill down from that report and create the linkages quickly.

Existing Guardium attributes or user-defined constants may be mapped to
GuardAPI parameters, as described under Existing Attributes and Constants.

Note: When GuardAPI parameters are mapped to report attributes, if a report has
more than one attribute that is mapped to the same GuardAPI parameter, the
value picked for the API call is the first of these attributes according to the order of
display in the report.
Existing Attributes
1. Go to the Query Entities & Attributes report to add the API parameter
mappings. (Guardium Monitor -> Query Entities & Attributes)
2. The Query Entities & Attributes report is long because it lists all the
Guardium attributes. Narrow down the records you are interested in by
using the Customize button.
3. To create the mapping, double-click the attribute row you would like to
assign to a parameter name
4. Click the Invoke... option
5. Select the create_api_parameter_mapping API function
6. Fill in the functionName and parameterName in the API Call Form
7. Click the Invoke now button to create the API to Report Parameter
Mapping
See how-to topic, Using API Calls From Custom Reports, for a full scenario
that maps GuardAPI parameters through the GUI.



Constants
Sometimes there may be data that is not supplied within a Guardium
report. For these instances, a constant can be created, added to the report,
and then used within the API parameter mappings.
1. Go to the Query Entities & Attributes report to add the API parameter
mappings. (Guardium Monitor -> Query Entities & Attributes)
2. The Query Entities & Attributes report is long because it lists all the
Guardium attributes. Narrow down the records you are interested in
by using the Customize button.
3. To create a constant attribute, double-click any row for the entity you
would like to create a constant attribute for
4. Click the Invoke... option
5. Select the create_constant_attribute API function
6. Fill in the constant value to use and the attributeLabel that you would
like to use as its name
7. Click the Invoke now button to create the constant
8. To create the mapping, double-click the newly created attribute row
9. Click the Invoke... option
10. Select the create_api_parameter_mapping API function
11. Fill in the functionName and parameterName in the API Call Form
12. Click the Invoke now button to create the API to Report Parameter
Mapping
13. The newly created attribute must be added to the report. Modify the
Query through Query Builder and add the field.
See how-to topic, Using Constants within API Calls, for a full scenario that
creates and maps a constant attribute through the GUI.

Note: If the Guardium report, with a constant added, is exported, the
constant will not be exported.

Note: When using API mapping, table columns in a report appear in the
report field as long as the table column is an attribute of an entity. Some
columns, such as the count column, will not be displayed in the report field
because they cannot be mapped.

Object Security for Certain GuardAPI commands


Role validation implements controls on selected GuardAPI commands to consider
the roles of the specific components (and not only the application) and disallow
actions if the roles do not match.

Without this validation, a user that has the appropriate roles for Policy Builder
would be able to execute the GuardAPI command, delete_rule, on any policy,
regardless of the roles of this specific policy.

Role validation exists for the following Policy rules GuardAPI commands:
change_rule_order; copy_rule; copy_rules, delete_rule; update_rule.

Role validation exists for the following Group Description GuardAPI commands:
create_member_to_group_by_desc; create_member_to_group_by_id;
delete_group_by_desc; delete_group_by_id; delete_member_from_group_by_desc;
delete_member_from_group_by_id; update_group_by_id; update_group_by_desc.



Role validation exists for the following Datasource GuardAPI commands:
delete_datasource_by_id; delete_datasource_by_name; update_datasource_by_id;
update_datasource_by_name.

Role validation exists for the following Audit Process GuardAPI commands:
stop_audit_process.

API to run an audit process from tabular and graphical reports

A GuardAPI function can be invoked automatically from any report portlet. When
the GuardAPI is invoked, it creates a new audit process report.

If such a process already exists for the user, the parameters are updated and the
same process is used.

The behavior of the GuardAPI is as follows:

1 - If it is a new process, it creates one receiver per email in the list (if any) with a
content type as indicated in the emailContentType parameter. It will also create a
user receiver for the user that is logged in (invoking the API) if the
includeUserReceiver parameter is true.

2 - If it is an existing process, all email receivers are removed and replaced with the
emails from the new list (if any) with the content type as defined in the
emailContentType parameter. If the list is empty, it removes all email address
receivers. If there is already a receiver for the user, it will NOT be removed even if
includeUserReceiver is false; however, if the parameter is true and there is no
such receiver, then it is added.

Once the audit process is generated, it is automatically executed (similar to a Run
Once Now) and users should expect an item on their to-do list for that audit
process.

create_ad_hoc_audit_and_run_once

Parameters:

1 - reportId - The ID of the report to be used for the only task in the Audit process

2 - isForReportRunOnce boolean indicates whether the process should be run once
after it is created.

3 - changeParIfExist boolean indicates whether the task parameters should be
updated if the process exists.

4 - taskParameter All task parameters and the value for each, concatenated with the
characters ^^, for example: PAR1=Val1^^PAR2=Val2^^ and so on. It is valid to leave a
parameter empty; for example, if PAR2 should remain empty it looks like:
PAR1=VAL1^^PAR2=^^PAR3=VAL3^^...

5 - processNamePar - Name of the process; if empty, a process is created with a
default name.

6 - sendToEmails: A comma-separated list of email addresses

7 - emailContentType 0-PDF or 1-CSV (applies ONLY to email receivers)



8 - includeUserReceiver boolean indicates whether to create a receiver for the user
that is logged in


Schedule APIs
modify_schedule - parameters: jobName, jobGroup, cronString, startTime (optional)

list schedule

delete_schedule - parameters: jobName, jobGroup, deleteJob (optional)

schedule_job - parameters: jobType, objectName (optional), cronString, startTime
(optional)

Note: Some job types for the grdapi schedule_job function do not require an object
name. No validation is performed on the object name parameter, and users see the
standard 'OK' prompt when the function is run with anything entered as the
objectName parameter for the following job types: csvExportJob, systemBackupJob,
dataArchiveJob, dataExportJob, dataImportJob, resultsArchiveJob,
AppUserTranslation, IpHostToAlias

grdapi schedule_job --get_param_values=jobType - Value for parameter 'jobType' of
function 'schedule_job' must be one of: CustomTableDataUpload;
AutoDetectProbeJob; AppUserTranslation; InstallPolicy; AuditJob; ResultArchive;
AutoDetectScanJob; CustomTableDataPurge; CSVExport; DataExport; DataArchive;
DataImport; PopulateGrpFromQry; SystemBackup; PopulateAlias; IpHostToAlias;
UnitUtilization
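
Example

The following call is illustrative only. DataArchive is one of the jobType values
listed above; the cron string shown assumes a Quartz-style expression and is not
taken from this documentation:
grdapi schedule_job jobType=DataArchive cronString="0 0 3 ? * 1"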

grdapi set_purge_batch_size

Sets the batch size that is used during purge. This aids the performance of the
purge and has a default setting of 200,000. Note the trade-off between performance
and disk space usage: setting a larger batch size increases the speed of the purge
but consumes more disk space, while setting a lower batch size decreases the speed
of the purge but does not consume as much disk space.

function parameters: batchSize - required; api_target_host

Example
vx29> grdapi set_purge_batch_size batchSize=200000
ID=0
ok

grdapi get_purge_batch_size

Gets the current setting for the purge batch size.

function parameters: api_target_host

Example
vx29> grdapi get_purge_batch_size
ID=0
Purge Batch Size = 200000
ok

grdapi patch_install

function parameters: patch_date patch_number - required
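
Example

A hypothetical invocation; the patch number and the date format shown are
assumptions rather than values taken from this documentation:
grdapi patch_install patch_number=100 patch_date="2015-07-02 14:50"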



grdapi populate_from_dependencies
function parameters: descOfEndingGroup (required); descOfStartingGroup (required);
flattenNamespace; getFunctions; getJavaClasses; getPackages; getProcedures;
getSynonyms; getTables; getTriggers; getViews; isAppend (required);
isEndingGroupQualified; owner (required); reverseIt; selectedDataSourceName
(required); api_target_host
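
Example

A hypothetical invocation; the group descriptions and datasource name are
placeholders, and the true/false values for isAppend and the get* flags are
assumptions:
grdapi populate_from_dependencies descOfStartingGroup=<starting group> descOfEndingGroup=<ending group> isAppend=true getTables=true owner=admin selectedDataSourceName=<datasource name>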

create_computed_attribute

Use in Reports.
Table 149. create_computed_attribute
Parameter Description
attributeLabel Required.
entityLabel Required. Database user
expression Required. Server IP
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
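
Example

An illustrative call only; the attribute label, entity label, and expression values are
placeholders, not values from this documentation:
grdapi create_computed_attribute attributeLabel="<label>" entityLabel="<entity>" expression="<expression>"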

delete_computed_attribute
Use in Reports.
Table 150. delete_computed_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
expression Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

update_computed_attribute
Use in Reports.
Table 151. update_computed_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
expression Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.



create_constant_attribute
Use in Reports.
Table 152. create_constant_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
constant Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
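
Example

An illustrative call only; the attribute label, entity label, and constant value are
placeholders:
grdapi create_constant_attribute attributeLabel="<label>" entityLabel="<entity>" constant="<value>"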

delete_constant_attribute

Use in Reports.
Table 153. delete_constant_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
constant Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

update_constant_attribute

Use in Reports.
Table 154. update_constant_attribute
Parameter Description
attributeLabel Required.
entityLabel Required.
constant Required.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

create_ad_hoc_audit_and_run_once
Use in Reports.
Table 155. create_ad_hoc_audit_and_run_once
Parameter Description
changeParIfExist Boolean. Required.



Table 155. create_ad_hoc_audit_and_run_once (continued)
Parameter Description
emailContentType Integer
includeUserReceiver Boolean
isForReportRunOnce Boolean. Required.
processNamePar String
reportID Integer. Required.
sendToEmails String
taskParameter String
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.
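
Example

A hypothetical invocation; the report ID, task parameters, process name, and email
address are placeholders, and the parameter capitalization follows the parameter
descriptions earlier in this section:
grdapi create_ad_hoc_audit_and_run_once reportId=<report id> isForReportRunOnce=true changeParIfExist=true taskParameter="PAR1=Val1^^PAR2=Val2^^" processNamePar="<process name>" sendToEmails=<email address> emailContentType=0 includeUserReceiver=true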

REST API
The JSON (JavaScript Object Notation) output option supports GuardAPI functions.
This is part of the REST APIs. REST stands for Representational State Transfer. It relies
on a stateless, client/server, cacheable communications protocol, and in virtually
all cases, the HTTP protocol is used. REST is an architecture style for designing
networked applications. The idea is that, rather than using complex mechanisms
such as CORBA, RPC, or SOAP to connect between machines, simple HTTP is
used to make calls between machines. RESTful applications use HTTP requests to
post data (create and/or update), read data (for example, make queries), and
delete data. Thus, REST uses HTTP for all four Create/Read/Update/Delete
operations. REST is a lightweight alternative to mechanisms like RPC (Remote
Procedure Calls) and Web Services (SOAP, WSDL).
Guardium’s Implementation of REST
1. Register Application (only once) and get Client Secret.
2. Store Client Secret in secure place.
3. Request Access Token for authorization.
4. Store Access Token so grdAPI command is authenticated properly.
5. Use Access Tokens to submit GuardAPI commands.
Example use cases
v I want the ability to dynamically get a small amount of audit data for a
certain IP address without having to login to the Guardium GUI.
v I want to populate an existing group, so I can update my policy to
prevent unauthorized access to sensitive information.
v I want to get a list of all users within a certain authorized access group.
v I want my application development team to help identify what sensitive
tables to monitor.
v I want to script access to grdAPI’s without using “expect” scripting
language, which requires me to code response text from the target
system.
HTTP has a vocabulary of operations (request methods)
v GET (pass parameters in the URL)
v POST (pass parameters in JSON object)



v PUT (pass parameters to change as JSON object)
v DELETE (pass parameters as JSON object)
Special user for internal REST API requests
For internal REST API requests, there is a special ROLE and USER
predefined in the system.
This user cannot be removed or modified through the accessmgr UI and
cannot be used to log in to the UI.
This user's password never expires, but it is revoked if the client ID is
revoked.
On OAuth client registration, a new function accepts this user and client
ID. It generates a random strong password for the user and stores it in the
TURBINE_USER table.
It returns a client secret and the generated password.
The internal client (S-TAP, and possibly others) must secure the client secret
and password.
Permissions for different functions can be assigned to the role through
accessmgr UI.
RestAPI vs. GuardAPI
GET = List
POST = Create
PUT = Update
DELETE = Delete
GuardAPIs
list_datasource_by_name (parameters - ?name="MSSQL_1")
-X GET https://10.10.9.239:8443/restAPI/datasource/?name="MSSQL_1"

create_datasource
-X POST https://10.10.9.239:8443/restAPI/datasource
update_datasource_by_name - JSON Object '{password:guardium}'
-X PUT -d '{password:guardium, name:"MSSQL_1"}'
delete_datasource_by_id - JSON Object '{"id":20020}'
-X DELETE -d '{"id":20020}'
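
For illustration, a complete request might look like the following sketch; the curl
command, Authorization header usage, and access token shown are assumptions
based on the DeveloperWorks article referenced below, not on this documentation:
curl -k -X GET 'https://10.10.9.239:8443/restAPI/datasource/?name=MSSQL_1' --header 'Authorization: Bearer <access_token>'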

For further information, go to the Using the IBM InfoSphere Guardium
REST API article on DeveloperWorks:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1404guardrestapi/index.html

register_oauth_client
Use this GuardAPI command to wrap supported GuardAPI functions in a RESTful
API that uses JSON (JavaScript Object Notation) for input and output.

Use the GrdAPI command, grdapi register_oauth_client, to register the client and
obtain the necessary access token to call the REST services.



REST stands for Representational State Transfer. It relies on a stateless,
client/server, cacheable communications protocol, and in virtually all cases, the
HTTP protocol is used.

REST is an architecture style for designing networked applications. The idea is
that, rather than using complex mechanisms such as CORBA, RPC, or SOAP to
connect between machines, simple HTTP is used to make calls between machines.

RESTful applications use HTTP requests to post data (create and/or update), read
data (for example, make queries), and delete data. Thus, REST uses HTTP for all
four Create/Read/Update/Delete operations. REST is a lightweight alternative to
mechanisms like RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL).

function parameters:

client_id - String - required

grant_types - String - required. The only grant type that is supported is password.

redirect_uris - String - required

scope - String - required.

fetchSize - String - optional (default is 20 records to retain backward compatibility;
maximum value is 30000).

sortColumn - optional - If specified, it must be the column title of one of the report
fields.

sortType - optional - asc or desc

Syntax

grdapi register_oauth_client <client_id> <grant_types> <redirect_uris> <scope>
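
An illustrative registration call; the client ID, redirect URI, and scope values are
placeholders, and the name=value form is assumed from the other GuardAPI
examples in this chapter (only grant_types=password is documented as supported):
grdapi register_oauth_client client_id=<client_id> grant_types=password redirect_uris=<redirect_uri> scope=<scope>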

getOAuthTokenExpirationTime

Use this GuardAPI command to get the expiration time of the REST API token.

function parameters:

api_target_host - String
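
Syntax

The following minimal call is a sketch; api_target_host is the only (optional)
parameter listed above:

grdapi getOAuthTokenExpirationTime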

setOAuthTokenExpirationTime
Use this GuardAPI command to set the expiration time of the REST API token.

function parameters:

expirationTime - Integer - required

api_target_host - String

Syntax

grdapi setOAuthTokenExpirationTime ExpirationTime=10000



GuardAPI Process Control Functions
Use these GuardAPI commands to execute, copy, upload, list, and delete Process
Control Functions.

Execute (submit) a classification process

Runs a classification process. It is the equivalent of executing Run Once Now from
Classification Process Builder. It submits the job, which places the process on the
Guardium Job Queue, from which the appliance runs a single job at a time.
Administrators can view the job status by selecting Guardium Monitor >
Guardium Job Queue.

Note: Create a classification process before calling this API.


Table 156. Execute (submit) a classification process
Parameter Description
processName Name of the classification process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_cls_process processName="classPolicy1"

Execute (submit) a security assessment

Runs the specified assessment. It is the equivalent of executing Run Once Now from
Security Assessment Finder. It submits the job, which places the process on the
Guardium Job Queue, from which the appliance runs a single job at a time.
Administrators can view the job status by selecting Guardium Monitor >
Guardium Job Queue.

Note: Create a Security Assessment before calling this API.


Table 157. Execute (submit) a security assessment
Parameter Description
assessmentDesc Name of the assessment

api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_assessment assessmentDesc="assessment1"

Execute an audit process


Runs the specified audit process. It is the equivalent of executing Run Once Now
from Audit Process Builder.

Note: Create an audit process before calling this API.



Table 158. Execute an audit process
Parameter Description
auditProcess Name of the audit process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_auditProcess auditProcess="Appliance Monitoring"

Stop an audit process

The stop_audit_process API cannot be used through the GuardAPI command line.
This function is only usable as an invocation through a drill down. See the
sub-topic, Stop an audit process, in the Compliance Workload Automation help topic.
Table 159. Stop an audit process
Parameter Description
process Name of the audit process
run The RunID of the audit process
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
stop_audit_process

Execute a populate group from query

Populates the chosen group by executing a configured query. It is the equivalent of
executing Run Once Now from the Populate Group From Query Set Up screen. If the
group is not configured for import, an error message is displayed.

Note: This grdapi can only be used for groups that have already been configured
in the Populate Group From Query Set Up screen (a query should have been chosen
and parameters should have been set).
Table 160. Execute a populate group from query
Parameter Description
groupDesc Group name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_populateGroupFromQuery groupDesc="A test"



Execute an application user translation
Imports the user definitions for all configured applications in the Application User
Translation Configuration screen. It is the equivalent of executing Run Once Now
from the Application User Translation Configuration screen.

Note: To run this grdapi, you must define at least one Application User Detection
in the Application User Translation Configuration screen. If not, a message is
displayed.
Table 161. Execute an application user translation
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_appUserTranslation

Execute a flat log process


Merges the flat log information into the internal database. It is the equivalent of
executing Run Once Now from the Flat Log Process screen.

Note: This grdapi can only be executed if Flat Log Process is configured as Process
in the Flat Log Process screen. If not, an error message is displayed.
Table 162. Execute a flat log process
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_flatLogProcess

Execute an incident generation process

Executes the query that is defined for the selected incident generation process
against the policy violations log, and generates incidents based on that query. It is
the equivalent of executing Run Once Now from the Edit Incident Generation
Process screen.

Note: Create an Incident Generation Process before calling this API.

Because an incident generation process does not have a unique name, a specific
process is identified by either a processId or a queryName.

There are two methods to call this grdapi:


v execute_incidentGenProcess
v execute_incidentGenProcess_byDetails



Table 163. execute an incident generation process
Parameter Description
processID Process ID of the incident
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_incidentGenProcess processId=20003
Table 164. execute_incidentGenProcess_byDetails
Parameter Description
queryName Query name
categoryName Category Name
user User
threshold Threshold
severity Severity level
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_incidentGenProcess_byDetails queryName="Policy Violation Count" user=admin severi

Upload custom data

Uploads data to the custom table specified by tableName. It is the equivalent of
executing Upload from the Import Data screen of Custom Table Builder. To run this
grdapi, you must first configure the specified custom table in Import Table Structure
of Custom Table Builder. From the UI, go to Tools/Report Builder/Custom Table
Builder, select a Custom Table, click Upload Data, and select a datasource.

Note: tableName specifies the name of an existing custom table.


Table 165. Upload custom data
Parameter Description
tableName Name of custom table
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi upload_custom_data tableName="TEST_TABLE"



Import LDAP users
Imports Guardium user definitions from an LDAP server configured in the LDAP
User Import screen. It is the equivalent of executing Run Once Now from the LDAP
User Import screen (log in as accessmgr / LDAP Import).

Note: LDAP must be configured. Otherwise, the system gives an error
message.
Table 166. Import LDAP users
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi execute_ldap_user_import

Install policy

Installs a policy or multiple policies. If multiple policies are to be installed, the
policies need to be delimited by a pipe character '|', with the policies listed in the
order in which you want them installed. This needs to be done even if only one
policy has changes.

Install multiple policies with the grdapi policy_install command. Install by position
by specifying the policies in the order that you want to install them.

Even in the UI, when you install a policy after another installed policy, all of them
are reinstalled, which is the same behavior as the grdapi policy_install command.
Table 167. Install policy
Parameter Description
policy Policy name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples
grdapi policy_install policy="Policy 1|Policy 2"
grdapi policy_install policy="policy 20|policy 30|policy 40"

Delete policy

Use the delete_policy command to delete a policy specified by the policyDesc
parameter.
Table 168. Delete policy
Parameter Description
policyDesc Policy name.



Table 168. Delete policy (continued)
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_policy policyDesc="Hadoop Policy"

List policy

Use the list_policy command to display a list of available policies or to display
details about a single policy.
Table 169. List policy
Parameter Description
policyDesc Policy name. If unspecified, the list_policy command returns a list of
available policies.
detail Accepts values of true or false. The default value is true and returns
policy details. Specifying a value of false returns only policy names.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples

Display details of a specific policy:


grdapi list_policy policyDesc="Hadoop Policy"

Display a detailed list of available policies:


grdapi list_policy

Display a list of available policy names (no details):


grdapi list_policy detail=false

Copy policy rule


Copy a rule <ruleDesc> of <fromPolicy> to the end of the <toPolicy> rule list.

Note: Both <fromPolicy> and <toPolicy> must already exist before running this grdapi.
Table 170. Copy policy rule
Parameter Description
ruleDesc Rule Description
fromPolicy Policy name
toPolicy Policy name



Table 170. Copy policy rule (continued)
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi copy_rule ruleDesc="Rule Description" fromPolicy="policy1" toPolicy="policy2"

Clone policy

Use this GuardApi command to clone a policy.


Table 171. Clone policy
Parameter Description
policyDesc Policy name
clonedpolicyDesc Cloned Policy name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi clone_policy policyDesc="Hadoop Policy" clonedPolicyDesc="Hadoop Policy cloned1"

Update policy rule

Update a rule <ruleDesc> of <fromPolicy> by setting one or more of the rule parameters.

See Policies for additional information on the following policy rule parameters that
can be altered with the update_rule API call.
Table 172. Update policy rule
Parameter Description
ruleDesc Rule Description
fromPolicy Policy name
newDesc New Rule Description
clientIP Client IP
clientNetMask Client Net Mask
serverIP Server IP
serverNetMask Server Net Mask
objectName Object Name
sourceProgram Source Program
dbName Database Name
dbUser Database User
command Command
appUserName Application User Name



Table 172. Update policy rule (continued)
Parameter Description
dateTime Date and Time
logFlag Log Flag
exceptionType Exception Type
minCount Minimum Count
continueToNext Continue to Next
resetInterval Reset Interval
serviceName Service Name
osUser O/S User
dbType Database Type
netProtocol Net Protocol
clientMac Client MAC
fieldName Field Name
pattern Pattern
appEventExists Application Event Exists
eventType Event Type
appEventStrValue Application Event String Value
appEventNumValue Application Event Number Value
appEventDate Application Event Date
eventUserName Event User Name
errorCode Error Code
severity Severity
category Category
classification Classification
dataPattern Data Pattern
sqlPattern SQL Pattern
xmlPattern XML Pattern
mvcSystem MVS™ System
clientIpNotFlag Client IP Not Flag
serverIpNotFlag Server IP Not Flag
objectNameNotFlag Object Name Not Flag
sourceProgramNotFlag Source Program Not Flag
dbNameNotFlag Database Name Not Flag
dbUserNotFlag Database User Not Flag
commandNotFlag Command Not Flag
appUserNameNotFlag Application User Name Not Flag
exceptionTypeIdNotFlag Exception Type ID Not Flag
serviceNameNotFlag Service Name Not Flag
osUserNotFlag O/S User Not Flag
clientMacNotFlag Client MAC Not Flag
fieldNameNotFlag Field Name Not Flag



Table 172. Update policy rule (continued)
Parameter Description
errorCodeNotFlag Error Code Not Flag
replacementChar Replacement Character
messageTemplate Message Template
recordsAffectedThreshold Records Affected Threshold
matchedReturnedTreshold Matched Returned Threshold
clientIpGroup Client IP Group
serverIpGroup Server IP Group
objectGroup Object Group
objectCommandGroup Object Command Group
objectFieldGroup Object Field Group
dbUserGroup Database User Group
commandsGroup Commands Group
dbNameGroup Database Name Group
sourceProgramGroup Source Program Group
appUserGroup Application User Group
serviceNameGroup Service Name Group
osUserGroup O/S User Group
netProtocolGroup Net Protocol Group
fieldNameGroup Field Name Group
errorGroup Error Group
appEventStrGroup Application Event String Group
clientProgramUserServerInstanceGroup Client Program User Server Instance Group
quarantineMinutes Quarantine Minutes
clientInfo Use for DB2 and DB2_COLLECTION_PROFILE
clientInGroup Use for DB2_COLLECTION_PROFILE
api_target_host In a central management configuration only, allows the user to specify
a target host where the API will execute. On a Central Manager (CM)
the value is the host name or IP of any managed units. On a managed
unit it is the host name or IP of the CM.

Example
grdapi update_rule ruleDesc="Rule Description" fromPolicy="policy1" serviceName="ANY"

Change policy rule order


Change the ordered position of a rule within a policy.
Table 173. Change policy rule order
Parameter Description
fromPolicy Policy name
order New order position for Rule
ruleDesc Rule Description



Table 173. Change policy rule order (continued)
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi change_rule_order ruleDesc="Copy of policy1 exception1" fromPolicy="policy1" order=10

List policy rules

List the rules for a policy.


Table 174. List policy rules
Parameter Description
policy Policy name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi list_policy_rules policy="policy1"

Delete policy rule

Remove a rule from a policy.


Table 175. Delete policy rule
Parameter Description
fromPolicy Policy name
toPolicy Policy name
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_rule ruleDesc="Copy (3) of policy1 exception1" fromPolicy="policy1"

Uninstall policy rule

Use the uninstall_policy_rule command to uninstall the policy rule(s) specified
by the policy and ruleName parameters.
Table 176. Uninstall policy rule
Parameter Description
policy Policy name.



Table 176. Uninstall policy rule (continued)
Parameter Description
ruleName Rule name(s). Specify multiple policy rules using the pipe character, for
example ruleName="rule1|rule2|rule3".
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples

Uninstall a single policy rule:


grdapi uninstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow"

Uninstall multiple policy rules:


grdapi uninstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow|Low Interes

Reinstall policy rule

Use the reinstall_policy_rule command to reinstall the policy rule(s) specified
by the policy and ruleName parameters.
Table 177. Reinstall policy rule
Parameter Description
policy Policy name.
ruleName Rule name(s). Specify multiple policy rules using the pipe character, for
example ruleName="rule1|rule2|rule3".
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Examples

Reinstall a single policy rule:


grdapi reinstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow"

Reinstall multiple policy rules:


grdapi reinstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow|Low Interes

Delete Audit process results


Use this command to delete any audit process results.
Table 178. Delete audit process results
Parameter Description
ExecutionDateFrom When the audit process execution began
ExecutionDateTo When the audit process execution ended
ProcessName Required. The name of the audit process



Table 178. Delete audit process results (continued)
Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_Audit_process_result ExecutionDateFrom=, ExecutionDateTo=, ProcessName=abab

Map API Parameters to Domain Entities and Attributes

Map API parameters to Domain entities and attributes so the parameters can be
populated by report values on API call generation or API automation.

Note: The Mapping GuardAPI Parameters to Domain Entities and Attributes in
GuardAPI Input Process Generation shows the domains, entities and attributes of
the system and has a GUI interface to invoke this API function.
Table 179. Map API Parameters to Domain Entities and Attributes
Parameter Description
functionName Name of the API function
parameterName Name of the parameter within the API function to be mapped
domain Any of the Guardium reporting domains such as Access, Alert,
Discovered Instances, Exceptions, Group Tracking, etc.
entityLabel Any of the entities for the reporting domain
attributeLabel Any of the attributes within the entity
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi create_api_parameter_mapping functionName="create_group" parameterName="desc" domain="Group

List API Parameter Mappings to Domain Entities and Attributes

List the parameter mappings for an API function.

Note: The Mapping GuardAPI Parameters to Domain Entities and Attributes in
GuardAPI Input Process Generation shows the domains, entities and attributes of
the system and has a GUI interface to invoke this API function.
Table 180. List API Parameter Mappings to Domain Entities and Attributes
Parameter Description
functionName Name of the API function
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.



Example
grdapi list_param_mapping_for_function functionName="create_group"

Delete API Parameter Mappings for Domain Entities and Attributes

Remove the parameter mappings for an API function.

Note: The Mapping GuardAPI Parameters to Domain Entities and Attributes in
GuardAPI Input Process Generation shows the domains, entities and attributes of
the system and has a GUI interface to invoke this API function.
Table 181. Delete API Parameter Mappings for Domain Entities and Attributes
Parameter Description
functionName Name of the API function
parameterName Name of the parameter within the API function to be mapped
domain Any of the Guardium reporting domains such as Access, Alert,
Discovered Instances, Exceptions, Group Tracking, etc.
entityLabel Any of the entities for the reporting domain
attributeLabel Any of the attributes within the entity
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_api_parameter_mapping functionName="create_group" parameterName="desc" domain="Group Tr

Close all the events defined on a specific process/task/execution

Close all the events defined on a specific process/task/execution for tasks of type
report. This is especially needed when, for example, a task with a default event
returns a large number of records; such a task cannot be signed unless all the
events are closed.
Table 182. Close all the events defined on a specific process/task/execution
Parameter Description
eventStatus Required. Event status. Must be a valid status for the default event
defined for the audit task and must be a final status.
execDate Required. Execution Date and Time
processDesc Required. Audit process description.
taskDesc Required. Audit task description.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi close_default_events eventStatus=Done execDate="2010-03-01 08:00:00" processDesc="Audit Proces



create_quarantine_allowed_until
Use in Policies.
Table 183. create_quarantine_allowed_until
Parameter Description
allowedUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
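
Example (illustrative only; the parameter values are hypothetical, and the timestamp
format is assumed to follow the pattern used by other grdapi date parameters):
grdapi create_quarantine_allowed_until allowedUntil="2015-06-01 00:00:00" dbUser=joe serverIP=10.10.9.56 serverName=dbsrv1 Type=normal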

create_quarantine_until

Use in Policies.
Table 184. create_quarantine_until
Parameter Description
quarantineUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
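
Example (illustrative only; the parameter values shown are hypothetical):
grdapi create_quarantine_until quarantineUntil="2015-06-01 00:00:00" dbUser=joe serverIP=10.10.9.56 serverName=dbsrv1 Type=normal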

delete_quarantine_until
Use in Policies.
Table 185. delete_quarantine_until
Parameter Description
quarantineUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
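
Example (illustrative only; the parameter values shown are hypothetical):
grdapi delete_quarantine_until quarantineUntil="2015-06-01 00:00:00" dbUser=joe serverIP=10.10.9.56 serverName=dbsrv1 Type=normal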



must_gather
Use the grdapi must_gather command to collect information on the state of the
Guardium system that can be used by Guardium Support.
Table 186. must_gather
Parameter Description
commandsList String - required
description String - required
duration Integer - required
emailDestination String - required
invokingUser String - required
maxLength Integer - required
pmrNumber String - required
start Date - required
timestamp Date - required
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
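
Example (illustrative only; every value shown is a hypothetical placeholder, and in
particular the commandsList contents should be taken from instructions provided by
Guardium Support):
grdapi must_gather commandsList="support_information" description="PMR data collection" duration=10 emailDestination=dba@example.com invokingUser=admin maxLength=100 pmrNumber=12345.678.000 start="2015-06-01 08:00:00" timestamp="2015-06-01 08:00:00"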

restart_job_queue_listener

Use the restart_job_queue_listener command to restart the job queue listener if the
job queue fails to start, does not run waiting jobs, or if a job appears stuck in
running or stopping status for a prolonged period of time. Issuing this command
immediately restarts the job queue, and any currently executing jobs will be halted
and restarted.

Example:
grdapi restart_job_queue_listener

The restart_job_queue_listener command does not accept any parameters.

update_quarantine_allowed_until
Use in Policies.
Table 187. update_quarantine_allowed_until
Parameter Description
allowedUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
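
Example (illustrative only; the parameter values shown are hypothetical):
grdapi update_quarantine_allowed_until allowedUntil="2015-07-01 00:00:00" dbUser=joe serverIP=10.10.9.56 serverName=dbsrv1 Type=normal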



update_quarantine_until
Use in Policies.
Table 188. update_quarantine_until
Parameter Description
quarantineUntil Required.
dbUser Required. Database user
serverIP Required. Server IP
serverName Required. Server name
Type Required. Value must be one of: normal, DB2z, or IMS.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.
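
Example (illustrative only; the parameter values shown are hypothetical):
grdapi update_quarantine_until quarantineUntil="2015-07-01 00:00:00" dbUser=joe serverIP=10.10.9.56 serverName=dbsrv1 Type=normal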

GuardAPI Quick Search for Enterprise Functions


Use these GuardAPI commands to enable, disable, or configure Quick Search for
Enterprise features and parameters.

disable_quick_search

Disable Quick Search for Enterprise functionality.

grdapi disable_quick_search

Parameter Value Description


all true or false In an environment with a
Central Manager, use this
parameter to disable search
on all managed units. For
example, all=true.

This parameter is optional.


api_target hostname or IP address In an environment with a
Central Manager, this
parameter allows you to
specify a target host where
the API will execute. On a
Central Manager, the value is
the host name or IP of a
managed unit. On a
managed unit, the value is
the host name or IP of the
Central Manager. For
example,
api_target=10.0.1.123.

This parameter is optional.
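
For example, to disable Quick Search for Enterprise on all managed units from a
Central Manager (an illustrative invocation using the optional all parameter):
grdapi disable_quick_search all=true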



enable_quick_search
Enable Quick Search for Enterprise functionality.

grdapi enable_quick_search schedule_interval=[value] schedule_units=[value]

For example, the following command enables Quick Search for Enterprise with a
2-minute data extraction interval: grdapi enable_quick_search
schedule_interval=2 schedule_units=MINUTE.

Parameter Value Description


all true or false In an environment with a
Central Manager, use this
parameter to enable search
on all managed units. For
example, all=true.

This parameter is optional.


api_target hostname or IP address In an environment with a
Central Manager, this
parameter allows you to
specify a target host where
the API will execute. On a
Central Manager, the value is
the host name or IP of a
managed unit. On a
managed unit, the value is
the host name or IP of the
Central Manager. For
example,
api_target=10.0.1.123.

This parameter is optional.


extraction_start date Define the date by which to
start the extraction of audit
data for search. If this
parameter is omitted,
extraction will start
immediately.

This parameter is optional.


includeViolations true or false Determine whether to
include violations in the
search indexes. Omitting
violations can help reduce
the size of search indexes.

This parameter is optional.


schedule_interval integer Used with the
schedule_units parameter to
define the interval for
extracting audit data. For
example,
schedule_interval=2
schedule_units=MINUTE.

This parameter is required.



Parameter Value Description
schedule_start date Date on which to begin
following the extraction
interval defined by the
schedule_interval and
schedule_units parameters.

This parameter is optional.


schedule_units HOUR or MINUTE Used with the
schedule_interval
parameter to define the
interval for extracting audit
data. For example,
schedule_interval=2
schedule_units=MINUTE.

This parameter is required.
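
As another illustrative example, the following command (with hypothetical values)
enables Quick Search for Enterprise with an hourly extraction interval and omits
violations from the search indexes:
grdapi enable_quick_search schedule_interval=1 schedule_units=HOUR includeViolations=false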

set_enterprise_search_options

Define the search mode for Quick Search for Enterprise.

grdapi set_enterprise_search_options distributed_search=[value]

For example, the following command configures Quick Search for Enterprise in
all_machines mode to allow searching of data across the entire Guardium
environment from any Guardium machine in that environment: grdapi
set_enterprise_search_options distributed_search=all_machines.

Parameter Value Description


api_target hostname or IP address In an environment with a
Central Manager, this
parameter allows you to
specify a target host where
the API will execute. On a
Central Manager, the value is
the host name or IP of a
managed unit. On a
managed unit, the value is
the host name or IP of the
Central Manager. For
example,
api_target=10.0.1.123.

This parameter is optional.



Parameter Value Description
distributed_search cm_only, local_only, or all_machines
cm_only
Searches submitted from a Central Manager return results from across the
Guardium environment, but searches submitted from managed units only
return local results from that managed unit.
local_only
Searches submitted from individual machines return results from that
machine only. There is no ability to search data from across the
Guardium environment.
all_machines
Searches can be submitted from any machine and return results from
across the Guardium environment.

This parameter is required, and the default value is cm_only.

GuardAPI Query Rewrite Functions


Automate testing or create definitions for certain complex queries that cannot be
done from the user interface by using Guardium APIs at the command-line
interface.

Note: If you create query rewrite definitions by using APIs, you can still use the
UI to retrieve those definitions for testing with the Query Rewrite Builder.

The GuardAPI functions related to query rewrite include:

assign_qr_condition_to_action

create_qr_action

create_qr_add_where

create_qr_add_where_by_id

create_qr_condition



create_qr_definition

create_qr_replace_element

create_qr_replace_element_byId

list_qr_action

list_qr_add_where

list_qr_add_where_by_id

list_qr_condition

list_qr_condition_to_action

list_qr_definitions

list_qr_replace_element

list_qr_replace_element_byId

remove_all_qr_replace_elements

remove_all_qr_replace_elements_byId

remove_qr_action

remove_qr_add_where_by_id

remove_qr_condition

remove_qr_definition

remove_qr_replace_element_byId

update_qr_action

update_qr_add_where_by_id

update_qr_condition

update_qr_definition

update_qr_replace_element_byId

assign_qr_condition_to_action

Create an association between a query rewrite condition and an associated action.

Parameter Description
actionName Required. The name of the query rewrite action.
conditionName Required. The name of the query rewrite condition to be associated with
the specified action.



Parameter Description
definitionName Required. The name of the query rewrite definition that is associated
with the specified condition and action.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:
grdapi assign_qr_condition_to_action definitionName="case 15" actionName="qr action15_2" conditionNam

create_qr_action

Create a query rewrite action for a specified query rewrite definition.

Parameter Description
actionName Required. The unique name of the query rewrite action.
definitionName Required. The query rewrite definition that is associated with this
action.
description An optional description.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:
grdapi create_qr_action definitionName="case 15" actionName="qr action15_3"

create_qr_add_where

Associate a query rewrite function to add a WHERE condition to the specified


query rewrite action.

Parameter Description
actionName Required. The unique name of the query rewrite action.
definitionName Required. The query rewrite definition that is associated with this
action.
whereText Text to add to a WHERE clause.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi create_qr_add_where definitionName="qrw_def_Oracle_1" actionName="qrw_act__addwhere_id2" whereText="id=2"



create_qr_add_where_by_id
Associate a query rewrite function to add a WHERE condition to the specified
query rewrite action.

Parameter Description
qrActionId Required (integer). The unique ID of query rewrite action.
whereText Text to add to a WHERE clause.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi create_qr_add_where_by_id qrActionId=10002 whereText="id=2"

create_qr_condition

Create a query rewrite condition.

Parameter Description
conditionName Required. The unique name of this query rewrite condition.
definitionName Required. The query rewrite definition that is associated with this
condition.
depth Integer that specifies the depth of the parsed SQL that this condition
applies to (1 and higher). The default -1 means that the query rewrite
condition applies to any matching SQL at any depth.
isForAllRuleObjects True or false. Use this parameter to associate this condition with objects
in a policy access rule. True indicates that the specified condition applies
to all objects in the access rule’s Object field or Object group for a fired
rule. The default is false, which means the query condition is specified
using the objects that are defined in this condition. Neither option
impacts any rule triggering behavior.
isForAllRuleVerbs True or false. Use this parameter to associate this condition with verbs
in a policy access rule. True indicates that the specified condition
applies to all verbs in the access rule’s Verb field or Verb group for a
fired rule. The default is false, which means the query condition is
specified using the verbs that are defined in this condition. Neither
option impacts any rule triggering behavior.
isObjectRegex True or false. Indicates that the specified object is specified by using a
regular expression. Default is false.
isVerbRegex True or false. Indicates that the specified verb is specified by using a
regular expression. Default is false.
object An object (table, view). The default “*” means all objects. This can also
be specified as a regular expression, in which case set the isObjectRegex
to True.
order Used to specify the order in which to assemble multiple related query
rewrite conditions for complex SQL. Default is 1.
verb A verb (select, insert, update, delete). The default “*” means all verbs.



Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi create_qr_condition definitionName="case 15" conditionName="qr cond15_3" verb=select isForAllRuleObjects=false object=* depth=2 order=3

create_qr_definition
Create a query rewrite definition.

Parameter Description
dataBaseType Required. The type of database this query rewrite definition is
associated with. Acceptable values are: ORACLE or DB2.
definitionName Required. A unique name for this query rewrite definition condition.
description An optional description.
isNegateQrCond Indicates whether there is a NOT flag on the set of query rewrite
conditions that are associated with this definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi create_qr_definition dataBaseType="ORACLE" definitionName="case 15"

create_qr_replace_element

Create a replacement element, or set of elements, such as an entire SQL sentence or
a SELECT list.

Parameter Description
actionName Required. The unique name of the query rewrite action this rewrite
function is associated with.
definitionName Required. A unique name for this query rewrite definition condition.
isFromAllRuleElements True or false. Indicates that this action applies to all FROM elements.
Default is false.
isFromRegex True or false. Indicates that the ‘from’ element is specified by using a
regular expression. Default is false.
isReplaceToFunction True or false. Indicates that the "replace to" is the name of a function,
such as user-defined function.
replaceFrom The incoming string for a matching rule that is to be replaced. Use
replaceType to indicate specifically which element of the incoming
query to examine.
replaceTo The replacement string for the matching element.



Parameter Description
replaceType Required. Indicates what is to be replaced.

Must be one of the following:


v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi create_qr_replace_element definitionName="case 15" actionName="qr action15_2" replaceType=VERB replaceFrom="select" replaceTo="select++"

create_qr_replace_element_byId
Create a replacement specification for a specified query rewrite action.

Parameter Description
isFromAllRuleElements True or false. Indicates that this action applies to all FROM elements.
Default is false.
isFromRegex True or false. Indicates that the “from” element is specified by using a
regular expression. Default is false.
isReplaceToFunction True or false. Indicates that the “replace to” is the name of a function,
such as user-defined function.
qrActionId Required (integer). The unique ID of query rewrite action.
replaceFrom The incoming string for a matching rule that is to be replaced. Use
replaceType to indicate specifically which element of the incoming
query to examine.
replaceTo The replacement string for the matching element.
replaceType Required. Indicates what is to be replaced.

Must be one of the following:


v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:



grdapi create_qr_replace_element_byId qrActionID="1116" replaceType=OBJECT
replaceFrom="employee" replaceTo="employee_2"

list_qr_action
Lists query actions for a specified query definition.

Parameter Description
actionName The name of the query rewrite action.
definitionName Required. The query rewrite definition name.
detail True or false. The default is true, which lists all the associated attributes
of the actions. Only the name is returned for false.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_action definitionName="case 2"

Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_action definitionName="case 2"
#######################################################################

QR actions of definition ’case 2’ - (id = 1 )

#######################################################################
qr action ID: 1
qr action name: qr action2
qr action description: add where by id

ok
Example:
grdapi list_qr_action definitionName="case 2" detail=false

Output:

qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_action definitionName="case 2" detail=false


#################################
QR actions of definition ’case 2’ - (id = 1 )
#################################
qr action2
ok

list_qr_add_where

Lists “add where” functions for a specified query action and query definition pair.

Parameter Description
actionName The name of the query rewrite action.
definitionName Required. The query rewrite definition name.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.



Example:

grdapi list_qr_add_where actionName="qrw_act_addwhere_id2" definitionName="qrw_def_Oracle_1"

list_qr_add_where_by_id
Lists “add where” functions for a specified query action.

Parameter Description
qrActionId Required (integer). The unique identifier for the query rewrite action.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_add_where_by_id qrActionId=20023

list_qr_condition

Lists the query rewrite conditions that are associated with a particular query
rewrite definition.

Parameter Description
conditionName The name of a query rewrite condition.
definitionName Required. A query rewrite definition.
detail True or false. The default is true, which lists all the associated attributes
of the conditions. Only the name is returned for false.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_condition definitionName="case 2" conditionName="qr cond2"

Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_condition definitionName="case 2" conditionName="qr c
#######################################################################
QR Condtions of Definition ’case 2’ - (id = 1 )

#######################################################################

qr condition id: 1
qr condition name: qr cond2
qr definition ID: 1
qr condition verb: *
qr condition object: *
qr condition dept: -1
is verb regex: false



is object regex: false
is action for all rule verbs: false
is action for all rule objects: false
qr condition order: 1

list_qr_condition_to_action

Lists the associations between a query rewrite condition and a query rewrite action
for a particular query definition.

Parameter Description
actionName Required. The name of the query rewrite action.
definitionName Required. A query rewrite definition.
Detail True or false. The default is true, which lists all the associated attributes
of the conditions for the specified action and definition. Only the name
is returned for false.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_condition_to_action actionName="qr action15_2" definitionName="case 15"

Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_condition_to_action actionName="qr action2" definitionNa
#######################################################################

QR Condtions of Action ’qr action2’ - (id = 1 )

#######################################################################

qr condition id: 1
qr condition name: qr cond2
qr definition ID: 1
qr condition verb: *
qr condition object: *
qr condition dept: -1
is verb regex: false
is object regex: false
is action for all rule verbs: false
is action for all rule objects: false
qr condition order: 1

list_qr_definitions

Lists query rewrite definitions.

Parameter Description
definitionName Required. A query rewrite definition.
Detail True or false. The default is true, which lists all the associated attributes
of the conditions for the specified action and definition. Only the name
is returned for false.



Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_definitions

Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_definitions
#######################################################################
QR Definitions

#######################################################################
qr definition ID: 1
qr definition name: case 2
qr definition description:
is negation set on qr conditions: false

list_qr_replace_element

Lists replacements for a specified query rewrite action and query rewrite definition
pair.

Parameter Description
actionName Required. A query rewrite action.
definitionName Required. A query rewrite definition.
Detail True or false. The default is true, which lists all the associated attributes
of the replacement elements for the specified action and definition. Only
the names are returned for false.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_replace_element actionName="qr action2" definitionName="case 2"

Output:
qrwg1.guard.swg.usma.ibm.com> grdapi list_qr_replace_element actionName="qr action2" definitionNam
QR replace elements for action ’qr action2’ - (qrActionId = 1 )

#######################################################################



qr replace element ID: 1
qr replace type: object
qr replace from: emp
qr replace to: NEW_EMP
qr is from regex: false
qr is from all rule elements: false

***********************************************************************
qr replace element ID: 2
qr replace type: selectList
qr replace from: Whole select list
qr replace to: EMPNO,SAL
qr is from regex: false
qr is from all rule elements: false

list_qr_replace_element_byId

Lists replacements for a specified query rewrite action.

Parameter Description
detail True or false. The default is true, which lists all the associated attributes
of the replacement elements for the specified action and definition. Only
the names are returned for false.
qrActionId Required (integer). The unique identifier for the query rewrite action.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi list_qr_replace_element_byId detail=true qrActionId="22222"


replaceType="OBJECT"

remove_all_qr_replace_elements
Deletes query replacement specifications from the system.

Parameter Description
actionName Required. A query rewrite action.
definitionName Required. A query rewrite definition.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST



Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi remove_all_qr_replace_elements definitionName="new case 2" actionName="new qr action2"

remove_all_qr_replace_elements_byId
Deletes query replacement specifications from the system.

Parameter Description
qrActionId Required (integer). A query rewrite action identifier.
definitionName Required. A query rewrite definition.
replaceType If specified, must be one of the following:
v SELECT
v VERB
v OBJECT
v SENTENCE
v SELECTLIST

If replaceType is not specified, then all replacements for the specified
action and definition are deleted.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi remove_all_qr_replace_elements actionName="qr action15_2" definitionName="case 15" replaceType="OBJECT"

remove_qr_action

Deletes a specified query rewrite action from the system.

Parameter Description
actionName Required. A query rewrite action.
definitionName Required. A query rewrite definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:



grdapi remove_qr_action actionName="qr action15_2" definitionName="case 15"

remove_qr_add_where_by_id

Deletes a specified “add where” function from the system.

Parameter Description
qrAddWhereId Required (integer). An “add where” function.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi remove_qr_add_where_by_id qrAddWhereId=22666

remove_qr_condition

Deletes a query rewrite condition from the system.

Parameter Description
conditionName Required. A query rewrite condition.
definitionName Required. A query rewrite definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi remove_qr_condition conditionName="qr cond15_1" definitionName="case 15"

remove_qr_definition

Deletes a query rewrite definition from the system.

Parameter Description
definitionName Required. A query rewrite definition.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi remove_qr_definition definitionName="case 15"



remove_qr_replace_element_byId
Deletes a specified query element replacement from the system.

Parameter Description
qrReplaceElementId Required (integer). A replacement definition ID.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi remove_qr_replace_element_byId qrReplaceElementId=33333

update_qr_action

Updates an existing query rewrite action with a new name and optional
description.

Parameter Description
actionName Required. The unique name of the query rewrite action.
definitionName Required. The query rewrite definition that is associated with this
action.
description An optional description.
newName The new name for the query rewrite action.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi update_qr_action definitionName="case 2" actionName="qr action2" newName="new qr action2"

update_qr_add_where_by_id
Allows update of an existing “add where” function with new replacement text.

Parameter Description
qrAddWhereId Required (integer). The unique identifier for the query rewrite “add
where” function.
whereText The replacement text for the identified where clause.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi update_qr_add_where_by_id qrAddWhereId=22222 whereText="1=2"


update_qr_condition
Update an existing query rewrite condition.

Parameter Description
conditionName Required. The unique name of this query rewrite condition.
definitionName Required. The query rewrite definition that is associated with this
condition.
depth Integer that specifies the depth of the parsed SQL that this condition
applies to (1 and higher). The default -1 means that the query rewrite
condition applies to any matching SQL at any depth.
isForAllRuleObjects True or false. Indicates that the specified condition applies to all objects
for the fired rule. Default is false.
isForAllRuleVerbs True or false. Indicates that the specified condition applies to all verbs
for the fired rule. Default is false.
isObjectRegex True or false. Indicates that the specified object is specified by using a
regular expression. Default is false.
isVerbRegex True or false. Indicates that the specified verb is specified by using a
regular expression. Default is false.
newName The new name for the query rewrite condition.
Object An object (table or view). The default “*” means all objects. This can
also be specified as a regular expression, in which case set the
isObjectRegex to True.
Order Used to specify the order in which to assemble multiple related query
rewrite conditions for complex SQL. Default is 1.
Verb A verb (select, insert, update, delete). The default “*” means all verbs.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:

grdapi update_qr_condition definitionName="case 16" conditionName="qr cond15_3" newName="qr cond16_3" verb=select object=* dept=2 order=3

update_qr_definition
Update an existing query rewrite definition.

Parameter Description
dataBaseType Required. The type of database this query rewrite definition is
associated with. Must be either ORACLE or DB2.
definitionName Required. A unique name for this query rewrite definition condition.
description An optional description.
isNegateQrCond Indicates whether there is a NOT flag on the set of
query rewrite conditions that are associated with this definition.
newName Optional. Specify a new unique name.
sampleSql Optional. Specify a sample SQL statement. In most cases, you will not
use this unless you want to use the inputted sample SQL later in the UI.



Parameter Description
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit,
it is the host name or IP of the CM.

Example:
grdapi update_qr_definition dataBaseType="DB2" definitionName="case 15" sampleSql="select EMPNO fr

update_qr_replace_element_byId

Update an existing replacement specification for a specified query rewrite action.

Parameter Description
isFromAllRuleElements True or false. Indicates that this action applies to all FROM elements.
Default is false.
isFromRegex True or false. Indicates that the “from” element is specified by using a
regular expression. Default is false.
isReplaceToFunction True or false. Indicates that the “replace to” is the name of a function,
such as user-defined function.
qrReplaceElementId Required (integer). The unique ID of the query rewrite replacement element to update.
replaceFrom The incoming string for a matching rule that is to be replaced. Use
replaceType to indicate specifically which element of the incoming
query to examine.
replaceTo The replacement string for the matching element.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API executes. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example:
grdapi update_qr_replace_element_byId qrReplaceElementId=1 isFromAllRuleElements=false isFromRegex

GuardAPI Role Functions


Use these GuardAPI commands to grant, list and revoke Role Functions.

Note: In a Central Management environment, the object to which you want to add
a role may reside on the Central Manager or on a managed unit. See the Overview
of the Aggregation & Central Management help book, for more information.

grant_role_to_object_by_id

Add a role to the specified object - a Classification process, for example.


Dependencies are checked before adding the role. For example, before adding a
role to a Classification process, that role must be assigned to all components
contained by that Classification process (the classification policy and any
datasources referenced).



Table 189. grant_role_to_object_by_id
Parameter Description
objectTypeId Required (integer). Identifies the type of object to which the role will be
assigned. It must be one of the following integers:

1=Query

2=Report

3=Alert

4=Baseline

5=Policy

6=SecurityAssessment

7=PrivacySet

8=AuditProcess

12=CustomTable

13=Datasource

14=CustomDomain

15=ClassifierPolicy

16=ClassificationProcess
objectId Required (integer). Identifies the object to which the role will be
assigned.
roleId Required (integer). Identifies the role to assign. This can be any existing
role ID, or the special value -1, which allows access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi grant_role_to_object_by_id objectTypeId=13 objectId=2 roleId=3

grant_role_to_object_by_Name
Add a role to the specified object - a Classification process, for example.
Dependencies are checked before adding the role. For example, before adding a
role to a Classification process, that role must be assigned to all components
contained by that Classification process (the classification policy and any
datasources referenced).



Table 190. grant_role_to_object_by_Name
Parameter Description
objectType Required. Identifies the type of object to which the role will be assigned.
It must be one of the following:

Query

Report

Alert

Baseline

Policy

SecurityAssessment

PrivacySet

AuditProcess

CustomTable

Datasource

CustomDomain

ClassifierPolicy

ClassificationProcess
objectName Required. The name of the object (the query or report, for example) to
which the role will be assigned.
role Required. The name of the role to assign. This can be any existing role,
or all_roles to allow access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi grant_role_to_object_by_Name objectType=Datasource objectName="swanSybase" role=admin

list_roles_granted_to_object_by_id
Displays the roles assigned to the specified object - a Classification process, for
example.



Table 191. list_roles_granted_to_object_by_id
Parameter Description
objectTypeId Required (integer). Identifies the type of object to which the role will be
assigned. It must be one of the following integers:

1=Query

2=Report

3=Alert

4=Baseline

5=Policy

6=SecurityAssessment

7=PrivacySet

8=AuditProcess

12=CustomTable

13=Datasource

14=CustomDomain

15=ClassifierPolicy

16=ClassificationProcess
objectId Required (integer). Identifies the object to which the role will be
assigned.
roleId Required (integer). Identifies the role to assign. This can be any existing
role ID, or the special value -1, which allows access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi list_roles_granted_to_object_by_id objectTypeId=7 objectId=1

list_roles_granted_to_object_by_Name
Displays the roles assigned to the specified object - a Classification process, for
example.



Table 192. list_roles_granted_to_object_by_Name
Parameter Description
objectType Required. Identifies the type of object to which the role will be assigned.
It must be one of the following:

Query

Report

Alert

Baseline

Policy

SecurityAssessment

PrivacySet

AuditProcess

CustomTable

Datasource

CustomDomain

ClassifierPolicy

ClassificationProcess
objectName Required. The name of the object (the query or report, for example) to
which the role will be assigned.
role Required. The name of the role to assign. This can be any existing role,
or all_roles to allow access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi list_roles_granted_to_object_by_Name objectType=PrivacySet objectName="privaceSet 1"

revoke_role_from_object_by_id
Removes a role from the specified object - a Classification process, for example.
Dependencies are handled automatically. For example, if the role foo is removed
from a specific query, the role foo will also be removed from any report based on
that query.



Table 193. revoke_role_from_object_by_id
Parameter Description
objectTypeId Required (integer). Identifies the type of object to which the role will be
assigned. It must be one of the following integers:

1=Query

2=Report

3=Alert

4=Baseline

5=Policy

6=SecurityAssessment

7=PrivacySet

8=AuditProcess

12=CustomTable

13=Datasource

14=CustomDomain

15=ClassifierPolicy

16=ClassificationProcess
objectId Required (integer). Identifies the object to which the role will be
assigned.
roleId Required (integer). Identifies the role to assign. This can be any existing
role ID, or the special value -1, which allows access by all roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi revoke_role_from_object_by_id objectTypeId=13 objectId=5 roleId=-1

revoke_role_from_object_by_Name
Removes a role from the specified object - a Classification process, for example.
Dependencies are handled automatically. For example, if the role foo is removed
from a specific query, the role foo will also be removed from any report that uses
that query.

Table 194. revoke_role_from_object_by_Name
Parameter Description
objectType Required. Identifies the type of object from which the role will be revoked.
It must be one of the following:

Query

Report

Alert

Baseline

Policy

SecurityAssessment

PrivacySet

AuditProcess

CustomTable

Datasource

CustomDomain

ClassifierPolicy

ClassificationProcess
objectName Required. The name of the object (the query or report, for example) from
which the role will be revoked.
role Required. The name of the role to revoke. This can be any existing role,
or the special value all_roles.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

grdapi revoke_role_from_object_by_Name objectType=Datasource objectName="swanSybase" role=admin

GuardAPI S-TAP functions


Use these CLI commands to create, list, delete, restart, and set S-TAP functions.

create_stap_inspection_engine
Add an inspection engine to the specified S-TAP. S-TAP configurations can be
modified only from the active Guardium host for that S-TAP, and only when the
S-TAP is online.
Table 195. create_stap_inspection_engine
Parameter Description
stapHost Required. The host name or IP address of the database server on which
the S-TAP is installed.

protocol Required. The database protocol, which must be one of these values:

DB2

DB2 Exit (DB2 version 10)

FTP

Informix

Kerberos

Mysql

Netezza®

Oracle

PostgreSQL

Sybase

Teradata

Windows File Share

exclude IE

Windows S-TAP hosts can also use the following protocols:

MSSQL

named pipes
portMin Required (integer). Starting port number of the range of listening ports
that are configured for the database. (Do not use large inclusive ranges,
as this degrades the performance of the S-TAP.)
portMax Required (integer). Ending port number of the range of listening ports
for the database.
teeListenPort Optional (integer). Not used for Windows. Under UNIX, replaced by the
KTAP DB Real Port when the K-TAP monitoring mechanism is used.
teeRealPort Required when the TEE monitoring mechanism is used. The Listen Port
is the port on which the S-TAP listens for and accepts local database
traffic. The Real Port is the port onto which S-TAP forwards traffic.
connectToIp Optional (integer). The IP address for the S-TAP to use to connect to the
database. Some databases accept local connection only on the “real” IP
address of the machine, and not on the default (127.0.0.1).
client Required. A list of Client IP addresses and corresponding masks to
specify which clients to monitor. If the IP address is the same as the IP
address for the database server, and a mask of 255.255.255.255 is used,
only local traffic is monitored. A client address/mask value of
1.1.1.1/0.0.0.0 monitors all clients. (See the example.)
encryption Optional. Activate ASO encrypted traffic where encryption=0 (no) or
encryption=1 (yes).
excludeClient Optional. A list of Client IP addresses and corresponding masks to
specify which clients to exclude. This option enables you to configure
the S-TAP to monitor all clients, except for a certain client or subnet (or
a collection of these options).

procNames For a Windows Server: For Oracle or MS SQL Server only, when named
pipes are used. For Oracle, the list usually has two entries:
oracle.exe,tnslsnr.exe. For MS SQL Server, the list is usually just one
entry: sqlservr.exe.
namedPipe Windows only. Specifies the name of a named pipe. If a named pipe is
used, but nothing is specified here, the S-TAP retrieves the named pipe
name from the registry.
ktapDbPort Optional (integer). Not used for Windows. Under UNIX, used only
when the K-TAP monitoring mechanism is used. Identifies the database
port to be monitored by the K-TAP mechanism.
dbInstallDir UNIX only. Enter the full path name for the database installation
directory. For example: /home/oracle10
procName For a UNIX Server: For a DB2, Oracle, or Informix database, enter the
full path name for the database executable. For example:

/home/oracle10/prod/10.2.0/db_1/bin/oracle
db2SharedMemAdjustment, db2SharedMemClientPosition, db2SharedMemSize
These three parameters are used for a DB2 inspection engine, only
under the following conditions:

v The DB2 server is running under Linux.

v The K-TAP monitoring mechanism is installed.

v Clients connect to DB2 using shared memory.

When these parameters are used, grdapi verifies only that the protocol
is db2; it does not verify that the conditions have been met.

See the DB2 Linux S-TAP Configuration Parameters topic for a detailed
explanation of how to use these parameters.
instanceName Optional (string). Used only for MSSQL or Oracle encrypted traffic.
Either the MSSQL or ORACLE encryption flag must be turned on before
this parameter can be used.
informixVersion Informix Version.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API runs. On a Central Manager (CM), the value
is the host name or IP of any managed units. On a managed unit, it is
the host name or IP address of the CM.

Example
grdapi create_stap_inspection_engine stapHost=192.168.2.118 protocol=Oracle portMin=1521 portMax=1
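
A fuller invocation for a UNIX Oracle server that is monitored through K-TAP might
look like the following sketch. The host, port, and path values are illustrative only
(taken from the parameter descriptions above) and must be adjusted to your environment:
grdapi create_stap_inspection_engine stapHost=192.168.2.118 protocol=Oracle portMin=1521 portMax=1521 client=1.1.1.1/0.0.0.0 ktapDbPort=1521 dbInstallDir=/home/oracle10 procName=/home/oracle10/prod/10.2.0/db_1/bin/oracle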

Note:

Sometimes, when adding an inspection engine, a false message of "Configuration
rejected by S-TAP - see S-TAP event log for details" is displayed even though the
configuration was not rejected and was installed correctly.

Client IP/mask is required for UNIX S-TAP, and optional for Windows S-TAP.

list_inspection_engines

Display the properties of all S-TAPs on the specified host, optionally for a specific
database type only.

Table 196. list_inspection_engines
Parameter Description
stapHost Required. The host name or IP address of a database server on which
S-TAPs are installed (and configured to report to this Guardium
appliance).
type Optional. If used, inspection engines for the specified database type only
will be listed. Type must be one of the following:

db2

informix

mssql

mssql-np

oracle

sybase
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

a1.corp.com> grdapi list_inspection_engines stapHost=192.168.2.33 type=oracle

ID=20162

Stap Host: 192.168.2.33 - Not Active

oracle Inspection Engines:

name =ORACLE2

type =ORACLE

connect to IP=127.0.0.1

install dir = /home/oracle10

exec file = /home/oracle10/product/10.2.0/db_1/bin/oracle-guard

instance name = MSSQLSERVER

encrypted = no

port range = 1521 - 1521

tee listen port = null, tee rel port = 1521

client = 127.0.0.1/255.255.255.255

client = 192.168.0.0/255.255.0.0

name =ORACLE3

type =ORACLE

connect to IP=127.0.0.1

install dir = /home/oracle9

exec file = /home/oracle9/bin/oracle

instance name = MSSQLSERVER

encrypted = no

port range = 1521 - 1521

ok

list_staps

Display the database servers from which S-TAPs report to this Guardium system,
optionally listing only the servers that have S-TAPs for which this Guardium
system is the active host (that is, the one to which the S-TAP is sending data and
the one from which the S-TAP configuration can be modified).
Table 197. list_staps
Parameter Description
onlyActive Optional (Boolean). Enter true, or omit this parameter, to list only those
hosts having S-TAPs for which this Guardium system is the active host.
Enter false to list all hosts on which S-TAPs have been configured to use
this Guardium system as either a primary or secondary host.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example

a1.corp.com> grdapi list_staps onlyActive=false

ID=0

staps:

stap host = FALCON

stap host = 192.168.2.33

stap host = 192.168.2.173

stap host = 192.168.2.248

stap host = jumbo

ok

delete_stap_inspection_engine
Remove an S-TAP inspection engine. This Guardium system must be the active
host for the S-TAP from which the inspection engine will be removed.
Table 198. delete_stap_inspection_engine
Parameter Description
stapHost Required. The host name or IP address of the database server on which
the S-TAP is installed.
type Required. Identifies the type of inspection to be removed. Type must be
one of the following:

Cassandra, CouchDB, DB2, DB2 Exit, FTP, GreenPlumDB, Hadoop, HTTP, iSERIES,
Informix, KERBEROS, MongoDB, MS SQL, mssql-np, Mysql, Named Pipes, Netezza,
Oracle, PostgreSQL, SAP Hana, Sybase, Teradata, or Windows File Share
sequence Required (integer). The sequence number of the inspection engine to be
removed within the set of inspection engines of the specified type. You
can use the grdapi list_inspection_engines command with the type
option first, to verify the sequence number of the inspection engine to
be removed.
waitForResponse Optional. Specifies whether the API will wait for a response from the
S-TAP. Valid values are 0 (do not wait) and 1 (wait for a response). The
default is 1 when stapHost is a single host name or IP address and 0 in
all other cases.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi delete_stap_inspection_engine stapHost=192.168.2.118 type=Oracle sequence=1

Note: Sometimes, when deleting an inspection engine, a false message of "Cannot
remove Inspection Engine - the specified inspection engine is not found" is
displayed even though the removal was successful.

restart_stap

Restart an S-TAP inspection engine.


Table 199. restart_stap
Parameter Description
stapHost Required. The host name or IP address of the database server on which
the S-TAP is installed.
api_target_host In a central management configuration only, allows the user to specify a
target host where the API will execute. On a Central Manager (CM) the
value is the host name or IP of any managed units. On a managed unit
it is the host name or IP of the CM.

Example
grdapi restart_stap stapHost=192.168.2.118

set_stap_debug
Filter log content by database, protocol, or client information, instead of dumping all
traffic to the log.

function parameters :

stapDebugInterval - required

stapDebugLevel - required

stapDebugOn - required

stapHost - required

api_target_host
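
This function has no example in this reference. A hypothetical invocation, which
assumes that stapDebugOn takes 1 (on) or 0 (off), that stapDebugLevel is a numeric
verbosity level, and that stapDebugInterval is a duration (these value semantics are
assumptions, not documented here), might look like:
grdapi set_stap_debug stapHost=192.168.2.118 stapDebugOn=1 stapDebugLevel=4 stapDebugInterval=10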

store_stap_approval

Use this function to block unauthorized S-TAPs from connecting to the Guardium
system.

If ON, then S-TAPs cannot connect until they are specifically approved.

If an unapproved S-TAP connects, it is immediately disconnected until the IP
address of that S-TAP is specifically authorized.

There is a pre-defined report for approved clients, Approved TAP clients. It is
available on the Daily Monitor tab.

Note:

A valid IP address is required, not the host name.

The store_stap_approval command does not work within an environment where
there is an IP load balancer.

Within a Central Management environment, after adding IP addresses to the list of
approved S-TAPs, there is a wait time associated with synchronization that might
take up to an hour. After synchronization is complete, the status of the approved
S-TAP appears green in the GUI.

Function: store_stap_approval

function parameters :

isNeeded - Boolean - required

api_target_host - String

Syntax

grdapi store_stap_approval ON | OFF
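
For example, to require approval before S-TAPs can connect, and later to remove
that requirement (this follows the ON | OFF form shown in the Syntax line; some
releases might instead expect the isNeeded Boolean parameter listed above):
grdapi store_stap_approval ON
grdapi store_stap_approval OFF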

CLI command

store stap approval and show stap approval

add_approved_stap_client

Use this GuardAPI command to add an approved S-TAP client.

Use of this GuardAPI command does not restart the sniffer and does not affect
already connected S-TAPs. This command affects only new S-TAP connections.

Function: add_approved_stap_client

function parameters :

stapHost - String - required

api_target_host - String

Syntax

grdapi add_approved_stap_client <stapHost>
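
For instance, assuming the usual grdapi name=value form and an illustrative
database server IP address:
grdapi add_approved_stap_client stapHost=192.168.2.33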

list_approved_stap_client

Use this GuardAPI command to list approved S-TAP clients.

Function: list_approved_stap_client

function parameters :

api_target_host - String

Syntax

grdapi list_approved_stap_client

list_stap_verification_results

Use this GuardAPI command to list S-TAP verification results.

function parameters:

stapHost - String. The host name or IP address of the database server on which the
S-TAP is installed.

Syntax

grdapi list_stap_verification_results <stapHost>
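
For instance, with an illustrative database server IP address:
grdapi list_stap_verification_results stapHost=192.168.2.33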

delete_approved_stap_client
Use this GuardAPI command to remove an approved S-TAP client.

Use of this GuardAPI command does not restart the sniffer and does not affect
other already connected S-TAPs. This command affects only the specified S-TAP
connections.

Function: delete_approved_stap_client

function parameters :

stapHost - String - required

api_target_host - String

Syntax

grdapi delete_approved_stap_client <stapHost>
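
For instance, to remove the client that was approved in the earlier
add_approved_stap_client example (the IP address is illustrative):
grdapi delete_approved_stap_client stapHost=192.168.2.33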

set_ktap_debug

ID=0

function parameters :

ktapDebugInterval - required

ktapFunctionNames

stapHost - required

api_target_host
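
This function has no example in this reference. A hypothetical invocation that
enables K-TAP debug collection on one host (the interval unit and the use of
ktapFunctionNames are assumptions, not documented here) might look like:
grdapi set_ktap_debug stapHost=192.168.2.118 ktapDebugInterval=10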

display_stap_config
Display all the properties of all S-TAPs on the specified host.
Table 200. display_stap_config
Parameter Description
stapHost Required. The host name or IP address of a database server on which
S-TAPs are installed and configured to report to this Guardium system,
or a comma-separated list of host names or IP addresses. You can also
use these values:
all_active
All S-TAPs that are configured to report to this Guardium
system
all_windows_active
All S-TAPs that are configured to report to this Guardium
system and are running on Windows machines
all_unix_active
All S-TAPs that are configured to report to this Guardium
system and are running on UNIX machines

Examples:
grdapi display_stap_config stapHost=myhost1,myhost2
grdapi display_stap_config stapHost=all_active

update_stap_config
Update properties of all S-TAPs on the specified host.
Table 201. update_stap_config
Parameter Description
stapHost Required. The host name or IP address of a database server on which
S-TAPs are installed and configured to report to this Guardium system,
or a comma-separated list of host names or IP addresses. You can also
use these values:
all_active
All S-TAPs that are configured to report to this Guardium
system
all_windows_active
All S-TAPs that are configured to report to this Guardium
system and are running on Windows machines
all_unix_active
All S-TAPs that are configured to report to this Guardium
system and are running on UNIX machines
updateValue Required. One or more key-value pairs, in this format:
section.parameter_name:new_value. section indicates the section of the
guard_tap.ini file in which the parameter is contained, and can be TAP
or DB_x, where DB_x is a designation for an inspection engine that
appears as a section header in the file. You can specify new values for
multiple parameters by separating the entries with an ampersand (&) .
waitForResponse Optional. Specifies whether the API will wait for a response from the
S-TAP. Valid values are 0 (do not wait) and 1 (wait for a response). The
default is 1 when stapHost is a single host name or IP address and 0 in
all other cases.

Examples:
grdapi update_stap_config stapHost=all_windows_active updateValue=TAP.XXXX
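
As a more concrete sketch, assuming guard_tap.ini keys such as sqlguard_ip in the
TAP section and db_install_dir in a DB_0 inspection-engine section (these key names
are illustrative assumptions, not defined in this topic):
grdapi update_stap_config stapHost=myhost1 updateValue="TAP.sqlguard_ip:10.10.9.240&DB_0.db_install_dir:/home/oracle10" waitForResponse=1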

verify_stap_inspection_engine_with_sequence
Use this command to verify the S-TAP inspection engine.

function parameters:

addToSchedule - String - Constant values list; valid values are Yes and No.

datasourceName - String. If this parameter is specified, advanced verification is
performed against the specified datasource. If this parameter is omitted, standard
verification is performed.

sequence - Integer - required. The sequence number of the existing inspection
engine for verification. You can use the grdapi list_inspection_engines command
with the type option first, to verify the sequence number of the inspection engine
to be verified.

stapHost - String - required - The host name or IP address of the database server
on which the S-TAP is installed.

protocol - Required. The database protocol, which must be one of these values:
DB2, DB2 Exit (DB2 version 10), FTP, Informix, Kerberos, Mysql, Netezza, Oracle,
PostgreSQL, Sybase, Teradata, Windows File Share, or exclude IE. Windows S-TAP
hosts can also use the following protocols: MSSQL, named pipes.

Example:
grdapi verify_stap_inspection_engine_with_sequence stapHost=9.70.144.212 sequence=3

revoke_ignore_stap

This command revokes existing IGNORE S-TAP SESSION (REVOKABLE) policy rule
actions that ignore S-TAP session traffic. This command only revokes soft ignore
rules (marked as REVOKABLE) and cannot revoke hard rules (not marked as
REVOKABLE).
Table 202. revoke_ignore_stap
Parameter Description
stapHost Required. The host name or IP address of a
database server on which S-TAPs are
installed and configured to report to this
Guardium system, or a comma-separated list
of host names or IP addresses. You can also
use these values:
all_active
All S-TAPs that are configured to
report to this Guardium system
all_windows_active
All S-TAPs that are configured to
report to this Guardium system and
are running on Windows machines
all_unix_active
All S-TAPs that are configured to
report to this Guardium system and
are running on UNIX machines
api_target_host In a central management configuration only,
allows the user to specify a target host
where the API will execute. On a Central
Manager (CM) the value is the host name or
IP of any managed units. On a managed
unit it is the host name or IP of the CM.

Example
grdapi revoke_ignore_stap stapHost=myhost1

set_ztap_logging_config

These two GuardAPI commands read or save log_db2z_target to
ADMINCONSOLE_PARAMETER.ZTAP_LOGGING_CONFIG. By default, it is 0 (enable).

When log_db2z_target=0, nothing is changed. When log_db2z_target=1, targets in the
db2z protobuf message are logged to GDM_OBJECT in addition to objects from the
parser.

function parameters :

parameter - String - required

value - String - required

api_target_host - String

Syntax

grdapi get_ztap_logging_config

grdapi set_ztap_logging_config parameter=log_db2z_target value=1

Guardium for Applications JavaScript API


The Guardium for Applications JavaScript API exposes a set of objects and classes
for use in JavaScript programs that run in the JavaScript engine to manipulate
captured messages. You can use this API if the selection tool does not meet all
your needs for some pages. By using this API, you can modify the content of
HTTP messages as well as other message parameters, such as the URL.

When you use the selection tool to define masking actions, it creates scripts that
are run when rule conditions are met. These scripts modify the HTTP messages
that occur with the use of the application. If this process does not give you the
results that you require, you can create your own scripts to manipulate the
contents and properties of the HTTP messages. Designing these scripts requires
that you understand the messages that are exchanged when users interact with the
applications that you want to mask.

To use your custom scripts, identify the conditions for running the scripts, then
create a mask in context action, and add one or more action items that invoke your
custom scripts. In these scripts, you can use the objects and classes that are
described here.
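
As a minimal sketch of such a custom script for an HTML response, using the html
object and the dbgm helper that are described later in this topic (the XPath
expression and the CSS class name are illustrative assumptions for a hypothetical
page):
// mask every account-number table cell according to the current action's masking method
var nodes = html.xpath('//td[@class="acct"]/text()');
for (var i = 0; i < nodes.length; i++)
    html.mask(nodes[i]);
dbgm('masked ' + nodes.length + ' text nodes'); // debug output to stdout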

In addition to the objects and classes, the API provides a function that can be used
for debug purposes:
dbgm(...); //prints the supplied arguments to stdout.

For example,
dbgm('this ' + 'is' + ' a debug output'); //prints "this is a debug output"

You can insert values from the current class or object into the output string. For an
example, see the json global object.

The following notation is used in describing the objects and classes:


v [r] indicates that a property is read-only
v [rw] indicates that a property is read/write
v [] indicates that a parameter is optional.
v nnn:ttt is a property definition where nnn is a property name and ttt is its type
v "any property" means any nnn
v mmm(nnn:ttt[, ...]) is a method definition where mmm is the method name, nnn
is a parameter name, and ttt is the parameter type. The [, ...] indicates that
additional parameter:type pairs can be specified.

The Guardium for Applications JavaScript API defines objects and classes.

JavaScript API objects
html

There is only one html object as a property of a global JS object.


Properties
none
Methods
v xpath(expression: String) : XmlNodeSet - run XPath query on HTML text
v mask(n: XmlNode[, attribute: String]) - mask the node or its specified
attribute according to the method stored in the current action

Note: The only way to get to a specific node in an HTML document is to use an
XPath expression.

Example:
var ns = html.xpath('some xpath expression returning text nodes');
// "ns" is an object of JS class XmlNodeSet (see the classes section for more details)
// providing the node set is not empty we can now mask text node contents according to the
// information stored in the current action
// the following lines mask contents of the first node in the set
if (ns.length > 0)
    html.mask(ns[0]);
// the following code masks the 'a1' attribute of the second node in the set
if (ns.length > 1)
    html.mask(ns[1], 'a1');

xml

A global object representing a parsed XML message.


Properties
none
Methods
v xpath(expression: String) : XmlNodeSet - run XPath query on the XML
tree
v mask(n: XmlNode[, attribute: String]) - mask the node or its specified
attribute according to the method stored in the current action

Note: The only way to get to a specific node in an XML document is to use an
XPath expression.

Example: similar to the example for the html object.

json

A global object representing a parsed JSON message.


Properties
data: JsonNode - root node for the parsed JSON message
Methods
v mask(n: JsonNode, p: String) - mask string value in "n.p" according to a
method stored in the current action. "n" can be either a JS object or an
array. In the latter case, "p" must be a string representing an array index.

v mask(n: JsonNode, i: int) - mask string value in "n[i]" according to a
method stored in the current action. "n" must be a JSON array (not an
object)

Example:
json.data = {"p1": "v1", "p2": "v2"}; // this would entirely replace JSON in the message
json.data.p1 = {};
json.data.p2 = null;
json.data.a1 = [1, 2, "aasdf"];
json.data.a1[0] = false; // 1 -> false
json.mask(json.data.a1, 2); // "aasdf" will be masked with "*****" if the parent action
// defines "replace" masking method
dbgm(JSON.stringify(json.data)); // should print:
// {"p1": {}, "p2": null, "a1": [false, 2, "*****"]}

form

A global object representing parsed form data, typically in POST requests.


Properties
data: FormData - provides access to the actual form data (parsed
name/value list)
Methods
mask(n: String) - mask form value with name "n" according to a method
stored in the current action.

Example:
// set value in form field "p1"
form.data["p1"] = "v1";
// mask form field "p2"
form.mask("p2");
// mask all fields in the form
for (var f in form.data)
form.mask(f);

query
A global object representing a parsed URL query part, as it appears in the browser.
Properties
data: QueryData - provides access to the actual URL query data (parsed
name/value list)
Methods
mask(n: String) - mask query value with name "n".

Example:
// set value in query field "p1"
query.data["p1"] = "v1";
// mask query field "p2"
query.mask("p2");
// mask all fields in the query
for (var f in query.data)
query.mask(f);

text

A property of the global object of type String. Assignments to this property directly
modify the message body. However, if during the message processing both the HTML
tree structure and the plain message text are modified, only modifications that are
applied to the HTML tree hold, as the modified tree is serialized back to the message
buffer and replaces its content.
Properties
none
Methods
none

Example:
text = 'this string will replace content in the message buffer';

JavaScript API classes


XmlNodeSet

Instances of this class are created by xpath() methods of html and xml global
objects. These are actually the standard JavaScript Array objects containing
XmlNode objects as their elements. Access to these elements is provided through
the [] operator as it would normally be for JS arrays.
Properties
none
Methods
none

Example:
var ns = html.xpath('some xpath expression'); // ns: Array of XmlNode objects
dbgm('number of nodes in set: ' + ns.length); // print number of nodes in the set "ns"
var node = ns[0]; // node: XmlNode

XmlNode

Instances of this class are also created by xpath() methods of html and xml global
objects.
Properties
v name: String [r] - get node name
v text: String [rw] - get/set inner text for text nodes only
v attributes: XmlAttributeSet [r] - access node attributes
Methods
none

Example:
node.attributes['a1'] = 'attribute one'; // node is of type XmlNode; setting 'a1' attribute value
var a2 = node.attributes['a2']; // getting attribute 'a2' value (of type string)

XmlAttributeSet

Instances of this class are used to access the XmlNode attributes through the
attributes property of XmlNode objects. The class behaves as a regular JS Array. All
the array elements are of type String.
Properties
any property [rw] - get/set the respective attribute value for given
XmlNode object

Methods
none

Example: similar to class XmlNode.

JsonNode

This is a dummy object, completely transparent to the calling script. It serves as a
bridge between the JS JSON interface and the native JSON parser and allows
manipulation of native JSON objects from within scripts as if they were normal JS
objects.
Properties
any property [rw] - get/set property value of underlying JS object
Methods
none

Note: JSON.stringify shows a JsonNode containing an array as if it were a regular
JS object.

FormData

Provides read/write access to the parsed form data represented as a name/value
list.
Properties
any property [rw] - get/set property value, which directly affects
associated native NameValueList object
Methods
none

Example: see object form.

QueryData
Provides read/write access to the parsed URL query data represented as a
name/value list.
Properties
any property [rw] - get/set property value, which directly affects
associated native NameValueList object
Methods
none

Example: see object query.


