5.2.1. Select
5.2.2. Joining Tables
5.2.3. Sub Queries
5.2.4. Union
5.2.5. Insert
5.2.6. Update
5.2.7. Delete
5.3. CONTROL STATEMENTS
5.3.1. Grant
5.3.2. Revoke
5.3.3. Commit
5.3.4. Roll Back
6. PROGRAM STRUCTURE
6.1. HOST VARIABLES
6.1.1. Declaring Host Variables
6.2. INDICATOR VARIABLES
6.3. SQLCA
6.4. COBOL STRUCTURE OF SQLCA
6.5. SQLCA RETURN CODES
6.6. SQLCA WARNINGS
6.7. IMPORTANT SQL CODES
6.8. STATIC SQL
6.9. DYNAMIC SQL
6.10. EXAMPLE FOR A DB2 APPLICATION PROGRAM
7. PROGRAM PREPARATION
7.1. STEPS IN PROGRAM PREPARATION
7.2. DCLGEN (DECLARATIONS GENERATOR)
7.3. PRECOMPILE
7.4. BIND
7.4.1. Binding A DBRM To A Package
7.4.2. Binding An Application Plan
7.5. COMPILE AND LINKEDIT
7.6. OVERVIEW OF DB2 APPLICATION PROGRAM PREPARATION AND EXECUTION
7.7. ASSOCIATING LOAD MODULES AND PACKAGES
8. SECURITY FEATURES
8.1. PRIVILEGES
8.2. REFERENTIAL INTEGRITY
8.2.1. DB2 Enforcement Of Referential Integrity
8.2.2. Referential Integrity Enforcement Rules
8.2.3. Example For Referential Integrity Violation
8.3. DATABASE RECOVERY IN CASE OF FAILURE
8.3.1. Unit Of Recovery
8.3.2. Data Recovery
9. CONCURRENCY
9.1. CONCURRENCY
9.2. LOCKING STRATEGY
9.3. LOCK SIZES AND TYPES
9.4. ACQUIRE RELEASE PARAMETERS
©Case Consult (India) Pvt. Ltd. 03/1998 DB2 Fundamentals
9.5. ISOLATION PARAMETER
10. DB2I (DB2 INTERACTIVE)
10.1. DB2I
10.2. SPUFI
11. UTILITIES
11.1. LOAD
11.2. RUNSTATS
11.3. REORG
12. ADVANCED DB2
12.1. MORE ABOUT INDEXES
12.1.1. Example Of An Index
12.1.2. Clustered Indexes
12.1.3. Non Clustered Indexes
12.2. SPECIAL REGISTERS
12.3. MORE ABOUT LOCKS
12.3.1. Modes Of Table And Tablespace Locks
12.3.2. Modes Of Row And Page Locking
12.3.3. Lock Mode Compatibility Of Table And Table Space Locks
12.3.4. Lock Mode Compatibility Of Row And Page Locks
12.4. INVOKING ONLINE UTILITIES
1. Purpose and Scope of the Document
The purpose of this document is to train software engineers who are new to DB2, and to serve as reference material for application programmers.
[Figure: the DB2 product family: DB2 for MVS, DB2 for OS/2, DB2 for AIX (RISC System/6000), DB2 for SINIX, DB2 for OS/400, and DB2 for VSE & VM.]
2.1. Database Management Systems (DBMS)
[Figure: components of a DBMS: an APPLICATION PROGRAM passes queries to the DBMS, whose QUERY PROCESSOR and STORAGE MANAGER access the DATA in the DATABASE.]
Depending on the data model used, database management systems are divided into three main types:
RELATIONAL DBMS
HIERARCHICAL DBMS
NETWORK DBMS
DB2 is based on the relational data model, which was formulated by Dr. E. F. Codd in 1970. Relational systems have their origin in the mathematical theory of relations. Using the relational data model, IBM developed DB2 in 1983.
[Figure: the sample tables S, P, and SP.]
TABLE P represents PARTS. Each kind of part has a unique PART NUMBER (P#), a
PART NAME (PNAME), a COLOUR (COLOR), a WEIGHT (WEIGHT) and a
location where the PART IS STORED (CITY).
In the relational data model, TABLES are called RELATIONS, ROWS are called TUPLES, and COLUMNS are referred to as ATTRIBUTES.
In the relational data model, rows of different tables are associated through the COLUMN VALUES of common columns.
[Figure: sample rows (S1 SMITH 20 LONDON, S2 JONES 10 PARIS, P1 NUT RED 12 LONDON, ...) and a hierarchical view of the SHIPMENT database, showing SEGMENTs and an LCHILD (logical child) pointer.]
In this view data is represented by simple TREE STRUCTURES, and the DBMS links these databases using pointers.
The user sees three individual trees in the supplier database; each tree has a supplier as its parent, and each tree can be called a supplier record occurrence. Similarly there are part record occurrences and shipment record occurrences.
The shipment database contains the shipment quantity. The logical child of the shipment database consists of the supplier number, the part number, and pointers to the corresponding databases. Similarly, the supplier and parts databases also contain logical children which point to the shipment database. The user can therefore access shipments from the supplier and part databases, and, likewise, parts and suppliers can be accessed from the shipment database.
[Figure: the supplier record S2 JONES and the part record P2 BOLT as owners, each pointing to shipment QUANTITY RECORDS (300, 400, ...).]
A NETWORK DBMS consists of owner databases and member databases. A member database can be accessed only via its owner database.
In the example there are two owners for one member database: the supplier and part record sets are both owners of the shipment record set. Using this database the user can access the shipment of a particular part by a specific supplier.
The supplier S1 supplies part P1 in a quantity of 300. From the supplier S1 there is a pointer to the supplied quantity, and another pointer connects this to the corresponding part. An owner can have more than one pointer, each towards a different quantity.
The sample database consists of THREE tables, and these tables are used throughout this book.
[Figure: the sample tables S, P, and SP with their contents.]
TABLE P represents PARTS. Each kind of part has a unique PART NUMBER (P#), a
PART NAME (PNAME), a COLOUR (COLOR), a WEIGHT (WEIGHT) and a
LOCATION where the PART IS STORED (CITY).
PRIMARY KEY IS P#
TABLE SP represents SHIPMENTS. It connects the other TWO TABLES: it records a SHIPMENT of PARTS of kind P1 by the SUPPLIER called S1, together with the SHIPMENT QUANTITY. For a given SHIPMENT the combination of S# and P# is unique; that is, the PRIMARY KEY is the COMBINATION of these two columns, and the FOREIGN KEYS are S# and P#.
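The key structure described above can be sketched in SQL. The column widths below are illustrative assumptions, not taken from the original text:

```sql
-- A possible definition of the three sample tables (column lengths assumed)
CREATE TABLE S
  ( S#     CHAR(6)   NOT NULL,
    SNAME  CHAR(20)  NOT NULL WITH DEFAULT,
    STATUS SMALLINT  NOT NULL WITH DEFAULT,
    CITY   CHAR(15)  NOT NULL WITH DEFAULT,
    PRIMARY KEY ( S# ) ) ;

CREATE TABLE P
  ( P#     CHAR(6)   NOT NULL,
    PNAME  CHAR(20)  NOT NULL WITH DEFAULT,
    COLOR  CHAR(6)   NOT NULL WITH DEFAULT,
    WEIGHT SMALLINT  NOT NULL WITH DEFAULT,
    CITY   CHAR(15)  NOT NULL WITH DEFAULT,
    PRIMARY KEY ( P# ) ) ;

CREATE TABLE SP
  ( S#  CHAR(6)  NOT NULL,
    P#  CHAR(6)  NOT NULL,
    QTY INTEGER  NOT NULL WITH DEFAULT,
    PRIMARY KEY ( S#, P# ),            -- composite primary key
    FOREIGN KEY ( S# ) REFERENCES S,   -- foreign keys back to the
    FOREIGN KEY ( P# ) REFERENCES P ); -- parent tables S and P
```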
3. Structure Of DB2
This chapter deals with the definitions and examples of the objects present in DB2. The topics included in this chapter are
3.2. Databases
3.5. Tables
3.7. Indexes
3.8. Views
3.9. Synonyms
3.10. Aliases
DB2 CATALOG
DB2 DIRECTORY
ACTIVE AND ARCHIVE LOGS
BUFFER POOLS
HIERARCHY OF DATA STRUCTURES
[Figure, "Hierarchy of Data Structures": a database containing indexes X1 and X2, table T3 partitioned into parts 1 and 2 within a partitioned tablespace, and partitioned index X3 with parts 1 and 2; storage group G2 maps these objects to DASD volumes (for example 3380 devices).]
The total collection of stored data is divided into a number of disjoint databases: USER DATABASES and SYSTEM DATABASES.
Each table space contains one or more stored tables. A stored table contains a set of stored records, and a given stored table must be wholly contained within a single table space.
Each INDEXSPACE contains exactly one index, and a given index must be wholly contained within a single index space. A given stored table and all of its associated indexes must be wholly contained within a single DATABASE.
[Figure: a DATABASE containing tablespaces 1 and 2 and index 2; STORAGE GROUP 1 spans volumes 1 and 2; tablespace 1 holds tables 1 and 2.]
A TABLE SPACE can be thought of as a logical address space on secondary storage that holds one or more stored tables. Table spaces are divided into equal-sized units called PAGES, which are written to or read from DASD. Tables are physically stored in one or more VSAM linear datasets.
A table space can consist of 1 to 64 VSAM datasets, which can together contain up to 64 GIGABYTES of data. When you create a table space you can specify the database and storage group to which the tablespace belongs, and the table space type. As the amount of data in its tables grows, storage is acquired from the appropriate storage group and added to the tablespace.
Fundamentally, the table space is the unit of storage for recovery and reorganization. If a table space is very large, RECOVERY and REORGANIZATION can take a long time. Hence the choice of a simple, segmented, or partitioned tablespace can drastically affect performance.
SIMPLE TABLESPACE
[Figure: a simple tablespace: 4K pages containing interleaved records of table 1 and table 2, together with free space and a free page.]
In a simple table space the records of tables are interleaved. Records of different tables may be present in a single page, and to find all the rows of one table a scan of the whole table space is needed. But by loading the data in an appropriately interleaved manner, access to logically related data can be made more efficient.
If a table is dropped, its rows are not deleted: the space occupied by the rows does not become available until the table space is reorganized. All tables in a simple table space must reside in the same user-defined data set or in the same storage group.
In practice, one stored table per table space is usually the most satisfactory arrangement in the case of a simple TABLE SPACE.
SEGMENTED TABLESPACE
[Figure: a segmented tablespace: segments 1, 2, and 3 of 4K pages, where each segment holds records of only one table (table 1 or table 2).]
Each segment in a segmented tablespace contains rows from only one table, but the tablespace can contain multiple tables, in different SEGMENTS. To find a row it is not necessary to scan the entire table space, only the segments that contain the table; hence sequential access to a particular table is more efficient.
If a table in a segmented table space is dropped, the space for that table can be reused without reorganizing the table space.
A segmented table space can have between 1 and 32 VSAM linear data sets. The maximum size of a data set in a segmented table space is 2 GIGABYTES, so the maximum size of a segmented table space is 64 GIGABYTES.
PARTITIONED TABLESPACE
[Figure: a partitioned tablespace holding one table: partition 1 holds key values A-F, partition 2 holds G-P, and partition 3 holds Q-Z.]
PARTITIONED TABLESPACES are intended for stored tables that are sufficiently large. A partitioned tablespace contains exactly one stored table, partitioned in accordance with value ranges of a particular column or column combination.
Partitioning a table space provides several advantages for large tables. When DB2 scans data to answer a query, it can scan partitions simultaneously instead of scanning the entire table from beginning to end.
A utility can work on all partitions simultaneously instead of working on one partition at a time, and different utilities can work on different partitions simultaneously. This can significantly reduce the amount of time needed for a utility to finish.
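The three table space types described above differ only in a clause or two of CREATE TABLESPACE. The statements below are a sketch; the database and table space names are illustrative assumptions:

```sql
-- Simple: neither SEGSIZE nor NUMPARTS is specified
CREATE TABLESPACE TSSIMPLE IN DBSAMPLE ;

-- Segmented: SEGSIZE gives the number of pages per segment
CREATE TABLESPACE TSSEG IN DBSAMPLE SEGSIZE 16 ;

-- Partitioned: NUMPARTS gives the number of partitions
CREATE TABLESPACE TSPART IN DBSAMPLE NUMPARTS 3 ;
```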
TABLES
[Figure: table S with its KEY COLUMNS, and a VIEW defined over table S.]
A VIEW is a named table that is represented, not by its own physically separate, distinguishable stored data, but rather by its definition in terms of other named tables. VIEWS are created over base tables, over other views, or over a combination of views and tables.
When you define a view, DB2 stores the definition of the view in the DB2 catalog. Data is physically present only in the base tables, not in the views. When a view is accessed, the data is dynamically retrieved from the base table.
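As a sketch of the mechanism just described, a view over the sample table S might be defined as follows (the view name echoes the GOOD_SUPPLIERS example later in this chapter; the defining predicate is an assumption):

```sql
-- Only the view definition is stored in the catalog; no data is copied.
CREATE VIEW GOOD_SUPPLIERS
  AS SELECT S#, STATUS, CITY
     FROM   S
     WHERE  STATUS > 15 ;
```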
Advantages Of Views
2. They allow the same data to be seen by different users in different ways.
3. Automatic security is provided for data that is present in the base table by creating a
view in which sensitive data is not visible.
INDEX SPACES
[Figure: index space 1 containing index 1; each index entry holds a VALUE and a RID pointing to a record in page P.]
An index contains values from one or more of a table’s columns and a pointer to the record in a data page that matches the index value. DB2 can find data more efficiently by scanning the index and following the pointer than by scanning the entire tablespace.
The record ID (RID) in an index has two parts: the first identifies the page where the record lies, and the second is the byte offset from the bottom of the page identifying the record within that page.
An index is structured in ascending or descending sequence on one or more columns. A given value of interest can be located quickly in the index because of this ordered structure.
Indexes are of two types: unique and non-unique. A non-unique index can reference duplicate values; a UNIQUE INDEX cannot. You can create an index at any time after you create the table, but creating an index before loading the data provides significant performance advantages.
Indexes can be clustered or non-clustered. A clustering index is one in which the records are physically stored in data pages in the sequential order of their index values: the index is used to control the physical placement of the indexed records. Newly inserted records are physically stored so that the physical sequence of records in storage closely approximates the logical sequence defined by the index. In a non-clustered index the records are not stored in the order of the index values.
A table can have any number of indexes, but it can have only one clustering index. Clustering is extremely important for optimization purposes: the optimizer will try to choose an access path based on the clustering index.
For a detailed explanation of indexes please refer to ‘More about indexes’, chapter 12.
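The index properties described above can be sketched with CREATE INDEX (the index names are illustrative assumptions):

```sql
-- A unique index: duplicate supplier numbers are rejected
CREATE UNIQUE INDEX XS1 ON S ( S# ) ;

-- A clustering index: DB2 keeps the rows of SP physically in P# sequence
CREATE INDEX XSP1 ON SP ( P# ) CLUSTER ;
```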
ALIASES
Aliases are useful for creating meaningful names for TABLES and VIEWS. ALIASES are created using the CREATE ALIAS statement. One user can use an ALIAS created by another user, since aliases are not private to their creator.
EXAMPLE
The fully qualified name of the table SAMPLE is ALPHA.SAMPLE, and another user BETA can refer to the table SAMPLE by its fully qualified name.
SELECT *
FROM ALPHA.SAMPLE
The user BETA can create an alias called ZTEST for the table ALPHA.SAMPLE using the CREATE ALIAS statement.
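The CREATE ALIAS statement referred to above would look like this:

```sql
CREATE ALIAS ZTEST FOR ALPHA.SAMPLE ;
```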
He can then refer to the table SAMPLE created by ALPHA simply by referring to the alias ZTEST
SELECT *
FROM ZTEST
Another user GAMMA can also use BETA’S ALIAS ZTEST to refer to ALPHA’S
SAMPLE table.
SELECT * FROM
BETA.ZTEST
SYNONYMS
EXAMPLE
The fully qualified name of the table SAMPLE is ALPHA.SAMPLE, and another user BETA can refer to the table SAMPLE by its fully qualified name.
SELECT *
FROM ALPHA.SAMPLE
The user BETA can create a SYNONYM called ZTEST for the table ALPHA.SAMPLE using the CREATE SYNONYM statement.
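The CREATE SYNONYM statement referred to above would look like this:

```sql
CREATE SYNONYM ZTEST FOR ALPHA.SAMPLE ;
```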
He can then refer to the table SAMPLE created by ALPHA simply by referring to the SYNONYM ZTEST
SELECT *
FROM ZTEST
However, the user BETA and the table ALPHA.SAMPLE must be at the same site. Also, the name ZTEST is completely private to the user BETA: another user GAMMA cannot use the synonym created by BETA, and if GAMMA wants a synonym it must create its own on ALPHA.SAMPLE.
DB2 CATALOG
Some important catalog tables are SYSIBM.SYSTABLES, SYSIBM.SYSCOLUMNS, SYSIBM.SYSTABLESPACE, and SYSIBM.SYSTABAUTH.
SYSIBM.SYSTABLES
Contains information about tables. When a new table is created, DB2 inserts one row into this catalog table.
SYSIBM.SYSCOLUMNS
Contains information about the columns of tables. This table contains one row for every column of each table.
SYSIBM.SYSTABLESPACE
Contains information about the table spaces created. This table contains one row for each tablespace.
SYSIBM.SYSTABAUTH
Contains the table names and the authorization IDs that hold privileges on those tables.
DB2 Catalog
The CATALOG in DB2 is a system database that contains information concerning the various objects that are of interest to DB2 itself. Examples of such objects are tables, views, indexes, databases, plans, packages, access privileges, and so on. This information is essential if the system is to do its job properly.
The CATALOG itself contains TABLES, and you can see the contents of the catalog tables using the normal query language (SQL). When you create, drop, or alter any structure, DB2 updates or deletes the rows of the catalog that describe that structure.
The optimizer component of BIND uses catalog information to choose the best access strategy.
DB2 DIRECTORY
The DB2 directory is stored in the system database DSNDB01.
Buffer pools, also known as virtual buffer pools, are areas of virtual storage used
temporarily to store pages of table spaces or indexes. When an application program needs
to access a row of a table, DB2 retrieves the page containing that row and places the page
in a buffer. If the row is changed, the buffer must be written back to the table space. If
the needed data is already in a buffer, the application program will not have to wait for it
to be retrieved from DASD. The result is faster performance.
DB2 provides 2 types of buffer pools: 4K and 32K. There are fifty 4K buffer pools, named BP0 through BP49, and ten 32K buffer pools, named BP32K and BP32K1 through BP32K9. DB2 manages each buffer pool separately. Generally the system administrator decides how much memory to allocate for buffer pools: the more memory allocated to a buffer pool, the more data it can hold, and therefore the greater the likelihood that an application request will find the data there.
4. Data Types
This chapter describes various data types used in DB2 and their examples. COBOL
declarations of the corresponding DATA TYPES are also included.
4.1.1. Nulls
NUMERIC DATA
STRING DATA
DATE / TIME DATA
RANGE OF VALUES
SMALLINT : -32768 to +32767
EXAMPLE COLUMN DECLARATIONS
SPKZ DECIMAL(5, 2)
DRU SMALLINT
HDNR INTEGER
NULLS
EXAMPLES
Null values are used in a table when actual values are unknown. Suppose the weight of a part in the SUPPLIER-PARTS DATABASE is null: this means that we do not know a genuine weight value that can sensibly be put in the weight slot in the row for the part in question. Instead we mark that slot as null, and we interpret that mark to mean precisely that we do not know what the real value is. We can insert a null value in the WEIGHT column if it is declared as NULL. If instead it is declared as NOT NULL WITH DEFAULT, it is still possible to insert a row into the table without specifying a value for the WEIGHT column; in that case the column will contain the default value corresponding to the column's data type.
Suppose that NOT NULL is specified for the column WEIGHT in the SUPPLIER-PARTS DATABASE. This guarantees that every row in table P will always contain a genuine (not null) WEIGHT value: a value must always be provided for the column WEIGHT when a row is inserted into the P table.
If a given column is allowed to contain nulls and a row is inserted into the table with no value provided for that column, DB2 will automatically place a null in that position. Suppose that the WEIGHT column is specified as NULL: then we can insert a row into table P without specifying a value for WEIGHT, and DB2 will automatically put a null in that column.
NOT NULL WITH DEFAULT means the column in question cannot contain nulls, but it is nevertheless still legal to omit a value for the column on insert. If a row is inserted and no value is provided for a column to which NOT NULL WITH DEFAULT applies, DB2 automatically places the non-null default value for the column's data type in that position.
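The three cases can be sketched side by side. The table and column names below are illustrative, not from the original text:

```sql
CREATE TABLE PTEST
  ( P#     CHAR(6)  NOT NULL,                -- a value must always be supplied
    PNAME  CHAR(20) NOT NULL WITH DEFAULT,   -- omitted value: default (blanks)
    WEIGHT SMALLINT ) ;                      -- nullable: omitted value is NULL

INSERT INTO PTEST ( P# ) VALUES ( 'P9' ) ;
-- PNAME receives the default for its data type and WEIGHT receives a null.
```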
CHARACTER FORMAT : RANGE OF VALUES
CHARACTER(n) : n from 1 to 254
GRAPHIC(n) : n from 1 to 127
INTERNAL REPRESENTATIONS
DATE : YYYYMMDD
TIME : HHMMSS
TIMESTAMP : YYYYMMDDHHMMSSNNNNNN
EXAMPLE COLUMN DECLARATIONS
CCTEMP DATE
CCDAT TIME
CCSTAMP TIMESTAMP
Date / Time Data
Columns whose data types are DATE, TIME, or TIMESTAMP are stored in an internal form that is transparent to the user of SQL. But DATEs, TIMEs, and TIMESTAMPs can also be represented by DATE/TIME strings: character string representations of date and time values. When you retrieve date/time values they must be assigned to properly declared character string host variables.
Example
The ISO date ‘1987-10-12’ is internally stored in 4 bytes, but you must use a 10-byte character string host variable to retrieve it.
EQUIVALENT COBOL DECLARATIONS OF DATA TYPES
SMALLINT : PIC S9(4) USAGE COMP
INTEGER : PIC S9(9) USAGE COMP
DECIMAL(p, q) : PIC S9(p-q)V9(q) USAGE COMP-3
CHARACTER(n) : PIC X(n)
DATE : PIC X(10)
TIME : PIC X(8)
TIMESTAMP : PIC X(26)
In DB2, operations are performed using Structured Query Language (SQL). This chapter explains the types of SQL statements and their usage. SQL statements are divided into
DDL Statements
DML Statements
Control Statements
SQL PROGRAMMING
DDL STATEMENTS : CREATE, ALTER, DROP
DML STATEMENTS : SELECT, UPDATE, INSERT, DELETE
CONTROL STATEMENTS : GRANT, REVOKE, COMMIT, ROLLBACK
5.1. DDL Statements
Data definition language statements are used for creating, changing, and dropping DB2 objects. The following sections explain these statements with suitable examples
5.1.3.1. Keys
5.1.7. Drop
CREATE DATABASE
EXAMPLE
EXAMPLES
In the given examples DB2 automatically creates the VSAM linear datasets needed for the tablespace within the specified storage group. Each data set is defined on a volume of the storage group specified in the CREATE TABLESPACE statement. The values specified for PRIQTY and SECQTY determine the primary and secondary allocations for the data set.
The ERASE parameter indicates whether the DB2-managed data sets for the tablespace are to be erased when they are deleted during the execution of a utility or an SQL statement that drops the table space. ERASE NO does not erase the data sets; ERASE YES erases them: as a security measure, DB2 overwrites all data in the data sets with zeros before they are deleted.
The FREEPAGE parameter specifies how often to leave a page of free space when the table space or partition is loaded or reorganized. The default is FREEPAGE 0, which leaves no free pages. The PCTFREE parameter indicates what percentage of each page to leave as free space when the table is loaded or reorganized. The default is PCTFREE 5.
The LOCKSIZE parameter specifies the size of the locks used within the table space. For more information please refer to chapter 9.
The NUMPARTS parameter indicates that the table space is to be partitioned, and gives the number of partitions in that tablespace.
The BUFFERPOOL parameter identifies the buffer pool to be used for the table space and thereby determines the page size of the table space.
The CLOSE parameter specifies whether the data sets are eligible to be closed when the table space is not being used or the limit on the number of open data sets is reached. CLOSE YES (the default) makes the data sets eligible for closing; CLOSE NO specifies that they are not eligible.
The SEGSIZE parameter indicates that the table space is to be segmented and specifies how many pages are assigned to each segment. If neither the SEGSIZE nor the NUMPARTS parameter is given, the table space will be SIMPLE.
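The parameters above can be pulled together in one statement. This is a sketch; the object names and quantities are illustrative assumptions:

```sql
CREATE TABLESPACE TSSAMPLE
  IN DBSAMPLE
  USING STOGROUP SGSAMPLE
        PRIQTY 100        -- primary space allocation
        SECQTY 20         -- secondary space allocation
        ERASE  NO         -- do not overwrite data sets on deletion
  FREEPAGE 10             -- a free page after every 10 pages
  PCTFREE  5              -- 5 percent free space per page
  SEGSIZE  16             -- a segmented table space, 16 pages per segment
  LOCKSIZE ANY            -- let DB2 choose the lock size
  BUFFERPOOL BP0          -- 4K pages
  CLOSE NO ;              -- data sets not eligible for closing
```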
CREATE TABLE
EXAMPLES
S1 SMITH 20 LONDON
S2 JONES 10 PARIS
S3 BLAKE 30 PARIS
KEY
PRIMARY KEY
A TABLE CANNOT HAVE MORE THAN ONE PRIMARY KEY, AND THE
COLUMNS OF A PRIMARY KEY CANNOT CONTAIN NULL VALUES.
[Figure: the base table S and a VIEW named GOOD_SUPPLIERS with the columns S#, STATUS, and CITY.]
EXAMPLE
EXAMPLES
EXAMPLES
FREE PLAN
1. Dropping an alias has no effect on any view or synonym that was defined using the
alias.
2. When you drop the database , the database and all of its table spaces, tables, index
spaces, and indexes are dropped.
3. Whenever an index is directly or indirectly dropped, its index space is also dropped.
4. When a synonym is dropped, views or aliases that were defined using the synonym are not dropped.
5. Whenever a table is directly or indirectly dropped, all privileges on the table, all
referential constraints in which the table is a parent or dependent, and all synonyms,
views, and indexes defined on the table are also dropped.
6. Whenever a table space is directly or indirectly dropped, all tables in the table space
are also dropped.
7. Whenever a view is directly or indirectly dropped, all privileges on the view and all
synonyms and views that are defined on the view are also dropped.
8. When a package version is dropped, all privileges on the package are dropped, and all plans that depend on the EXECUTE privilege of the package are invalidated.
5.2. DML Statements
Data manipulation language statements are used for retrieving and modifying data in DB2 tables. The following statements are together known as the data manipulation language.
5.2.1. Select
5.2.1.4. Order By
5.2.1.8. Group By
5.2.1.9. Having
5.2.4. Union
5.2.5. Insert
5.2.6. Update
5.2.7. Delete
DML STATEMENTS
SELECT
The clauses of an SQL SELECT statement must be coded in the following REQUIRED SEQUENCE:
SELECT
FROM
WHERE
ORDER BY
EXAMPLE
QUERY
SELECT S# , STATUS - TELLS WHICH COLUMNS TO USE
FROM S - TELLS WHICH TABLES TO USE
RESULT
S# STATUS
S3 30
S2 10
COMPARISON OPERATORS
= EQUAL
^= NOT EQUAL
RESULT
P#
P1
P2
P3
P4
P5
P6
MULTIPLE CONDITIONS
AND OR
RESULT 1 RESULT 2
S# SNAME S# SNAME
S1 SMITH S1 SMITH
S4 CLARK S2 JONES
S4 CLARK
ORDER BY
RESULT
P#
RESULT 1 RESULT 2
P1 NUT 12 P2 BOLT 17
P2 BOLT 17 P3 SCREW 17
P3 SCREW 17 P6 COG 19
P5 CAM 12
PARTIAL SEARCH
S1 SMITH LONDON
S4 CLARK LONDON
RESULT 2
S# SNAME CITY
S2 JONES PARIS
S5 ADAMS ATHENS
RESULT 3
S# SNAME CITY
S1 SMITH LONDON
S4 CLARK LONDON
RESULT 4
S# SNAME CITY
S1 SMITH LONDON
S4 CLARK LONDON
RESULT 5
S# SNAME CITY
S3 BLAKE PARIS
S4 CLARK LONDON
AGGREGATE FUNCTIONS
RESULT 1 RESULT 2
RESULT
P#
P1 600
P2 1000
P3 400
P4 500
P5 500
P6 100
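The totals above (one row per part) are produced by a grouping query along these lines (a sketch; the query text itself is an assumption):

```sql
-- Total shipped quantity per part
SELECT P# , SUM ( QTY )
FROM   SP
GROUP  BY P# ;
```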
HAVING
QUERY : SELECT P#
FROM SP
GROUP BY P#
HAVING COUNT (*) > 1 ;
RESULT
P#
P1
P2
P4
P5
JOINING TABLES
RESULT : the SNAMEs of suppliers who supply PART P2.
RESULT
SNAME
SMITH
JONES
BLAKE
CLARK
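A join query that yields the supplier names above, i.e. the names of suppliers who supply part P2, can be sketched as:

```sql
SELECT SNAME
FROM   S , SP
WHERE  S.S# = SP.S#        -- join S and SP on the common column S#
AND    SP.P# = 'P2' ;
```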
UNION
QUERY : SELECT P#
FROM P
WHERE WEIGHT > 16
UNION
SELECT P#
FROM SP
WHERE S# = ‘S2’ ;
RESULT
P1
P2
P3
P6
INSERT
QUERY 1 : INSERT
INTO P
VALUES ( ‘P8’, ‘SPROCKET’, ‘PINK’, 14, ‘NICE’ ) ;
QUERY 2 : INSERT
INTO P ( P#, CITY, WEIGHT )
VALUES ( ‘P7’, ‘ATHENS’, 24 );
RESULT 1 : P8 SPROCKET PINK 14 NICE
RESULT 2 : P7 ? ? 24 ATHENS
UPDATE
QUERY : UPDATE P
SET COLOR = ‘YELLOW’ ,
WEIGHT = WEIGHT + 5 ,
CITY = NULL
WHERE P# = ‘P1’ ;
RESULT
DELETE
QUERY 1 : DELETE
FROM S
WHERE S# = ‘S5’ ;
QUERY 2 : DELETE
FROM SP
WHERE QTY > 300 ;
5.3. Control Statements
Statements other than DDL and DML are explained in this section. They are:
5.3.1. Grant
5.3.2. Revoke
5.3.3. Commit
5.3.4. Roll Back
GRANT
TABLE PRIVILEGES
GRANT SELECT ON TABLE S TO CHARLY ;
COLLECTION PRIVILEGES
GRANT CREATE IN EWSK TO JOHN ;
DATABASE PRIVILEGES
GRANT CREATETAB ON DATABASE DBX TO NANCY ;
USE PRIVILEGES
GRANT USE OF TABLESPACE DBX.TS76 TO TOM ;
SYSTEM PRIVILEGES
GRANT CREATEDBC TO ARNOLD ;
REVOKE
EXAMPLES
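The REVOKE statement is the inverse of GRANT; sketches matching the GRANT examples above:

```sql
REVOKE SELECT ON TABLE S FROM CHARLY ;
REVOKE CREATETAB ON DATABASE DBX FROM NANCY ;
```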
EXAMPLE
UPDATE S
SET STATUS = 20
WHERE S# = ‘S1’ ;
COMMIT ;
ROLLBACK
EXAMPLE
ROLLBACK WORK ;
6. Program Structure
This section gives an over view of a DB2 application program .Different sections to be
included in an application are explained briefly.
6.3. SQLCA
SQLCA
DECLARE CURSOR
OPEN CURSOR
FETCH CURSOR
CLOSE CURSOR
Programs that access DB2 are written in a number of host languages: COBOL, PL/1, C, ASSEMBLER, FORTRAN, BASIC, etc. These programs can contain SQL statements that retrieve or update the database.
The start and end of each SQL statement must be indicated by delimiters. The delimiters used in COBOL are EXEC SQL and END-EXEC.
SQL statements must be coded within these delimiters. Even if multiple SQL statements appear sequentially, each SQL statement must be bracketed by its own delimiters.
The precompiler uses the delimiters to distinguish SQL statements from the host language.
EXEC SQL
UPDATE S
SET STATUS = 10
WHERE CITY = ‘ATHENS’
END-EXEC.
HOST VARIABLES
SQL STATEMENT
HOST VARIABLE
SQL STATEMENT 1
INSERT INTO S
( S#, SNAME )
VALUES ( ‘S6’ , ‘GEORGE’ )
SQL STATEMENT 2
INSERT INTO S
( S#, SNAME )
VALUES ( :SUPCODE, :SUPNAME )
In SQL statement 1 the values to be inserted are hard coded. The second SQL statement shows the use of host variables in embedded SQL. This statement could be included in a processing loop, with the program’s logic assigning various values to the host variables.
EXAMPLE (2)
SQL STATEMENT
EXEC SQL
UPDATE S
SET STATUS = STATUS * :PERCENT
WHERE S# = :SUPCODE
END-EXEC.
You can declare all host variables in the WORKING-STORAGE SECTION of the COBOL program.
The host variable declaration should match the corresponding column definition.
The host variable names must not begin with SQL or EXEC.
Another method of declaring host variables is to use the INCLUDE verb. All the host
variables are declared in a partitioned dataset member, and that member is included in the
source program using the INCLUDE verb.
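As a sketch of the two approaches, assuming a copybook member named SUPVARS (the member name is hypothetical; the pictures match the S table columns):

```cobol
      * host variable declarations matching the S table columns
       01  SUPCODE      PIC X(5).
       01  SUPNAME     PIC X(20).
      * alternatively, the same declarations can live in a PDS member
      * and be pulled in with:
           EXEC SQL
               INCLUDE SUPVARS
           END-EXEC.
```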
INDICATOR VARIABLES
HOST VARIABLE
INDICATOR VARIABLE
When the program is to receive a value from a column that allows nulls, the program can
get either a value or a null. So the program requires two variables: a host variable to receive
the value, and an indicator variable to indicate the presence of a null value in the selected
column.
If DB2 attempts to indicate the presence of a null and the program does not provide an
indicator variable, an error occurs.
If the value returned is null, the indicator variable indicates this with a negative value and
the value in the host variable remains unchanged. The program should have an indicator
variable for each column that allows nulls.
In example 1, when the selected column contains a null value, the program logic is
coded in such a way as to handle it.
Example 2 shows that indicator variables are used for UPDATE operations also. Before
updating the table, the indicator variable is set to a negative value, and DB2 sets the column to
null, ignoring the value in the host variable.
An indicator variable should be declared like a host variable. The data type of an
indicator variable is SMALLINT and the corresponding COBOL declaration is given below.
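The COBOL declaration referred to here did not survive in this copy. Since the data type is SMALLINT, the usual mapping is a signed halfword; a sketch using the SUPNAMIND name from the examples that follow:

```cobol
       01  SUPNAMIND    PIC S9(4) COMP.
```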
1. EXEC SQL
SELECT SNAME , CITY
INTO :SUPNAME:SUPNAMIND , :PGMCITY
FROM S
WHERE S# = ‘S1’
END-EXEC.
IF SUPNAMIND < 0
PERFORM NONAME-SECTION
ELSE
PERFORM NAME-SECTION.
2. IF ( some condition )
MOVE -1 TO SUPNAMIND
ELSE
MOVE ZERO TO SUPNAMIND.
EXEC SQL
UPDATE S
SET SNAME = :SUPNAME:SUPNAMIND
WHERE S# = ‘S1’
END-EXEC.
SQLCA ( SQL COMMUNICATION AREA)
PROGRAM
STATUS OF EXECUTED SQL
SQLCA
The SQL communication area (SQLCA) is a data structure that must be included in any
host language program using SQL. The SQLCA provides a way for DB2 to pass feedback
about its operations to the program. After executing an SQL statement, DB2 returns, via the
SQLCA, codes indicating whether the execution was successful or identifying errors and special
conditions. The program can then test these codes and react according to their content.
The SQLCA structure contains variables for a number of codes and messages.
Programmers can code the necessary structure (explained on the next page), copy it from a
source library or have DB2 generate it.
An INCLUDE statement allows the source program to include the SQLCA structure from the
copy library and is shown below.
EXEC SQL
INCLUDE SQLCA
END EXEC.
COBOL STRUCTURE OF SQLCA
01 SQLCA.
05 SQLCAID PIC X(8).
05 SQLCABC PIC S9(9) COMP-4.
05 SQLCODE PIC S9(9) COMP-4.
05 SQLERRM.
49 SQLERRML PIC S9(4) COMP-4.
49 SQLERRMC PIC X(70).
05 SQLERRP PIC X(8).
05 SQLERRD OCCURS 6 TIMES
PIC S9(9) COMP-4.
05 SQLWARN.
10 SQLWARN0 PIC X.
10 SQLWARN1 PIC X.
10 SQLWARN2 PIC X.
10 SQLWARN3 PIC X.
10 SQLWARN4 PIC X.
10 SQLWARN5 PIC X.
10 SQLWARN6 PIC X.
10 SQLWARN7 PIC X.
05 SQLEXT.
10 SQLWARN8 PIC X.
10 SQLWARN9 PIC X.
10 SQLWARNA PIC X.
10 SQLSTATE PIC X(5).
CONDITION SQLCODE ( INTEGER ) SQLWARN0 ( CHAR(1) ) REQUEST STATUS
-107 THE NAME name IS TOO LONG. MAXIMUM ALLOWABLE SIZE IS size
-117 THE NUMBER OF INSERT VALUES IS NOT THE SAME AS THE NUMBER
OF OBJECT COLUMNS
-911 THE CURRENT UNIT OF WORK HAS BEEN ROLLED BACK DUE TO
DEADLOCK OR TIMEOUT. REASON reason-code, TYPE OF RESOURCE resource-
type, AND RESOURCE NAME resource-name
WHENEVER STATEMENT
EXEC SQL
WHENEVER Condition Action
END-EXEC
CONDITION:
SQLERROR
-- NEGATIVE SQLCODE
SQLWARNING
-- POSITIVE SQLCODE ( NOT +100 )
-- OR SQLWARN0 = ‘W’
NOT FOUND
-- SQLCODE = +100
ACTION:
GO TO :SECTA
-- CONTROL TRANSFERRED TO STATEMENT LABELED
SECTA
CONTINUE
-- PROGRAM CONTINUES WITH NEXT STATEMENT
-- USED TO CANCEL THE EFFECT OF PRIOR
WHENEVER
Each WHENEVER statement applies to all of the SQL statements that follow it in the
program listing, regardless of the order in which the statements are actually executed. This
happens because the COBOL precompiler puts an appropriate branching instruction after every
SQL statement that follows the WHENEVER statement.
WHENEVER statements can be used for three different conditions, and these are similar to
IF-THEN statements: if the SQLCODE satisfies some condition, then the program performs the
branching.
EXAMPLE
EXEC SQL
WHENEVER NOT FOUND CONTINUE
END-EXEC
EXEC SQL
WHENEVER SQLERROR PERFORM ERR-SECTION
END-EXEC
EXEC SQL
WHENEVER SQLWARNING PERFORM WARN-SECTION
END-EXEC
INCLUDE STATEMENT
EXEC SQL
INCLUDE SQLCA
END-EXEC.
( THE PRECOMPILER EXPANDS THIS INTO THE SQLCA STRUCTURE:
01 SQLCA.
05 SQLCAID PIC X(8).
05 SQLCABC PIC S9(9) COMP-4.
... )
STATUS CITY
20 LONDON
10 PARIS
20 LONDON
RANK CITY
Processing Multiple Rows
In the previous example the result of the query gives multiple rows. But there is no method to
determine the number of rows satisfying the condition before actually receiving data from
DB2. Therefore it is not possible to allocate storage in the application program to receive
an entire set of data.
When we are using host variables for retrieving data, and the result is a single row, the
query will work and the SQL return code will be set to zero. But in the given example the
result of the query gives multiple rows, and the host language can deal with only one row at a
time. Now the program is in error: SQLCODE will be set to a negative value and the values of
the host variables will be unpredictable.
DB2 provides cursors to process SETS of data. The cursor is used to retrieve all
rows in the SET one by one. Each fetch of the cursor retrieves the next row in the set of
data that meets the condition.
SELECT WITH FETCH
DEFINE A CURSOR
EXEC SQL
DECLARE K10 CURSOR FOR
SELECT SNAME, CITY
FROM S
WHERE STATUS < 30
END-EXEC
The DECLARE CURSOR statement defines a cursor with the specified name and an associated
query, as specified by the SELECT that forms part of the declaration. The DECLARE CURSOR
statement is not executable code, but a purely declarative statement. A program can use
any number of DECLARE CURSOR statements, each of which must have a different
name.
The OPEN cursor statement generates executable code. The SELECT clause used in the
DECLARE CURSOR statement is effectively executed when the cursor is opened. This is
done using the current values of the host variables (if used).
This executable code will allow subsequent FETCH statements to access the set of data that
meets the definition of the DECLARE CURSOR‘s underlying SELECT statement.
Opening the cursor is a must; DB2 will not open it on the first fetch.
The FETCH statement retrieves a row of data from the set made accessible by the
OPEN statement. Data is retrieved into the host variables specified after the INTO clause of the
FETCH statement. After the first FETCH statement, which retrieves the first row, the
cursor is advanced to the next row during the second FETCH operation, which then
assigns values from that row to the host variables.
After retrieving the required rows the cursor can be closed. The CLOSE CURSOR
statement releases the cursor from the set of data.
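Putting the four statements together, a typical fetch loop in COBOL might look like the sketch below. The SQLCODE = +100 end-of-set test is the usual convention, and PROCESS-ROW is a hypothetical paragraph; neither comes from the original text:

```cobol
           EXEC SQL
               OPEN K10
           END-EXEC
           PERFORM UNTIL SQLCODE NOT = 0
               EXEC SQL
                   FETCH K10 INTO :SUPNAME, :CITY
               END-EXEC
               IF SQLCODE = 0
      *            process the row just fetched
                   PERFORM PROCESS-ROW
               END-IF
           END-PERFORM
      *    SQLCODE = +100 here means the whole set was read
           EXEC SQL
               CLOSE K10
           END-EXEC
```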
UPDATE USING A CURSOR
EXEC SQL
DECLARE K10 CURSOR FOR
SELECT SNAME, CITY
FROM S
WHERE STATUS = :RANK
FOR UPDATE OF CITY
END-EXEC
EXEC SQL
OPEN K10
END-EXEC
EXEC SQL
FETCH K10 INTO :SUPNAME, :CITY
END-EXEC
EXEC SQL
UPDATE S
SET CITY = :NEWCITY
WHERE CURRENT OF K10
END-EXEC
EXEC SQL
CLOSE K10
END-EXEC
The usual logic of cursors is used for updating a row which is present in the set of data
satisfying the SELECT statement in the DECLARE CURSOR. The columns that may be
updated are specified using the FOR UPDATE OF clause in the DECLARE CURSOR
statement.
The update operation is done after fetching a row from the SET of data. This type of
update is useful where the retrieved row is required by the program before updating it.
The UPDATE ... WHERE CURRENT OF cursor clause updates the row on which the cursor is
presently positioned. The next row can be updated only after issuing another FETCH.
EXEC SQL
DECLARE K10 CURSOR FOR
SELECT SNAME, CITY
FROM S
WHERE STATUS = :RANK
FOR UPDATE OF CITY
END-EXEC
EXEC SQL
OPEN K10
END-EXEC
EXEC SQL
FETCH K10 INTO :SUPNAME, :CITY
END-EXEC
EXEC SQL
DELETE FROM S
WHERE CURRENT OF K10
END-EXEC
EXEC SQL
CLOSE K10
END-EXEC
STATIC SQL
PROGRAM -> PLAN -> DB2 -> TABLE -> RESULT
( THE SQL STATEMENT, E.G. A SELECT, IS REPLACED IN THE PROGRAM BY A CALL;
THE STORED PLAN EXECUTES IT AGAINST THE TABLE AND RETURNS THE RESULT )
STATEMENT
PROGRAMMER KNOWS THE SQL STATEMENT TO BE USED
AND ALWAYS DOES THE SAME FUNCTION ON THE SAME
TABLES AND COLUMNS.
BIND
ON ALL SQL STATEMENTS
BEFORE PROGRAM EXECUTION
BUILDS A STORED PLAN
AUTHORIZATION
HELD BY THE PLAN BINDER
DYNAMIC SQL
PROGRAM -> PREPARE -> EXECUTE -> DB2 -> RESULT
( NO STORED PLAN; THE STATEMENT TEXT IS PREPARED AND EXECUTED AT RUN TIME )
STATEMENT
BIND
ON SINGLE STATEMENT
AT STATEMENT EXECUTION
ACCESS STRATEGY NOT SAVED
AUTHORIZATION
01 STMT.
49 LEN PIC S9(4) COMP.
49 TEXT PIC X(200).
01 X PIC X(6).
01 Y PIC X(6).
01 Z PIC X(6).
……………
……………
……………
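Using the STMT structure declared above, a dynamic statement might be prepared and executed as in the sketch below. The statement text, the statement name S1 and the use of X for the parameter marker are illustrative assumptions:

```cobol
      *    build the statement text in the varying-length structure
           MOVE 'DELETE FROM S WHERE S# = ?' TO TEXT
           MOVE 26 TO LEN
           EXEC SQL
               PREPARE S1 FROM :STMT
           END-EXEC
      *    X supplies the value for the ? parameter marker
           EXEC SQL
               EXECUTE S1 USING :X
           END-EXEC
```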
IDENTIFICATION DIVISION.
******************************************************************
* IDENTIFICATION DIVISION. *
* *
******************************************************************
PROGRAM-ID. SXD11018.
******************************************************************
* *
******************************************************************
ENVIRONMENT DIVISION.
******************************************************************
* *
* ENVIRONMENT DIVISION. *
* *
******************************************************************
CONFIGURATION SECTION.
******************************************************************
*
* CONFIGURATION SECTION. *
* *
******************************************************************
SPECIAL-NAMES.
DECIMAL-POINT IS COMMA.
INPUT-OUTPUT SECTION.
******************************************************************
* *
* INPUT-OUTPUT SECTION. *
* *
******************************************************************
FILE-CONTROL.
DATA DIVISION.
******************************************************************
* *
* DATA DIVISION. *
* *
******************************************************************
FILE SECTION.
******************************************************************
* *
* FILE SECTION. *
* *
******************************************************************
WORKING-STORAGE SECTION.
******************************************************************
* *
* WORKING-STORAGE SECTION. *
* *
******************************************************************
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
*
* DB2-COMMUNICATION-AREA DECLARATIONS
*
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
EXEC SQL INCLUDE SQLCA
END-EXEC.
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
*
* SQL-TABLE DECLARATIONS
*
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
EXEC SQL INCLUDE VT11018 ( OUTPUT OF DCLGEN )
END-EXEC.
/
………………………………………………………………
………………………………………………………………( WORKING STORAGE
VARIABLES )
………………………………………………………………
LINKAGE SECTION.
******************************************************************
* LINKAGE SECTION *
******************************************************************
*
………………………………………………………………….
…………………………………………………………………
…………………………………………………………
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
*
* SQL-CURSOR DECLARATIONS
*
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
EXEC SQL
DECLARE T11018_ALL_ROW CURSOR FOR
SELECT
TAB_INDEX
, DAHWV
, DAHWB
, DART
FROM
VT11018
ORDER BY TAB_INDEX
END-EXEC.
PROCEDURE DIVISION.
******************************************************************
* *
* PROCEDURE DIVISION *
* *
******************************************************************
*
******************************************************************
******************************************************************
*
……………………………………………………….
……………………………………………………….
………………………………………………………
/
******************************************************************
8100-OPEN-T11018-CURSOR SECTION.
******************************************************************
EXEC SQL
OPEN T11018_ALL_ROW
END-EXEC
******************************************************************
…………………………………………………………….
…………………………………………………………….
……………………………………………………………..
………………………………………………………………
8100-FETCH-T11018-ROW SECTION. ( THIS SECTION SHOULD BE IN A
LOOP TO FETCH ALL ROWS)
******************************************************************
EXEC SQL
FETCH T11018_ALL_ROW
INTO
:TAB-INDEX
, :DAHWV
, :DAHWB
, :DART
END-EXEC
******************************************************************
………………………………………………………….
……………………………………………………………
…………………………………………………………………
…………………………………………………………….
8100-CLOSE-T11018-CURSOR SECTION.
******************************************************************
EXEC SQL
CLOSE T11018_ALL_ROW
END-EXEC
STOP RUN.
7. Program Preparation
This chapter explains the steps involved in preparing and executing a DB2 application program.
The control information required for each step and examples are also provided.
The topics discussed in this chapter are
7.3. Precompile
7.4. Bind
DCLGEN
BIND
EXECUTION
DCLGEN
INPUT: CONTROL STATEMENTS, WHICH INCLUDE THE TABLE OR
VIEW NAME AND THE NAME OF THE HOST LANGUAGE
OUTPUT: DECLARE TABLE STATEMENT AND
HOST LANGUAGE DATA STRUCTURE
The purpose of having the DECLARE TABLE statement in the source code is to allow the
precompiler to check the syntax of the SQL statements referring to the tables and views.
Having the table and view declarations in the source code allows the precompiler to
uncover syntactical errors which otherwise would not be found until DB2 binds the plan
or package.
DCLGEN TABLE(S)
LIBRARY(NTCI.PTAB.DCL(S))
ACTION(REPLACE)
LANGUAGE(COBOL)
STRUCTURE(S)
QUOTE
******************************************************************
* DCLGEN TABLE(S) *
* LIBRARY(NTCI.PTAB.DCL(S)) *
* ACTION(REPLACE) *
* LANGUAGE(COBOL) *
* STRUCTURE(S) *
* QUOTE *
* IS THE DCLGEN COMMAND THAT MADE THE FOLLOWING
* STATEMENTS *
******************************************************************
EXEC SQL DECLARE S TABLE
(S# CHAR(5) NOT NULL,
SNAME CHAR(20) NOT NULL WITH DEFAULT,
STATUS SMALLINT NOT NULL WITH DEFAULT,
CITY CHAR(15) NOT NULL WITH DEFAULT
) END-EXEC
******************************************************************
* COBOL DECLARATION FOR TABLE VT11010 *
******************************************************************
01 S.
10 S# PIC X(5).
10 SNAME PIC X(20).
10 STATUS PIC S9(4) COMP.
10 CITY PIC X(15).
******************************************************************
* THE NUMBER OF COLUMNS DESCRIBED BY THIS DECLARATION IS 4*
******************************************************************
PRECOMPILE
INPUT: SOURCE MODULE
OUTPUT: SOURCE LISTING,
DIAGNOSTICS,
CROSS REFERENCES,
MODIFIED SOURCE CODE (CONTAINS CONSISTENCY TOKEN),
DBRM (CONTAINS CONTOKEN)
DB2 application programs include SQL statements, and you cannot compile those
programs until you change the SQL statements into the language recognized by your
compiler. Hence you must use a precompiler, whose function is to analyze the host
language source module, stripping out all SQL statements it finds and replacing them by host
language call statements. The precompiler also creates a DBRM from the SQL
statements it encountered. The DBRM communicates your SQL requests to DB2 during the
BIND process.
One DBRM is created per program, and the name of the DBRM and the program will be the
same. The DBRM contains the SQL statements and host variable information extracted from the
source program. The DBRM also contains a consistency token that identifies the program
and ties the DBRM to the modified source statements, by using the same consistency token
present in the modified source code.
The precompiler does a syntax check, using the definitions given by the DECLARE
TABLE statements, on all SQL statements in the program. The precompiler gives errors,
warnings and other diagnostic messages depending on the result of the syntax check.
BIND
INPUT: DBRM1, DBRM2
OUTPUT: PLAN
BIND INVOLVES
The output of the precompiler contains the SQL statements extracted from the source module.
But DB2 still has to do its own syntax checking and determine the best access strategy for
each SQL statement. DB2 records all this information in the PACKAGE.
Each package is assigned to exactly one collection when it is bound. When you bind a
package, you specify the collection to which the package belongs. The collection is not a
physical entity, and you do not create it; the collection name is merely a convenient way of
referring to a group of packages. Usually all of the packages used in a given application
would be assigned to the same collection.
There are two types of bind. The first method is to bind DBRMs to an application plan. In the
second method DBRMs are bound to packages, and the packages to the PLAN. The plan then
contains pointers to the packages.
Bind examines the SQL statements in the input DBRM and checks whether all the
necessary elements of a statement are present and syntactically correct. It also checks that
the individual binding the plan is authorized to perform the operations requested by the
SQL statements.
The optimizer component of bind interrogates the catalog tables, chooses the access path and
generates the machine code calls needed to execute the statement.
BINDING A DBRM TO A PACKAGE
DBRM CATALOG
PACKAGE
When you BIND a package, specify the collection to which the package belongs. The
collection is not a physical entity and you do not create it.
In the example the collection name is EWSK and the package name produced by this bind is
EWSK.SXD11010.
The owner of the package is AM1240. The owner of an object has all privileges on that
object. If no value is entered, the default is the primary AUTHID of the binder.
The QUALIFIER parameter will be used as the qualifier of all unqualified tables and views
referenced in the application program.
The VALIDATE parameter is used to specify the method DB2 will use to validate the package or
plan. Validation can be performed during bind or when the program runs, indicated by the
choices of BIND or RUN with the VALIDATE parameter.
VALIDATE(RUN) is the default value.
EXPLAIN indicates whether to provide information to the user about the access strategy
decided by the bind. The default is EXPLAIN(NO).
ISOLATION specifies the locking strategy while using cursors. The default is RR (repeatable
read), which can be overridden using CS (cursor stability). For more information on the
ISOLATION parameter please refer to chapter 9.
The RELEASE parameter indicates when the locks should be released while using a cursor.
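The bind example these paragraphs refer to is missing from this copy. Based on the names mentioned above, it would have looked roughly like the sketch below; the qualifier and parameter values are assumptions:

```
BIND PACKAGE(EWSK) -
     MEMBER(SXD11010) -
     OWNER(AM1240) -
     QUALIFIER(AM1240) -
     VALIDATE(RUN) -
     EXPLAIN(NO) -
     ISOLATION(RR) -
     RELEASE(COMMIT)
```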
BINDING AN APPLICATION PLAN
LIST OF PACKAGES
OR
DBRMS
OR
BOTH
PLAN
In the above example the name of the plan is A610; two DBRMs and three packages are
bound to the plan.
The parameters used in this example have the same meaning as in the BIND PACKAGE statement.
The CACHESIZE parameter determines the size (in bytes) of the authorization cache acquired in
the EDM pool for the plan. At run time, the authorization cache stores user IDs authorized to run
the plan. Consulting the cache can avoid a catalog lookup for checking authorization to run the plan.
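The plan bind referred to above is also missing from this copy; a sketch follows, in which only the plan name A610 comes from the text and the DBRM names, package list and cache size are hypothetical:

```
BIND PLAN(A610) -
     MEMBER(DBRMA,DBRMB) -
     PKLIST(EWSK.PKG1,EWSK.PKG2,EWSK.PKG3) -
     CACHESIZE(1024) -
     VALIDATE(RUN)
```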
COMPILE AND LINKEDIT
MODIFIED SOURCE MODULE (CONTAINS CONTOKEN) -> COMPILER -> OBJECT MODULE
OBJECT MODULE + OTHER OBJECT MODULES -> LINK EDITOR ->
LOAD MODULE (CONTAINS CONTOKEN)
SOURCE MODULE -> PRECOMPILER -> MODIFIED SOURCE MODULE + DBRM
MODIFIED SOURCE MODULE -> COMPILER -> OBJECT MODULE
DBRM -> BIND -> PACKAGE
OBJECT MODULE + OTHER OBJECT MODULES -> LINKAGE EDITOR -> LOAD MODULE
LIST OF PACKAGES -> BIND -> PLAN
( THE LOAD MODULE IS BROUGHT INTO MAIN MEMORY FOR EXECUTION )
ASSOCIATING LOAD MODULES AND PACKAGES
PRE COMPILE -> MODIFIED SOURCE (CT) + DBRM (CT)
LOAD MODULE (CT) AND PLAN (CT):
CONTOKENS SHOULD MATCH TO EXECUTE
Associating Load Modules And Packages
Assume that the plan of an application program contains only DBRMs. When this
program executes, the CONSISTENCY TOKEN present in the load module and the
corresponding DBRM should be the same. Otherwise the program will not be executed
and DB2 gives an SQLCODE of -805.
Now suppose the DBRM of an application program is bound to a package and a set of packages
is bound to the plan. When this program is executed, the load module and the package
which the program wants to execute should have the same CONSISTENCY TOKEN;
failure of this will give an SQLCODE -805.
8. Security Features
DB2 provides data integrity by using different security mechanisms. Data access is
controlled by using authorization IDs and the privileges given to those IDs. This chapter briefly
explains these security features and DB2’s referential integrity support.
8.1. Privileges
Authorization IDs are provided to users of DB2 to prevent unauthorized use of DB2
objects. Users are known to DB2 by this authorization identifier given by the system
administrator, and it is the users’ responsibility to identify themselves by supplying that ID
when they sign on to the system.
The two types of authorization IDs DB2 uses to control and track system utilization are
primary and secondary AUTHIDs.
Each individual is assigned a PRIMARY AUTHID that is used to sign on to the system.
Generally it is the primary authorization ID that identifies a process in DB2. When
unqualified tables, views or indexes are used in the application program, this AUTHID
becomes the qualifier of the object. The operations which can be performed by this
AUTHID depend on the privileges granted to it by the system administrator or other users.
The system administrator may provide a secondary AUTHID to a group of developers who need
a set of privileges associated with that ID. The user then has all the privileges of both the primary
and the secondary AUTHID. The secondary authorization ID is optional.
A user can use DB2 under either a primary AUTHID, a secondary AUTHID or both.
Suppose you are using the primary ID and want to shift to the secondary AUTHID in order to
perform some operation. This shift can be achieved by using a command.
SYSADM
SYSCTRL
SYSTEM CONTROL AUTHORITY allows the holder to execute any operation, except
for operations that access database contents.
Example: CREATE STOGROUP
DBADM
DBCTRL
DBMAINT
SYSOPR
SYSTEM OPERATOR AUTHORITY allows the holder to carry out console operator
functions on the system.
Example: STARTING AND STOPPING SYSTEM TRACE ACTIVITIES
PACKADM
Referential integrity consists of a set of rules used in DB2 to provide accuracy, validity and
correctness of data in the database. Maintaining integrity is of paramount importance and this
task is handled by the system rather than the user. For this the system needs to be aware
of the integrity rules; it should monitor all operations and should ensure that they do not
violate any of those rules.
DB2 supports the ENTITY INTEGRITY RULE by requiring the programmer to make the
column declaration of the primary key NOT NULL. If the primary key is composite, then all the
columns in that composite key should be declared as NOT NULL. The justification for this is
basically that the primary key values in base tables serve to identify entities in the real
world. Primary keys are used for direct row-level retrieval and for relating one table to another
in a relational database. Therefore an unknown value in a primary key column would be
meaningless.
DB2 enforces that the values of a given foreign key must match the values of the
corresponding primary key. But the converse is not a requirement, i.e. the primary key
corresponding to some given foreign key might contain a value that currently does not
appear as a value of that foreign key. The table which contains the primary key is the parent
table and the table containing the foreign key is the dependent table.
These referential integrity rules can be violated during data manipulation: update, delete,
insert. DB2 will monitor all operations and it will not allow any violation of the referential
integrity rules.
DB2 ENFORCEMENT OF REFERENTIAL INTEGRITY
INSERT RULE
UPDATE RULE
DELETE RULE
Insertion of rows containing new primary key values into the parent table does not require
checks of associated foreign keys, because such additions pose no threat to referential integrity.
Values added to the foreign key columns of the dependent table through inserts, on the other
hand, must have corresponding primary key values.
Updating the primary key of the parent table will be restricted if matching foreign keys are
found in the dependent table. When updating the dependent table, the new foreign key value
must be present in the parent table. Otherwise the request will be rejected.
The delete rule of a referential constraint applies when a row of the parent table is deleted.
The effect of this delete on dependent tables will depend on the ON DELETE clause of the
FOREIGN KEY definition. The possible specifications of the ON DELETE clause are
RESTRICT, CASCADE and SET NULL.
When deleting a primary key value, assume that the delete rule is RESTRICT; then the
delete is restricted to the case where there are no matching rows in the dependent table. If
matching rows exist then the delete request will be rejected.
The delete rule CASCADE deletes all matching rows, i.e. it deletes the row
corresponding to the primary key in the parent table and the matching rows in the dependent
table.
To use the delete rule SET NULL, the foreign key must allow nulls.
Here the row corresponding to the primary key value in the parent table will be deleted, and
the foreign key value will be set to null in all rows of the dependent table that match the
primary key.
Example For Referential Integrity Violation
TABLE S TABLE SP
The PRIMARY KEY and FOREIGN KEY clauses of the CREATE TABLE statements for these
tables are given below.
TABLE SP TABLE S
In this example table S is the parent table and table SP is the dependent table. PFK and
SFK are constraint names that will be used by DB2 in diagnostic messages relating to the
foreign keys S# and P#. If the user does not specify the name, DB2 will create one derived
from the name of the first column participating in the foreign key.
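The clauses themselves did not survive in this copy of the document. Based on the constraint names PFK and SFK and the CASE 3 discussion below (which assumes a CASCADE delete rule for suppliers), they might have looked like this sketch; the reference to a parts table P and the exact delete rules are assumptions:

```sql
-- table S (parent): PRIMARY KEY clause
PRIMARY KEY (S#)

-- table SP (dependent): FOREIGN KEY clauses
FOREIGN KEY SFK (S#) REFERENCES S ON DELETE CASCADE,
FOREIGN KEY PFK (P#) REFERENCES P ON DELETE RESTRICT
```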
Four different cases of potential referential integrity violations for these tables are
explained below
CASE1
An insert on the SP table might introduce a shipment for which there is no matching
supplier. For example
INSERT
INTO SP (S#, P#, QTY )
VALUES ( ‘S20’, ……) ;
CASE2
An update on column SP.S# of the SP table might introduce a shipment supplier number
for which there is no matching supplier. For example
UPDATE SP
SET S# = ‘S20’
WHERE….;
CASE3
A deletion on the S table might remove a supplier for which there exists a matching
shipment. For example
DELETE
FROM S
WHERE S# = ‘S1’ ;
CASE4
An update on column S.S# of the S table might remove a supplier for which there exists a
matching shipment . For example
UPDATE S
SET S# = ‘S20’
WHERE S# = ‘S1’ ;
In order to enforce the referential constraint, the system must deal with all four of these cases.
Explanation
CASE1
This situation is prevented by virtue of the fact that SP.S# is a foreign key in table SP
matching the primary key S.S# of table S. Such an insert will simply be rejected. But an
insert that introduces a shipment for a supplier that does already exist in table S will be
accepted.
CASE2
In this case also the update will be rejected. But an update that introduces an SP.S# value
that does already exist in table S will be accepted.
CASE3
This situation is handled by the delete rule CASCADE. In general, RESTRICT would
mean that the delete will be accepted only if there are no such matching shipments.
CASCADE would mean that any such matching shipments will be removed anyway. And
SET NULL would mean that any such matching shipments will not be removed but will
be updated so that they are no longer matching.
CASE4
This situation is handled by the implicit update rule RESTRICT, which means that the update
will be accepted only if no such matching shipments exist.
DATABASE RECOVERY IN CASE OF FAILURE
UNIT OF RECOVERY
( POINT OF CONSISTENCY -> CHANGES TO DATA -> COMMIT -> POINT OF CONSISTENCY )
A unit of recovery is the work done by DB2 for an application that changes DB2 data
from one point of consistency to another. A point of consistency is a time when all
recoverable data that an application program accesses is consistent with other data.
A unit of recovery begins with the first change to the data after the beginning of the job or
following the last point of consistency and ends at a later point of consistency. If failure
occurs within a unit of recovery, DB2 backs out any changes to data, returning the data to
its state at the start of the unit of recovery; that is, DB2 undoes the work.
DATA RECOVERY
( diagram: a BACKUP is taken of the DATABASE; UPDATE1 is made and COMMITted;
UPDATE2 is in progress when a FAILURE occurs; both updates are recorded in the LOG;
the RECOVERED DATABASE is built by RESTORing the BACKUP and applying the
committed UPDATEs from the LOG )
Data Recovery
Backups are maintained by database administration for the data in the DB2 subsystem.
Backups may be of the entire database or of one or more tablespaces. In case of failure,
database recovery is done using these backups.
All data changes and other significant activities are recorded in logs by DB2. The database
manager may use the backup copies and the logs to re-establish the database to the last
committed unit of work. Changes that were not committed before the failure are not
recovered in any case.
In the given example, a backup is made of a database. After that the database is
changed, and that change is made permanent by issuing a commit. Then the application program
tries to do another update, and before its completion a failure occurs.
Now we want to recover the data in the database. The database is recovered from the
backup, the changes that were made to that database up to the last commit are reapplied,
and the database is thereby restored.
9. Concurrency
Objects in DB2 can be used by many users at the same time. This is achieved by using a
proper locking system. This chapter explains how DB2 uses these locks and how much
control the programmer has over concurrency in DB2.
9.1. Concurrency
DB2 is a shared system, that is, a system that allows any number of users to access the
same database at the same time. Any such system requires some kind of concurrency
control mechanism to ensure that concurrent transactions do not interfere with each other’s
operations. The absence of such a mechanism will lead to errors and inconsistencies in the data.
DB2 uses locks to control access to the same database by multiple users. The basic idea of
locking is simple: a transaction needs an assurance that some object it is interested in
will not be changed in some unpredictable manner by another user. An exclusive lock on
the object provides this assurance. The effect of the lock is to lock other transactions
out of the object, and thereby to prevent them from changing it. The first transaction is
thus able to carry out its processing in the certain knowledge that the object in question
will remain in a stable state for as long as the transaction wishes.
If a transaction requests a lock that is not currently available, then the transaction simply
waits until it gets it. In practice the installation can specify a maximum wait time; if a
transaction ever reaches that threshold while waiting for a lock, it times out and the lock
request fails.
LOCKING STRATEGY
DB2 allows multiple users to access the same object at the same time, but they are controlled by
locks. DB2 selects an appropriate locking mechanism based on the concurrency control
requirements inherent in the application program. These are called implicit locks.
In addition to the implicit locking mechanism, DB2 provides certain explicit facilities.
A LOCK TABLE statement can be coded in the application program to acquire an explicit lock
on an object on behalf of the application program. Other parameters are explained in the
following pages.
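As a sketch of the explicit facility just mentioned, an embedded LOCK TABLE statement on the supplier table might look like this; the table name and the choice of EXCLUSIVE mode are illustrative:

```cobol
           EXEC SQL
               LOCK TABLE S IN EXCLUSIVE MODE
           END-EXEC.
```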
Example
LOCKSIZE TABLESPACE
THIS MEANS THAT ALL LOCKS ACQUIRED ON DATA IN THE
TABLE SPACE WILL BE AT THE TABLE SPACE LEVEL
LOCKSIZE PAGE
THIS MEANS THAT LOCKS ACQUIRED ON DATA IN THE TABLE
SPACE WILL BE AT THE PAGE LEVEL
LOCKSIZE ROW
THIS MEANS THAT THE LOCKS ACQUIRED ON DATA IN THE
TABLE SPACE WILL BE AT THE ROW LEVEL
LOCKSIZE ANY
THIS MEANS THAT DB2 WILL DECIDE THE APPROPRIATE
PHYSICAL UNIT OF LOCKING FOR THE TABLESPACE
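LOCKSIZE is specified on the tablespace definition. Reusing the DBX.TS76 names from the GRANT example earlier, a sketch might be (storage parameters omitted):

```sql
CREATE TABLESPACE TS76 IN DBX
    LOCKSIZE PAGE ;
```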
Proper selection of the lock size is important for better performance and concurrency of the
database. A LOCKSIZE of TABLESPACE allows a process to lock the tablespace, which controls
all tables inside the tablespace. On the other hand, a row lock will only lock the row which
the application program wants.
In a simple tablespace, locking the tablespace means locking all tables inside that tablespace,
which will reduce concurrency. But a page lock will lock only those rows of tables present
in that page, and other users can access other rows in that tablespace concurrently.
Locking larger or smaller amounts of data allows you to trade performance for
concurrency. When you use page or row locks instead of table or tablespace locks,
concurrency usually improves, meaning better response times. When you use only table or
tablespace locks, processing time and storage used are reduced, but concurrency is also
reduced, meaning longer response times for some users.
For maximum concurrency, locks on a small amount of data held for a short duration are
better than locks on a large amount of data held for a long duration of time. However
acquiring a lock requires processor time, and holding a lock requires storage. These things
should be kept in mind while deciding a lock size.
ACQUIRE RELEASE PARAMETERS
ACQUIRE ( ALLOCATE )
ACQUIRES THE LOCK WHEN THE PLAN IS ALLOCATED
ACQUIRE (USE )
ACQUIRES THE LOCK WHEN THE OBJECT IS FIRST ACCESSED.
RELEASE (DEALLOCATE)
RELEASES THE LOCKS WHEN THE PLAN IS DEALLOCATED
RELEASE(COMMIT)
RELEASES THE LOCK AT THE NEXT COMMIT POINT. IF THE
APPLICATION ACCESSES THE OBJECT AGAIN IT MUST ACQUIRE
THE LOCK AGAIN
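These parameters are specified at BIND time. A hedged sketch of a BIND PLAN subcommand (the plan and member names are illustrative):

```text
BIND PLAN(MYPLAN)   -
  MEMBER(MYPROG)    -
  ACQUIRE(USE)      -
  RELEASE(COMMIT)
```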
ISOLATION PARAMETER
REPEATABLE READ(RR)
READ STABILITY(RS)
CURSOR STABILITY(CS)
UNCOMMITTED READ(UR)
If an SQL statement embedded in a host language program returns multiple rows, the
developer must declare in the program a cursor that presents them to the host program one
at a time, usually within a repeatedly executed block. DB2 can handle locking for these
cursors using different ISOLATION levels.
ISOLATION(RR) Repeatable read: A row or page lock is held for all accessed rows,
qualifying or not, at least until the next commit point. If the application process returns to
the same page and reads the same row again, the data cannot have changed and no new
rows can have been inserted.
ISOLATION (RS) Read stability: A row or page lock is held for pages or rows that are
returned to an application at least until the next commit point. If a row or page is rejected
during stage 2 processing, its lock is still held, even though the row is not returned to the
application.
If the application process returns to the same page and reads the same row again, the data
cannot have changed, although additional rows might have been inserted by another
application process. A similar situation can occur if a row or page that is not returned
to the application is updated by another application process: if the row now satisfies the
search condition, it appears in the result.
ISOLATION(CS) Cursor stability: A row or page lock is held only long enough to
allow the cursor to move to another row or page. For data that satisfies the search
condition of the application, the lock is held until the application locks the next row or
page. For data that does not satisfy the search condition, the lock is released immediately.
ISOLATION(UR) Uncommitted read: The application acquires few locks and can run
concurrently with most other operations, but it is in danger of reading data
that was changed by another operation and not yet committed.
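The isolation level is normally fixed at BIND time, but an individual statement can also override it with a WITH clause. A sketch, assuming the supplier table D2110K.S used in the LOAD example later in this document:

```sql
-- Read uncommitted data for this one query only; the plan's
-- ISOLATION parameter still governs other statements.
SELECT S#, QTY
  FROM D2110K.S
  WITH UR;
```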
10. DB2I (DB2 Interactive)
DB2I is an interactive facility available in DB2. Almost all of the functions of DB2 are
available in DB2I, which can be used by developers. This chapter contains
10.1. DB2I
10.2. SPUFI
DB2I (DB2 INTERACTIVE )
DB2 provides a number of commands for readying a program for execution, which
programmers can use to perform the functions required to convert code from source to
executable modules. A convenient alternative is to work through DB2I, which provides a
menu interface to the necessary command processors. If you develop programs using TSO
and ISPF, you can prepare them to run using the DB2 Program Preparation panels. These
panels guide you step by step through the process of preparing your application to run.
There are other ways to prepare a program to run, but using DB2I is the easiest, as it leads
you automatically from task to task.
The DB2I primary option menu lists the functions it can perform. The user can select any one
of these functions according to his requirements.
SPUFI (SQL Processor Using File Input) supports the online execution of SQL statements
from a TSO terminal. SPUFI is intended primarily for application programmers who wish
to test the SQL portions of their programs.
The DCLGEN menu allows users to invoke the declarations generator program, which
produces the DECLARE TABLE statements and host language data structure.
Other options like PRECOMPILE, BIND, RUN are used for preparing and executing DB2
application program.
UTILITIES menu helps the user to invoke DB2 online utilities like LOAD, REORG,
RECOVER etc. The necessary utility control statements to direct the operation of the
specific utility must be created before the utility is invoked.
For analyzing and managing the physical data present in a database, DB2 offers a number of
utilities. This chapter gives a brief explanation of these utilities:
11.1. Load
11.2. Runstats
11.3. Reorg
UTILITIES
LOAD
REORG
RECOVER
RUNSTATS
DSNJU003
DSNJU004
DSN1CHKR
LOAD
EXAMPLE
LOAD DATA
  RESUME NO
  LOG NO
  INDDN ddname
  INTO TABLE D2110K.S
    ( S#  POSITION (1:5)   CHAR(5),
      P#  POSITION (6:11)  CHAR(6),
      QTY POSITION (12:15) INTEGER );
Load
The LOAD utility is used to load data from a sequential file into a table in a table space.
In the previous example the table S is loaded from the dataset specified in the load JCL.
The ddname of the input dataset used in the LOAD JCL is given in the INDDN parameter.
Each field and its position are also specified.
If the table space already contains data, you can choose whether you want to add new data
to existing data or replace the existing data. This can be done using the parameter
RESUME.
RESUME NO: Indicates that the table space must be empty. This is the default option.
RESUME NO REPLACE: Causes the utility to overwrite the existing data.
RESUME YES: Allows the utility to add new rows to the existing table.
The LOG NO option instructs the utility not to record the data in the log as it is loaded.
If the user does not specify LOG NO, the utility records the changes, which can be used
for recovery purposes. The default is LOG YES. Recording data in the log during a load can
significantly increase the time required for the load.
RUNSTATS
EXAMPLE
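The control statement for the example discussed below is not reproduced here; reconstructed from that description (table space TABSP in database D2110K, all tables, all indexes), it might look like this sketch:

```sql
-- Hypothetical RUNSTATS control statement, reconstructed from the
-- description in the text; names are illustrative.
RUNSTATS TABLESPACE D2110K.TABSP
  TABLE(ALL)
  INDEX(ALL)
```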
The RUNSTATS utility reads tablespaces and indexes to collect statistics describing the
data. The main statistics collected include the number of rows in the table, the number of
pages that contain the rows of the table, the number of distinct values of an indexed column,
the percentage of space occupied by rows, and so on. The RUNSTATS utility records this
information in the catalog tables.
In the previous example the RUNSTATS utility is run on table space TABSP in database
D2110K. All tables in the tablespace are specified by the TABLE(ALL) keyword. Here you
can specify in parentheses, after the keyword TABLE, the name of the table on which the
utility has to run. You can obtain statistics on all indexes on all tables in the named table
space by specifying INDEX(ALL). The user can get statistics for one or more specific
indexes by specifying them in parentheses after the keyword INDEX.
The RUNSTATS utility is useful for finding out the free space remaining in a tablespace
and we use that information for reorganizing the tablespace.
REORG
The REORG online utility reorganizes a table space or index to improve access
performance and reclaim fragmented space. In addition, the utility can reorganize a single
partition of either a partitioned index or a partitioned table space.
The REORG utility reorganizes a table space or index as you specify in the control
statements. When only an index space is reorganized, the data pages are not processed;
only the leaf pages, which contain the index entries, are scanned.
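The control statement for the example discussed below is not shown; reconstructed from the description (tablespace TABSP in database D2110K, with LOG NO), it might look like this sketch:

```sql
-- Hypothetical REORG control statement; names are illustrative.
REORG TABLESPACE D2110K.TABSP
  LOG NO
```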
In the given example the REORG utility is run on tablespace TABSP in database D2110K. If
you want to reorganize an index, specify REORG INDEX (index name). The LOG NO
parameter is specified in the example to avoid writing data records to the log while loading
the tablespace.
12. Advanced DB2
This section explains some of the advanced concepts in DB2. Detailed discussions of
indexes and DB2 locks are included.
A table can have more than one index, and an index key can use one or more columns. An
index key is a column or an ordered collection of columns on which an index is defined. A
composite key is a key built on 2 to 64 columns.
The usefulness of an index depends on its key. Columns that you use frequently in
selection, join, grouping, and ordering operations are good candidates for use as keys.
DB2 allows you to enter duplicate values in a key column. If you do not want duplicate
values, use CREATE UNIQUE INDEX. If a table has a primary key, its entries must be
unique. This uniqueness is enforced by defining a unique index on the primary key columns.
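A minimal sketch of such a unique index, reusing the supplier table D2110K.S from the LOAD example later in this document (the index name XS1 is illustrative):

```sql
-- Hypothetical unique index on the primary key column S#
-- of table D2110K.S; names are illustrative.
CREATE UNIQUE INDEX XS1
  ON D2110K.S (S#);
```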
EXAMPLE OF AN INDEX
[Figure: a B-tree index. A root page holding the keys 25, 61 and 86 points to intermediate
pages (8 17 25 | 33 40 61 | 70 75 86), which point to leaf pages, which in turn point to
the data pages.]
Indexes in DB2 are based on a structure known as B-Tree. Indexes can have more than
one level of pages. Index pages that point directly to the data in the tables are called leaf
pages. If the index has more than one leaf page, it must have at least one non leaf page,
containing entries that point to leaf pages. If it has more than one non leaf page, the non
leaf pages whose entries point directly to leaf pages are said to be on the first level; there
must be a second level of non leaf pages to point to the first, and so on. The highest level
contains a single page, called the root page.
A typical index is shown in the figure: a multilevel, tree-structured index with the
property that the tree is always balanced, that is, all leaf entries in the structure are
equidistant from the root of the tree, and this property is maintained as new entries are
inserted into the tree and existing entries are deleted.
The root page is the top of the structure. The root page contains an entry for each
intermediate page. The entry in the root page consists of the highest value contained on
the intermediate page and a pointer to that page.
The intermediate pages are similar in structure to the root page, except that the range of
values addressed is more specific. An intermediate page contains an entry for each of the
leaf pages it addresses. The entry consists of the highest value contained on the leaf page
and a pointer to that leaf page.
The leaf pages contain the RIDs (record IDs), by which the records can be located in the
table space. The leaf pages collectively address the entire table.
CLUSTERED INDEXES
[Figure: a clustering index. As in the previous figure, a root page (25, 61) points to
intermediate pages (8 17 | 33 40) and leaf pages, but here the order of the leaf entries
matches the physical order of the rows in the data pages.]
A clustering index is one for which the sequence defined by the index is the same as, or
close to, the physical sequence of the data. A clustering index holds the most potential for
performance gains. With a clustering index DB2 takes responsibility for maintaining rows in
sequence on the clustering index columns as long as there is free space: DB2 maintains
clustering by placing inserted rows, in the indexed column's sequence, in available free
space in the data pages. DB2 can then process the table in that order efficiently. With a
non-clustering index, DB2 has to reread data pages to identify all the qualifying rows,
which reduces performance.
Clustering is valuable when DB2 must process a column's values in sequence. The SQL
clauses ORDER BY, GROUP BY, and DISTINCT require such processing. If a column is
specified in one of these operations and there is no suitable index on the column, DB2 must
sort the data to put it in sequence before returning even one row to the user. If there is a
clustering index on that column, DB2 uses it to retrieve the rows in sequence and returns
the rows immediately, one by one.
To specify a clustering index, use the CLUSTER clause in the CREATE INDEX
statement.
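A sketch of the CLUSTER clause, again with illustrative names:

```sql
-- Hypothetical clustering index on table D2110K.S;
-- names are illustrative.
CREATE INDEX XS2
  ON D2110K.S (P#)
  CLUSTER;
```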
CURRENT DATE
CURRENT DEGREE
CURRENT PACKAGESET
CURRENT RULES
CURRENT SERVER
CURRENT SQLID
CURRENT TIME
CURRENT TIMESTAMP
CURRENT TIMEZONE
USER
Special Registers
DB2 supports a number of special registers. A special register is a storage area that DB2
defines for a process. Wherever its name appears in an SQL statement, the name is
replaced by the register's value when the statement is executed. Thus the name acts like a
function that has no arguments (a zero-argument built-in scalar function).
You can use the SET statement to change the current value of a register. Where the
register's name appears in other SQL statements, the current value of the register replaces
the name when the statement executes. A commit or rollback operation has no effect on
the values of special registers; nor can any SQL statement other than SET change a
register's value.
CURRENT DATE specifies the current date. The data type is DATE. The date is derived
by the DB2 subsystem that executes the SQL statement that refers to the special register.
CURRENT PACKAGESET specifies the collection ID of the package used to execute SQL
statements.
Example: For executing a program, identify the collection ID for its package as EWSA.
CURRENT SQLID specifies the SQL authorization ID of the process. The data type is
CHAR(8). The SET statement can be used to change the authorization ID of a process.
Example: Set the SQL authorization ID to 'GROUP34' (one of the authorization IDs of the
process).
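A sketch of that SET statement:

```sql
-- GROUP34 must be one of the authorization IDs of the process.
SET CURRENT SQLID = 'GROUP34';
```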
CURRENT TIME specifies the current time. The time is derived by the DB2 subsystem that
executes the SQL statement that refers to the special register.
Example: Display information about all project activities and include the current date and
time in each row of the result.
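A sketch of such a query, assuming a hypothetical project-activity table named PROJACT:

```sql
-- PROJACT and its columns are illustrative names.
SELECT PROJNO, ACTNO, CURRENT DATE, CURRENT TIME
  FROM PROJACT;
```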
USER specifies the primary authorization ID of the process. The data type is CHAR(8).
Example: Display information about tables, views, and aliases that are owned by the
primary authorization ID of the process.
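A sketch of such a catalog query; the catalog table SYSIBM.SYSTABLES records tables (TYPE 'T'), views ('V'), and aliases ('A'):

```sql
SELECT NAME, TYPE
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = USER;
```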
IS  INTENT SHARE
IX  INTENT EXCLUSIVE
SIX SHARE WITH INTENT EXCLUSIVE
S   SHARE
U   UPDATE
X   EXCLUSIVE
SIX: THE LOCK OWNER CAN READ ANY DATA IN THE TABLE AND
CHANGE ROWS IN THE TABLE PROVIDED IT CAN OBTAIN AN
X LOCK ON THE TARGET ROW OR PAGE FOR THE CHANGE. ROW
LOCKS ARE NOT OBTAINED FOR READING.
S: THE LOCK OWNER CAN READ ANY DATA IN THE TABLE AND
WILL NOT OBTAIN ROW OR PAGE LOCKS.
U: THE LOCK OWNER CAN READ ANY DATA IN THE TABLE AND
MAY CHANGE DATA IF AN X LOCK ON THE TABLE CAN BE
OBTAINED. NO ROW OR PAGE LOCKS ARE OBTAINED.
The lock modes IS, IX and SIX are used at the table or tablespace level to support
row or page locks. They permit row or page level locking while preventing more exclusive
locks on the table by other applications.
When an application obtains an IS lock on a table, that application may acquire a lock on a
row or page for read only. Other applications can also read the same row. In addition,
other applications can change data on other rows in the table.
An application holding an IX lock on a table can change a row after acquiring a row or
page lock. Other applications can READ/CHANGE data on other rows in the table.
When an application holds an SIX lock on a table, that application may acquire a lock on a
row for change. Other applications can only read other rows in the table.
The modes S, U and X are used at the table level to enforce the strict table locking
strategy. No row or page level locking is used by applications that possess one of these
lock modes.
When an application obtains an S lock on a table, that application can read any data in that
table. It will allow other applications to obtain locks that support read-only requests for
any data in the entire table. No application can change any data in the table until the S
lock is released.
When an application obtains a U lock on a table, that application can read any data from
that table and may eventually change data in that table by obtaining an X lock. Other
applications can only read data in that table.
When an application obtains an X lock on a table, that application can read and change any
or all data in the table or tablespace. No other application can access data in the entire
table or tablespace for READ or CHANGE.
MODES OF ROW AND PAGE LOCKING

ROW/PAGE LOCK       SUPPORTING TABLE LOCK
S   SHARE           IS
U   UPDATE          IX
X   EXCLUSIVE       IX

LOCK MODE COMPATIBILITY OF TABLE AND TABLESPACE LOCKS

        IS    S     IX    SIX   U     X
IS      YES   YES   YES   YES   YES   NO
S       YES   YES   NO    NO    YES   NO
IX      YES   NO    YES   NO    NO    NO
SIX     YES   NO    NO    NO    NO    NO
U       YES   YES   NO    NO    NO    NO
X       NO    NO    NO    NO    NO    NO

(Rows show the lock held by application A; columns show the lock requested by
application B.)
If application A obtains an IS lock on a given table, application B could obtain an IS,
S, IX, SIX or U lock on the same table at the same time. However, an X lock would
not be permitted at the same time.
This particular example illustrates the concept of the IS lock acting as a supporting lock
for a lower level of locking. The only table lock that is not compatible is the X lock,
which would require exclusive use of the table. The presence of the IS lock indicates that
a lower level of locking is required for this table, so the X lock is not granted.
Studying the chart reinforces the definitions of table and row lock modes presented on the
previous pages. Review the row for IX under application A. Assume that application A
obtains an IX lock on table Y. This lock indicates that the application intends to obtain
locks to support change at the row level. The application will allow other rows to be read
and updated but will prevent access to the target rows. Examine each of the possible
competing table locks that application B might request:
IS-- Intent to lock for read only at row level. This lock is compatible. There may be
contention at the row level if application A is changing the same row that application B
wants to read.
S-- Share lock at the table level. This lock is not compatible, since the S lock states
that the entire table is available for read only, both to the application possessing the
lock and to all other applications. The IX lock states an intent to change data at the row
level, which contradicts the requirement for read only. Therefore application B could not
obtain the S lock.
IX-- Intent to lock for change at the row level. This lock is compatible. There may be lock
contention at the row level if application A is changing the same row that application B
wants to change.
SIX-- The SIX lock states that lock requests for changing data may be required at the row
level for the application possessing the lock. In addition, the rest of the table is
available for read only. The IX lock implies change at the row level as well, which
contradicts that read-only requirement, so application B could not obtain an SIX lock on
the table.
U-- Read with intent to update. This table level lock states that the application
possessing the lock may read any data and may potentially exchange the U lock for an X
lock. However, until this exchange is done, other applications can obtain only locks
supporting read only. Application B would not be able to obtain the U lock at the same
time that application A possessed an IX lock on the same table.
X-- The application possessing this mode of lock on the table requires exclusive use of
the table. No other access is permitted. The IX lock possessed by application A would
prevent application B from obtaining an X lock.
The same type of statements can be logically derived for the other rows in the chart.
LOCK MODE COMPATIBILITY OF ROW AND PAGE LOCKS

        S     U     X
S       YES   YES   NO
U       YES   NO    NO
X       NO    NO    NO
Creating utility control statements is the first step in running an online utility. Utility
control statements define the function the utility job performs. Utility control statements
are read from the SYSIN input stream; the SYSIN stream can contain multiple utility
control statements. The control statements are different for each utility and are explained
in chapter 11.
There are different methods of invoking DB2 online utilities. Commonly used methods are
DB2I and the IBM-supplied JCL procedure DSNUPROC.
When you use the DB2I (DB2 Interactive) panel to execute a utility, you must specify the
name of the utility, the dataset that contains the control information, and the other
datasets needed by the utility. You can then execute the utility from that panel.
DB2 online utilities can also be invoked using the DSNUPROC procedure. For that, you must
write and submit JCL; in your JCL, the EXEC statement invokes the DSNUPROC procedure. You
must give the control statements as input to DSNUPROC and supply the datasets required for
the execution of the utility.
Sample JCL For Invoking Online Utilities
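The sample JCL itself is not reproduced here; a hedged sketch of what such a job might look like (the job name, accounting information, SYSTEM value and utility ID are illustrative, and must match your installation):

```jcl
//RUNSTAT  JOB (ACCT),'RUN UTILITY',CLASS=A,MSGCLASS=X
//*
//* Invoke the IBM-supplied procedure DSNUPROC.
//* SYSTEM names the DB2 subsystem; UID is the utility ID.
//STEP1    EXEC DSNUPROC,SYSTEM=DSN,UID='TEMP',UTPROC=''
//SYSIN    DD *
  RUNSTATS TABLESPACE D2110K.TABSP TABLE(ALL)
/*
```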