COMPUTER SCIENCE,
TECHNOLOGY AND APPLICATIONS
RICHARD EARP
AND
SIKHA BAGUI
Copyright © 2021 by Nova Science Publishers, Inc.
All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted
in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying,
recording or otherwise without the written permission of the Publisher.
We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to
reuse content from this publication. Simply navigate to this publication’s page on Nova’s website and
locate the “Get Permission” button below the title description. This button is linked directly to the title’s
permission page on copyright.com. Alternatively, you can visit copyright.com and search by title, ISBN,
or ISSN.
For further questions about using the service on copyright.com, please contact:
Copyright Clearance Center
Phone: +1-(978) 750-8400 Fax: +1-(978) 750-4470 E-mail: info@copyright.com.
Independent verification should be sought for any data, advice or recommendations contained in this
book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons
or property arising from any methods, products, instructions, ideas or otherwise contained in this
publication.
This publication is designed to provide accurate and authoritative information with regard to the subject
matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in
rendering legal or any other professional services. If legal or any other expert assistance is required, the
services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS
JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A
COMMITTEE OF PUBLISHERS.
Additional color graphics may be available in the e-book version of this book.
Dedicated to my
father, Santosh Saha, mother, Ranu Saha,
husband, Subhash Bagui,
sons, Sumon Bagui and Sudip Bagui,
and nieces, Priyashi Saha and Piyali Saha
S.B.
CONTENTS

Preface
Chapter 1  Good Database Design
  1.1. Introduction
  References
Chapter 2  Database Integrity
  2.1. Introduction – What Is Integrity?
  References
Chapter 3  The Database
  3.1. Introduction
  References
Chapter 4  Privileges and ROLEs
  4.1. Introduction
Chapter 5  The Dictionary
  5.1. The Dictionary Paradigm
  5.2. Drilling Down into Information in the Dictionary
GOOD DATABASE DESIGN

1.1. INTRODUCTION
A database is about everyone in an enterprise using one and only one
version of the data. In the early days of computers, a company may have had
numerous programmers who kept data for their department or division. For
the moment, suppose there were three programmers in a company -- one in
the Billing department, one in Sales, and one in Inventory. Suppose each of
these individual programmers kept a file about the products and customers
who bought the products. Billing needed to know the price paid for the
products and what to charge customers who bought the product. Sales
needed the same kind of information and also the quantity-on-hand and
customer information for future sales. Inventory needed not only quantity-
on-hand but also order points for inventory items and supplier information.
Many years ago, the concept of one file (or better yet, one set of files),
one programmer in one department was called the “my file” concept.
Imagine a large company with numerous programmers each keeping their
own version of product data for their department. You might well expect
that, taken as a whole, the notion of "product data" depended on which
programmer's files were current and accurate.
The problem with the "my file" idea is that this plethora of files kept by each
group would probably have stored data redundantly. If one wanted to know
how many things were in inventory, it depended on who was asked. If one
wanted to know a customer’s address, it again depended on who supplied
the information and how current it was. Although it might be expected that
two independent programmers would keep their data current, more than likely
they updated their data in different ways at different times. So, early in the
days of computers and files, the question was asked, “Why not share one
file about products or customers or inventory across the whole company?”
Then, all departments and their personnel would be accessing the same
information.
Sharing data is a good idea; but before we share data, we need to discuss
the "goodness" of the data to be shared. Our first consideration regarding the
goodness of data is to define relational normal forms and use them for our
database. While special cases may exist where one picture of data does not
seem to fit every situation, generally a relational database in the third normal
form is considered a good database. We shall now see how to get to the third
normal form.
When we look into the creation of a database, we must first consider the
design of it. If a person wants to build a house, the first thing to do is to draw
up a plan, a blueprint. Building a database correctly must be done in a similar
way. What data goes with which table? Are there right and/or wrong ways
to design tables? A relational database consists of two-dimensional
tables -- tables with rows and columns. The two-dimensional arrangement
of data mirrors the concept of matrices in mathematics. A two-dimensional
mathematical matrix has rows and columns like this:
Matrix
Column1 Column2 Column3
Row1 33 11 66
Row2 22 66 33
Row3 99 33 44
Data need not be numeric to fit this arrangement; a schedule of room
assignments has the same two-dimensional form:

Rooms
Math Chemistry Physics
8 AM 10 12 14
9 AM 12 16 10
10 AM 14 10 12
Codd's relational model rests on realizing all data can be arranged in
two-dimensional tables. Further, the data in the tables must be arranged
correctly to be searched efficiently.
Three normal forms are defined in describing data in two dimensional
tables. These arrangements of data are referred to as the first, second, and
third normal forms. A correct relational database is at least in the third
normal form (3NF). Having a database in the 3NF implies data is also in the
first (1NF) and the second normal forms (2NF). In the next few sections, we
explain the principles of normal forms. As you peruse the ideas presented in
normalizing a database, the traditional approach is to define the normal form
(NF) as what the NF does not contain. So, bear with the descriptions as they
tend to define things such as, “If table R does not contain X, then R is in
normal form Y.”
The first normal form (1NF) demands all data in tables be “atomic.”
Codd originally used an expression describing values of attributes as from a
“simple domain.” Atomicity or a “simple domain” implies the data items
cannot be broken down -- hence, the characterization of the data as “atomic.”
There are two ways related data can be non-atomic -- repeating groups and
composite attributes. A table in 1NF has only atomic attributes; it does not
have repeating groups or composite attributes.
A repeating group might be pictured as a set-valued attribute,
Employee(EmpNo, EName, {Dept}), with several departments in one row. Or the
group may be spread across several columns:
Employee
EmpNo EName Dept1 Dept2 Dept3
101 Alice Smith Marketing Finance Sales
102 Bob Baker Finance H.R.
103 Chuck Charles Marketing Sales
Now, suppose you want to find all the employees who are qualified to
work in Sales. How do you find this information in the arrangement of data
above? You have no choice but to look at every row in the database and see
if an employee has the qualification you seek. In this design, qualifying
departments are listed for each employee, and no order of the departments
within {Dept} is implied. Even if the departments were in alphabetical order
for each employee, difficulties still arise.
The Sales department occurs in different places in the repeating group
in different rows. Even if the repeating group were in alphabetical order, we
see Sales occurs in the third position of the first row and in the second
position of the third row. Repeating groups are not readily searchable as the
design stands.
If an employee were to qualify for a new department, adding that data
to the table in this alphabetical arrangement of departments may be a
problem: values may have to be shifted among the columns to keep the order.
The remedy is to decompose the table into two tables, one for employees and
one for qualifications:
Employee
EmpNo EName
101 Alice Smith
102 Bob Baker
103 Chuck Charles
Qualifications
Dept EmpNo
Marketing 101
Finance 101
Sales 101
Finance 102
H.R. 102
Marketing 103
Sales 103
Why is the latter design better than the original arrangement? First of
all, the data in the Qualifications table is now accessible by Dept. If you
were looking for a qualified employee for Marketing, you need only search
the rows of the Qualifications table which would point you to the EmpNos
for those qualified for Marketing. You might say, “Before we had to look at
all rows of the original Employee table to find Marketing, and now we have
to look at even more rows in the Qualifications table to find it.”
The difference is that the Dept attribute in the Qualifications table can be
arranged to make specific departments easier to find. One arrangement
would be to alphabetize the department names. If you were looking for
Marketing, the alphabetical list of departments in the Qualifications table
would make finding Marketing easier. Another facility for finding data in
Qualifications would be indexing the Dept column.
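The decomposed design can be exercised directly. Below is a minimal sketch using Python's sqlite3 module, with the tables and sample rows from the text (the index name Dept_idx is our own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The decomposed 1NF design: one row per employee, one row per qualification.
cur.execute("CREATE TABLE Employee (EmpNo INTEGER, EName TEXT)")
cur.execute("CREATE TABLE Qualifications (Dept TEXT, EmpNo INTEGER)")

cur.executemany("INSERT INTO Employee VALUES (?, ?)",
                [(101, "Alice Smith"), (102, "Bob Baker"), (103, "Chuck Charles")])
cur.executemany("INSERT INTO Qualifications VALUES (?, ?)",
                [("Marketing", 101), ("Finance", 101), ("Sales", 101),
                 ("Finance", 102), ("H.R.", 102),
                 ("Marketing", 103), ("Sales", 103)])

# An index on Dept lets the system locate a department directly.
cur.execute("CREATE INDEX Dept_idx ON Qualifications (Dept)")

# Who is qualified to work in Sales?
cur.execute("""SELECT E.EName
               FROM Employee E JOIN Qualifications Q ON E.EmpNo = Q.EmpNo
               WHERE Q.Dept = 'Sales'
               ORDER BY E.EmpNo""")
sales_qualified = [row[0] for row in cur.fetchall()]
```

With the index in place, the system can find the 'Sales' rows without scanning the whole Qualifications table.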
Codd went a step further in defining relational database to be sets of
atomic data arranged in matrix-like tables. The keyword in the previous
sentence is “sets.” Mathematical sets have no implied order. The above
Employee table could be depicted as:
Employee
EmpNo EName
103 Chuck Charles
101 Alice Smith
102 Bob Baker
Because tables are sets, new rows may be added anywhere. Suppose
employee 103 becomes qualified for H.R.; the row

<H.R., 103>

is added to the table of Qualifications. It does not matter where the row
is placed in the set of Qualifications because the order of the rows is not
defined. Adding the new row is straightforward; no rearrangement is
required, as there could have been with the repeating group.
Based on the sample data, we notice Location contains two parts -- a city
and a state. The city-state combination is called a “composite” attribute
because it is composed of two parts. If there were never a question about
finding employees by state, this arrangement of data could possibly be
acceptable. However, the city and state should be entered as separate
attributes so no assumption about how data should be accessed is broached.
A more proper representation of our Employee table would be:
The sense of Location is “where the person works” rather than a home
address. If the sense of Location were understood, the qualifier in the table
description could be dropped.
There could be situations where someone must find all the employees
working in a state. For example, a person in the payroll department might
need to calculate a state income tax rate for each employee. If the data were
arranged as we have it above, finding employees by state would again
require a row-by-row search of our Employee table. As you look at
examples, think beyond the example to a table with thousands of employees.
Here, Location is called a composite attribute -- it consists of parts which
compose the whole attribute Location -- a City and a State. Tables with
composite attributes are considered non-atomic because the attribute can be
split into parts. With the composite broken down and the meaning of
Location clear, a better representation of the Employee table would be:

Employee(EmpNo, EName, City, State)

Our table with sample atomic data would now look like this:
Employee
EmpNo EName City State
103 Chuck Charles Tampa FL
101 Alice Smith Mobile AL
102 Bob Baker Pensacola FL
104 Donna Davis Pensacola FL
Although the data above seems to be atomic, the composite attribute has
left us with a column that is less “searchable.” The person in payroll would
prefer the State be split off to focus on finding employees in states so what
do we do? Decompose the table to arrange the data like this:
Employee
EmpNo EName City
103 Chuck Charles Tampa
101 Alice Smith Mobile
102 Bob Baker Pensacola
104 Donna Davis Pensacola
State
State EmpNo
FL 103
AL 101
FL 102
FL 104
The 1NF is defined as: All data in a table must be atomic. Atomicity
means we rid the table of repeating groups and composite structures. How
do we reorganize non-1NF tables? We decompose the original table into
tables not containing the offending (non-1NF) parts.
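The same decomposition technique can be exercised with a short sqlite3 sketch, assuming the Employee and State tables just shown, to answer payroll's question about Florida:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Employee keeps the atomic City; State is split into its own table.
cur.execute("CREATE TABLE Employee (EmpNo INTEGER, EName TEXT, City TEXT)")
cur.execute("CREATE TABLE State (State TEXT, EmpNo INTEGER)")

cur.executemany("INSERT INTO Employee VALUES (?, ?, ?)",
    [(103, "Chuck Charles", "Tampa"), (101, "Alice Smith", "Mobile"),
     (102, "Bob Baker", "Pensacola"), (104, "Donna Davis", "Pensacola")])
cur.executemany("INSERT INTO State VALUES (?, ?)",
    [("FL", 103), ("AL", 101), ("FL", 102), ("FL", 104)])

# Payroll's question: which employees work in Florida?
cur.execute("""SELECT E.EName
               FROM Employee E JOIN State S ON E.EmpNo = S.EmpNo
               WHERE S.State = 'FL'
               ORDER BY E.EmpNo""")
fl_employees = [r[0] for r in cur.fetchall()]
```

Because State is now a searchable column of its own, no row-by-row inspection of City values is needed.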
1.1.3. Keys
Could EName serve as a key -- a value that uniquely identifies an
employee? The argument for and against EName being a key is a semantic
one. The point that EName may not be unique, and hence is not a good
candidate for a key, is based on the possibility of people having the same
name.
City is likewise not a good candidate for identifying either the EmpNo
or the EName. Just looking at the sample data, finding an employee by
knowing where the employee works (City) is not going to identify a specific
employee. Disqualifying an attribute as a key candidate by inspecting data
in a table is called a "counter-example argument." Finding two rows in which
the same City accompanies different EmpNos rules out City as a candidate key.
Every table in relational database has a primary key. The primary key of
a table must be unique so as to identify all the information in a row of the
table. In the Employee table, EmpNo is defined to be a unique row identifier.
If an EmpNo is supplied, one row in the table is associated with that attribute;
EmpNo is the primary key of the Employee table. Because EmpNo is the
primary key, we underline it in the description of the table like this:

Employee(EmpNo, EName, City)

Now consider the Qualifications table, Qualifications(Dept, EmpNo).
What is the key of this table? It can't be Dept because there are several
departments associated with various employees. Dept is not a unique
identifier. How about EmpNo? In the Qualifications table, EmpNo is not
unique either as employees with multiple qualifications occupy more than
one row. Since both attributes are disqualified candidate keys, the only
possibility for a key of Qualifications is the concatenation of Dept and
EmpNo.
Every table in relational database has a unique primary key. Why is this
true? In a relational database, tables are sets of rows -- just like a
mathematical set. In mathematical sets, things are either in the set or they
are not; but, there is no sense of ordering and no duplicate entries. We said,
12 Richard Earp and Sikha Bagui
“Every table in relational database has a primary key.” This is true because
if you take the combination of all the attributes in a table, the row where
those attributes appear is part of a set and hence must be unique.
The actual primary key of a table may consist of fewer attributes than
all of them. Consider the example of Employee:
Employee
EmpNo EName City
103 Chuck Charles Tampa
101 Alice Smith Mobile
102 Bob Baker Pensacola
104 Donna Davis Pensacola
One way to find a minimal key, starting from the obvious FD (functional
dependency) in which all the attributes concatenated together determine the
row, is to test each attribute and ask whether a smaller FD holds. With
Employee, we ask these questions:
Does EName -> EmpNo? No, because there could be employees with
the same name. Semantics tells us EName is unsuitable as a candidate key.
Does State -> EmpNo? No, a state cannot identify an EmpNo because a
State (in the Location composite) is likely assigned to more than one
employee. Semantics disqualifies State as a candidate key, as does finding a
counter-example: in the sample data above, more than one employee is
assigned to work in Florida, and this alone disqualifies State as a
candidate key.
As each scenario is examined, we find we have only one candidate key;
hence, EmpNo is the primary key. A primary key is said to be a chosen
candidate key and is underlined in a relation.
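The counter-example argument can be automated: X -> Y fails exactly when two rows agree on X but disagree on Y. The helper below is our own illustration (it can only confirm an FD over the sample rows at hand; semantics must still rule on names like EName):

```python
# Sample rows from the Employee table, keyed by attribute name.
rows = [
    {"EmpNo": 103, "EName": "Chuck Charles", "City": "Tampa"},
    {"EmpNo": 101, "EName": "Alice Smith",   "City": "Mobile"},
    {"EmpNo": 102, "EName": "Bob Baker",     "City": "Pensacola"},
    {"EmpNo": 104, "EName": "Donna Davis",   "City": "Pensacola"},
]

def fd_holds(rows, lhs, rhs):
    """Return True if the functional dependency lhs -> rhs holds in rows."""
    seen = {}
    for row in rows:
        x = tuple(row[a] for a in lhs)
        y = tuple(row[a] for a in rhs)
        if x in seen and seen[x] != y:
            return False          # counter-example: same X, different Y
        seen[x] = y
    return True

city_determines_empno = fd_holds(rows, ["City"], ["EmpNo"])   # Pensacola twice
empno_determines_city = fd_holds(rows, ["EmpNo"], ["City"])
```

Here City -> EmpNo fails (two Pensacola rows with different EmpNos), while EmpNo determines every other attribute, consistent with EmpNo being the primary key.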
To illustrate the 2NF, we need to add some more data to our employee
example.
We would like to add a salary. We assume the salary for a particular job
qualification depends on the qualification itself as well as the employee
working in that department. Where in the above database do you put salary?
Neither the EmpNo nor the Dept taken alone will identify a Salary. The
description of the Qualifications table with Salary included looks like this:
Qualifications
Dept EmpNo Salary
Marketing 101 47
Finance 101 25
Sales 101 51
Finance 102 31
H.R. 102 39
Marketing 103 42
Sales 103 41
So what is the 2NF? The 2NF is defined as follows: All data in the table
is dependent on the whole key of the table. The Salary attribute is defined
by (dependent on) the concatenated key of EmpNo and Dept. Therefore, the
Qualifications table is in the 2NF.
Now suppose we also want to record each employee's address. If the
address were simply added to the Qualifications table, the table would look
like this:
Qualifications_X2NF
Dept EmpNo Salary Employee_address
Marketing 101 47 23 Palafox St.
Finance 101 25 23 Palafox St.
Sales 101 51 23 Palafox St.
Finance 102 31 86 Vine Ave.
H.R. 102 39 86 Vine Ave.
Marketing 103 42 5 Park Place
Sales 103 41 5 Park Place
With the address moved to the Employee table, all the attributes there
are identified by the employee number. Changing an address involves
changing only one row in the Employee table. Terminating an employee, say
Tom, involves only changing the null value of a Date_terminated attribute to
a real date. Adding Tom to the database without assigning a qualifying
department or salary is no problem. The anomalies and redundancy are
removed when tables are in 2NF.
What about the functional dependencies? The original ones still hold:

EmpNo -> Project_id

is valid. What about the Project_supervisor? EmpNo -> Project_supervisor
is true, since the employee has only one project assigned and the supervisor
for the project is therefore also identifiable by the EmpNo. The problem here
is the supervisor of the project is better identified by the Project_id than
by the EmpNo. The better description of the functional dependencies would be:

EmpNo -> Project_id

and

Project_id -> Project_supervisor

The corresponding decomposition is

Employee(EmpNo, EName, ..., Project_id)

and

Project(Project_id, Project_supervisor)
Since we have decomposed the tables in the database, how are they put
back together? The reconstruction is done with queries that recombine the
decomposed tables. For example, answering a request like, "Find the
employees supervised by Sam Smith," would call for an equi-join of
Employee and Project along these lines:

SELECT E.EName
FROM   Employee E, Project P
WHERE  E.Project_id = P.Project_id
AND    P.Project_supervisor = 'Sam Smith';
Ex 1.3.
A database design problem:
Design a database such that all tables are in 3NF.
A person collects baseball cards. Each card has a wholesale and retail
value. Each card is relevant to one and only one player, but the player may
be on different teams at varying times. A given baseball card for a player is
for the time when that player is part of some team. For example, John Jones
was with the Pirates in 2016, but in 2020 and 2018 he played for the Tigers.
Jones would have a card for each year for each team.
The player has a batting average and a count of home runs by the season
for a team. Each player has a birth date, handedness (right or left), hitting
preference (right, left, or both), home address, city, state, and zip. You also
need a listing of teams with appropriate information (City, State, Stadium
name, Mascot, Owner, General Manager).
It is suggested that you have your design approved by your instructor. If
no instructor is available, explain your design to another person in a
structured manner to see if your database is understood. Relationships should
be described as one to many, many to many, or many to one (the relationship
cardinality), and the description of a relationship should include the word
may or must. (Is the relationship mandatory or not?) As an example, you
might say, “A player may be related to one or more teams. Every team must
have one or more players. A team must have one or more owners.”
Describe how you envision the relationship of team to player. The
relationship is likely many to many (M:N) so explain how your tables reflect
the relationship and what intersection data is involved. Explain why your
database is in 1NF, 2NF, and 3NF.
DATABASE INTEGRITY

2.1. INTRODUCTION – WHAT IS INTEGRITY?
Suppose a query for one customer's address returns two different
addresses. There are two possible explanations:

(1) There are two people with the same name in the database; in which
case, the database design needs to be refined to include a unique
identifier for each customer.
or
(2) The database lacks integrity because you would be getting two
answers when you expect one. Somewhere in the database this
person has multiple addresses stored redundantly.
When you first learned SQL, you learned to create tables from your
account. Your first account was probably set up by an instructor and most
likely involved simply signing on without consideration of space or
privileges. Given the default environment, the simplest form of the
CREATE TABLE involves naming the table, the attributes, and their
datatypes. As an example for this chapter, we will create tables storing data
about children in a kindergarten. While we will present the SQL to create
the kindergarten database tables, first realize that some database design
should have been completed. We begin with a CREATE TABLE looking
something like this:

CREATE TABLE Child (
    ID     CHAR(9),
    Name   VARCHAR2(20),
    Height INT
);
Without thinking too hard about it, we have already made decisions
affecting integrity. We have so far limited the number of attributes to only
three. We have decided all children will be identified by an ID attribute
which we defined as a character string of 9 alphanumeric characters. For the
example, we could use social security numbers (which are now assigned at
birth to citizens of the United States). We have limited the name to 20
characters and have decided to store the Height as an integer representing
how tall a child is in inches.
In another country, the size and datatypes may be different, but we will
continue to use this example.
The first way to define a primary key is very common for single attribute
primary keys. The technique is to include the primary key constraint on the
same line as the attribute. The SQL would look something like this:

CREATE TABLE Child (
    ID     CHAR(9) PRIMARY KEY,
    Name   VARCHAR2(20),
    Height INT
);
A null value in a column can mean several different things:

(a) Data that is simply not knowable – perhaps a future retirement date?
(b) Data that is known, but missing? Why is it missing? If, for example,
the null occurred in a birthday attribute, is it really unknown, or does
the person refuse to allow the birthday to be known?
(c) Data that is known, but does not apply for a situation?
If the table creator wanted to disallow null values, the way to do it would
be to add the appendage “NOT NULL” to the attribute definition. Here is an
example of a named NOT NULL constraint (the constraint name Name_NN
follows the naming pattern used elsewhere in this chapter):

CREATE TABLE Child (
    ID     CHAR(9),
    Name   VARCHAR2(20) CONSTRAINT Name_NN NOT NULL,
    -- Up to 20 characters
    Height INT CHECK (Height >= 24),
    -- Height in inches
    CONSTRAINT ID_PK PRIMARY KEY(ID)
);
In the case of Height, above, if for some reason, a person were permitted
to enter data into the Child table with a Height violating the CHECK
constraint, the following ALTER TABLE command could be used:
When we started this section, we said there were two kinds of CHECK
constraints. We illustrated a column constraint by placing restrictions on
what could be entered into a column -- what values were allowable for an
attribute to accept. Since the constraint was entered after the attribute
definition, the column name was not included in the constraint definition.
When two attributes are involved in a constraint, the constraint is
written after the attribute definitions and the attribute names appear in it,
making this a table-level constraint.
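Both kinds of CHECK constraint can be tried in SQLite, which enforces them on INSERT. This is a sketch, not the chapter's exact table: the Weight attribute and the rule in the named constraint HW_CK are invented here purely to show a two-attribute, table-level constraint alongside a column-level one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Column-level CHECK on Height; named table-level CHECK relating two
# attributes (Weight and HW_CK are our own illustration, not from the text).
cur.execute("""CREATE TABLE Child (
    ID     CHAR(9),
    Name   VARCHAR(20),
    Height INT CHECK (Height >= 24),
    Weight INT,
    CONSTRAINT HW_CK CHECK (Weight < Height * 10),
    CONSTRAINT ID_PK PRIMARY KEY (ID)
)""")

cur.execute("INSERT INTO Child VALUES ('123456789', 'Pat', 40, 55)")  # passes

try:
    cur.execute("INSERT INTO Child VALUES ('987654321', 'Lee', 20, 30)")
    short_child_rejected = False
except sqlite3.IntegrityError:
    short_child_rejected = True   # Height 20 violates the column CHECK
```

The second INSERT is refused by the database itself; no application code had to test the Height.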
(a) A table may have more than one UNIQUE constraint. Tables can
have only one PRIMARY KEY.
(b) A value in a UNIQUE constraint can be null. A PRIMARY KEY
can never be null.
(c) UNIQUE constraints can exist in addition to the PRIMARY KEY.
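The contrast between UNIQUE and PRIMARY KEY can also be demonstrated. In the sketch below the SSN attribute is our own addition; note that SQLite (like most systems) permits several nulls in a UNIQUE column, while a duplicate primary key value is rejected:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One PRIMARY KEY plus an additional UNIQUE constraint on SSN
# (SSN is a hypothetical attribute added for this illustration).
cur.execute("""CREATE TABLE Child (
    ID   CHAR(9) PRIMARY KEY,
    SSN  CHAR(9) UNIQUE,
    Name VARCHAR(20)
)""")

# A UNIQUE column may be null -- and since null equals nothing,
# several nulls are allowed.
cur.execute("INSERT INTO Child VALUES ('1', NULL, 'Pat')")
cur.execute("INSERT INTO Child VALUES ('2', NULL, 'Lee')")

# A duplicate primary key value is refused.
try:
    cur.execute("INSERT INTO Child VALUES ('1', '111111111', 'Kim')")
    duplicate_pk_rejected = False
except sqlite3.IntegrityError:
    duplicate_pk_rejected = True
```

The two null-SSN rows coexist, but the repeated ID of '1' never enters the table.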
Teams
Team_number TeamName
1 Pirates
2 Giants
3 Cubs
4 Phillies
Fans
Fnum Fname Tnum
100 Anne 2
106 Genevieve 1
102 Beryl 1
104 Mary Fran 3
108 Mary Jo 2
109 David 4
110 Johnny
113 Rich 3
Team_number is the primary key of the Teams table, and Tnum in the
Fans table is referred to as a foreign key. A foreign key references a primary
key attribute usually found in a different table. The relationship between the
Teams and Fans is through the primary key of the Teams table
(Team_number), and the foreign key of the Fans table (Tnum).
Referential integrity would be violated by entering a row in the Fans
table whose Tnum was not defined in the Teams table. To insert the row

<105, 'Chloe', 4>

is acceptable, because team 4 exists in Teams. But to update the row

<100, 'Anne', 2>

to

<100, 'Anne', 6>

would violate referential integrity, because there is no team 6 in Teams.
Likewise, deleting the row

<2, 'Giants'>

from the Teams table would violate referential integrity, because the Fans
row

<100, 'Anne', 2>

would then reference a non-existent team.
The Fans table (the referencing table) would then be created using a
statement along these lines:

CREATE TABLE Fans (
    Fnum  INT PRIMARY KEY,
    Fname VARCHAR2(20),
    Tnum  INT,
    CONSTRAINT Tnum_FK FOREIGN KEY (Tnum) REFERENCES Teams(Team_number)
);
The CREATE TABLE Teams ... must be executed and populated first.
If we use CREATE TABLE as illustrated for the Fans table before the
Teams table was created, we would be attempting to reference a non-
existent team. Likewise, if a Team_number value did not exist in the Teams
table, then referencing data could not be added to the Fans table.
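SQLite can demonstrate these referential-integrity rules, provided foreign-key enforcement is switched on (it is off by default). The sketch uses the Teams and Fans data from the text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked
cur = conn.cursor()

cur.execute("CREATE TABLE Teams (Team_number INTEGER PRIMARY KEY, TeamName TEXT)")
cur.execute("""CREATE TABLE Fans (
    Fnum  INTEGER PRIMARY KEY,
    Fname TEXT,
    Tnum  INTEGER REFERENCES Teams (Team_number)
)""")

cur.executemany("INSERT INTO Teams VALUES (?, ?)",
    [(1, "Pirates"), (2, "Giants"), (3, "Cubs"), (4, "Phillies")])
cur.execute("INSERT INTO Fans VALUES (100, 'Anne', 2)")

# Referencing an existing team succeeds ...
cur.execute("INSERT INTO Fans VALUES (105, 'Chloe', 4)")

# ... but a team number not in Teams is rejected,
try:
    cur.execute("INSERT INTO Fans VALUES (111, 'Pat', 6)")
    bad_insert_rejected = False
except sqlite3.IntegrityError:
    bad_insert_rejected = True

# as is deleting a team that a fan still references.
try:
    cur.execute("DELETE FROM Teams WHERE Team_number = 2")
    referenced_delete_rejected = False
except sqlite3.IntegrityError:
    referenced_delete_rejected = True
```

Note also the ordering rule from the text at work: Teams must exist and be populated before Fans rows can reference it.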
Ex 2.1.
Above we saw a table about children in kindergarten containing
constraints. Create the table in your account and populate it with the inserts
below. As you process additions, report and explain the results.
Ex 2.2.
In the chapter we created a table called Reportcard with attributes
Student_id, Course_id, and Grade_assigned. We used variable character
datatypes for Student_id and Course_id and a datatype of CHAR(1) for
Grade_assigned.
Ex 2.3.1.
Create two tables -- Vet and Dog.
In the Vet table, include a Vet_id for a primary key and add
Vet_address, Vet_phone, and Vet_city. Make the Vet_id's 100, 200, and 300.
In the Dog table, use attributes Gender, Neutered, Year_born,
Dog_name, Owner, and a foreign key, Vett, referencing the Vet table. Add
and name the following constraints in Dog: Gender should be F or M,
Neutered Y or N, Year_born no sooner than 1995. Name the referential
constraint Vet_ID_FK.
Populate both tables with five rows in Dog and three in Vet. Populate
the Vet table first (Why?). Be sure to reference all vets at least once.
Before you progress further, create a backup of the populated Vet and
Dog tables you just created. To create each backup, use a command like this:

CREATE TABLE Vet_backup AS SELECT * FROM Vet;
As you do this exercise, execute INSERT commands that will and will
not cause errors. Insert a dog named Fido to use Vet 200. Insert a dog named
Fluffy that wants to use Vet 400. Explain why the commands worked or
didn’t work.
After you have finished INSERT commands, try UPDATE and
DELETE commands. Update the Dog table changing a vet on some dog to
another vet. Also, try to change a vet to a non-existing vet. Delete Vet 300
and verify the result in the Dog table or explain whatever error occurs.
Ex 2.3.2.
We have three vets and their Vet_id’s are 100, 200, and 300.
Restore the original Vet and Dog tables from the backups. Then use
ALTER TABLE to set the DELETE paradigm on the referential constraint
on all dogs referencing Vets to SET NULL. Then DELETE a vet and check
the Dog table to see if indeed the reference to the deleted vet got set to null.
Ex 2.3.3.
Restore all tables to the originals. Create another table called Owner
with the attributes Owner_name, Owner_id, Owner_address, Owner_phone.
In the CREATE TABLE, make Owner_id the PRIMARY KEY and add at
least one constraint to each of the other attributes. INSERT at least one
owner for each dog.
Now, revisit Dog and change the DELETE paradigm to CASCADE for
the referential constraint in Dog referencing the Vet table. This time
DELETE a vet and determine whether the dog referencing that vet was
deleted. Then, display the Owner table and determine whether the owner of
the deleted dog is still there or not. Explain your results.
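The two DELETE paradigms in these exercises can be previewed in SQLite. The sketch below is not the exercise solution: it declares the paradigm at table-creation time (rather than via ALTER TABLE) and uses two invented tables, Dog_setnull and Dog_cascade, to show both behaviors side by side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
cur = conn.cursor()

cur.execute("CREATE TABLE Vet (Vet_id INTEGER PRIMARY KEY)")
# Two referencing tables, one per DELETE paradigm, to show the contrast.
cur.execute("""CREATE TABLE Dog_setnull (
    Dog_name TEXT,
    Vett INTEGER REFERENCES Vet (Vet_id) ON DELETE SET NULL)""")
cur.execute("""CREATE TABLE Dog_cascade (
    Dog_name TEXT,
    Vett INTEGER REFERENCES Vet (Vet_id) ON DELETE CASCADE)""")

cur.executemany("INSERT INTO Vet VALUES (?)", [(100,), (200,), (300,)])
cur.execute("INSERT INTO Dog_setnull VALUES ('Fido', 300)")
cur.execute("INSERT INTO Dog_cascade VALUES ('Rex', 300)")

cur.execute("DELETE FROM Vet WHERE Vet_id = 300")

# SET NULL keeps the dog but blanks its vet; CASCADE deletes the dog.
fido_vet = cur.execute(
    "SELECT Vett FROM Dog_setnull WHERE Dog_name = 'Fido'").fetchone()[0]
rex_rows = cur.execute(
    "SELECT COUNT(*) FROM Dog_cascade WHERE Dog_name = 'Rex'").fetchone()[0]
```

After the delete, Fido survives with a null vet while Rex is gone entirely -- exactly the contrast the exercises ask you to observe.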
REFERENCES
[2.1] The difference between column-level and table-level CHECK
constraints is that a column-level constraint applies to only one attribute
in the table (a single column). A column-level constraint does not specify
a column name, because the constraint is given where the attribute is
defined, written after the column definition.
THE DATABASE
3.1. INTRODUCTION
Danny has to manage the creation of the database with the Level 2
programmers. Our original database will contain only three tables:
Customer, Product, and Vendor. Later, there may be more objects created
within each problem area, but for now, we will start with just these three
tables.
Data in databases is designed to be shared. So far, we have discussed
design and integrity in an effort to distinguish a good database from a poorly
designed one. In this chapter, we move from the one-user, one-account mode
to a multi-user environment. In consideration of other users, there must be
rules governing things such as user space and how users will interact. Who
creates tables? Where are the tables located? Who accesses what data? Who
changes the content of the tables with INSERTs, UPDATEs, and DELETEs?
With the shared-file concept comes the idea of privileges for each user and
a defined tablespace for the database.
If we were building a house, our first considerations would be how the
proposed house was designed and where the house would be located. The
design affects the location and vice-versa. We should be confident about the
design of the data itself, but now we must look at two issues -- where will
we put the database (a tablespace) and how will we handle other users
(privileges)?
The approach we will take is to first handle user interactions (privileges)
and then consider the tablespace issue. The reason for this approach is that
one person should be responsible for both users and space. That person is
the Database Administrator -- the DBA.
The most fundamental system privilege allows a user to connect to the
database. Users need the CREATE SESSION privilege
to "sign on" (connect) to the database and to manipulate or query objects
within the database. There are 80 system privileges defined in Oracle 8. See
reference [3.1] and other footnotes at the end of the chapter.
Commands starting with CREATE, ALTER, or DROP involve system
privileges. The command, CREATE TABLE, mentions an object, a table,
but creating the table is actually a system privilege.
Disclaimer: We do not claim this work is a complete coverage of
privileges; it is rather an introduction, by example, to how the Oracle
database system may evolve within a multi-user environment. Readers are
encouraged to do research on each subject we cover.
Danny has the task of assigning subject areas to developers --
Customers to Chris, Products to Pat, and Vendors to Van. Danny first
needs to create the user accounts for Chris, Pat, and Van so the main tables
can be created. Then, Chris needs to create the Customer table, and the
others need to create their main tables as well. Before we discuss how
Danny’s account (Danny1) is created, we could begin with a basic CREATE
USER command looking something like this:

CREATE USER Temp2 IDENTIFIED BY Temp;

User created.

GRANT CREATE SESSION TO Temp2;

Grant succeeded.
The above two statements seem to work well, but there is a problem.
While both of the above statements will execute correctly, they fall short of
being useful for our database for Hardware_City.
All SQL statements have a simple form assuming defaults and a fuller
form overriding some defaults. The fuller form contains embellishments to
clarify the SQL. For example, SELECT can be as simple as:

SELECT * FROM Customer;
(1) Temp2 can connect to the database but can’t do anything else. At
the very least, Temp2 has to be able to create or access tables. Danny
needs to GRANT the CREATE TABLE privilege to Temp2; and so,
Danny would execute this statement:

GRANT CREATE TABLE TO Temp2;
At this point, you might think, “Why not let Danny deal with granting
all these privileges to Level 3 users and possibly beyond?” The answer is we
want a hierarchy of users; and it will be Temp2’s job to handle the privileges
for Temp3 at Level 3. It will be Temp3’s job to manage Level 4 privileges.
While Temp2 has the privilege to connect to the database with CREATE
SESSION as it stands, that privilege cannot be passed on to another user at
Level 3 the way it is written above. The more correct statement for Danny
to execute would be:
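Based on the discussion, the statement would be along these lines; WITH ADMIN OPTION is what lets Temp2 pass the system privilege on to Level 3:

```sql
GRANT CREATE SESSION TO Temp2 WITH ADMIN OPTION;
```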
Now we test the privilege hierarchy. In moving from user to user, the
CONNECT command is used. CONNECT may be abbreviated with CONN:
CONN Temp2/Temp
User created.
Grant succeeded.
Grant succeeded.
Connected.
User created.
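The outputs above correspond to a command sequence roughly like the following sketch; the Level 3 and Level 4 usernames and passwords are assumptions, and this assumes Temp2 and Temp3 have already been granted CREATE USER:

```sql
CONN Temp2/Temp
CREATE USER Temp3 IDENTIFIED BY Temp;            -- User created.
GRANT CREATE SESSION TO Temp3 WITH ADMIN OPTION; -- Grant succeeded.
GRANT CREATE TABLE TO Temp3;                     -- Grant succeeded.
CONN Temp3/Temp                                  -- Connected.
CREATE USER Temp4 IDENTIFIED BY Temp;            -- User created.
```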
50 Richard Earp and Sikha Bagui
At this point, we have seen how the DBA plans a hierarchy of users.
Before actually dealing with Danny, Chris, Van, and others, we will create
a space in which to store data and get started creating users, tables, and
perhaps other objects.
Let us begin our database construction by creating the account for
Danny1 with a few more embellishments to the above examples. First of all,
we would like to create a special place to store the data for Hardware_City.
We want to define a space to hold our database data; this space is defined as
a tablespace. If a tablespace is not created specifically for Danny and the
other users under Danny, then they would use the default tablespace USERS.
It will prove best if we contain the data in one named tablespace because
there are SQL commands at the tablespace level, and we can use tablespace
manipulation to deal with just the users in Hardware City.
The CREATE TABLESPACE command precedes the CREATE USER
Danny1. It would look like this:
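A sketch of such a command, using the tablespace name Hardware_City that appears later in the Data Dictionary output; the datafile name and sizes are assumptions:

```sql
CREATE TABLESPACE Hardware_City
  DATAFILE 'hardware_city01.dbf' SIZE 50M
  AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
```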
In this scenario, we are going to have only one DBA, Danny, whose
username and password are Danny1/Danny.
For now, Danny allocates 2 megabytes for Pat to use. (This space can be
enhanced later should the need arise.) Since the account Pat2 is now created,
Danny begins assigning privileges to Pat with statements like we saw before.
After the GRANTs from Danny, Pat2 connects and then this Data Dictionary
view-query is executed from Pat2’s account. (We leave the actual GRANT
command Danny1 executes to the reader as an exercise.)
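One plausible version of that view-query, run from Pat2's account; the exact columns chosen are an assumption, though the Note below about COLUMN USERNAME FORMAT suggests USERNAME was among them:

```sql
SELECT USERNAME, DEFAULT_TABLESPACE
FROM USER_USERS;
```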
Note: if you run this query, you may have to run a command to set the
column width, e.g., COLUMN USERNAME FORMAT a10.
Now that we have seen the general pattern of how the users are created,
it is more efficient to use scripts to do so. A script is a series of commands
stored on the host computer. We presume the host computer is using UNIX
as an operating system. To create a script, the following steps are taken.
Step 1. From SQL*Plus the command, host, is executed.
Step 2. In the host operating system, a text file is created containing the
commands a person would use in SQL. The text file must have the sql file
type appended. So, if a text file is created on the host and called
“Create_User,” the text file is stored in UNIX as Create_User.sql.
Step 3. We issue the command, exit, in the host to exit back to
SQL*Plus. Notice that the command, exit, is lower case. UNIX is extremely
case sensitive.
Step 4. We execute the script using the format:
@Create_user
or
START Create_User
Danny could exit to the host and create a text file called Create_user.
The text file might look like this:
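A sketch of what Create_user.sql might contain; the password and the exact privileges granted are assumptions based on the surrounding discussion:

```sql
/* Create_user.sql -- create a Level 2 user */
CREATE USER Pat2 IDENTIFIED BY Pat;
GRANT CREATE SESSION TO Pat2 WITH ADMIN OPTION;
GRANT CREATE TABLE TO Pat2;
```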
Danny stores the script, exits back to SQL, and then executes from
within SQL:
@Create_User
The result of a command issued and a script executed is the same. The
difference is that in the script, Danny can embellish the CREATE USER
command and can use the script for Van and Chris by changing the name
and password in the CREATE USER command.
As we mentioned above, the actual CREATE USER command is a bit
more complicated than just CREATE USER .. IDENTIFIED BY .. Instead,
the command needs a defined tablespace.
The script is altered to include the tablespace, so it now looks like:
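With the tablespace clauses added, the script might read as follows; the 2-megabyte quota comes from the earlier discussion, and the clause layout is an assumption:

```sql
/* Create_user.sql -- now with a defined tablespace */
CREATE USER Pat2 IDENTIFIED BY Pat
  DEFAULT TABLESPACE Hardware_City
  QUOTA 2M ON Hardware_City;
GRANT CREATE SESSION TO Pat2 WITH ADMIN OPTION;
GRANT CREATE TABLE TO Pat2;
```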
And, to create user Van2 and Chris2, Danny needs only to change Pat2
to Van2 and Pat to Van and re-run the script rather than writing three lines
of code for each user. As queries get more complicated, it is far easier to
change a script than to re-type commands.
We will deal with the actual creation of users with privileges in the
Exercises at the end of the chapter. Then we will consider privileges and
table creation in Chapter 4.
Just as Create_User script was created and used, Runfirst will be handled
in a similar fashion. The text file on the host computer will be named
“Runfirst.sql,” and the script would be executed from SQL*Plus as:
SQL> @Runfirst
/* Runfirst.sql */
/* Initialize environmental parameters – Run this script upon signing on
*/
/* June 21 2021 */
/* Created by yourname */
DEFINE _EDITOR = vi
SET LIN 100
/* LIN = linesize */
SET WRAP OFF
/* other SET commands could be added here */
PROMPT Editor is defined as vi
PROMPT Parameters LINESIZE and WRAP are SET
PROMPT Widen your window to accommodate 100 characters
/* Runfirst.sql written by Danny July 5, 2021
The script is to be executed upon sign on. The SET commands
found herein are not persistent */
Ex 3.1.
Have the System Administrator create the tablespace for the company
and the DBA, Danny1/Danny. It is common practice to refer to accounts in
this username/password format.
Ex 3.2.
As Danny1, create a script in a host text file called Readme.sql. Have
the script contain the following:
Ex 3.3.
From Danny1, run two commands:
DESC Dict
and
SELECT * FROM Dict
Repeat the SELECT from above first by setting the LINESIZE to 50 and
use SET WRAP ON. Then, use SET WRAP OFF and repeat the SELECT.
The point of this exercise is to set environmental values for many things,
such as LINESIZE and WRAP. SET other parameters you find in the HELP
SET output and see what changes (if anything) in your command SELECT
* FROM Dict.
Important – When this exercise is completed, execute the command SET
ROWNUM 0 to reset row counting.
Ex 3.4.
Log on as Danny1 and create the scripts Create_user and Runfirst from
Danny1.
REFERENCES
[3.1] Oracle8 Administrator’s Guide, Release 8.0, A58397-01, Chapter 21:
http://docs.oracle.com/cd/A64702_01/doc/server.805/a58397/ch21.htm
[3.2] http://www.dba-oracle.com/t_sql_plus_column_format.htm
[3.3] Oracle provides excellent web support for all commands and topics
related to SQL. A list of SET commands and what they do may be found at:
https://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_twelve040.htm#SQPUG060
Chapter 4
PRIVILEGES AND ROLEs
4.1. INTRODUCTION
The account Pat2 is created and Pat is anxious to create the Product
table. If this were an exercise of a single user, Pat would simply issue a
CREATE TABLE command, load the table with some data, and move on.
However, this is a situation where Pat and the whole database crew know
Pat is responsible for the Product table and more people than just Pat will
be using Product. Since Pat and everybody else will be sharing their data,
we have to deal with granting privileges to each person. Who can access
what data and how can their access be controlled?
In Chapter 3, privileges were allotted to Pat by Danny, one at a time. No
other users were granted privileges. In this chapter, we will demonstrate how
and why it is a better idea to use scripts to create users and assign privileges
via ROLEs.
The script will PROMPT whoever runs it (Danny) for the name of the user
we want to create. The PROMPT command will be followed by an ACCEPT
command:
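A sketch of the PROMPT/ACCEPT pair and the statements that use the accepted values; the substitution-variable names are assumptions, and the DROP matches the "debris" remark below:

```sql
PROMPT Creating a new Level 2 user
ACCEPT uname PROMPT 'Enter the username to create: '
ACCEPT upass PROMPT 'Enter the password: '
DROP USER &uname CASCADE;
CREATE USER &uname IDENTIFIED BY &upass
  DEFAULT TABLESPACE Hardware_City
  QUOTA 2M ON Hardware_City;
```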
The script would be run by Danny for Pat, Van, and Chris, creating their
accounts with all the same space. We included another DROP command to
insure there is no “debris” left from previous activity.
Level 2 will need to create Level 3 users and will be granted the
CREATE USER privilege. However, this GRANT has no need to pass along
CREATE USER to Level 4:
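Such a GRANT, issued by Danny and shown here for Pat2, simply omits WITH ADMIN OPTION so the privilege cannot be passed further down:

```sql
GRANT CREATE USER TO Pat2;
```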
We now have our users defined with the privileges they need to create
their tables. The easiest way to do this is for each of the Level 2 users to
create a script for table creation; a script can easily be corrected and
re-run should a table definition need to change.
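A sketch of Pat's Create_product.sql; the column names and widths are inferred from the COLUMN commands and INSERT data shown later in the chapter, so the exact datatypes are assumptions:

```sql
/* Create_product.sql -- Pat's script to (re)create Product */
DROP TABLE Product;
CREATE TABLE Product
 (Product_id NUMBER(4) PRIMARY KEY,
  Pname      VARCHAR2(20),
  Ptype      VARCHAR2(10),
  Qoh        NUMBER(5),
  Price      NUMBER(7,2),
  Itemtype   VARCHAR2(12));
```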
After creating this text file on the host, Pat reconnects to SQL and
executes the above script with @Create_product.
Users Van and Chris will write similar scripts as an Exercise.
Pat’s job is now to populate the Product table. Here again, should
Product need to be recreated or should there be a change in the data, a script
is most appropriate for table loading. After exiting to the host, Pat creates
this text file:
INSERT INTO Product VALUES
(2000,'Paint Buckets','Paint',452,3.95,'ITEM');
INSERT INTO Product VALUES
(3000,'Sheet Metal Screws','Hardware',12500,1.45,'PKG OF 5');
INSERT INTO Product VALUES
(4000,'Wall Sockets','Electrical',124,7.68,'ITEM');
INSERT INTO Product VALUES
(5000,'Citronella','Chemicals',25,6.24,'Candle');
/* stored as Load_product.sql on host */
Since Pat has created a table and populated it, Pat can view the contents
of table with a “SELECT *” statement. It is prudent to define the column
headings and sizes so the output looks reasonably good. Again, a script is
the best way to handle this task because if the output does not look good, the
script can be easily changed. “Good” is subjective, but clearly one needs to
control the look of output. Here is an example of how to do it:
/* Display the contents of the Product table. Written by Pat July 7, 2021
*/
COLUMN Product_id HEADING "PID" FORMAT 9999
COLUMN Pname HEADING "Pname" FORMAT a20
COLUMN PType FORMAT a10
COLUMN Qoh FORMAT 99999
COLUMN Price FORMAT 99999.99
COLUMN itemtype FORMAT a12
SQL> SELECT * FROM Product;
The reason Pat could perform this query is that Pat created Product; the
terminology used is that Pat owns Product. A person can manipulate and
manage objects they own without explicit GRANTs.
Pat now needs to create the user, Morgan. This time we will not need
the PROMPT/ACCEPT from the earlier example because Pat has only one
user to create. From Pat’s account, we write and execute this CREATE
USER script: (How is Pat allowed to create a user?)
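A sketch of that script; the password and quota are assumptions, and the username Morgan3 appears in the next paragraph. (Pat can create a user because Danny granted Pat2 the CREATE USER privilege.)

```sql
/* Create Morgan, Pat's Level 3 user */
DROP USER Morgan3 CASCADE;
CREATE USER Morgan3 IDENTIFIED BY Morgan
  DEFAULT TABLESPACE Hardware_City
  QUOTA 1M ON Hardware_City;
GRANT CREATE SESSION TO Morgan3;
```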
The table name for Product from Morgan3’s account had to be qualified
(Pat2.Product). It would be convenient for Morgan if a synonym were
created for Product. Since the current connection to the database is via
Morgan, the creation of a synonym would be:
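Given the qualified name above, the attempted statements would presumably be:

```sql
CONN Morgan3/Morgan
CREATE SYNONYM Product FOR Pat2.Product;
```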
Connected.
Oops again! Pat does not have the CREATE SYNONYM privilege either, so
we must go back to Danny1 and have Danny GRANT the CREATE
SYNONYM privilege to Pat2. Remember Danny is the DBA and the DBA
ROLE contains broad privileges, one of which includes CREATE
SYNONYM WITH ADMIN OPTION. Pat is GRANTed CREATE
SYNONYM. Since Pat needs to pass the privilege to Morgan, the GRANT
from Danny to Pat must be done with the WITH ADMIN OPTION attached:
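Based on the discussion, Danny's GRANT would look like the sketch below; granting to ROLE2 rather than to Pat2 alone is an assumption, but it explains why everyone holding ROLE2 gains the privilege:

```sql
CONN Danny1/Danny
GRANT CREATE SYNONYM TO ROLE2 WITH ADMIN OPTION;
```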
Connected.
Grant succeeded.
Now, not only were Pat’s privileges modified but also anyone with the
ROLE2 privilege now has the same privileges as Pat. Pat proceeds to deal
with Morgan:
Connected.
Grant succeeded.
Connected.
Synonym created.
Rather than dealing with grants to each of the Level 2 and 3 people to
access a table, this command could have been used:
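That command is the PUBLIC form of the object GRANT:

```sql
GRANT SELECT ON tablename TO PUBLIC;
```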
This way, anyone can see what’s in the table, tablename. If a table is
truly common information such as a table of area codes or state
abbreviations, granting access to PUBLIC might be okay; however, granting
to PUBLIC is a loose way to handle security. A grant to PUBLIC allows
access to anyone at any time. Also, if the information were so ubiquitous
that it did not need security, it would seem odd for any one person not to
be allowed to see the information. Why would you GRANT SELECT access to
PUBLIC and then REVOKE the privilege from someone?
We suggest you avoid the PUBLIC option for GRANTing privileges.
The security using PUBLIC grants is just too loose.
Ex 4.1.
If you have not done so, DROP all users with CASCADE. Then, write
and store the CREATE USER script; execute the script to create Van2,
Chris2, and Pat2.
Ex 4.2.
Create ROLE2 as we did in the chapter and GRANT ROLE2 to the
Level 2 users.
Ex 4.3.
Write, store, and execute the script to load the Product table as Pat.
Ex 4.4.
Write, store, and execute the script to display the Product table from
Pat2.
Ex 4.5.
Create ROLE3 with the privileges CREATE SESSION, CREATE
TABLE, and CREATE SYNONYM. This must be done as Pat2, Van2, and
Chris2.
Ex 4.6.
From each Level 2 user, write, store, and execute a script to create the
appropriate Level 3 user. You don’t need PROMPT/ACCEPT as each Level
3 is answerable to a specific Level 2 user. Also, include the GRANT of
ROLE3 as the last line of the scripts.
As a reminder, here is the organization chart:
Level 1 – Danny
Level 2 – Pat
Level 3 – Morgan
Level 2 – Chris
Level 3 – Kelly
Level 2 – Van
Level 3 – Sam
Level 4 - People in various departments using the data for day-to-day
activity, such as Salespeople, Accountants, etc. These people use the queries
developed at Level 3.
Ex 4.7.
For each Level 2 person, write scripts to create the appropriate tables:
Vendor by Van2 and Customer by Chris2.
Ex 4.8.
For each Level 2 person, write scripts to load their tables. At the end of
this exercise, the tables Product, Vendor, and Customer should be created
and populated.
Ex 4.9.
For each Level 3 person, CONNECT and create a synonym for the main
tables.
Ex 4.10.
Connect as each Level 2 and 3 person and execute these commands:
All privileges should be in place and all synonyms defined. Should there
be a problem with any user executing these three commands, fix it.
Ex 4.11.
From any account, connect to the database and verify the main tables
look like this:
SQL> @Vendor_query
SQL> @Product_query
SQL> @Customer_query
THE DICTIONARY
In accessing tables in the Data Dictionary, here are the steps we will
follow:
In this query, we upper case the name of the table just in case a
TABLE_NAME is in lower or mixed case. After looking at the result of this
query, we settle on a table to examine, USER_TABLESPACES.
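A plausible reconstruction of those steps; the LIKE pattern is an assumption:

```sql
-- Search the Dictionary for tables about tablespaces,
-- upper-casing TABLE_NAME in case of lower or mixed case
SELECT TABLE_NAME
FROM DICT
WHERE UPPER(TABLE_NAME) LIKE '%TABLESPACE%';

-- Then count the rows in the chosen table
SELECT COUNT(*) FROM USER_TABLESPACES;
```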
COUNT(*)
----------
12
FORCE_LOGGING VARCHAR2(3)
EXTENT_MANAGEMENT VARCHAR2(10)
ALLOCATION_TYPE VARCHAR2(9)
SEGMENT_SPACE_MANAGEMENT VARCHAR2(6)
DEF_TAB_COMPRESSION VARCHAR2(8)
RETENTION VARCHAR2(11)
BIGFILE VARCHAR2(3)
PREDICATE_EVALUATION VARCHAR2(7)
ENCRYPTED VARCHAR2(3)
COMPRESS_FOR VARCHAR2(12)
In the dictionary, many tables are very broad like this one -- many
attributes. Tables in the dictionary often consist of numerous columns
containing what seems to be odd information. Odd in the sense that you
could spend a great deal of time becoming an expert on the contents of one
table when, most of the time, you might just want to see one or two
interesting columns.
Here we’d like to see just the names of tablespaces, so we choose only
one attribute. We can skip the formatting because tablespace names have a
maximum length of 30 characters.
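The single-attribute query would be:

```sql
SELECT TABLESPACE_NAME FROM USER_TABLESPACES;
```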
TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE
DBCLASS
DBCLASS3
DBTEMP
HARDWARE
TEMPH
HARDWARE_CITY <--- here we are!
12 rows selected.
Had we chosen several attributes for the result set, we would suggest
using column formatting as we did in the earlier example.
Ex 5.1.
How many dictionary tables deal with privileges? Of these, you will
notice some are interesting:
ROLE_ROLE_PRIVS
ROLE_SYS_PRIVS
ROLE_TAB_PRIVS
SESSION_PRIVS
TABLE_PRIVILEGES
USER_SYS_PRIVS
USER_TAB_PRIVS
USER_TAB_PRIVS_MADE
USER_TAB_PRIVS_RECD
Ex 5.2.
Look at the information in these particular tables:
ALL_TAB_PRIVS
ALL_TAB_PRIVS_MADE
ALL_TAB_PRIVS_RECD
Are all of the users at the same level of privilege? Have all the Level 2
users been GRANTed the same privileges? Level 3?
Ex 5.3.
Write a script to create and populate a table of people or something
meaningful to you, for example, a list of your friends and their phone
numbers. The table may be something like one of these:
You can add more information if you like. The point of this exercise is
to write a script containing both the table creation as well as appropriate
INSERT commands. Include no less than five rows in your table. (It does
not have to be realistic.) It is suggested the script start with:
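A sketch of such a script; the table and column names are only examples, and starting with a DROP ensures a clean re-run:

```sql
/* Friends.sql -- create and populate a personal table */
DROP TABLE Friends;
CREATE TABLE Friends
 (Fname VARCHAR2(20),
  Phone CHAR(10));
INSERT INTO Friends VALUES ('Alex','8505551234');
INSERT INTO Friends VALUES ('Blake','8505552345');
INSERT INTO Friends VALUES ('Casey','8505553456');
INSERT INTO Friends VALUES ('Drew','8505554567');
INSERT INTO Friends VALUES ('Emery','8505555678');
```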
When the script has been executed, GRANT another person SELECT
privileges on your table via a user-defined ROLE called MYROLE which
you will create. Then, use the Data Dictionary to view information about
your table and the GRANT you made. With a WHERE clause, filter the
result set to return only this one table, e.g., Friends, in each of the following
dictionary tables:
6.1. BACKUPS
How often a backup is created and how long the backup should be kept
would depend on the Level 2 person responsible for the table.
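A table backup can be as simple as a CREATE TABLE ... AS SELECT, shown here for Vendor; the backup-table name is an assumption:

```sql
CREATE TABLE Vendor_backup AS SELECT * FROM Vendor;
```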
6.2. AUDITING
Now that we are dealing with multiple people accessing and changing
our main tables, we need to audit the tables to verify changes before and
after the fact.
An audit script might include counts, sums, averages, minimum, and
maximum values. Part of such a script could look like this:
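For the Product table, part of such an audit script might look like the following; the choice of aggregates is an assumption:

```sql
/* Product_audit.sql -- snapshot of counts and amounts */
SELECT COUNT(*) FROM Product;
SELECT SUM(Qoh), AVG(Price), MIN(Price), MAX(Price)
FROM Product;
```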
As Exercise 6.2, we will need to create audit scripts for each main user
at Level 2. To refresh your memory, here are the tables and attributes of our
main tables:
When dealing with object privileges, there is another shortcut using the
ALL keyword. We have several instances where we might want to:
You could use the ALL keyword here and simplify the individual grants
like this:
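For example, instead of granting each object privilege one at a time, Pat might write something like this (Morgan3 as grantee is an assumption):

```sql
-- Instead of:
--   GRANT SELECT ON Product TO Morgan3;
--   GRANT INSERT ON Product TO Morgan3;
--   GRANT UPDATE ON Product TO Morgan3;
--   GRANT DELETE ON Product TO Morgan3;
GRANT ALL ON Product TO Morgan3;
```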
Pat and Chris need to execute GRANT ALL to their persons at Level 3.
Ex 6.1.
Connect as Van, Chris, and Pat and execute appropriate table backups
for Vendor, Customer, and Product, respectively.
Ex 6.2.
Connect as Van, Chris, and Pat. Create and execute audit scripts
for Vendor, Customer, and Product, respectively. Run the scripts for each
Level 2 person. Level 2 persons should present their audit script to the other
users and be open to suggestions for improvement.
Ex 6.3.
Connect as Van, Chris, and Pat and execute appropriate GRANTs to
Sam, Kelly, and Morgan.
Ex 6.4.
Each Level 3 person should perform several DML commands. In doing
these commands, they should record what they did and when they did it. For
example, have Kelly INSERT a customer, Sam UPDATE a vendor row, and
Morgan DELETE a product. Then, have Sam DELETE two vendors, etc.
Ex 6.5.
REVOKE the UPDATE privilege from Pat. Then, see if the REVOKE has
cascaded to Morgan.
/* Query_ALL_OBJ_PRIVS.SQL */
/* June 21 2021 */
COLUMN GRANTEE FORMAT A8
COLUMN OWNER FORMAT A8
COLUMN TABLE_NAME FORMAT A9
COLUMN GRANTOR FORMAT A8
COLUMN PRIVILEGE HEADING "PRIV" FORMAT A9
COLUMN GRANTABLE HEADING "G-able" FORMAT A6
COLUMN Hierarchy HEADING "Hier" FORMAT A4
SELECT * FROM ALL_TAB_PRIVS;
SQL> @ Query_ALL_OBJ_PRIVS
Ex 6.6.
Each user should have the Level 3 persons show what they did to the
main table. Then, the Level 2 user should run the audit script again after the
exercises above are completed. Do the counts and amounts balance in each
main area? If not, why not?
Ex 6.7.
Restore the three main tables to the values they had before these
exercises. All Level 2 users should have created backup tables so re-
establishment of the tables prior to this exercise should be simply reversing
the backup procedure:
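Reversing the backup procedure might look like this for Vendor; the backup-table name is an assumption:

```sql
DELETE FROM Vendor;
INSERT INTO Vendor SELECT * FROM Vendor_backup;
```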
We now have our three main tables: Vendor, Product, and Customer.
We have also established our users and have set up privileges for each user
to view or manipulate tables. Per exercises at the end of Chapter 6, Level 2
users and Danny have audit scripts on the main tables. Each Level 2 user has
versioned backups of the main tables. In this chapter, we will assume
everything is in working order and see if any problems arise by adding two
tables and querying the database.
When a person builds a piece of electronic equipment, the moment
arrives when power is applied -- electronic circuit builders call this the
“smoke-test.” If the circuit smokes, something is clearly wrong. In this
chapter, we apply the smoke-test to our little database by creating linking
tables to complete the M:N relationships between Customer and Product
as well as between Vendor and Product.
Many customers buy many products. This implies an M:N relationship
between customers and products. Normally, an M:N relationship such as
Customer:Product is realized using a linking table containing the key of
the Customer table, the key of the Product table, and some “intersection
data.” Here, the intersection data would be at least the price paid for the
product; it could contain more data.
We have created the three main tables and set them up so everybody in
the group can SELECT from all of these tables. Furthermore, we have
created synonyms for all tables in all accounts. The next step is to create
some intersection tables to link these main tables together.
We assume M:N relationships for Customer:Product and
Product:Vendor because Many customers buy Many products and Many
products are bought by Many customers; likewise, Many products are
purchased from Many vendors and Many vendors sell Many products to
Hardware City. The
task is to set up the linking or intersection tables for these two intersection
relationships and to populate them. These intersection tables should be
created at Level 2, and the question must be asked “Who will be responsible
for these two tables?” And, “Why Level 2?”
At this point, a management decision must be made. The DBA makes
the choice. Level 2 should manage intersection data because Level 2 users
control Customer, Product, and Vendor. It, therefore, seems appropriate
for the programmers at Level 2 to handle the intersection tables.
To bridge customers and products, the most appropriate people to deal
with the intersection data would be Chris or Pat. Danny appoints Pat to be
the caretaker of the Buy table.
Pat then thinks, “What is required for the link between customers and
products (the Buy table)?” The linking table will have a concatenated key
(Customer_ID, Product_ID) with foreign key integrity constraints such that
all rows in Buy have customers and products already in the database.
Further, intersection data will be quantity bought, price paid per item, and a
date. The description of Buy in Pat’s view would look like this:
Also, what does Pat need to do to bring Chris into the process and what other
privileges need to be GRANTed?
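A sketch of the Buy table matching that description; the datatypes are assumptions inferred from the sample data, and (answering the question above) Chris would need to GRANT Pat the REFERENCES privilege on Customer for the foreign key to compile:

```sql
CREATE TABLE Buy
 (Cust_ID     NUMBER(3),
  Prod_ID     NUMBER(4),
  Qty         NUMBER(5),
  Price       NUMBER(6,2),
  Date_bought DATE,
  PRIMARY KEY (Cust_ID, Prod_ID),
  FOREIGN KEY (Cust_ID) REFERENCES Chris2.Customer(Customer_ID),
  FOREIGN KEY (Prod_ID) REFERENCES Product(Product_id));
```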
Table created.
While the attributes Cust_ID, Prod_ID, Qty, and Price are simply
numbers, the insertion of a date into the table is illustrated with a specific
date format using the TO_DATE function to insure uniformity.
8 rows selected.
INSERT INTO Supply VALUES
(1000,100,20,8.45, TO_DATE('06/15/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(3000,100,200,1.25, TO_DATE('06/03/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(3000,200,300,1.20, TO_DATE('06/14/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(3000,500,1000,1.08, TO_DATE('06/20/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(4000,400,40,6.25, TO_DATE('06/13/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(4000,300,25,6.45, TO_DATE('06/10/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(4000,200,20,6.85, TO_DATE('06/20/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(5000,100,35,5.85, TO_DATE('06/08/2021','mm/dd/yyyy'));
INSERT INTO Supply VALUES
(5000,400,75,5.75, TO_DATE('06/12/2021','mm/dd/yyyy'));
Now, let us try a few queries to test out privileges and integrity
constraints. First, we will connect as Pat2, show the intersection tables, and
then GRANT privileges to the other users. All users will get SELECT on the
table Buy. In addition, Chris2 will get UPDATE, DELETE, and INSERT on
Buy because Chris manages Customer. Van2 will GRANT privileges on
Supply.
CONN Pat2/Pat
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing
options
8 rows selected.
12 rows selected.
@GRBuy
SQL> @Vendor_query
SQL> @customer_query
SQL> @Product_query
SQL> l
1 SELECT c.Cname, p.Pname
2 FROM Buy b, Customer c, Product p
3 WHERE b.Customer_id = c.Customer_id -- equi join
4 AND b.Product_id = p.Product_id -- equi join
5* AND c.Cname like '%Penn%'
Cname Pname
--------------- --------------------
Penny Penn Sheet Metal Screws
7.4.1. Auditing
If at the end of day Chris runs an audit query for Customer and finds
the count of customers is different from yesterday, the change log should
reflect who added or deleted a customer and when. Here, because we are
looking at a somewhat simplistic version of auditing, it would be incumbent
on an updater to record the action in the appropriate table. Triggers (Chapter
8) would be far superior for auditing because no overt action on the part of
the person making the change would be required. For now, we will assume
someone changes the Customer table and dutifully records the action in the
change log. If the number of customers balances, then the audit may be over
for today. If the counts do not match, there should be a trail to the person
who made whatever change there was. It would be the responsibility of the
Level 2 user to verify the changes were correct as well as undo incorrect
changes and/or use a backup version of a table to figure out what happened.
Going a little deeper, Chris could design a query to count each field
value in the main Customer table. Suppose Chris knew there were 25
distinct zip codes at the beginning of the day and 26 distinct zip codes at the
end of the day. This would prompt Chris to look at the change-log tables to
learn who modified the Customer table.
To be even stricter, Chris could approve or disapprove changes before
they were made and then would be able to see the changes were consistent
with the current version of the Customer table. There would have to be a
system in place where a petition to change was made by someone, and where
Chris approved or disapproved the change and finally verified the change
made was valid. Chris could then enter the change data into the change log.
This stricter version would likely only be workable if the number of changes
per day were few.
After checking the change logs, Chris could archive the change logs,
update the backup of Customer, and be ready for the next day. Archiving
backups also involves versioning as we discussed before where the month
and day were appended to the backup table. If a problem occurred, a backup
could replace the changed table, and the person who made the change would
have to re-petition to change the database the next day.
As an example of an auditing query, consider checking a deletion: if a
customer were deleted, there must be a count of customers before and after
the deletion. Further, there must be a check that the customer is deleted
from all intersection tables and that the new and old sums of products sold
balance. You'd
have to check whether the referential integrity constraints worked as
designed. Remember referential integrity constraints for DELETE may be
defined as RESTRICT, SET NULL, or CASCADE.
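A hedged sketch of such before-and-after checks for a customer deletion; customer 340 is used only as an example:

```sql
-- Counts before and after the deletion should differ by one
SELECT COUNT(*) FROM Customer;

-- Verify the customer is gone from the intersection table
SELECT COUNT(*) FROM Buy WHERE Cust_ID = 340;

-- Old vs. new sum of products sold should balance
SELECT SUM(Qty*Price) FROM Buy;
```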
7.4.2. Backup
August 7:
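The August 7 backup would follow the versioned naming convention discussed earlier, with the month and day appended; a sketch:

```sql
CREATE TABLE Customer_backup_0807 AS SELECT * FROM Customer;
```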
How long are backups kept? It depends on DML traffic and when the
Level 2 person is sure all is stable.
One last point -- backup tables are for emergency use. Absolutely no one
has access to these tables other than the Level 2 person who maintains them.
Ex 7.1.
Modify and execute the scripts to create Buy and Supply starting with
the above models. Add all necessary integrity constraints on every attribute
in the create scripts including limits on amounts which can be inserted into
the table for quantity and price. Suppose quantity must be between zero and
1000 and price between fifty cents and twenty dollars. Here is a checklist of
constraints:
PRIMARY KEY
UNIQUE KEY
FOREIGN KEY (REFERENCES)
CHECK CONSTRAINT
NOT NULL
Ex 7.2.
Pat and Van: Create a script for granting SELECT to all users on Supply
and Buy.
Ex 7.3.
Connect to each user and create a synonym for Buy and Supply.
Ex 7.4.
Find the name of the vendor who sells the least and the name of the vendor
who sells the most to Hardware_City. Least amount = MIN(Quantity*Price).
Ex 7.5.
Find the average amount spent by all customers on each product.
Ex 7.6.
(1) If you have not done so, backup the Buy table.
(2) Create a table and script for Buy updates (Buy_update_log).
(3) Write and execute an audit script for Buy counting the sum of QTY
and Price in the Buy table. Call the script Buy_audit.
(4) Have Customer 350 buy 5 items of product 2000 for $6.64. Insert
the new purchase into the Buy table. This is a simple INSERT
command.
(5) Update the Buy_update_log, inserting the values from (4).
(6) Execute the audit script, Buy_audit.
Ex 7.7.
Repeat Exercise 7.6 with a similar transaction for the Supply table. This
time, issue an UPDATE rather than an INSERT. You will have to create an
update log.
Ex 7.8.
Repeat Exercise 7.6 with another transaction for the Buy table. This
time, DELETE two rows in Buy. Be sure to create a delete log as part of the
exercise.
Ex 7.9.
Create a backup of the main tables. Give no one the privilege of viewing
backups. Write a script reporting the SUM, AVERAGE, MIN, and MAX
value of each numeric attribute in all the tables we have in our database.
Create a change log for each of the main tables as well as INSERT,
UPDATE, and DELETE values in each of the main tables. When you are
finished the updates, backup the change log and then audit your changes. To
make this exercise more realistic, allow others to do the updates and then do
your audits.
Ex 7.10.
We have done some preliminary auditing and the question “What if the
database is not in a consistent state?” arises. The answer lies in being able to
reconstruct the tables using a backup system. Practically, if the tables were
very large, it might be more prudent to undo bad transactions. Here our
tables are small, and we will use the backup approach. Whether a complete
backup or an undo technique is taken to repair the database, a backup version
of each table is necessary.
Connect as each Level 2 user and create a backup of Vendor,
Customer, and Product. If this was done previously, verify via auditing the
backups are correct and in good order -- they should be identical to the
current version of the main table. Each Level 2 user who is responsible for
the intersection tables should also create a backup of the intersection data --
Pat for Buy and Van for Supply. Van and Pat should have these backup
tables hidden. One more note on backup tables: As mentioned, these hidden
backup tables should be versioned. The timing would depend on how often
tables are checked and audited, but the versioning might go like this:
In this way, should (say) Van discover a problem in the Vendor table,
Van can go back and see where the data was consistent, lock down Vendor
(Chapter 10), figure out which data needs to be repaired, restore the Vendor
table to the point of validity, and unlock it.
In this chapter, we will create triggers to deal with each of these change
logs. First of all, before we can create a trigger, guess what? We have to be
GRANTed the privilege to do so. Danny must GRANT CREATE TRIGGER
to ROLE2. Since all Level 2 users have already been granted ROLE2, when
Danny GRANTs this new privilege to ROLE2, all Level 2 users will inherit
the privilege.
We will begin with a simple example of a trigger, and then we will
embellish it. Rather than use the versions of change logs above, it will be
easier to mimic the Customer table in the code for the trigger; we can simply
put in the changed values (:new) along with “what was” (:old). Here is a
better change-log table for UPDATEs including all fields in the Customer
table:
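A sketch of that change-log table; it mirrors the columns of Customer (as seen in the Customer_audit listing later in the chapter) plus who made the change and when:

```sql
CREATE TABLE Customer_UPDATE_LOG
 (Customer_ID  NUMBER(3),
  Cname        VARCHAR(20),
  Caddress     VARCHAR(30),
  Ccity        VARCHAR(20),
  Cstate       CHAR(2),
  Czip         CHAR(5),
  Cphone       CHAR(10),
  CHANGED_BY   VARCHAR(20),
  CHANGED_WHEN DATE);
```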
Then, the trigger for auditing UPDATEs on Customer could look like
this (we choose the trigger name Customer_update_trig):
CREATE OR REPLACE TRIGGER Customer_update_trig
AFTER UPDATE ON Customer
FOR EACH ROW
BEGIN
INSERT INTO Customer_UPDATE_LOG
VALUES (:old.customer_id,
:old.cname,
:old.caddress,
:old.ccity,
:old.cstate,
:old.czip,
:old.cphone,
user,
sysdate);
END;
/
When created, the trigger is enabled. When the trigger fires, it inserts
the :old values into the log table.
...
INSERT INTO Customer_UPDATE_LOG
VALUES (:old.customer_id,
:old.CNAME, ...
In this case, the name of the script creating this trigger was chosen to be
“Ctrigu.”
SQL> @Ctrigu
Trigger created.
Customer_ID CADDRESS
----------- --------------------
330 1988 Druid Hwy.
335 15823 Fish Lane.
340 1 Small Ct.
345 2014 Newly Blvd.
350 77 Nopound St.
1 row updated.
The result of this query could be formatted differently. For example, the
date attribute could be formatted to give the exact time of day and the widths
of the output fields could be made larger or smaller.
Triggers to Enforce Auditing 111
While the above trigger works well, it falls a bit short of what we really want to accomplish with auditing. If we adopted the approach above, we would have to write a trigger for each table for each kind of change. Instead, we can create one more robust trigger for each main table (modeled after "A Fresh Look at Auditing Row Changes," by Connor McDonald [8.2]):
We start with a new audit table which we will use for all changes in the
Customer table:
In this table, we include a column recording whether a row holds :new or :old values, and a column for the type of change (U = Update, I = Insert, D = Delete).
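The script Ccat itself is not reproduced here; a sketch of such an all-DML audit trigger (the trigger name and the 'O'/'N' codes for OLD_OR_NEW are assumptions) might be:

```sql
-- Sketch: one trigger auditing INSERT, UPDATE, and DELETE on Customer,
-- writing both the old and new row images into Customer_audit.
CREATE OR REPLACE TRIGGER Customer_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON Customer
FOR EACH ROW
DECLARE
  v_type CHAR(1);
BEGIN
  IF INSERTING THEN v_type := 'I';
  ELSIF UPDATING THEN v_type := 'U';
  ELSE v_type := 'D';
  END IF;
  IF UPDATING OR DELETING THEN     -- log the prior image of the row
    INSERT INTO Customer_audit
    VALUES (:old.Customer_ID, :old.Cname, :old.Caddress, :old.Ccity,
            :old.Cstate, :old.Czip, :old.Cphone, user, sysdate, 'O', v_type);
  END IF;
  IF INSERTING OR UPDATING THEN    -- log the new image of the row
    INSERT INTO Customer_audit
    VALUES (:new.Customer_ID, :new.Cname, :new.Caddress, :new.Ccity,
            :new.Cstate, :new.Czip, :new.Cphone, user, sysdate, 'N', v_type);
  END IF;
END;
/
```

For an UPDATE this writes two rows (one 'O', one 'N'); a DELETE writes only the old values and an INSERT only the new ones, which matches the behavior described below.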
112 Richard Earp and Sikha Bagui
To test the trigger, we first begin by backing up the Customer table one
more time. We will restore the original table when we are finished. Here is
the result of test:
SQL> l
1 CREATE TABLE Customer_audit
2 (Customer_ID NUMBER(3),
3 Cname VARCHAR(20),
4 Caddress VARCHAR(30),
5 Ccity VARCHAR(20),
6 Cstate CHAR(2),
7 Czip CHAR(5),
8 Cphone CHAR(10),
9 CHANGED_BY VARCHAR(20),
10 CHANGED_WHEN DATE,
11 OLD_OR_NEW CHAR(1),
12* TYPE_OF_CHANGE CHAR(1));
SQL> @Create_customer_audit
Table created.
SQL> @Ccat
Trigger created.
Table created.
Using the previously stored script for viewing the contents of the
Customer table:
SQL> @Customer_query
1 row created.
Then, an UPDATE:
1 row updated.
The deleted customer 340 may be noted with no new values (just the old
ones):
There are two rows in the Audit table for updates, one before the
UPDATE with Old values and one after the UPDATE with New values.
SQL> @customer_query
5 rows deleted.
5 rows created.
SQL> @Customer_query
The triggers we have illustrated are AFTER triggers used for auditing
after-the-fact. We defined our CREATE TABLEs with CONSTRAINTs to
ensure integrity. Why not ensure integrity with a BEFORE trigger instead of
using CONSTRAINTs?
The answer is, it could be done. We will illustrate a “naked” CREATE
TABLE with no CONSTRAINTs and manage the integrity with a BEFORE
trigger. Here is an example:
We have a table Sale with an attribute Amount which must be between
zero and 10.
In the definition of a table called Sale, we include a CONSTRAINT like
this:
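Neither definition is reproduced above; a sketch of both forms (names assumed) might be:

```sql
-- Sketch. Constraint version (use this OR the trigger version, not both):
CREATE TABLE Sale
  (Amount NUMBER CONSTRAINT amount_range CHECK (Amount BETWEEN 0 AND 10));

-- "Naked" version: no CONSTRAINT; a BEFORE trigger enforces the same rule.
CREATE TABLE Sale (Amount NUMBER);

CREATE OR REPLACE TRIGGER Sale_amount_check
BEFORE INSERT OR UPDATE ON Sale
FOR EACH ROW
BEGIN
  IF :new.Amount NOT BETWEEN 0 AND 10 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Amount must be between 0 and 10');
  END IF;
END;
/
```

With the trigger version, an offending INSERT or UPDATE fails with the error message rather than the constraint-violation message.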
Before we get carried away writing triggers for the many things that can
happen, there are other considerations regarding them. First of all, for large
databases, there may well be a performance problem with triggers. Speed of
execution of SQL is often an issue, particularly when the number of users and
the size of the database are large.
Next, if a person writes several triggers, it is possible that one trigger
causes another to fire -- a situation known as “cascading triggers.” [8.3]
We recommend triggers be used for auditing and perhaps for gathering
statistics on transactions. They may be used for dealing with business rules
so complex that they cannot be handled in any other way. In any case,
triggers may be disabled and re-enabled, and performance checked. To disable
a trigger, the SQL statement would be:
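The statement itself is not shown above; its form is as follows (trigger_name is a placeholder):

```sql
ALTER TRIGGER trigger_name DISABLE;
ALTER TRIGGER trigger_name ENABLE;   -- to turn it back on
```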
Ex 8.1.
Using the model in Section 8.2, create audit triggers for each of the
tables we have created so far for Hardware_City: Vendor, Product, Buy
(the intersection of Product and Customer), and Supply (the intersection
of Vendor and Product).
Ex 8.2.
Cause the audit triggers created in Ex 8.1 to be fired using at least one
DML command on each table and display the result set of the audit-checking
tables.
Ex 8.3.
Create a small table called Permission with the following attributes:
Now suppose there were a business rule like this: Before any table in the
database could be updated, a permission to do so must be granted by the
owner of the table. For example, suppose Sam, a Level 3 user wanted to
update a quantity in the Vendor table. Sam would have to approach Van and
ask permission to do so. Van would then enter this data into the Permission
table.
Ex 8.4.
Create a table called Cats (Cat_name). Populate the table with four
rows. Create the statement level framework we described in section 8.5.
Execute a delete of one row and show the before and after table. Then,
execute a delete of all rows and show the before and after tables.
REFERENCES
[8.1] Practical Guide to Using SQL in Oracle, 3rd Edition, Earp, Richard
W., Bagui, Sikha S., Taylor and Francis Publishing, 2021.
[8.2] “A Fresh Look at Auditing Row Changes,” by Connor McDonald,
Oracle Magazine Online, March 2016, http://www.oracle.
120 Richard Earp and Sikha Bagui
com/technetwork/issue-archive/2016/16-mar/o26performance-
2925662.html.
[8.3] See https://docs.oracle.com/cd/B19306_01/server.102/b14220/
triggers.htm in which a section, “Some Cautionary Notes about
Triggers,” discusses cascading triggers.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
17-JAN-21 11.30.37.819000 PM -06:00
SQL> @Product_query
USER is “PAT2”
SQL>
Multiple Users and Transactions 123
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.33.04.882000 AM -05:00
Now imagine Morgan logs on about the same time and also executes the
same script. The result will be the same for Morgan other than the
timestamp.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.34.27.665000 AM -05:00
Now suppose Pat changes the price of Citronella from 6.24 to 6.05.
From Pat2:
1 row updated.
After this UPDATE by Pat, Morgan executes the script again:
SQL> @Product_query
SQL> SHOW USER
USER is “Morgan3”
SQL> SELECT Current_timestamp FROM Dual;
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.39.12.108000 AM -05:00
What just happened? Pat updated the Product table, but Morgan sees
the value unchanged. Why? When Pat connected to the database, a
transaction began. Pat then initiated an UPDATE on the Product table; Pat
did so as part of Pat’s transaction. Whatever changes Pat makes in the
Product table will not be seen by any other user until Pat concludes the
transaction. How does Pat conclude the transaction? Three ways:
(1) Pat issues a COMMIT command.
(2) Pat issues a command that carries an implied COMMIT.
(3) Pat issues a ROLLBACK command.
Transaction-ending situations (1) and (3) are said to be explicit because
Pat explicitly did something to end the transaction. Situation (2) is an
implied transaction end.
A command with an implied COMMIT is any DDL (Data Definition
Language) command. A DDL statement changes the structure of the
database. CREATE, ALTER, and DROP commands are examples of DDL
statements.
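For instance (a sketch; the scratch table is our invention), an uncommitted UPDATE becomes permanent the moment any DDL statement executes:

```sql
UPDATE Product SET Price = 6.05 WHERE Pname = 'Citronella';  -- not yet committed
CREATE TABLE Scratch (x NUMBER);  -- DDL: implied COMMIT; the UPDATE is now permanent
```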
Scenario 1:
User A connects, a transaction X begins.
User A disconnects from the database with no transaction ending
command, transaction X ends.
Scenario 2:
User A connects, a transaction Y begins.
User A does some things, then issues a COMMIT command, transaction
Y ends.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.41.05.698000 AM -05:00
Now, Morgan issues the same command as before but after Pat has
ended the transaction.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.41.32.883000 AM -05:00
1 row updated.
From Pat2:
1 row updated.
From Danny1:
Connected.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.53.10.793000 AM -05:00
SQL> SELECT
/* This script from Burleson Consulting, [9.1].
*/
t1.sid,
t1.username,
t2.xidusn,
t2.used_urec,
t2.used_ublk
FROM
v$session t1,
v$transaction t2
WHERE
t1.saddr = t2.ses_addr;
Gives:
From Pat2:
Rollback complete.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.57.36.255000 AM -05:00
From Danny1:
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
20-JUL-21 05.58.02.158000 AM -05:00
SQL> SELECT
t1.sid,
... same script as above
SQL> /
no rows selected.
But instead of this, Pat2 writes the statement, gets interrupted, and instead
of finishing it puts in a semicolon before the WHERE clause and executes
this statement:
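Reconstructed as a sketch (the intended WHERE clause is unknown), the mishap looks like this:

```sql
-- The semicolon ends the statement before the WHERE clause is typed,
-- so every row is deleted.
DELETE FROM Product;
```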
5 rows deleted.
SQL> @Product_query
no rows selected.
SQL> ROLLBACK;
Rollback complete.
SQL> @Product_query
1. We note for Saw Blades/Tools, we have two vendors and currently
have 100 Saw Blades on hand.
2. The vendor named Hardy Hardware has agreed to sell us 10 more
packages of Saw Blades for a price of 8.35 per package.
3. We sell 5 packages of Saw Blades to Penny Penn.
7. The Product table should have a total of QOH+5 Saw Blades at this
point.
8. The Buy table should show one more entry for Penny Penn.
9. The transaction nears conclusion. It is incumbent upon Pat and Van
to check the results of this updating. If there is a problem in checking and
balancing results, a ROLLBACK is executed. If everything balances and
quantities and entries are correct, a COMMIT is executed.
10. With either COMMIT or ROLLBACK, the transaction ends.
Realizing this transaction involves possible pitfalls, Pat can modify the
transaction plan so that, if there is a problem, some intermediate ROLLBACK
will not destroy the whole plan.
Pat decides to include some SAVEPOINTs in the plan. The plan as
outlined above now includes new actions: (Assume Hardy Hardware is in
the Vendor table.) Here is the complete plan:
1. We note for Saw Blades/Tools, we have two vendors and currently
have 100 Saw Blades on hand.
1a. Pat connects to the database and a transaction begins.
1b. Pat tells Van to put Hardy Hardware’s info into the Vendor
table.
1c. Pat checks the QOH (quantity on hand) from the Product
table for Saw Blades.
1d. The Supply table is checked to see how many entries we
have for Hardy Hardware.
2. The vendor named Hardy Hardware has agreed to sell us 10 more
packages of Saw Blades for a price of 8.35 per package.
2a. An entry in the Supply table is inserted to reflect the
purchase of Saw Blades from Hardy Hardware. The DML
part of the transaction begins here.
2b. A SAVEPOINT, Save1, is created.
3. We sell 5 packages of Saw Blades to Penny Penn.
3a. The Product table is queried to see it now has QOH+10 total
Saw Blades.
3b. The Supply table is queried to show we have one more entry
for Hardy Hardware.
Gives:
Qty on hand
-----------
110
1b.
1 SELECT v.Vendor_id, COUNT(*)
2 FROM Supply s, Vendor v
3 WHERE s.Vendor_id = v.Vendor_id
4 AND v.Vname LIKE ‘Hardy%’
5* GROUP BY v.Vendor_id
SQL> /
Vendor_ID COUNT(*)
---------- ----------
400 3
1 row created.
12 rows selected.
Gives:
QOH
----------
110
Gives:
Vendor_ID COUNT(*)
---------- ----------
400 3
1 row created.
Savepoint created.
5 rows updated.
Whoa! Pat updates the whole Product table rather than just the entry for
Saw Blades. Pat forgot to include a WHERE clause in the UPDATE
command. What to do?
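One recovery (a sketch; the SET clause and WHERE predicate are assumptions) is to roll back to the savepoint taken just before the mistake, then re-issue the UPDATE with its WHERE clause:

```sql
-- Undo everything since Save1, then redo the UPDATE correctly.
ROLLBACK TO SAVEPOINT Save1;

UPDATE Product
   SET QOH = QOH + 10
 WHERE Pname LIKE 'Saw Blades%';
```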
Rollback complete.
1 row updated.
Gives:
QOH
----------
120
Savepoint created.
Gives:
Vendor_ID COUNT(*)
---------- ----------
400 4
Savepoint created.
Gives:
Customer_ID COUNT(*)
----------- ----------
350 1
1 row created.
Savepoint created.
1 row updated.
Savepoint created.
Gives:
Customer_ID COUNT(*)
----------- ----------
350 2
SQL> ROLLBACK;
Rollback complete.
Ex 9.1.
1. To simplify this exercise, create two scripts:
a) Time_now.sql:
SHOW USER
SELECT Current_timestamp FROM Dual;
b) CheckVtables.sql:
SELECT
t1.sid,
t1.username,
t2.xidusn,
t2.used_urec,
t2.used_ublk
FROM
v$session t1,
v$transaction t2
WHERE
t1.saddr = t2.ses_addr;
When you want to know who a user is and what time it is, execute
Time_now (@Time_now). When you want to see what's in
V$TRANSACTION, connect as Danny1 and execute CheckVtables
(@CheckVtables).
Before doing exercises on the main tables in Hardware_City, it is
imperative to back up all tables. Each person responsible for a table needs
to be asked whether a backup exists for it. While the idea of rolling
back a transaction seems foolproof, even foolproof situations in databases
tend to somehow find a way to be fouled.
Ex 9.2.
Assume you are Van. Create a transaction plan to add a vendor to the
database. Then add an entry to the Product table showing your new vendor
supplied something. Have your transaction plan approved by your instructor.
If your instructor approves the plan, connect to Oracle as Danny1, and at
another station log on as Van. As you execute each step of your transaction
plan, run the scripts we presented above (Danny1: @Time_now and
@CheckVtables; Van2: @Time_now).
Ex 9.3.
Run the scripts from each account.
CONCURRENCY
Illustration 1
Time Action
00:00 Pat connects to Oracle, starts a session, and starts a
transaction.
00:01 Pat updates the Product table -- the Product table is
locked by Oracle (sort of.. stay tuned) before the update.
00:02 Pat disconnects from Oracle -- Pat’s transaction ends,
Pat’s session ends.
Illustration 2
00:05 Pat connects to Oracle, starts a session, and starts a
transaction.
00:06 Pat updates a row in the Product table -- the row in the
Product table is locked before the update takes place.
00:07 Pat executes the COMMIT command -- Pat’s transaction
ends.
Since Pat didn’t disconnect from Oracle, Pat’s session is still in force;
but, a new transaction begins for Pat due to the COMMIT.
Had Pat executed a ROLLBACK command instead of COMMIT, the
result of the above would be the same; but the Product table would be
unchanged.
There are several undesirable things that would be possible if there were
no locking (no concurrency mechanism) in place. The way concurrency is
handled in Oracle is usually discussed in terms of "isolation levels" (as
defined in the SQL92 database standard). Isolation levels in Oracle control
what gets locked and for how long. There are four isolation levels involving
a database transaction [10.5]:
Read Committed
Serializable
Read Only
Read uncommitted (not supported by Oracle).
Concurrency 149
Illustration 3
00:00 Pat connects to Oracle -- Pat starts a session and starts a
transaction.
00:01 Pat updates a quantity in Product 1000 in the Product
table.
00:02 Chris reads the quantity in Product 1000 in the Product
table.
00:03 Pat does a ROLLBACK, ending Pat's transaction and undoing
the update.
If this were possible in Oracle, the data Chris read would be the result
of a “dirty read.” Chris could have read uncommitted data from the Product
table because Pat’s transaction was not yet committed. The isolation level
allowing this dirty read would be: isolation level read uncommitted, which
is not supported in Oracle.
A non-repeatable or fuzzy read illustration:
Illustration 4
00:05 Pat connects to Oracle and starts a session.
00:06 Chris connects to Oracle and starts a session.
00:07 Chris reads the quantity of Product 1000 (read #1).
00:08 Pat updates the quantity in Product 1000 in the Product
table and COMMITs.
00:09 Chris reads the quantity of Product 1000 again (read #2)
and gets a different value.
Illustration 5
00:15 Pat connects to Oracle and starts a session.
00:16 Chris connects to Oracle and starts a session.
00:17 Chris issues a command summing the quantities of all
Products (query #1).
00:18 Pat inserts Product 2001 into the Product table and
COMMITs.
00:19 Chris re-executes the "sum the quantities" query and gets
a different result (query #1a).
In this case, Chris's query identified a set of rows to get a result. Then,
Pat's insert added a row to that set. When Chris re-executes the "sum the
quantities" query, a different result appears -- a phantom read. If the
isolation level read committed were in effect, this anomaly would be possible.
So what does Chris do about the two possible errors in the default state
of Oracle transaction handling? Recall, the isolation level read committed is
the Oracle default. Therefore, to prevent a fuzzy read or phantom read, some
other mechanism would be required of the user, such as:
Issuing LOCK commands is very specific and tends to override the
Oracle transaction system. The more usual approach to controlling
concurrency is to set isolation levels -- specifically, we will
use SET TRANSACTION ... with an awareness of (a) Oracle defaults and
(b) what other users may need to do.
SET TRANSACTION can be used at either the transaction level or the
session level.
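For example (statement forms only; the particular levels chosen here are illustrative):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;      -- this transaction only
SET TRANSACTION READ ONLY;                         -- this transaction only
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;  -- rest of the session
```

A SET TRANSACTION must be the first statement of the transaction, which is why the exercises below bracket it with COMMITs.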
10.3. LOCKING
EXCLUSIVE mode means other users may see the contents of a table
but may not perform DML on it. Some writers say an Exclusive lock will
not allow anyone to see the table being locked; but in Oracle, this is not so.
SHARE mode supposedly means other users may view the contents of
a table while it is locked. One might think this would mean an intermediate
result might be visible, but it is not. For example, if we proceeded with the
following scenario with no explicit locking, we see that Oracle takes care of
locking (at the row level):
Sign on as Pat2 and Van2 on two different platforms.
In Pat2, table Product is updated.
Van2 queries Product, but Van2 does not see the Pat2 update until Pat2
ends transaction.
Van2 sees the un-updated version of Product (as in READONLY
isolation).
Pat2 COMMITs or ROLLBACKs and Van2 can now see the table with
Pat2’s update in it if Product is re-queried by Van2.
Now suppose Pat does some explicit locking:
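The locking command itself is not shown above; a sketch (the mode is chosen for illustration) would be:

```sql
LOCK TABLE Product IN EXCLUSIVE MODE;
-- Van's UPDATE on Product now waits until Pat COMMITs or ROLLBACKs.
```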
Suppose Pat did not want Van’s update to succeed. Can Pat issue a
ROLLBACK to undo what Pat did? No, Pat2 would have to use the backup
of Product to recover unless Van2 issues a ROLLBACK.
All of this so far is the same as Pat not issuing a LOCK TABLE
command at all. What is different is that if no LOCK TABLE is issued, Pat
can change a row and Van can change a different row. Oracle's locking
system is triggered by Pat issuing a DML command (UPDATE,
DELETE, or INSERT). The lock applied by Oracle is a ROW LEVEL
LOCK. From the Oracle website [10.3]:
Row Locks (TX) -- A row lock, also called a TX lock, is a lock on a
single row of a table. A transaction acquires a row lock for each row
modified by one of the following statements: INSERT, UPDATE, DELETE,
MERGE, and SELECT ... FOR UPDATE. The row lock exists until the
transaction commits or rolls back.
When a transaction obtains a row lock for a row, the transaction also
acquires a table lock for the table in which the row resides. The table lock
prevents conflicting DDL operations from overriding data changes in a current
transaction.
If the Product table is explicitly locked by Pat at the table level, any
change Van tries to make will wait for Pat to unlock the table.
If no explicit locking by Pat is in force and if Van has UPDATE
privileges on Product, Pat can change a row and Van can change some other
row because default Oracle locks are ROW locks. What if Van tries to
change the same row Pat is updating? Van will wait just like when Pat issued
a LOCK TABLE command.
So what does this prove? Explicit locking prevents another user from
updating the table at all until the explicit lock is released. Oracle applies row
level locking when someone updates a table. As we saw, Pat updates a row,
Van can update another row. If Van tries to update the row Pat is working
on and has not committed, Van will wait.
There are ways to disable locks on tables to enact a more complicated
concurrency control method. These more complicated techniques come into
play when there are enough transaction conflicts to slow the system
significantly. In the last chapter, we showed a transaction in slow motion,
one statement at a time. In reality, a series of transactions would take place
in a script or a procedure, and triggers or the procedure itself would check
for consistency problems.
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.33.02.084000 PM -06:00
SQL> @qqC
SQL> @qqP
SQL> @Ti
USER is “PAT2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.33.58.197000 PM -06:00
1 row updated.
SQL> @ti
USER is “PAT2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.34.30.147000 PM -06:00
SQL> @ti
USER is “PAT2”
Here, Pat sees the un-updated Customer table. Chris updated the
Customer table at 08.35.07, but Pat does not see the update because Chris’s
transaction is still going on.
SQL> @qqC
SQL> @qqP
At 08.35.07 Chris updated the Customer table and hence locked the
Customer_id = 330 row.
At 08.36.35 Chris attempts to update the Product table which is locked
by Pat so Chris will wait for Pat to end transaction.
At 08.36.41 Pat issues an UPDATE on the Customer table,
Customer_id = 330
SQL> @ti
USER is “PAT2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.38.47.222000 PM -06:00
SQL> @ti
USER is “PAT2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.33.15.313000 PM -06:00
SQL> @qqC
SQL> @qqP
SQL> @ti
USER is “Chris2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.33.41.162000 PM -06:00
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.34.51.098000 PM -06:00
We know that at 08.33.58 Pat updates the Product table. So, what Chris
is seeing is the un-updated Product table because Pat has not yet ended Pat’s
transaction.
SQL> @qqP
SQL> @ti
USER is “Chris2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.35.07.010000 PM -06:00
SQL> @qqC
SQL> @qqP
SQL> @ti
USER is “Chris2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.35.57.835000 PM -06:00
1 row updated.
SQL> @qqC
SQL> @ti
USER is “Chris2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.36.35.400000 PM -06:00
SQL> @qqP
Pid Pname Ptype Qty on hand Price Type
----- ------------------ ----------------- ----------- --------- -----------------
2000 Paint Buckets Paint 400 3.95 ITEM
SQL> @Ti
USER is “Chris2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.38.09.766000 PM -06:00
1 row updated.
SQL> @Ti
USER is “Chris2”
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
22-DEC-21 08.39.18.344000 PM -06:00
SQL> @QqC
SQL> @Qqp
Chris was able to update the Product table and the Customer table.
SQL> ROLLBACK;
Rollback complete.
Chris does a ROLLBACK and restores both the Customer and Product
tables.
SQL> @Qqc
SQL> @Qqp
Ex 10.1.
Use two different host sign-ons to connect to Oracle as two people.
The better way to do this is to use two different computers. Let one person
be Van and the other Pat.
Before you begin, have Pat GRANT UPDATE on Product to Van and
have Van GRANT UPDATE on Vendor to Pat.
Let Pat update the Product table, Product_id = 1000, changing the Price
to 9.70.
Let Van update Vendor, Vendor_ID = 200, changing Vcity to Milton.
Let Pat update Vendor, Vendor_ID = 200, changing the company name
to Pumps R Us.
Let Van update Product, Product_id = 1000, changing QOH to 105.
This set of transactions will deadlock. As you do these commands for
Pat and Van, use the Time_now.sql script from Chapter 9 and SHOW USER
before you do any command. When the two transactions are complete,
ROLLBACK both to end the transactions. One will not need to be rolled
back because Oracle already did it when the deadlock occurred. Which one
does not need a ROLLBACK? Since each transaction involved two
Ex 10.2.
Modify the above scenario and re-do the exercise, but have Pat and Van
explicitly LOCK the tables they intend to use first in SHARE mode and then
in EXCLUSIVE mode. What changes took place in the execution of the
commands?
Ex 10.3.
Modify the above scenario and re-do the exercise, but this time have Pat
and Van issue SET TRANSACTION commands, one for each isolation level
and granularity, i.e., once for transaction level and once for session level for
each of the three isolation levels. Notice what effect each SET
TRANSACTION has on the sequence of events leading to deadlock with no
locking or transaction sets.
As you do this exercise, set isolation levels within each person’s
transaction and then again as a SET at the session level. When executing
SET TRANSACTION or ALTER SESSION SET TRANSACTION
commands, you need to do so like this:
COMMIT;
SET TRANSACTION ISOLATION LEVEL ...
Do whatever
COMMIT;
The reason for the COMMITs is to ensure you are starting a fresh
transaction when you are testing. Of course, you need to think about what
you have been doing and be sure COMMIT is appropriate for your work.
As you do these transactions as (say) Pat, note what Van is allowed to
do and not do. Carefully note when Van’s transaction is waiting and what
happens when Pat COMMITs (or does a ROLLBACK).
REFERENCES
[10.1] https://en.wikipedia.org/wiki/ACID.
[10.2] Database Concepts, Chapter 13, Data Concurrency and Consistency,
https://docs.oracle.com/cd/B19306_01/server.102/b14220/consist.htm.
[10.2] LOCK TABLE statement,
https://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj40506.html.
[10.3] Database SQL Language Reference, Automatic Locks in DML
Operations, https://docs.oracle.com/cloud/latest/db112/SQLRF/ap_locks001.htm#SQLRF55502.
[10.4] A higher level of granularity than rows and tables is not experienced
by users -- the database itself can be locked via a process called
“quiesce.” The database can be put into a quiescent state by two
users SYS and SYSTEM. These two users are created when the
database is created and have privileges even more powerful than that
of the DBA. These two users can do backup and recovery operations
of the entire database as well as system upgrades. If such operations
are required, then the database itself is quiesced; and basically
everyone is locked out until the backup or upgrade is dealt with.
[10.5] “Oracle Isolation Level Tips,” Burleson Consulting,
http://dba-oracle.com/t_oracle_isolation_level.htm, retrieved April
25, 2020.
[10.6] Database SQL Language Reference,
https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10005.htm#SQLRF01705.
[10.7] “Oracle Deadlocks,” Burleson Consulting,
http://www.dba-oracle.com/t_oracle_deadlock.htm, retrieved June
17, 2020.
Transactions: http://www.dba-oracle.com/t_v_transaction.htm
LOCK TABLE:
https://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj40506.html
ACID: https://en.wikipedia.org/wiki/ACID
Transaction isolation levels: https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/transaction-isolation-levels
ABOUT THE AUTHORS
Dr. Richard Walsh Earp is the former Chair of and a former Associate
Professor in the Department of Computer Science and is the former Dean of
the College of Science and Technology at the University of West Florida in
Pensacola, Florida, USA. He has taught a variety of Computer Science
courses including Database Systems and Advanced Database Systems. Dr.
Earp has authored and co-authored several papers and has co-authored
several books with Dr. Bagui. Some of the books co-authored with Dr. Bagui
include: Learning SQL: A Step-by-Step Guide using Oracle, Database
Design Using ER Diagrams, Learning SQL: A Step-by-Step Guide using
Access, SQL Server 2014: A Step by Step Guide to Learning SQL, A
Practical Guide to Using SQL in Oracle.
Dr. Sikha Saha Bagui is Professor and Askew Fellow in the Department
of Computer Science at the University of West Florida, Pensacola, Florida.
Dr. Bagui is active in publishing peer-reviewed journal articles in the areas
of database design, data mining, Big Data, pattern recognition, and statistical
computing. Dr. Bagui has worked on funded as well as unfunded research
projects and has over 75 peer-reviewed journal publications. Dr. Bagui has
also co-authored several books on Oracle SQL, SQL Server, Access SQL,
and Database Design with Dr. Richard Earp. Dr. Bagui also serves as
Associate Editor and is on the editorial board of several journals.
153, 154, 155, 156, 157, 158, 159, 161, 63, 65, 69, 77, 78, 81, 83, 84, 86, 87, 89,
162,163, 164, 165 93, 94, 98, 101, 105, 107, 118, 121, 145,
table_privileges, 78, 79 147, 151, 153, 154, 156, 167
tablespace, x, 43, 50, 51, 54, 56, 60, 75, 76
temporary user, 46, 60
W
third, ix, 2, 4, 5, 19, 24, 29
to_date, 90, 91, 92, 135, 137, 139
where, x, 2, 7, 8, 9, 11, 12, 14, 15, 21, 22,
transaction, 90, 91, 93, 102, 121, 124, 125,
27, 30, 34, 43, 47, 51, 59, 75, 77, 79, 83,
126, 127, 128, 129, 130, 131, 132, 133,
96, 97, 99, 103, 110, 114, 117, 123, 124,
134, 135, 141, 142, 143, 145, 146, 147,
126, 127, 128, 129, 130, 131, 134, 135,
148, 149, 150, 151, 152, 153, 154, 155,
136, 137, 138, 140, 141, 146, 156, 157,
156, 157, 159, 160, 161, 163, 165, 166,
158, 159, 162, 163, 165
168
with admin option, 48, 49, 52, 61, 62, 66, 67
transitive dependency, 21
wrap, 55, 56, 57, 58
wrap on, 55, 57