
Oracle 10g: A Simple Security Approach

A detailed discussion on the setup and administration of database access and security is beyond the
scope of this document.

This document examines how to simplify the setup and administration of database security in an ever-changing / volatile environment.

The majority of the tasks an Oracle DBA performs in maintaining security and database access are:

0100 User account management

0101 Creating users, roles, and profiles.

Granting table level / system level privileges to users / roles.

Dropping / locking users. Password management.

0200 Special requirements

0201 Protecting sensitive / confidential data using column level / row level security
by implementing Virtual Private Database or custom-built views.

0202 Granting Truncate privileges on tables in others’ schemas.

The creation and maintenance of database access and security grows more complex and demanding every day. Without proper planning and a sound approach, the DBA ends up in permanent fire-fighting mode.

0100 User account management

The creation and re-creation of tables in Development/ Integration environments is an ongoing process.

The creation of new schemas and tables in the Production environment is equally continuous, as new code / modules are migrated from the Development / Integration environments.

In either case, not all of the users in the database can access the newly created / re-created objects.

Only users granted the DBA role or the SELECT ANY TABLE system privilege can access the newly created / re-created tables in a database without any re-granting of privileges.

For security reasons, we usually cannot grant the SELECT ANY TABLE system privilege or the DBA role to ordinary users, so imagine the trouble of repeatedly granting the necessary privileges on the newly created / re-created tables to users / roles.

Tracking all the newly created / re-created tables and granting permissions on them to different users / roles is a full-time DBA job. To address this issue and simplify the process of granting privileges, create three roles for each schema to manage the different table level permissions required by the users. (Currently, the existing schemas in our database are TOM, DICK and HARRY.) Alternatively, we can reduce or increase the number of roles in the database, depending on the operational requirements and ease of administration.
The details of roles and their granted table level privileges are as follows:

Role Name      Privilege
-------------  --------------------------------------------------------------------
TOM_SELECT     SELECT PERMISSION ON ALL TABLES IN TOM'S SCHEMA
TOM_SINSERT    SELECT/INSERT PERMISSIONS ON ALL TABLES IN TOM'S SCHEMA
TOM_SIUD       SELECT/INSERT/UPDATE/DELETE PERMISSIONS ON ALL TABLES IN TOM'S SCHEMA
DICK_SELECT    SELECT PERMISSION ON ALL TABLES IN DICK'S SCHEMA
DICK_SINSERT   SELECT/INSERT PERMISSIONS ON ALL TABLES IN DICK'S SCHEMA
DICK_SIUD      SELECT/INSERT/UPDATE/DELETE PERMISSIONS ON ALL TABLES IN DICK'S SCHEMA
HARRY_SELECT   SELECT PERMISSION ON ALL TABLES IN HARRY'S SCHEMA
HARRY_SINSERT  SELECT/INSERT PERMISSIONS ON ALL TABLES IN HARRY'S SCHEMA
HARRY_SIUD     SELECT/INSERT/UPDATE/DELETE PERMISSIONS ON ALL TABLES IN HARRY'S SCHEMA
Create the roles for each schema.
Create role tom_select;
Create role tom_sinsert;
Create role tom_siud;

Create role dick_select;


Create role dick_sinsert;
Create role dick_siud;

Create role harry_select;


Create role harry_sinsert;
Create role harry_siud;

Grant create session to these roles:


Grant create session to tom_select;
Grant create session to tom_sinsert;
Grant create session to tom_siud;

Grant create session to dick_select;


Grant create session to dick_sinsert;
Grant create session to dick_siud;

Grant create session to harry_select;


Grant create session to harry_sinsert;
Grant create session to harry_siud;

This approach is manageable with only a few schemas in the database. With more schemas, the number of roles to be created quickly becomes unwieldy.
To further simplify the approach, you can have three roles for the entire database, like db_select,
db_sinsert and db_siud. Grant the necessary permissions on all of the schema tables in the database to
these three roles.

Another approach is to have multiple schemas grouped into different groups, depending on their
application, sensitivity, criticality. Create group1_select, group1_sinsert and group1_siud roles for each
group.

Whenever a new user is created, grant the necessary roles to the user.

Create a procedure that grants the appropriate level of permission on the newly created / re-created tables to each role. Schedule the procedure to run every day at 18:00 hrs so that users can access all of the tables created / re-created the previous day.

declare
begin
  for t1 in (select owner, table_name from all_tables
              where owner in ('TOM','DICK','HARRY')) loop
    execute immediate 'grant select on '||t1.owner||'.'||t1.table_name||
                      ' to '||t1.owner||'_SELECT';
    execute immediate 'grant select, insert on '||t1.owner||'.'||t1.table_name||
                      ' to '||t1.owner||'_SINSERT';
    execute immediate 'grant select, insert, update, delete on '||t1.owner||'.'||
                      t1.table_name||' to '||t1.owner||'_SIUD';
  end loop;
end;

This procedure grants the required permissions on the existing tables in the database to different roles
as per the security setup.

This procedure works fine if each schema in the database has three roles. However, it needs to be modified if the role setup is different, i.e. if there are three roles for the entire database, or if the schemas are put into different groups with three roles per group.

At any time, the table level permissions on all the schema objects can be granted/re-granted to different
roles by executing the procedure.
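To schedule the grants, the block above can be wrapped in a stored procedure and registered with the scheduler. The sketch below assumes the block has been created as a procedure named GRANT_TABLE_PRIVS_PROC; both that name and the job name are illustrative, not part of the original setup.

begin
  dbms_scheduler.create_job(
    job_name        => 'GRANT_TABLE_PRIVS_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'GRANT_TABLE_PRIVS_PROC',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=18;BYMINUTE=0;BYSECOND=0',
    enabled         => true,
    comments        => 'Re-grant table privileges to the *_SELECT / *_SINSERT / *_SIUD roles');
end;
/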

Grant the roles to users at the time of the users’ creation.

Imagine the number of scripts I would have to execute in the database if I didn't have a setup like this
and I had to grant various table level permissions on all created/ re-created schema tables to all the
users.

If required, create a procedure to grant various roles to users.

declare
begin
  for u1 in (select username from all_users
              where username = upper('&&grantee')) loop  /* supply the grantee username here */
    for p1 in (select username from all_users
                where username in ('TOM','DICK','HARRY')) loop
      /* uncomment the role grants that are needed and comment out the rest */
      --execute immediate 'grant '||p1.username||'_select to '||u1.username;
      --execute immediate 'grant '||p1.username||'_sinsert to '||u1.username;
      execute immediate 'grant '||p1.username||'_siud to '||u1.username;
    end loop;
  end loop;
end;

Wherever specific privileges are needed, grant the special privileges to specified users separately.

For password management, create a profile with the required password limits and assign it to the users in the database.

A sample profile:

Create profile apps_developer limit
  failed_login_attempts 3     -- lock the account after 3 failed logins
  password_lock_time    3     -- keep the account locked for 3 days
  password_life_time    30    -- passwords expire after 30 days
  password_grace_time   3     -- 3-day grace period after expiry
  password_reuse_time   150   -- a password can only be reused after 150 days
  password_reuse_max    5;    -- and only after 5 intervening password changes
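To assign the profile, alter the existing users; a minimal example (SCOTT is used here purely for illustration):

Alter user scott profile apps_developer;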

Of course, if only one user were created in the database, that single user would own all of the schema objects; and if all users connect to the database as that single schema owner, there is no need for any further simplification.

In Part II, I will be dealing with some of the data models for accomplishing column level and row level
data security. Please refer to Oracle's Metalink for a detailed discussion on Virtual Private Database
setup.

I will also elaborate on how to grant TRUNCATE permissions on third-party tables to users -- the traps a DBA should avoid while granting such permissions and the relevant best practices.

Though the title says “A Simple Security Approach Part II”, the topics covered in this article are
COMPLEX in nature. Welcome to the world of complexities in database access and security.

This document describes setting up column level / row level security of data through custom-built views, and offers a critical review of the creation and maintenance of procedures for truncating tables in third-party schemas.

A detailed discussion on the setup and implementation of Virtual Private Database / Transparent Data
Encryption is beyond the scope of this document.

Please refer to Oracle's Metalink for a detailed discussion of Virtual Private Database and Transparent Data Encryption setup.
0200 Special requirements

0201 Protecting sensitive / confidential data using column level / row level security through custom built
views.

0202 Granting truncate privileges on tables in others’ schemas.

Managing the specific and special requirements of application developers is always a challenge. The present-day DBA needs to know how practicable these requirements are in the Oracle environment, and must weigh the merits and demerits of implementing them along with their impact on the application.

0201 Protecting sensitive / confidential data using column level / row level security through custom built
views

For designing and modeling the views, I work with the scott.emp table. Assume that every user logging into the database has his or her employee details available in scott.emp. All users are granted select permission on the view. Any user whose employee details are not available in scott.emp gets 0 rows back from the custom-built view.

Connect as scott

select * from scott.emp;


EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
----- ----- ---------- --------- ---------- --------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10

14 rows selected.
Create a user for accessing the data from our custom built view.
Create user blake identified by blake account unlock;
Grant create session to blake;

Imposing ROW Level security:

Version I – Only the login user’s details are displayed.

create or replace view emp_jp_view as
select a.empno, a.ename, a.job, a.mgr, a.hiredate, a.sal, a.comm, a.deptno
  from scott.emp a,
       (select sys_context('USERENV','SESSION_USER') username from dual) b
 where a.ename = b.username;

Grant select on emp_jp_view to blake;


connect blake/blake
select * from j2000.emp_jp_view;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
------- ----- ------- ---- --------- ---- ----- --------
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30

Version II – Details of the login user and users who work for this user are also displayed

create or replace view emp_jp_view as
select a.empno, a.ename, a.job, a.mgr, a.hiredate, a.sal, a.comm, a.deptno
  from scott.emp a,
       (select ename from scott.emp
         connect by prior empno = mgr
         start with ename in (select sys_context('USERENV','SESSION_USER') username
                                from dual)) b
 where a.ename = b.ename;

connect as blake
select * from j2000.emp_jp_view;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7900 JAMES CLERK 7698 03-DEC-81 950 30

6 rows selected.

Imposing COLUMN Level security:

Version III – Only an authorized user (KING) sees the sensitive columns; for everyone else they are masked.

create or replace view emp_jp_view as
select
  a.empno,
  decode(b.username,'KING',a.ename,'***') ename,
  decode(b.username,'KING',a.job,'***') job,
  decode(b.username,'KING',a.mgr,0) mgr,
  a.hiredate,
  decode(b.username,'KING',a.sal,0) sal,
  a.comm,
  a.deptno
from scott.emp a,
     (select sys_context('USERENV','SESSION_USER') username from dual) b;

As per the logic implemented in the view, only the user KING is authorized to see the sensitive and confidential columns of scott.emp. A user logging in as KING sees all of the columns; all other users see EMPNO, HIREDATE, COMM and DEPTNO, while the remaining columns are masked.

The logic can also be extended with a list of employees who are allowed to see all the columns of the view.

Connect as blake
Select * from j2000.emp_jp_view;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO


---------- ---------- --------- ----- --------- ----- ---------- ----------
7369 *** *** 0 17-DEC-80 0 20
7499 *** *** 0 20-FEB-81 0 300 30
7521 *** *** 0 22-FEB-81 0 500 30
7566 *** *** 0 02-APR-81 0 20
7654 *** *** 0 28-SEP-81 0 1400 30
7698 *** *** 0 01-MAY-81 0 30
7782 *** *** 0 09-JUN-81 0 10
7788 *** *** 0 19-APR-87 0 20
7839 *** *** 0 17-NOV-81 0 10
7844 *** *** 0 08-SEP-81 0 0 30
7876 *** *** 0 23-MAY-87 0 20
7900 *** *** 0 03-DEC-81 0 30
7902 *** *** 0 03-DEC-81 0 20
7934 *** *** 0 23-JAN-82 0 10
14 rows selected.

As the connected user is not KING, the sensitive columns of the view are masked.

Imposing ROW Level and COLUMN level security:

Version IV – All rows are displayed, but the sensitive columns are unmasked only in the login user's own row.

create or replace view emp_jp_view as
select
  a.empno,
  case when a.ename in (select sys_context('USERENV','SESSION_USER') from dual)
       then a.ename else '***' end ename,
  case when a.ename in (select sys_context('USERENV','SESSION_USER') from dual)
       then a.job else '***' end job,
  case when a.ename in (select sys_context('USERENV','SESSION_USER') from dual)
       then a.mgr else 0 end mgr,
  a.hiredate,
  case when a.ename in (select sys_context('USERENV','SESSION_USER') from dual)
       then a.sal else 0 end sal,
  a.comm,
  a.deptno
from scott.emp a;

View created.

Connect as blake
select * from j2000.emp_jp_view
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 *** *** 0 17-DEC-80 0 20
7499 *** *** 0 20-FEB-81 0 300 30
7521 *** *** 0 22-FEB-81 0 500 30
7566 *** *** 0 02-APR-81 0 20
7654 *** *** 0 28-SEP-81 0 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 *** *** 0 09-JUN-81 0 10
7788 *** *** 0 19-APR-87 0 20
7839 *** *** 0 17-NOV-81 0 10
7844 *** *** 0 08-SEP-81 0 0 30
7876 *** *** 0 23-MAY-87 0 20
7900 *** *** 0 03-DEC-81 0 30
7902 *** *** 0 03-DEC-81 0 20
7934 *** *** 0 23-JAN-82 0 10

Version V – All rows are displayed, but the sensitive columns are unmasked only for the login user and for those who report to that user.

Create or replace view emp_jp_view as
select
  a.empno,
  case when a.ename in (select ename from scott.emp
                         connect by prior empno = mgr
                         start with ename in
                           (select sys_context('USERENV','SESSION_USER') username from dual))
       then a.ename else '***' end ename,
  case when a.ename in (select ename from scott.emp
                         connect by prior empno = mgr
                         start with ename in
                           (select sys_context('USERENV','SESSION_USER') username from dual))
       then a.job else '***' end job,
  case when a.ename in (select ename from scott.emp
                         connect by prior empno = mgr
                         start with ename in
                           (select sys_context('USERENV','SESSION_USER') username from dual))
       then a.mgr else null end mgr,
  a.hiredate,
  case when a.ename in (select ename from scott.emp
                         connect by prior empno = mgr
                         start with ename in
                           (select sys_context('USERENV','SESSION_USER') username from dual))
       then a.sal else 0 end sal,
  a.comm,
  a.deptno
from scott.emp a;

connect as blake
select * from j2000.emp_jp_view
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
----- ---- ---------- --------- ---------- --------- ---------- ----------
7369 *** *** 17-DEC-80 0 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 *** *** 02-APR-81 0 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 *** *** 09-JUN-81 0 10
7788 *** *** 19-APR-87 0 20
7839 *** *** 17-NOV-81 0 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 *** *** 23-MAY-87 0 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 *** *** 03-DEC-81 0 20
7934 *** *** 23-JAN-82 0 10

14 rows selected.

0202 Granting Truncate privileges on tables in others’ schemas.

Any user with DBA role or DROP ANY TABLE system level privilege can TRUNCATE tables in others’
schemas. Any schema owner can truncate tables in his/ her schema.

In secure production environments, it is not a good practice to grant DBA role / DROP ANY TABLE
system level privilege to users, for this purpose.

The workaround is to create a procedure in a schema that has been granted the DBA role or the DROP ANY TABLE system privilege, and then grant execute permission on that truncate procedure to the users who need to truncate tables in other schemas.

CREATE OR REPLACE PROCEDURE truncate_table_proc (oname IN varchar2,
                                                 tname IN varchar2) as
/**********************************
 AUTHOR  JP Vijaykumar
         Oracle DBA
 DATE    10-05-2006
**********************************/
begin
  execute immediate 'truncate table '||oname||'.'||tname;
exception
  when others then
    raise_application_error(-20001,'Insufficient privileges');
end;

The above truncate table procedure is a simple procedure. Whoever is granted EXECUTE privileges on
the procedure can truncate any table in the database. It is a time bomb.

If this procedure is misused in a production or otherwise secure environment, imagine the trouble of recovering the lost data.

It is not a good practice to use generic truncate table procedures in production / or secure environments.
For a more secure truncate table procedure, please refer to my article, Secure Truncate Table
Procedure.

Day in and day out, sensitive and confidential data is stolen everywhere. When it comes to protecting that data from unauthorized users and hackers using data modeling and custom-built views, the options are practically endless.

If my prime objective is to protect confidential data from hackers and unauthorized access, my simple approach is DIVIDE AND CONQUER. I will keep my trillion dollars in a safe that can only be opened with 10 KEYS--NOT WITH A SINGLE KEY. Even if I lose 9 of the 10 KEYS, there is no harm; my money is safe. However, if the safe can be opened with a SINGLE key and that single key is lost, then all is lost.

Depending on the sensitivity of data and business requirements, the hierarchical structure of employee
details may be stored in one denormalized table or in more normalized tables. It is a good practice to
store the employee details in more normalized tables than in one denormalized table. To further protect
the data from unauthorized access and hackers, create the tables in different schemas. You can even
move these different schema tables into different databases.

No matter how hard you protect the data in its entirety, it is theft prone. So dismember your data and
protect it. Redefine the database access to each schema. Only allow a few users to access all these
schema tables to create the full picture. Grant SELECT permission on the custom built view to the end
users. In addition, users accessing the view can only see their own data. No other’s data is visible.
Create a public synonym for the view.

When you partition the table horizontally, the records in their entirety are segregated into separate
partitions, depending on the partition key ranges.

Partition the table vertically into smaller tables with a few columns each. Unless all of the partitions are
joined together, the complete record cannot be constructed. Data from a single vertical partition is not
complete.

So I will cut my scott.emp table, which holds sensitive and confidential data, into vertical partitions, like:

Desc scott.name [scott.n101] {scott.n101@test} (scott.n101@dallas)


Empno varchar2(10)
Firstname varchar2(20)
Lastname varchar2(20)

Desc harry.job [harry.j102] {harry.j102@test} (harry.j102@austin)


Empno varchar2(10)
Job varchar2(20)
Mgr varchar2(10)

Desc bruce.time [bruce.t103] {bruce.t103@test} (bruce.t103@houston)


Empno varchar2(10)
Hiredate date
Reviewdate date

Desc tom.sal [tom.s104] {tom.s104@test} (tom.s104@arlington)


Empno varchar2(10)
Sal number(10,2)
Comm Number(10,2)
Bonus number(10,2)
Incentives number(10,2)

It is a good strategy to keep the view and its parent tables in different schemas / or databases. Whether the application can populate the vertically partitioned scott.emp table across schemas / databases must be verified before moving the vertical partitions across schemas / databases.

For this secure data model to work, the application needs to be modified / re-written to populate the
dismembered EMP table pieces across the schemas / databases.

No select privileges should be granted to schema owners scott, bruce, tom and harry on others’ schema
tables. Only the view_admin user can access the tables in scott, tom, bruce and harry’s schemas to
create the required row level / column level view.

Do not allow any users to log in as the view_admin user. Only users who are granted select privileges on the custom-built view are allowed to log in to the database.

Everything has its merits and demerits. If you keep everything in one box, the hackers’ job is made easy.
If you take extra care and distribute the data, at least all is not lost.

You can take the security model further, implementing more levels of masking with synonyms and tightening access with grants and profiles.

Audit all sessions accessing the sensitive and confidential tables.

Imagine my architecture.

STAGE03  PUBLIC synonym  EMPLOYEE
STAGE02  View            JP.N101_J102_T103_S_104_VIEW
STAGE01  Base tables     scott.n101@dallas   harry.j102@cleveland
                         bruce.t103@buffalo  tom.s104@austin
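The following is only a hedged sketch of what STAGE02 and STAGE03 could look like. The column names come from the vertical partitions described earlier, the database links are the ones named in the architecture above, and the row level predicate on LASTNAME is an assumption added for illustration.

Create or replace view jp.n101_j102_t103_s_104_view as
select n.empno, n.firstname, n.lastname,
       j.job, j.mgr,
       t.hiredate, t.reviewdate,
       s.sal, s.comm, s.bonus, s.incentives
  from scott.n101@dallas    n,
       harry.j102@cleveland j,
       bruce.t103@buffalo   t,
       tom.s104@austin      s
 where n.empno = j.empno
   and n.empno = t.empno
   and n.empno = s.empno
   and upper(n.lastname) = sys_context('USERENV','SESSION_USER');  -- row level restriction (assumed join to the login name)

Create public synonym employee for jp.n101_j102_t103_s_104_view;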

Let us analyze the scenarios in which an intruder gains access to the database at each stage:


STAGE01 – a schema owner can access only one vertical partition of the table. As the table names are meaningless, it is difficult even to identify the right table, and since no schema owner is permitted to access data in another schema's tables, the rest of the data remains safe and secure.

STAGE02 – all logins as VIEW_ADMIN are blocked. Unless someone connects as sys or system and obtains the view definition for all my important tables, no data is lost, and such an occurrence is very remote. I am also auditing all users who access the secure and confidential data.

STAGE03 – all users with select permission on the view can access his or her own data from the view.
Don’t you think this data model is secure and safe from unauthorized access and hackers?
Do you want me to complicate things further?
Eight Ways to Hack Oracle

Introduction

Oracle is touted as being unbreakable, but talk is cheap. As with any computing system, there are ways to hack it, and Oracle is no exception. In this piece, we'll talk about some of the ways that you
can get at data you're not supposed to. We'll start by taking the perspective of the hacker, and we hope
as a manager of databases yourself this will illustrate areas where your infrastructure may be vulnerable.
We'll then follow that by discussing ways to protect against the vulnerability.

1. SQL Injection

With many Oracle databases these days, the database is the backend datastore for a web application of one sort or another. What makes web applications vulnerable and relatively easy targets for us is threefold. One, they are complex, composed of many components, which makes them difficult to test thoroughly. Two, the barrier to entry for programmers is lower: you don't have to be a C programming guru to hack together some webpages (we'll show why that matters to us shortly). The third reason is urgency: web apps are always in development mode, constantly changing and rolling out new features, so security is necessarily a lower priority. Ok, on to the good stuff.

SQL Injection is simply entering information in a web form, and secretly adding some unexpected code,
tricking the application to execute that on the database, and return results the programmer had not
foreseen. For example, you have a user login form which requests username and password. In the
username field, you enter:

sean'); select username, password from all_users;--

Now if the programmer was not smart enough to "sanitize" our input, i.e. check for things like this, then
this will execute on the remote db and this sensitive data will be dumped back to our browser. Wow!

Here's a great comic which illustrates this quite well: http://xkcd.com/327/

You may think this is scary, but there's more. David Litchfield, in his book "Oracle Hacker's Handbook", calls one particular PL/SQL injection the "holy grail" because it works against Oracle 8 all the way through the current 10g Release 2. If it's not obvious, that means you can use it on almost *any* Oracle database out there.

How does it work, you ask? You make use of a package called DBMS_EXPORT_EXTENSION and use injection to get your code to run in an exception handler that grants some user, or for that matter all users, DBA privileges!

This was what the famous Alert 68 was all about, and according to Litchfield was never really properly
patched.
Defending Against This Attack

In a word, diligence. There is no bulletproof solution, as it involves all the subtleties of applications that face the internet. There are various SQL Injection testing techniques available, and there is an excellent 3-part article at Security Focus called "Penetration Testing for Web Applications".

It is also possible to *detect* SQL Injection to some degree with various intrusion detection tools. Learn
more over at Pete Finnigan's security site (search the page for "detecting sql injection")
http://www.petefinnigan.com/orasec.htm

For developers, there are packages that help you *sanitize* your inputs. If you call the appropriate clean and sanitize routines on every value you receive from a form, you are much better protected than otherwise. But of course, be sure to test and verify by hitting the application with SQL Injection tools; that's really the only way to be sure.
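As a hedged illustration of what "sanitized" database code looks like, the sketch below uses bind variables so the user's input can never become part of the SQL text, and mentions DBMS_ASSERT for the cases where an identifier really must be concatenated. The APP_USERS table and its columns are made up for the example.

create or replace procedure check_login (p_username in varchar2,
                                         p_password in varchar2) as
  l_count pls_integer;
begin
  -- Bind variables: the values are passed as data, never as SQL text,
  -- so input like  sean'); select ... ;--  cannot change the statement.
  execute immediate
    'select count(*) from app_users where username = :u and password_hash = :p'
    into l_count using p_username, p_password;

  -- If an object name must be concatenated, validate it first, e.g.
  --   l_name := dbms_assert.simple_sql_name(p_table_name);
  if l_count = 0 then
    raise_application_error(-20002, 'Invalid credentials');
  end if;
end;
/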

Pete Finnigan has reported that Steven Feuerstein is working on SQL Guard, a PL/SQL package to provide this type of library to developers. Read more here:
http://www.petefinnigan.com/weblog/archives/00001115.htm

2. Default Passwords

Oracle is such a huge product and there are schemas created for everything. Most of these logins have
default passwords. Is the database administrator diligent? One way to find out. Take a gander at some of
the more common ones:

Username Password
applsys apps
ctxsys change_on_install
dbsnmp dbsnmp
outln outln
owa owa
perfstat perfstat
scott tiger
system change_on_install
system manager
sys change_on_install
sys manager

What's more, even if these have been changed, they are sometimes quite easy to guess; give "oracle", "oracl3", "oracle8", "oracle9", "oracle8i" and "oracle9i" a try as well.

Pete Finnigan has a very comprehensive and up-to-date list of default users and passwords for you to try out. This list also includes hashed passwords, so if you've queried dba_users, you can compare the hashes against this list.

http://www.petefinnigan.com/default/default_password_list.htm
Defending Against the Attack

As a Database Administrator, you should audit all your database passwords regularly. If there is business
resistance to changing easily guessable passwords, explain calmly, but with a clear and visual illustration
of what could happen, and what the risks are.

Oracle also provides password profile security. You can enable profiles that enforce a certain level of
complexity in your database passwords. You can also enable regular password expiration. Beware
enabling this for logins that only happen through a webserver, or middle tier application server, as the
application may suddenly break, if no one directly sees the warnings and notifications.
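A hedged sketch of such a profile is shown below. The limits are illustrative values, and the verify_function referenced here is the sample one created by the utlpwdmg.sql script that ships with the database (run that script as SYS before creating the profile).

-- @?/rdbms/admin/utlpwdmg.sql   -- run as SYS to create the sample verify_function
Create profile secure_users limit
  failed_login_attempts    5
  password_life_time       60
  password_grace_time      7
  password_reuse_max       10
  password_verify_function verify_function;

Alter user scott profile secure_users;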

3. Brute Force

Brute force, as the name implies, is the method for banging away at the lock, or keyhole until it breaks. In
the case of Oracle it means trying every username and password by automating the process with a little
bit of code to help you.

For years now, a piece of software called John the Ripper has been available to unix administrators for
exactly this task. Now there is a patch available for you so you can use this handy software for banging
away at Oracle passwords. Want to speed this process up even more? Prepare in advance a table of all
password hashes. Such a table is called a Rainbow table. You will have a different one for each
username because the password hashing algorithm uses the username as the salt to the function. We
won't get into that in too much detail, but here's a resource for further study:
http://www.antsight.com/zsl/rainbowcrack/

Oracle servers default to automatically locking out a particular account after ten failed logins. Normally, though, "sys as sysdba" does not have this restriction; the thinking, I guess, is that if you lock out the administrator, then everyone is locked out! Fortunately for us, this means programs like OraBrute make our lives much easier. Author Paul Wright has put together a great program for banging on the front door of your fortress all day and all night until it opens. Head on over to Paul's blog and download a copy for yourself!
http://www.oracleforensics.com/wordpress/index.php/2007/03/04/oracle-passwords-and-orabrute-paper-update/

Defending Against the Attack

Defending against this type of attack can be done with the methods described above for default passwords. A curious and proactive DBA might also go the extra step of downloading these tools and attempting to hack into his own system. This will help illustrate your real risks and give you a better idea of how safe you really are.

4. Sneaking Data Out The Back Door

(I was going to have a section called Auditing, but that makes more sense for the article on prevention).

In the security world, this concept is known as data exfiltration. It comes from the military term, the opposite of infiltration, and it means getting out without being noticed. In the context of getting data out of a target database, it could be as simple as picking up some tape backups and restoring the database, or getting a copy from a retired crashed disk. However, it can also involve snooping network traffic for relevant packets of data.
Oracle has a package called UTL_TCP, which can make outbound connections to other servers. With a little programming magic, it could be used to send a low-bandwidth stream of data from the database to some remote host. Oracle also comes with some useful packages to hide what might be inside your secret stream of data, so make ample use of those if you think an intrusion detection system might be monitoring your activities. They include DBMS_OBFUSCATION_TOOLKIT and DBMS_CRYPTO.

Defending Against the Attack

The best way to defend against these types of attacks is to set up an intrusion detection system. These can watch incoming and outgoing packets on the network. Some provide "deep packet inspection", which actually tests for certain SQL and, based on a set of rules, triggers alarms in certain circumstances. These tools can look for telltale signs like added UNIONs, various types of short-circuiting, truncating with a comment "--" and so on.
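Another hedged hardening step, in addition to an IDS, is to take the outbound-networking packages away from PUBLIC and audit their use; test this against your applications first, since some legitimate code depends on these packages.

Revoke execute on utl_tcp  from public;
Revoke execute on utl_http from public;
Revoke execute on utl_smtp from public;

-- record every use of UTL_TCP in the audit trail (AUDIT_TRAIL must be enabled)
Audit execute on sys.utl_tcp by access;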

Conclusion:

So, as you can see, there are a lot of ways to plan your attack and get into a target Oracle database. DBAs should keep in mind that for each vulnerability there is a way to defend against it, so vigilance is key. In Part II of this series, we will cover the insecurities of the Oracle Listener; privilege escalation to gain more access from a less privileged login we already have; executing operating system commands, which is very powerful and underappreciated; and lastly, filesystem security. If you can read the raw data out of the binary data files making up your database, you can completely circumvent any security measures put in place by Oracle.

5. Listener

I think one of the truly amazing things about computing is how extremely difficult it is to tame. Over and over again we see, particularly in the area of security, how simple some vulnerabilities are, and how they arose simply because the user (in our case the hacker) did not behave or think the way the designer (the programmer or software developer) had intended or expected.

Oracle's listener is set up out of the box so that it can be administered remotely. What if the attacker sets the logfile of the listener to be the Unix .rhosts file? The attacker can then effectively WRITE to the .rhosts file. This file on Unix configures who is allowed to log in without a password using the rsh, rlogin, and rcp commands. You can imagine what happens next!

This is really the tip of the iceberg in terms of security surrounding Oracle's listener. There are also
buffer overflows and a lot more to look at. In fact, Litchfield's Oracle Hacker's handbook has a whole
chapter on the topic!

From the prevention side of the house, Oracle has made some strides to allow better security, if only you put it in place. For starters, set a password for administering the listener. Burdened by an ever-growing set of passwords to manage, this might seem like too much, but consider the threat before you look the other way. Oracle has also added ADMIN_RESTRICTIONS, which prevents certain things from being done remotely; for instance, you would then have to be local to set the location of logfiles.
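A hedged sketch of the relevant listener.ora entry follows; the listener name LISTENER is assumed, and the administration password itself is best set interactively with the lsnrctl change_password command rather than typed into a file.

# listener.ora -- block remote SET commands such as changing the log file location
ADMIN_RESTRICTIONS_LISTENER = ON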

6. Privilege Escalation
In a nutshell, "privilege escalation" involves using your existing, usually underprivileged, account in tricky, sneaky or nefarious ways to gain greater privileges, or even DBA privileges!

Here's an example using one of the CREATE ANY grants. I have access to the database via a user SEAN who has CREATE ANY TRIGGER, so I can create a trigger in any schema. If you can track down a table that any user can write to, create a trigger owned by SYSTEM that fires when you, the unprivileged user, INSERT into or UPDATE that publicly writeable table. The trigger you write calls a stored procedure you also wrote, which, and here's the rub, executes with AUTHID CURRENT_USER. That means it runs with the privileges of the SYSTEM user when SYSTEM executes *YOUR* procedure. Now inside your nefarious stored procedure you include EXECUTE IMMEDIATE 'GRANT DBA TO SEAN'; Voila!

Now I can:

1. Insert into my public table (the trigger fires)

2. The trigger is owned by SYSTEM

3. SYSTEM calls my change_privileges stored procedure, which is AUTHID CURRENT_USER

So although *I* could not have executed the grant to change my own privileges, I managed to get SYSTEM to execute it, and that user *DOES* have the privileges, so I am now granted DBA!!

What's a Database Administrator to do? Well, for starters, you should audit your database for CREATE ANY privileges and remove the ones that aren't required. Secondly, you should scan forums such as www.securityfocus.com for the latest vulnerabilities surrounding privilege escalation. Lastly, it might not hurt to enable auditing of certain types of activities so the database will help you help yourself: when it audits things like GRANT DBA, you can monitor the audit log for malicious or unexpected activity.
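A hedged starting point for that audit, with the usual caveat that the exact set of privileges worth keeping varies by site:

-- who holds the dangerous "ANY" system privileges?
select grantee, privilege
  from dba_sys_privs
 where privilege like '%ANY%'
 order by grantee, privilege;

-- audit grants of system privileges and roles so that a GRANT DBA shows up
-- in the audit trail (AUDIT_TRAIL must be enabled)
Audit system grant;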

7. Operating System Commands & Security

Hackers aren't always logged into your system at a shell prompt; in fact, we hope they never are! Nevertheless, that doesn't mean they can't pretend. By coaxing the Oracle database to run commands at the Operating System level, we're effectively giving the hacker just that: a way to run commands. Those commands could delete or corrupt files, overwrite logs (to hide their tracks), create accounts, or do anything else that one could potentially do at the command line. So, how do they do it?

Although there are a number of ways, the easiest is through languages like Java or PL/SQL. Often the ability to create external stored procedures is available; by default, it is anyway. This allows a stored procedure that performs a system call to execute, and that system call then runs with the privileges of the "oracle" account under which Oracle was installed in the first place. From there, you can see where it goes.

Although Oracle has made some strides to protect against these types of things, your best bet in terms of
prevention is monitoring. By keeping an eye on the activities inside your system, you're better able to be
proactive if an attacker tries something malicious like this.

8. Filesystem Security
Access to the filesystem is one area that is tricky to get your head around. The "oracle" OS user owns all of the Oracle software and the datafiles of your database, so if or when a user inside the database accesses files on the filesystem using the UTL_FILE package, they have access to many things they wouldn't have access to inside the database, where their GRANTs and ROLEs constrain them. If they can read datafiles, they can effectively gain access to the raw binary data that makes up your tables and indexes, and with some work can deduce the content therein. They may also be able to write to those files and effectively corrupt them. Dangerous indeed.

Oracle has made some strides to prevent this by introducing the DIRECTORY object. One must have a DIRECTORY object defined to do certain types of reading and writing in 10g, and a user must have the CREATE DIRECTORY privilege to define one, which, as we've seen previously, can be attained by various methods of privilege escalation. Even given all of this, there are still ways to gain access to the filesystem and read and write files via PL/SQL or Java.
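A hedged sketch of locking file access down to a specific path with a DIRECTORY object (the path and grantee names are illustrative):

Create directory app_reports as '/u01/app/reports';
Grant read, write on directory app_reports to report_user;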

Conclusion:

As you can see, there are certainly many vulnerabilities present in the Oracle database product. At times it might seem like a giant house of cards built by very smart engineers who were more honest than the hackers that prey upon it; as such, they did not envision or entertain the numerous ways that someone might try to chip away at it, pull out cards and weaken the foundation.

To be completely fair, many of these can be addressed by adamant DBAs who spend time and energy to
address them. Oracle has patches for many of the issues inside the database, and intrusion detection
systems can provide an additional layer of security. We hope that our notes to managers and DBAs in
each of these sections will point you in the right direction on research into the issues, and execution of
your own security plan.
Transparent Data Encryption

Synopsis. Oracle 10g dramatically improves the overall security of the data that’s stored within the
Oracle database. This article – the third in this series – reviews how Oracle 10gR2 protects against an
intruder’s efforts to view the data stored within a database’s physical files by implementing the
impressive features of Transparent Data Encryption (TDE).

In the previous articles in this series, we first discussed how to implement fine-grained auditing using
Oracle 10g’s enhanced DBMS_FGA package. We then looked at methods that prevent users from
querying or changing data via the fine-grained access control tools that are supplied with the enhanced
DBMS_RLS package in Oracle 10g.

To continue the scenario I set forth in those articles, imagine next that your CIO asks you and your fellow
Oracle DBAs to attend a meeting with the newly-appointed Director of Security and Compliance at your
company. “We’ve heard good things about your team,” the Director tells your CIO, “especially that thing with catching the thief in Accounts Payable. And we’re certainly impressed with how easily your team implemented those new security features to prevent any more violations of accounting policies.”

The new Compliance Director continues, “And now we need to go one step further. To insure that we’re
in compliance with Sarbanes-Oxley guidelines, we’ll need to make sure that we’ve encrypted all critical
and sensitive data in all of our databases. And this means not just our critical financial data - we’ll need to
make sure all sensitive employee data is encrypted too.”

Then the Compliance Director says, “Oh, and I almost forgot ... this needs to be done by the end of 2006
so that we can add it to our end-of-year Sarbanes-Oxley compliance report. That gives you plenty of time
to get it done.” Your CIO looks concerned, since it’s already late November. But then you smile at the
CIO and say, “No problem. We can make that date easily.” Once again, Oracle 10g’s robust security
features save the day as you explain your plans to encrypt data within the database.

Encrypting Sensitive Data

I’ve already demonstrated how to track which users are performing questionable or fraudulent transactions with Oracle 10gR2’s Fine-Grained Auditing (FGA) features. I’ve also illustrated how Oracle 10gR2 ensures that access to sensitive data via queries and DML statements can be controlled with the enhanced row-level security features.

As robust as these features are, however, it’s more important than ever to guarantee that data is
encrypted within the database’s physical files as well. For example, if a malicious intruder were to gain
even read-only access to my database’s datafiles, then all of the protections afforded by fine-grained
auditing and row-level security would be bypassed because unencrypted data could be read “in the
clear.” While I grant it might take some additional manipulation to make sense of these data, it’s a risk
that I’d rather not take.

Prior to Oracle 10g, the only tool I had to encrypt data was the DBMS_OBFUSCATION_TOOLKIT
Oracle-supplied package. This package provided me with the ability to apply predefined encryption
methods to data stored within a specific column in an Oracle database table. However,
DBMS_OBFUSCATION_TOOLKIT has some serious drawbacks:
• Limited encryption algorithms. DBMS_OBFUSCATION_TOOLKIT provides only a few encryption methods in earlier releases, specifically DES and Triple DES (3DES).
• Non-transparent encryption and decryption. DBMS_OBFUSCATION_TOOLKIT provides no direct method to encrypt the data in columns that need protection. I’m thus forced to develop a separate function to encrypt data for each individual column. Decrypting column data that’s been encrypted with DBMS_OBFUSCATION_TOOLKIT is equally awkward: I need to create a decryption function for each encrypted column.
• Difficult implementation. To apply decryption for queries, I need to create a view that accesses the table and displays the decrypted data by calling the decryption function for each protected column. For DML operations, it’s even more complex: I will most likely need to create INSTEAD OF triggers on views that implement the encryption method I’ve chosen.

Note that Oracle 10g does supply a new package named DBMS_CRYPTO that eases some of these
difficulties and is obviously designed to replace DBMS_OBFUSCATION_TOOLKIT. DBMS_CRYPTO
also provides significantly more encryption algorithms and sophisticated ciphers. (See my prior article on
DBMS_CRYPTO for additional details on this new package in Oracle 10g.)

The Ultimate Encryption Solution: Transparent Data Encryption (TDE)

However, what I really hoped for was an even better alternative to DBMS_OBFUSCATION_TOOLKIT,
something that would allow me to encrypt and decrypt columns without having to create any additional
views and triggers. My hopes were answered with the introduction of Transparent Data Encryption (TDE)
features in Oracle 10g. Implementing Transparent Data Encryption is simple and elegant:

• First, I’ll need to create and store an encryption key for my database in an Oracle Wallet, using either command-line utilities or the Oracle Wallet Manager (OWM).
• Once the Oracle Wallet is created, I open it for use against my database, and then I create a separate master encryption key for all data that needs to be encrypted inside the database.
• When I choose to encrypt a column’s data, an external Oracle module called the External Security Module (ESM) uses the database’s master encryption key to apply encryption and decryption to the data using one of four supplied encryption algorithms. Oracle stores metadata about which columns are encrypted in a special (and secure!) data dictionary table in the database.

Creating an Oracle Wallet with Oracle Wallet Manager (OWM)

To create an Oracle Wallet for the first time, I’ll execute the Oracle Wallet Manager (OWM) utility by
either selecting it from the list of available Oracle 10g applications (in Windows NT), or by typing the
command owm from within a terminal window prompt (in all other operating system environments).
Figure 3.1 shows the initial screen that OWM displays.

When I choose to create a new Wallet, OWM will display a warning that the default wallet directory
doesn’t exist (Figure 3.2). I can then supply the desired directory path (Figure 3.3). Once this directory is
chosen, OWM will prompt for a password for the Wallet (Figure 3.4). This password must conform to
minimum password security rules (i.e., at least one capital letter, one lowercase letter, one number, and
one special character) and must be a minimum of eight characters in length.

Finally, OWM will offer the option to create additional security certificates (Figure 3.5), but I’ll simply ignore this option for now. The result of a successful Wallet creation is shown in Figure 3.6. Now that the new Oracle Wallet is created, OWM allows me to save the wallet file (usually named ewallet.p12) in the directory of my choice. As shown in Figure 3.7, I’ve chosen to save my wallet file in my database server’s /home/oracle/_dba/ folder. This is the file that the External Security Module (ESM) will read when it’s time to apply encryption to each selected column in the database.
Now that the Oracle Wallet is prepared, I’ll next need to configure my Oracle database to use it. First, I need to add parameters to my database’s SQLNET.ORA network configuration file. I’ll then open the Wallet for access from within the database by issuing the ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY <password>; command from within SQL*Plus. Once the Wallet has been opened successfully, I’ll create the database’s master encryption key by issuing the ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY <password>; command. Listing 3.1 shows the commands I issued to prepare the database for column encryption.
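Since Listing 3.1 is not reproduced here, the following is only a sketch of the kind of commands involved; the wallet directory is an assumed path and must match the ENCRYPTION_WALLET_LOCATION entry in sqlnet.ora.

-- sqlnet.ora entry (assumed path):
--   ENCRYPTION_WALLET_LOCATION =
--     (SOURCE = (METHOD = FILE)
--       (METHOD_DATA = (DIRECTORY = /home/oracle/_dba)))

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";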

Exploiting Unencrypted Data In Oracle Datafiles

Now that my Oracle Wallet is set up and the database itself is ready to handle column-level encryption,
let me first illustrate how easy it is to obtain information directly from the datafile of an Oracle tablespace
when data is not encrypted. First, I’ll create a new, relatively small tablespace, TBS_ENCRYPTED, and
I’ll name its datafile tbs_encrypted01.dbf. I’ll then create a new table, HR.EMPLOYEE_SECURED, in
that new tablespace, and I’ll load some test data into that table. See Listing 3.2 for the code I used to
create these objects.
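Listing 3.2 itself is not reproduced here; the sketch below only indicates the kind of objects it creates, with an illustrative datafile path and a column list inferred from the columns encrypted later in this article.

CREATE TABLESPACE tbs_encrypted
  DATAFILE '/u01/oradata/orcl/tbs_encrypted01.dbf' SIZE 16M;

CREATE TABLE hr.employee_secured (
  employee_id     NUMBER(6)     NOT NULL,
  socsecnbr       VARCHAR2(11),
  ethnicity       VARCHAR2(30),
  termination_dt  DATE,
  termination_rsn VARCHAR2(40)
) TABLESPACE tbs_encrypted;

INSERT INTO hr.employee_secured (employee_id, socsecnbr, ethnicity)
  VALUES (100, '987-65-4321', 'Not Disclosed');
COMMIT;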

Now that I have some test data to view, allow me to show you how easy it is to browse the contents of a datafile directly and view the unencrypted data stored within. As shown in Listing 3.3, I’ll first take the TBS_ENCRYPTED tablespace offline, and then I’ll use the Linux strings utility to browse the contents of this tablespace’s datafile. This utility reads a file and returns the printable character strings found inside it. Note that it’s possible to view the unencrypted values for Social Security Number using this method. (Of course, it might take a little bit of interpretation to find significant data this way, but hackers are an industrious and ingenious lot.)

Encrypting Columns With TDE: The ENCRYPT command

Now that I’ve opened the Oracle Wallet and have created the database’s master encryption key, it’s a
relatively simple matter to apply encryption directly to several columns in my database’s tables. Oracle
10g allows me to choose among several options for encrypting data within columns:

Encryption Algorithms. There are four different algorithms available:

• AES192: Advanced Encryption Standard with a 192-bit key size. This is the default encryption algorithm; it will be applied if I attach the ENCRYPT attribute to a column without naming an algorithm.
• AES128: Advanced Encryption Standard with a 128-bit key size.
• AES256: Advanced Encryption Standard with a 256-bit key size.
• 3DES168: Triple Data Encryption Standard with a 168-bit key size.

“Salted” vs. “Unsalted” Encryption. By default, Oracle 10g also “salts” the encrypted column by
adding a random string to the data value before it’s encrypted. This makes it more difficult for an intruder
to break down encrypted data because it disrupts the intruder’s ability to apply standard pattern matching
techniques. Oracle 10g also permits the deactivation of this default salting method by specifying the
directive NO SALT.

To illustrate these features, I’ll add different encryption levels to four of the columns in table
HR.EMPLOYEE_SECURED as shown in Listing 3.4:

• Column SOCSECNBR will be encrypted using the default AES192 encryption method. Even though it’s not necessary
because it’s the default value, I’ll specify the SALT attribute.
• Likewise, columns ETHNICITY and TERMINATION_RSN will be encrypted using the default AES192 encryption
method and will be ”salted” by default.
• The TERMINATION_DT column will be encrypted using the default AES192 encryption method; however, it will not be
“salted” because I’ve specified the NO SALT attribute.

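A hedged sketch of those statements (Listing 3.4 contains the author's exact versions):

ALTER TABLE hr.employee_secured MODIFY (socsecnbr       ENCRYPT USING 'AES192' SALT);
ALTER TABLE hr.employee_secured MODIFY (ethnicity       ENCRYPT);           -- AES192 and SALT by default
ALTER TABLE hr.employee_secured MODIFY (termination_rsn ENCRYPT);
ALTER TABLE hr.employee_secured MODIFY (termination_dt  ENCRYPT NO SALT);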
Encryption of Indexed Values. Since Oracle may be able to use an index to answer a query much faster than querying the table directly, Oracle 10g also ensures that indexed values can be encrypted; however, it requires that “unsalted” encryption be applied to an indexed column. To illustrate this, I’ll add an index on the AP.VENDORS.CREDIT_CARD column in Listing 3.4 and then attempt to activate standard encryption for that column. Note that Oracle 10g will initially reject my attempt to encrypt the data in that column because I didn’t specify the NO SALT directive; once I do so, Oracle 10g will allow me to encrypt the data successfully.
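A hedged sketch of that indexed-column case (the index name is illustrative):

CREATE INDEX ap.vendors_cc_ix ON ap.vendors (credit_card);

ALTER TABLE ap.vendors MODIFY (credit_card ENCRYPT);          -- rejected: an indexed column cannot be encrypted with salt
ALTER TABLE ap.vendors MODIFY (credit_card ENCRYPT NO SALT);  -- succeeds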

Viewing Metadata For Encrypted Columns. Oracle 10g also provides a way to retrieve metadata about the columns that have been encrypted with Transparent Data Encryption. The query in Listing 3.5 accesses the DBA_ENCRYPTED_COLUMNS data dictionary view, and it shows the five columns that have been encrypted up to this point, as well as their current encryption algorithm and whether “salting” has been employed.
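A hedged version of that metadata query:

SELECT owner, table_name, column_name, encryption_alg, salt
  FROM dba_encrypted_columns
 ORDER BY owner, table_name, column_name;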

Applying Encryption During Table or Column Creation. Oracle 10g permits me to apply encryption to a new column at the moment it is added to a table. I can also apply encryption to any column that requires additional security during the creation of a new table, simply by specifying the ENCRYPT directive for that column.

Changing A Table’s Encryption Algorithm. Oracle 10g will apply the same encryption algorithm for all
columns that are encrypted within the table. In other words, it’s not possible to apply 3DES168
encryption to one column, and AES192 encryption to another: All columns in the table are encrypted
using the identical algorithm. If I attempt to encrypt a column using an encryption algorithm different from
the one already in use, Oracle 10g will return an error; however, Oracle 10g will allow me to change the
encryption algorithm for the entire table using the ALTER TABLE <table_name> REKEY USING
<new_encryption_algorithm>; command.

Removing Encryption From a Column. To remove encryption on a column that has been previously
encrypted, I simply issue the ALTER TABLE <table_name> MODIFY <column_name> DECRYPT;
command against the encrypted column.

Listing 3.6 offers examples of how to add a new encrypted column, how to “rekey” an encrypted table so
that it uses a different encryption algorithm, how to remove “salting” from a previously “salted” column,
and how to remove encryption from a previously encrypted column. I’ve also included the results of
querying the database’s metadata in DBA_ENCRYPTED_COLUMNS after these examples have been
applied to the database.
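A hedged sketch of those four operations (the added column name is made up for illustration; Listing 3.6 has the author's actual statements):

ALTER TABLE hr.employee_secured ADD (passport_nbr VARCHAR2(20) ENCRYPT);   -- add a new encrypted column
ALTER TABLE hr.employee_secured REKEY USING '3DES168';                     -- change the table's encryption algorithm
ALTER TABLE hr.employee_secured MODIFY (socsecnbr ENCRYPT NO SALT);        -- remove "salting" from a salted column
ALTER TABLE hr.employee_secured MODIFY (termination_dt DECRYPT);           -- remove encryption entirely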

Proof of Concept. To prove that Transparent Data Encryption is actually working, I’ve rerun the same tests against the datafile for the TBS_ENCRYPTED tablespace; the results are shown in Listing 3.7. Although the strings command still returns some character string values, those strings are now essentially meaningless collections of encrypted data.

Deactivating Transparent Data Encryption: Closing the Wallet


What happens if the Oracle Wallet is closed while the database is open? The answer is that the data
stored in the encrypted columns cannot be accessed; however, all non-encrypted data can still be
accessed.

I’ve demonstrated in Listing 3.8 what happens when the Wallet is closed by issuing the ALTER
SYSTEM SET ENCRYPTION WALLET CLOSE; command. I then issued a query against two columns in
AP.VENDORS that are not encrypted, and the query returned the expected result from that table. Note
that when I issue a query against an encrypted column, however, Oracle returns an ORA-28365
exception and informs me that the Oracle Wallet is not open at this time.
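A hedged sketch of that sequence (the AP.VENDORS column names are assumed):

ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

SELECT vendor_id, vendor_name FROM ap.vendors;   -- unencrypted columns: still readable
SELECT credit_card FROM ap.vendors;              -- encrypted column: fails with ORA-28365 (wallet is not open)

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";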

Now, A Warning: Additional Licensing Costs

I would be remiss if I neglected to mention that these Transparent Data Encryption features will most likely incur additional licensing costs, because TDE is considered part of the Oracle Advanced Security option. However, if you absolutely must ensure that sensitive data stored within your databases is fully encrypted and you are considering a custom solution built on DBMS_OBFUSCATION_TOOLKIT or DBMS_CRYPTO instead, I’d strongly suggest that you weigh the cost of developing and maintaining such a solution against the relatively simple methods that Transparent Data Encryption provides.

Conclusion

With the addition of Transparent Data Encryption, Oracle 10g ensures that a malicious intruder’s efforts to view data stored within the physical datafiles that comprise an Oracle database’s tablespaces are thwarted, because the data is encrypted with robust, difficult-to-decipher algorithms. These new features are relatively simple to implement and require virtually no maintenance to ensure their ongoing viability.

References and Additional Reading

Even though I’ve hopefully provided enough technical information in this article to encourage you to explore these features, I also strongly suggest that you first review the corresponding Oracle documentation before proceeding with any experiments; actual implementation should begin only once a crystal-clear understanding exists. Please note that I’ve drawn upon the following Oracle 10gR2 documentation for the deeper technical details of this article:

B14214-02 Oracle Database New Features Guide

B14258-01 PL/SQL Packages and Types Reference

B14268-02 Oracle Database Advanced Security Administrator’s Guide

B14266-01 Oracle Database Security Guide


Securing Data Extracts, Exports and Recovery Files
Synopsis. Oracle 10gR2 includes some long-overdue security features: the ability to encrypt DataPump Export dump sets and Recovery Manager (RMAN) backup sets. This article – the fourth and final in this series – discusses and illustrates how any Oracle DBA with a reasonable level of experience can implement this unprecedented level of database security.

In the previous three articles in this series, we first explored the fine-grained auditing capabilities of
Oracle 10g’s enhanced DBMS_FGA package, and we then investigated ways to forestall a user session
from accessing or modifying data via the fine-grained access control capabilities of Oracle 10g’s
enhanced DBMS_RLS package. In the last article, we discovered how simple it is to encrypt values
stored within table columns and indexes using Oracle 10g’s new Transparent Data Encryption (TDE)
features.

To conclude our ongoing saga of security pratfalls, imagine that your CIO stops by your desk early one
morning and says, “I’ve been thinking about what our good Director of Security and Compliance is going
to ask our team to concentrate on next. I’d like us to get a head start on plugging any other security gaps,
so I’ve asked every team in our IT group to think about any possible vulnerabilities anywhere in our
systems.”

The CIO continues, “One possible exposure that keeps me awake at night is the datafeeds we currently send out of house to our external vendors. I know we also send external feeds across our network for processing in our data warehouse in Osaka, Japan. In addition, what about the disaster recovery files for our production database here that we’re saving on tape? Are those really secure? Our whole production database is on those tapes – what if someone ‘borrowed’ those tapes as they’re being shipped to the storage vault?”

You wait until your CIO has finished massaging the furrows in his brow, and then you grin and say, “I hate
to sound like a broken record, sir, but once again Oracle 10g has our bases covered.” You then explain
how Oracle10g’s advanced security features permit you to encrypt Oracle DataPump export dumpsets,
external tables, and RMAN backups … and watch as a few more worry lines disappear from your CIO’s
visage.

Preparing for Encryption

Oracle 10g Transparent Data Encryption (TDE) features are at the heart of many of the encryption
capabilities I will be exploring in this article. I explained how to set up Transparent Data Encryption in the
prior article in this series. For the remainder of this article, please assume that an Oracle Wallet has
already been created, the Oracle Wallet has already been opened, and finally that a master encryption
key has been established already for the database. (Figures 3.1 through 3.7 illustrate how to set up the
Oracle Wallet so that TDE features are enabled, while Listing 3.1 shows the necessary commands for
preparing an Oracle database to take advantage of these features.)
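
Since those figures and listings aren’t reproduced inline here, the following minimal sketch recaps the kind of commands involved; the wallet directory and password are illustrative placeholders, and on 10gR2 the wallet location is normally declared in sqlnet.ora first:

-- sqlnet.ora entry (illustrative path):
-- ENCRYPTION_WALLET_LOCATION =
--   (SOURCE = (METHOD = FILE)
--    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/ORCL/wallet)))

-- Create the wallet and set the database master encryption key:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

-- Re-open the wallet after each instance restart:
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";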

Encrypting External Tables

Let’s first turn our attention to a new feature in Oracle 10gR2, namely the ability to create an external
table directly from any source data, including standard (i.e. non-external) and external tables. A new
access method, ORACLE_DATAPUMP, lets me construct an external table using a CTAS (i.e. CREATE
TABLE … AS SELECT) SQL statement method and write the contents of that external table to one or
more files in a designated directory.
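
In its simplest (unencrypted) form, such a statement might look like the following sketch; the directory object, table name, dump file name, and column list are illustrative and are not the article’s actual Listing 4.1:

CREATE OR REPLACE DIRECTORY xt_dir AS '/u01/app/oracle/external';

-- Write the query results to an ORACLE_DATAPUMP-format file and define an external table over it:
CREATE TABLE hr.xt_emp_demo
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY xt_dir
    LOCATION ('xt_emp_demo.dmp'))
  AS SELECT employee_id, first_name, last_name, email
       FROM hr.employees;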

To illustrate, I’ll create a new external table, HR.XT_EMPLOYEE_INFO, by combining several columns
from the sample schema’s HR.EMPLOYEES table with data from the HR.EMPLOYEE_SECURED table
that I created in the previous article. (See Listing 3.2 from that article to view the structure of
HR.EMPLOYEE_SECURED.) Listing 4.1 shows the statements I executed to create the new external
table.

Since the Oracle Wallet I created in the last article is still open, Oracle 10g allows me to read the
contents of HR.EMPLOYEE_SECURED when I create the new external table. However, note that even
though my query is reading an encrypted table, the data in the external table that is created is not
encrypted by default. To enable encryption of data in the external table, I must add the ENCRYPT
attribute to the table’s column declarations.

Listing 4.1 also shows the second attempt at creating the encrypted table to enforce encryption. Within
that listing, I’ve also provided links to the actual external table’s files to illustrate that Oracle 10gR2 does
indeed encrypt the data.
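
Since that listing isn’t reproduced inline here, a rough sketch of the encrypted variant follows; the column chosen for encryption, the dump file name, and the source query are illustrative and may differ from the actual listing:

-- Dropping the external table does not remove its dump file, so write to a new file name:
DROP TABLE hr.xt_employee_info;

CREATE TABLE hr.xt_employee_info (
    employee_id,
    first_name,
    last_name,
    salary ENCRYPT)                -- only this column's data is encrypted in the dump file
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY xt_dir
    LOCATION ('xt_employee_info_enc.dmp'))
  AS SELECT employee_id, first_name, last_name, salary
       FROM hr.employees;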

So how does Oracle 10gR2 handle reading data from an external table that contains encrypted data? Each time a new encrypted external table is created, Oracle 10gR2 uses the current master key to generate a new password string for that external table, so I’m never in danger of revealing the master encryption password for the database. To read the external table’s files on, say, a different server – or in a different physical location on the same server – I simply include that generated password string within the CREATE TABLE DDL statement that redefines the external table there. Also note that the external table’s password string can be specified in “obfuscated” format if I so choose.

Handling Encrypted Data with Oracle DataPump

Since I can encrypt and decrypt data stored in an external table that’s been created via the
ORACLE_DATAPUMP access method, it’s no surprise that I can also encrypt a dump set that’s been
constructed via Oracle 10g DataPump operations because DataPump uses the same access method to
store data in a dump set whenever the source table contains LOBs or LONG columns.

Listing 4.2 shows an example of creating a DataPump dump set as part of a DataPump export
operation. Note that I must set a value for the ENCRYPTION_PASSWORD parameter to activate
encryption for the entire dumpset. The value for this parameter can be placed on the DataPump
command line or within a parameter file, and it can be obscured by supplying it in its “obfuscated” format
as well.
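
A minimal sketch of such an export follows; the directory object, dump file, schema, and password values are illustrative placeholders, not the contents of Listing 4.2:

# Illustrative parameter file, ap_secure.par:
DIRECTORY=dp_dir
DUMPFILE=ap_secure.dmp
SCHEMAS=AP
ENCRYPTION_PASSWORD=MySecretPwd

# Run the export (credentials are placeholders):
expdp system/password PARFILE=ap_secure.par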

One of the advantages of this encryption approach is that I can transfer the dumpset containing the encrypted
data to another target database without concern that the data stored within the dumpset could be
compromised, even if I am transporting the dumpset across an unsecured network link. That’s because
without the encryption password, the dumpset cannot be used as a source for a DataPump import
operation.

Listing 4.3 demonstrates how to import an encrypted DataPump export dumpset back into the same
database. In this example, I’ve also remapped the two tables that were successfully exported in the
previous step into a new schema, NEWHR. I’ve also specified a value for the
ENCRYPTION_PASSWORD parameter in its “obfuscated” format.
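
A matching sketch of the import follows; the table names, directory, dump file, and password are assumptions carried over from the export example above, and the actual Listing 4.3 may differ:

# Illustrative parameter file, ap_import.par:
DIRECTORY=dp_dir
DUMPFILE=ap_secure.dmp
TABLES=AP.VENDORS,AP.INVOICES
REMAP_SCHEMA=AP:NEWHR
ENCRYPTION_PASSWORD=MySecretPwd

# Run the import (credentials are placeholders):
impdp system/password PARFILE=ap_import.par
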
Encrypting Oracle Backup Sets

I’ll be the first to admit that I never really thought about my database backups as potential security risks!
However, if a hacker is capable enough to ferret out sensitive information from the contents of an
external table or DataPump export file, she’s probably more than crafty enough to crack open a backup
file to obtain unprotected data.

The good news is that Oracle 10g offers multiple encryption algorithms for RMAN backup sets. As shown
in Listing 4.4, the new V$RMAN_ENCRYPTION_ALGORITHMS dynamic view can be queried to show
all available algorithms. Oracle 10g also offers three different methods for encrypting RMAN backup sets:
transparent mode, password mode, and dual mode. Deciding which mode to use mostly depends on how
the RMAN backups will be used within an enterprise.
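
For reference, a query along these lines lists the available algorithms (I’ve abbreviated the column list here):

SELECT algorithm_name,
       algorithm_description,
       is_default
  FROM v$rman_encryption_algorithms;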

Method 1: Transparent Mode. This method uses Transparent Data Encryption (TDE) features to
encrypt the RMAN backup sets. It’s therefore best suited for encrypting backups that are going to be
used for restoration and recovery operations on the same server, since the Oracle Wallet is usually set
up on a per-server basis.

I’ve configured transparent mode for my RMAN backups by issuing the CONFIGURE ENCRYPTION
FOR DATABASE ON; configuration command in Listing 4.5. Note that I’ve also changed the encryption
algorithm for all database backup sets to AES192 using the CONFIGURE ENCRYPTION ALGORITHM
'<algorithm>’; command. As this example shows, no other changes are required to existing RMAN
scripts because the CONFIGURE command saves this RMAN encryption configuration within the
database’s control file.
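
For reference, those two persistent settings are made from an RMAN session connected to the target database:

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES192';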

Controlling Backup Set Encryption Scope: CONFIGURE vs. SET. While the CONFIGURE command lets me establish and persistently control backup set encryption at the database level, I can also override that setting for a single session via the SET ENCRYPTION command from within an RMAN command script. In addition, encryption can be switched on or off for individual tablespaces with the CONFIGURE ENCRYPTION FOR TABLESPACE <tablespace_name> ON|OFF; directive. For example, even though I may have enabled encryption globally, I can still exclude selected tablespaces from encryption; contrariwise, I could globally disable encryption of RMAN backup sets and then activate it for specific tablespaces, as I’ve illustrated at the end of Listing 4.5.
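
A sketch of such tablespace-level exceptions follows; the tablespace names are illustrative and the actual Listing 4.5 may differ:

# Encrypt everything except one tablespace:
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION FOR TABLESPACE users OFF;

# Or encrypt nothing except one tablespace:
RMAN> CONFIGURE ENCRYPTION FOR DATABASE OFF;
RMAN> CONFIGURE ENCRYPTION FOR TABLESPACE tbs_encrypted ON;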

Method 2: Password Mode. In this mode, RMAN requires one additional directive to set a password
before backups can be encrypted. Since I might not have installed TDE features on every database
server – it does incur additional licensing costs, after all! – this method is more appropriate for encrypting
RMAN backups that are going to be shipped to another server via a network connection. It’s also
appropriate for RMAN backups that are going to be stored on alternate media in a vaulted, protected
location.

To implement password mode, I only need to add one additional line to my current RMAN backup script. The SET ENCRYPTION ON IDENTIFIED BY <password> ONLY; directive tells RMAN that any backup sets created during this RMAN session will be encrypted using just the specified password. I’ll demonstrate
this by first temporarily disabling transparent data encryption using the ALTER SYSTEM SET
ENCRYPTION WALLET CLOSE; command from within SQL*Plus. I’ll then specify a password string
within my RMAN script and initiate a backup for just two tablespaces, EXAMPLE and
TBS_ENCRYPTED, as shown in Listing 4.6.
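
A minimal sketch of this sequence follows; the wallet-close step runs in SQL*Plus, and the password is an illustrative placeholder rather than the one used in Listing 4.6:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

RMAN> SET ENCRYPTION ON IDENTIFIED BY MySecretPwd ONLY;
RMAN> BACKUP TABLESPACE example, tbs_encrypted;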

Method 3: Dual Mode. This method provides the best of both worlds. If an Oracle Wallet is open when I
create RMAN backup sets, then Oracle will apply encryption to them using Transparent Data Encryption
methods; otherwise, RMAN will use the specified password for encryption. Therefore, dual mode is most
useful for backups that may be utilized for restoration and recovery purposes on either side of your
company’s firewall. In Listing 4.7 I’ve shown an example of an RMAN script that implements dual-mode
encryption.
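
A sketch of a dual-mode script follows; dropping the ONLY keyword is what distinguishes dual mode from password-only mode, and the password is again an illustrative placeholder:

RMAN> SET ENCRYPTION ON IDENTIFIED BY MySecretPwd;   # no ONLY keyword: wallet or password can decrypt
RMAN> BACKUP DATABASE;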

Limitations. There are some minor but nevertheless important restrictions that Oracle 10g places on
encrypting RMAN backup sets:

• Encryption of RMAN backup sets is only available in Oracle 10gR2 Enterprise Edition, and the COMPATIBLE initialization parameter must be set to at least 10.2.0.
• Only RMAN backup sets can be encrypted; image copy backups cannot be encrypted.
• Finally, the database master encryption key can be changed or reset at any time; RMAN will still be able to restore any encrypted backup that was made under an older master key.

Using Encrypted Oracle Backup Sets During Recovery Operations

Although these encryption methods certainly guarantee the security of my backup sets, there are some
important implications when using an encrypted backup set during RMAN RESTORE and RECOVER
operations:

• If the backup set was encrypted in transparent mode, RMAN attempts to obtain the decryption password from the
Oracle Wallet only. If the wallet is not open, RMAN will return an error message and abort the RESTORE operation.
• If the backup set was encrypted in dual mode, RMAN tries to retrieve the decryption password from the Oracle Wallet
first. If the wallet isn’t open, however, RMAN then looks for a password string that’s been specified in the SET
DECRYPTION IDENTIFIED BY <password>; directive within the RMAN script. If a password can’t be found in either
location, RMAN returns an error message, and the RESTORE operation will be aborted.
• If I used the password-only mode to encrypt the backup set, RMAN only looks for the password as specified in the
SET DECRYPTION IDENTIFIED BY <password>; directive in the RMAN script. If no such directive is found, RMAN
returns an error message and aborts the RESTORE operation.

I’ll illustrate this last scenario in Listing 4.8. First, I’ll remove the datafile for the TBS_ENCRYPTED
tablespace and “bounce” the database so that the tablespace’s corresponding datafile requires
restoration and recovery via RMAN. Since I used the password-only mode when I created the most
recent backup for this datafile, RMAN issued an ORA-19913 error during this first attempt at restoration
because the password string wasn’t specified. (Note that I used the VALIDATE directive to determine if
the RESTORE operation is possible; I’ve found that this can save valuable time while attempting
restoration of an encrypted backup set.) When I reissue the RMAN command script with the proper
password-only specification, however, RMAN readily finds the datafile, restores it, and proceeds with
recovery.
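
A rough sketch of the successful second attempt follows; the password is the illustrative placeholder used earlier, and any steps needed to take the affected datafile offline and bring it back online afterward are omitted here:

RMAN> SET DECRYPTION IDENTIFIED BY MySecretPwd;
RMAN> RESTORE TABLESPACE tbs_encrypted VALIDATE;   # inexpensive check that the backup can be decrypted and read
RMAN> RESTORE TABLESPACE tbs_encrypted;
RMAN> RECOVER TABLESPACE tbs_encrypted;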

Since the database generates a new encryption key for each encrypted backup, backup encryption keys
are never stored “in the clear.” Depending upon the RMAN encryption mode I have selected, the key is
encrypted using the specified password string, the DB master key, or both. This does reveal a critical
caveat for using password-only RMAN encrypted backups: If I lose the password for the backup set I
need to restore, there is no way to restore that backup set. Of course, this offers excellent protection
should the backup set fall into the wrong hands, but it also means that I must carefully guard the
password(s) that have been set up.

Finally, it’s important to remember that backup encryption can degrade backup performance because of the overhead of encrypting all of the data in the backup. However, I may be able to offset this overhead by allocating additional channels to increase the parallelism, and therefore the speed, of the RMAN encrypted backups.

Conclusion

Oracle 10g seals virtually every potential gap in database security with its robust encryption capabilities.
DataPump Export dump sets, external tables created with the ORACLE_DATAPUMP access method,
and Recovery Manager (RMAN) backup sets can be encrypted to prevent leakage of sensitive
information. In addition, Oracle Advanced Security’s Transparent Data Encryption (TDE) features offer
simple methods to enforce encryption at the column, table, datafile, backup set, and database levels.

References and Additional Reading

Even though I’ve hopefully provided enough technical information in this article to encourage you to
explore these features, I also strongly suggest that you first review the corresponding detailed
Oracle documentation before proceeding with any experiments. Actual implementation of these features
should commence only after a crystal-clear understanding exists. Please note that I’ve drawn upon the
following Oracle 10gR2 documentation for the deeper technical details of this article:

B14191-01 Oracle Database Backup and Recovery Advanced User’s Guide

B14194-03 Oracle Database Backup and Recovery Reference

B14200-02 Oracle Database SQL Reference

B14214-01 Oracle Database New Features Guide

B14215-01 Oracle Database Utilities

B14266-01 Oracle Database Security Guide

B14268-02 Oracle Database Advanced Security Administrator’s Guide


Fine-Grained Auditing

Synopsis. Oracle 10g extends the original fine-grained auditing (FGA) features that Oracle 9iR2
introduced, including expanded capacity to audit for specific events based on statements issued, the
columns that a SQL statement has accessed, and even the subset of data that the statement is affecting.
This article – the first in an ongoing series on Oracle 10g Security – demonstrates how to implement
FGA in Oracle 10g and illustrates how to take advantage of the newest Oracle 10g Release 2 (10gR2)
FGA features.

Imagine this scenario: Your CIO calls you into a confidential meeting with the Director of Accounting at
your company and informs you that a hitherto highly-trusted employee is suspected of stealing vast sums
of money from the company. “He has been quite ingenious,” says the Director. “He edited a Vendor’s
credit card number, changed it to match that of his personal credit card number, then created a series of
fake invoices for the vendor. Once the invoices were created, he issued credit memos for the invoices,
thus generating large credits for his credit card account. And then he simply deleted the invoices and
changed the credit card numbers back to the original values.”

The lines of concern thicken on the Director’s face as she continues. “And we think he’s still trying to do this, but at maddeningly infrequent intervals! Can you help us catch him? We need solid proof of what he’s done so we can prosecute him and retrieve the funds that he embezzled.” Fortunately, Oracle 10g gives you several tricks up your sleeve. You tell the Director and CIO about your plan to catch a thief … and they smile broadly. “Congratulations,” says the CIO, “you’ve just earned your pay for the week.”

Basic Auditing: An Overview

All unintended drama aside, this situation is probably not as infrequent as we might imagine. Data volumes continue to grow, and the need to keep sensitive data secure will continue unabated for the foreseeable future. While Oracle 10g provides us with several excellent tools to keep sensitive data secure – a topic for the next article in this series – DBAs also need to know when that sensitive data is being touched by queries and manipulated by DML statements.

Oracle has long provided a set of standard auditing tools for observing and tracking the activity within a
database’s tables, sessions, and objects with the AUDIT command. For example, if I wanted to audit for
any activity against the HR.EMPLOYEES table, I would first have to change the value for the
AUDIT_TRAIL initialization parameter to DB and bounce the database to activate auditing. Then I could
simply issue the following command to perform the standard audit:

AUDIT ALL ON HR.EMPLOYEES BY ACCESS;

I could then query the DBA_AUDIT_TRAIL view to see if any user session had issued a SELECT,
INSERT, UPDATE, or DELETE statement against that table, as well as view what SQL statements had
been issued.
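
For reference, a minimal sketch of that setup and a follow-up query against the audit trail might look like this; the WHERE clause is illustrative, and the SQL_TEXT and SQL_BIND columns are populated only when the extended form of the parameter (such as DB_EXTENDED) is used:

-- AUDIT_TRAIL is a static parameter, so change the SPFILE and restart the instance:
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;

-- After the restart and the AUDIT command above, review the recorded activity:
SELECT username,
       timestamp,
       action_name,
       owner,
       obj_name
  FROM dba_audit_trail
 WHERE owner = 'HR'
   AND obj_name = 'EMPLOYEES'
 ORDER BY timestamp;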

This approach, however, does leave a lot to be desired. For one thing, I can only limit my auditing
activities to one type of SQL statement, or all statements. Also, with AUDIT, I have no choice but to audit
each and every statement that’s been applied against a table. In our scenario, I really need to focus on
only a few columns in the database tables that make up the Accounts Payable system – for example, the
Vendor’s credit card number -- but standard auditing will return all SQL statements regardless of whether
this column was accessed.

Fine-Grained Auditing (FGA) Policies and DBMS_FGA

Oracle 9i Release 2 provided us with capabilities to perform fine-grained auditing (FGA) through a new
package named DBMS_FGA. This package allows me to implement auditing at an extremely low level of
granularity against any table in the database through a special database object called an FGA policy.

Just as with standard auditing, I can implement an FGA policy to tell Oracle which table(s) I wish to audit for unexpected activity, and the policy also tells Oracle which type(s) of SQL statements (SELECT, INSERT, UPDATE, or DELETE) should be audited. Oracle 10g further improves fine-grained auditing via the DBMS_FGA package and offers significant upgrades to the features introduced in Oracle 9i:

Tighter Column References. An FGA policy ensures that auditing is performed only when one or more specific column(s) in a table or view are referenced. For example, I can tell Oracle to audit a SELECT
statement only when it references one or more specified columns. I can also tell Oracle that a statement
should only be audited when any one of the columns listed is found in the statement, or only when all of
the columns are found.

Conditional Auditing. I can configure an FGA policy so that auditing is only triggered when a specific
data subset has been affected. For example, I can instruct Oracle to trigger an audit only when a row of
data is changed via an UPDATE statement that meets the conditional criteria specified. Oracle 10g also
allows specification of a NULL condition if there are no conditions to apply.

Combined Audit Trails. In Oracle 10gR2, the standard and fine-grained auditing views have been
combined for easier viewing in DBA_COMMON_AUDIT_TRAIL. Also, it’s now possible to write out FGA
audit trail information in either XML or extended XML formats to external files. A new Oracle 10g view,
V$XML_AUDIT_TRAIL, can be queried directly to view the contents of the generated XML audit trail
files. I can use this feature to map out specific operating system directories for storage of the XML audit
trail logs, thus providing an even more secure place to which the audit trails can be written.

Event Handling. Finally, I can instruct Oracle to trigger a call to an event handler when a specific event
occurs. For example, if a particularly sensitive audit event is raised, I might want to send an e-mail or
page to someone in my IT shop’s data security division so that when the event happens we can be
prepared to take immediate action against the perpetrator of the violation.

Implementing Fine-Grained Auditing: A Demonstration

To simulate the security scenarios I mentioned at the start of this article, I’ll first construct a new Accounts
Payable (AP) schema and three new tables (AP.VENDORS, AP.INVOICES, and AP.INVOICE_DETAILS)
within that schema. To show that Oracle 10g now allows views to be audited as well, I’ll also build a
reporting view, AP.RV_INVOICE_DETAILS, that joins together these three tables in READ ONLY mode.
See Listing 1.1 for links to the corresponding code to construct the schema, including an example of a
stored procedure that functions as a handler package for any FGA-triggered event.

Now that the new schema is built and some sample data has been populated, I’ll construct new FGA policies that reference those objects. Listing 1.2 shows how to build the new policies and then interrogate the DBA_AUDIT_POLICIES data dictionary view to see the results. Note that an FGA policy can be created in DISABLED status by setting the ADD_POLICY procedure’s ENABLE parameter to FALSE; I’ve purposely left one policy disabled to illustrate how to achieve this.
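
Since Listing 1.2 isn’t reproduced inline here, the following is a rough sketch of what one such DBMS_FGA.ADD_POLICY call might look like; the policy name, audited column, and handler procedure are hypothetical placeholders rather than the article’s actual objects:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema     => 'AP',
    object_name       => 'VENDORS',
    policy_name       => 'VENDORS_CC_AUDIT',      -- hypothetical policy name
    audit_column      => 'CREDIT_CARD_NBR',       -- hypothetical column name
    audit_column_opts => DBMS_FGA.ANY_COLUMNS,    -- audit when ANY listed column is referenced
    statement_types   => 'SELECT,UPDATE',
    handler_schema    => 'AP',
    handler_module    => 'FGA_EVENT_HANDLER',     -- hypothetical handler procedure
    enable            => TRUE
  );
END;
/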

Now that the test data is loaded and the FGA policies are in place, I can demonstrate how FGA works. I’ll
issue a series of SQL statements against the tables and view in the AP schema to demonstrate how
Oracle 10g tracks the execution of the statements. See Listing 1.3 for the end results of these
demonstrations, and notice that statements covered by the second FGA policy are simply not audited because that policy is not yet enabled.

In Listing 1.4, I’ve constructed some queries against the FGA audit trail data dictionary view,
DBA_FGA_AUDIT_TRAIL, that contains the results of any FGA policy that Oracle applied to the
statements and data within the AP schema during the prior set of unit tests.
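
A simple query against that view might look like the following sketch (the WHERE clause and column list are illustrative):

SELECT db_user,
       timestamp,
       object_name,
       policy_name,
       sql_text
  FROM dba_fga_audit_trail
 WHERE object_schema = 'AP'
 ORDER BY timestamp;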

Finally, it’s extremely simple to disable and drop any existing FGA policy, as shown in Listing 1.5. This
code shows how to disable the FGA policy against table AP.INVOICES, as well as how to drop the
existing FGA policy against the AP.RV_INVOICE_DETAILS reporting view.
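
Since Listing 1.5 isn’t reproduced inline here, a rough sketch of those two calls follows; the policy names are hypothetical placeholders:

BEGIN
  -- Disable (but keep) the policy on AP.INVOICES:
  DBMS_FGA.DISABLE_POLICY(
    object_schema => 'AP',
    object_name   => 'INVOICES',
    policy_name   => 'INVOICES_AUDIT');             -- hypothetical policy name

  -- Remove the policy on the reporting view entirely:
  DBMS_FGA.DROP_POLICY(
    object_schema => 'AP',
    object_name   => 'RV_INVOICE_DETAILS',
    policy_name   => 'RV_INVOICE_DETAILS_AUDIT');   -- hypothetical policy name
END;
/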

Conclusion

Fine-grained auditing is an excellent tool for tracking changes to individual rows and columns of data
within database tables and views. Oracle 10g has significantly improved these features by combining the
FGA audit trail view with the standard audit trail view, increasing the granularity of the filtering that can be
applied against the audited data, and allowing for audit data to be written out to XML-format files for
additional security.

In the next article, I’ll ramp up our discussion of Oracle 10g Security features by demonstrating how to
prevent access to data before it can be viewed or changed with Oracle 10gR2’s enhanced row-level
security features.

References and Additional Reading

Even though I’ve hopefully provided enough technical information in this article to encourage you to
explore these features, I also strongly suggest that you first review the corresponding detailed
Oracle documentation before proceeding with any experiments. Actual implementation of these features
should commence only after a crystal-clear understanding exists. Please note that I’ve drawn upon the
following Oracle 10gR2 documentation for the deeper technical details of this article:

B14214-01 Oracle Database New Features Guide

B14231-01 Oracle Database Administrator’s Guide

B14258-01 PL/SQL Packages and Types Reference

B14266-01 Oracle Database Security Guide


Virtual Private Database
Synopsis. Oracle 10gR2 enhances and expands data security with new row-level security features that
ensure a user can only view, add, or modify data based on specific virtual private database (VPD)
rulesets. This article – the second in this series – discusses how these new features improve upon those
in prior releases and demonstrates how to implement VPD in any Oracle 10gR2 database.

To continue the scenario from the previous article in this series, imagine that your CIO calls you into yet
another clandestine meeting with the Director of Accounting at your company a few months after the first
confidential gathering. “Good news!” says the Director. “The audit trails that you put in place with that
FGA thing you talked about helped us catch our thief. We caught her red-handed when she created a
bunch of fake credit memos for a real vendor. She admitted she’d updated the vendor’s credit card
number to match her own card number and then issued several credit memos totaling just under
$100,000.00. Your audit reports were absolutely essential evidence to our case. Well done!” Your CIO
beams at you as well.

But now the Director’s brow once again furrows. “And that got me to thinking … how do we prevent this
in the future? Even though we’re tightening our background check procedures, there’s nothing to stop
anyone devious enough from doing this again, is there?” Your CIO chimes in, “Well, not without rewriting
a lot of our custom application software for your team. Unless …” They both turn and look at you for
another miracle. Yet again, Oracle 10g’s robust security features come to your rescue. You explain to the
Director and CIO your plans to tighten database security without having to change any application code
… and once again, there are smiles on their faces. “Outstanding!” says your CIO. “I think we’ll keep you
around.”

Virtual Private Database: An Overview

In the previous article, I demonstrated how to determine which users are accessing which data elements
in an Oracle 10gR2 database with Fine-Grained Auditing (FGA), an extremely powerful set of features for
tracking questionable or fraudulent transactions. While FGA can certainly assist me in tracking down
security violations after they have happened, as the old proverb says, the horse has already left the barn,
and I’m just closing the gate behind it. What I’d really like to do is prevent an application or user session
from accessing and modifying sensitive data.

Some shops I’ve worked in have implemented complex security rules using specially constructed views
to limit access to their database’s critical tables. While this is certainly a noble and elegant solution, it is
also extremely time-consuming to plan, develop, test, and implement. Also, view-based security does
have a significant disadvantage: if a user is sophisticated and curious (or malicious!) enough, he will eventually find a way to view the view’s definition and discover the restrictions it places upon the underlying base table(s). Ideally, I’d like to make sure that a user isn’t even aware that he’s been prohibited from accessing the data in the first place, as that may help quell any curiosity about where the data is stored in the database system.

The introduction of the DBMS_RLS package in Oracle 9i offered an excellent alternative to the custom-
written view implementation of security. As its name implies, DBMS_RLS allows a DBA to enforce row-level security against specific tables in the database. Whenever a row is read, added, modified, or deleted, Oracle applies fine grained access control (FGAC) rules that ensure the row’s values meet the strictures of the predefined security policy.

The security policy enforces these restrictions by adding a hidden predicate to each query or DML statement that attempts to access the data. For example, if a query attempts to access a row and the security policy determines that the user has insufficient permission to access it, Oracle filters the row from the query’s result set. On the other hand, if a DML operation attempts to process a row that the security policy says the user may not access, Oracle blocks the operation against that row. (I’ll explain more about how these predicates are constructed and applied to data when I demonstrate how to build application security context functions later in this article.)

However, the implementation of DBMS_RLS in releases prior to Oracle 10gR2 still had some major
drawbacks:

Limited Scope Control. Fine grained access control rules could only be applied to one table at a time. If
I needed to secure multiple tables, I had to create a separate security policy for each table. Also, the
security policy lacked column-level specificity. For example, if I was only concerned with applying
security rules against the CREDIT_LIMIT column of the AP.VENDORS table, I was forced to apply the
security policy to the entire table regardless.

Finally, fine grained access control rules could not be applied directly against a view to secure it. For
example, if I needed to restrict access to the AP.RV_INVOICE_DETAILS view, I would be forced to
create a separate security policy for the AP.VENDORS and AP.INVOICES tables that underlie that view.

Repetitive Execution. In prior releases, Oracle was forced to evaluate the hidden predicate every time a
user accessed the secured table. This meant that the predicate tended to be executed extremely
frequently – even when the predicate didn’t change. For example, if I needed to restrict a user session
from querying any entries in the AP.VENDORS table whose value for CREDIT_LIMIT exceeded $5000,
and that credit limit restriction never changed for the duration of that user session, Oracle still parsed the
predicate each time the query was executed. If that query was part of an OLTP application and tended to
be evaluated hundreds of times per minute, a lot of parse time was essentially wasted.

Limited Predicate Size. Depending upon how complex the security restrictions needed to be applied to
a query or DML operation, I often heard complaints from users because their session received an ORA-
03113 error when Oracle attempted (and failed!) to parse a correspondingly complex predicate. Indeed,
I’ve seen predicates that easily exceeded the maximum size of 4K.

Upgraded FGAC Security Features in Oracle 10gR2

The good news is that Oracle 10gR2 overcomes all of these limitations:

• A single security policy can now enforce restrictions on multiple tables. In addition, data that’s accessed via indexes
and synonyms can be secured as well.
• Similar to this release’s enhancements for DBMS_FGA, DBMS_RLS can now define FGAC business rules that are
evaluated only when one or more specific columns are accessed.
• Also, a security policy can be applied directly to a view without having to define policies to secure the underlying
table(s) for that view.
• The maximum size of a security policy’s predicate has been expanded eightfold to 32K.

The most impressive set of enhancements, however, encompasses the ability to tell Oracle how often a predicate should be evaluated. In the original release of DBMS_RLS, the predicate had to be evaluated every time the policy was enforced. This original mode is retained as the DYNAMIC policy type in Oracle 10gR2. Two additional security policy types are also available to control how often a predicate needs to be parsed: CONTEXT_SENSITIVE and STATIC.

Context-Sensitive Policies. The underlying security policy function for a CONTEXT_SENSITIVE
security policy will only be executed under two conditions: when either the SQL statement is first parsed,
or when the SQL statement is being executed and Oracle 10gR2 detects that the local application
context has changed since the last time it was executed. This tends to eliminate an enormous amount of
unnecessary statement parsing, especially when the predicate will not change dramatically between
invocations of the statement.

Static Policies. Unlike a CONTEXT_SENSITIVE policy, the predicate of a STATIC policy is only
evaluated when a user session is initiated, and it will never be re-evaluated as long as the session
persists because the predicate is simply cached in the SGA. This type of policy is useful for predicates
that will simply not change for the duration of the session. For example, if I need to restrict access to all
deactivated entries in the AP.VENDORS table, the predicate that handles that limitation would be
ACTIVE_IND <> ‘N’. This predicate would never need to change within the session; therefore, Oracle
10gR2 will parse this predicate only once, store it in the library cache, and never re-evaluate it. This
obviously saves even more significant parsing time than does a CONTEXT_SENSITIVE security policy.

Shared Policies. In addition, Oracle 10gR2 offers a shared version of each of these policy types when it
would be advantageous to apply the same security privileges across multiple database objects. These
two policy types are SHARED_CONTEXT_SENSITIVE and SHARED_STATIC, and they work identically to their non-shared security policy counterparts.

Oracle recommends testing a VPD policy in DYNAMIC mode before attempting to activate the policy in
either CONTEXT_SENSITIVE or STATIC mode. I heartily concur with this suggestion! In my experience,
it helped me to alleviate a lot of frustration while still learning how to best use the different VPD policy
types effectively.

Putting It All Together: A Demonstration

Now that I’ve hopefully explained the philosophy and theory behind VPD, it’s time to demonstrate its
effectiveness. I’ll use the same Accounts Payable schema objects I created in the prior article, and I’ll
apply the following business rules to the corresponding data using VPD security policies for three
different types of users:

Accounts Payable Clerk (APCLERK)

• An Accounts Payable Clerk may not view a Vendor’s credit card number if the Vendor has a credit limit of $25,000 or
above.
• An Accounts Payable Clerk is not permitted to create a new Vendor with a credit limit of $25,000 or above.
• An Accounts Payable Clerk is not permitted to upgrade the credit limit of an existing Vendor to above $25,000.
• An Accounts Payable Clerk is not permitted to access any Invoice marked as a Credit Memo.
• An Accounts Payable Clerk cannot change any Invoice from its normal status into a Credit Memo.

Accounts Payable Team Lead (APTLEAD)

• An Accounts Payable Team Lead may not view a Vendor’s credit card number if the Vendor has a credit limit of
$150,000 or above.
• An Accounts Payable Team Lead is not permitted to create a new Vendor with a credit limit of $150,000 or above.
• An Accounts Payable Team Lead is not permitted to upgrade the credit limit of an existing Vendor to above $150,000.
• An Accounts Payable Team Lead, however, is not restricted from accessing any Invoice marked as a Credit Memo,
and the Lead can also change a normal Invoice into a Credit Memo.

Accounts Payable Director (APDIR)

• The Accounts Payable Director has complete rights to change any data in the AP system.

Finally, what if a user falls into none of these groups? To play it extra-safe, I’ll make sure that my
application security context package will construct and apply the appropriate predicates to keep an
unapproved user session from seeing any credit card or credit limit information about a Vendor.

First, I’ll create three new users, APCLERK, APTLEAD, and APDIR, and assign them the appropriate
system privileges, including the APP_SECURED role that was established in the prior article, as shown
in Listing 2.1.
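
A minimal sketch of that setup follows; the passwords are placeholders, and the actual Listing 2.1 may grant additional privileges:

CREATE USER apclerk IDENTIFIED BY apclerk_pwd;
CREATE USER aptlead IDENTIFIED BY aptlead_pwd;
CREATE USER apdir   IDENTIFIED BY apdir_pwd;

GRANT CREATE SESSION, app_secured TO apclerk, aptlead, apdir;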

Next, I’ll construct a special package, AP.APP_SECURITY_CONTEXT, that implements the business
rules described previously. (See Listing 2.2 for the code for this package.) The package comprises one
procedure, SETUSERINFO, and two functions, CREDIT_LIMIT and CREDIT_MEMO. The procedure will
be used to assign the appropriate values for each different type of user accessing the data in the
Accounts Payable system, and the two functions will build the actual predicates to enforce the Accounts
Payable security policies.

Listing 2.3 shows how to create a CONTEXT object in Oracle 10gR2. This context, VPD_CONTEXT,
will provide a container for values assigned whenever a user session is established, and the
AP.APP_SECURITY_CONTEXT package is defined as the only way to modify data inside the context.
VPD_CONTEXT will be populated via the AP.ON_LOGON trigger whenever a new user session is
established via a call to the SETUSERINFO procedure.
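
A rough sketch of those two pieces follows; it assumes SETUSERINFO takes no arguments and determines the session user internally (for example, via SYS_CONTEXT('USERENV','SESSION_USER')), whereas the actual Listing 2.3 may pass the username explicitly:

CREATE OR REPLACE CONTEXT vpd_context USING ap.app_security_context;

CREATE OR REPLACE TRIGGER ap.on_logon
  AFTER LOGON ON DATABASE
BEGIN
  ap.app_security_context.setuserinfo;   -- populates VPD_CONTEXT for the new session
END;
/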

The code in Listing 2.4 establishes four security policies for the Accounts Payable system (a sketch of one such DBMS_RLS.ADD_POLICY call follows the list below):

• Policy AP_CREDIT_CARD ensures that Vendor credit card numbers cannot be viewed during a SELECT operation if
the Vendor’s credit limit is greater than the one specified for the user session and either the CREDIT_CARD or
CREDIT_LIMIT columns are included in the query. The AP.APP_SECURITY_CONTEXT package’s CREDIT_LIMIT
function is called to enforce this business rule.
• Policy AP_CREDIT_LIMIT ensures that a Vendor entry cannot be viewed, created, or updated if the Vendor’s credit
limit is greater than the one specified for the user session. This policy is also enforced by the CREDIT_LIMIT function.
• Policy AP_CREDIT_MEMO enforces a restriction on creation of a new Credit Memo, and it’s enforced by the
CREDIT_MEMO function. This policy also ensures that an existing Invoice can’t be transformed into a Credit Memo
(i.e. by changing the Invoice’s value for INVOICE_TYPE from ‘C’ to ‘D’).
• Finally, policy AP_RPTG_READONLY ensures that no user accessing the reporting view named
AP.RV_INVOICE_DETAILS can see any Vendor, Invoice, or Invoice Detail information for a Vendor whose credit limit
is greater than the one specified for the user session. Again, the CREDIT_LIMIT function is called to enforce this
business rule. Note that since all other users besides AP system users are sent a value of zero for their credit limit,
this policy effectively limits those users from seeing any data if either the CREDIT_CARD or CREDIT_LIMIT columns
are included in the query.
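
Since Listing 2.4 isn’t reproduced inline here, the following is a rough sketch of how the AP_CREDIT_LIMIT policy above might be registered; the statement types, UPDATE_CHECK setting, and policy type are assumptions that may differ from the actual listing:

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'AP',
    object_name     => 'VENDORS',
    policy_name     => 'AP_CREDIT_LIMIT',
    function_schema => 'AP',
    policy_function => 'APP_SECURITY_CONTEXT.CREDIT_LIMIT',
    statement_types => 'SELECT,INSERT,UPDATE',
    update_check    => TRUE,               -- reject INSERTs/UPDATEs whose rows violate the predicate (ORA-28115)
    policy_type     => DBMS_RLS.DYNAMIC    -- start in DYNAMIC mode, per the recommendation above
  );
END;
/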

All the pieces are now in place, so it’s time to test the policies. In Listing 2.5 I’ve listed several queries
that should prove if these security policies are working properly for SELECT statements, and in Listing
2.6 I’ve set up similar tests for INSERTs and UPDATEs.

To prove out my security policies, I first logged in as the AP Director user (APDIR) and executed all DML
statements in these two sets of unit tests. As expected, the queries retrieved all data, and the DML
statements completed successfully. Since this user should have full access to all data in the AP schema,
this test turned out as expected. Listing 2.7 shows the results from the queries for the sake of later
comparisons.

Next, I logged in as the AP Team Lead user (APTLEAD) and ran the same set of queries. Since there
are no Vendors with a credit limit above $150,000, the results for all queries matched those in Listing 2.7,
which was as expected. However, when I attempted to add a new Vendor entry with a credit limit above
$150,000, or when I attempted to update an existing Vendor entry so that its credit limit exceeded
$150,000, the VPD policy was triggered, and as expected, both of the operations were rejected. As
shown in Listing 2.8, Oracle 10g raised an exception and returned an ORA-28115 (policy with check option violation) error message.

So far, so good! Now for the final set of unit tests. When I ran the same queries against the AP data for
the AP Clerk user (APCLERK), I noticed significant differences. As I expected, data was returned only for
those Vendors whose credit limit didn’t exceed $25,000. Moreover, when I attempted to add a new
Vendor entry with a credit limit above $25,000, the VPD policy was triggered. However, note that for an
attempted update of a Vendor whose credit limit already exceeds $25,000, the VPD policy still fires, but
the predicate that’s generated simply prevents the DML from executing against any rows. These results
are shown in Listing 2.9.

Conclusion

Even though Oracle 9i introduced the concept of Virtual Private Database (VPD), it took several enhancements in Oracle 10gR2 to significantly improve its breadth and flexibility. VPD now offers extremely granular row-level security enforcement via fine grained access control (FGAC), greatly reduced parsing of commonly used predicates within each user session, and support for much larger, more complex predicates for security enforcement.

References and Additional Reading

Even though I’ve hopefully provided enough technical information in this article to encourage you to
explore these features, I also strongly suggest that you first review the corresponding detailed
Oracle documentation before proceeding with any experiments. Actual implementation of these features
should commence only after a crystal-clear understanding exists. Please note that I’ve drawn upon the
following Oracle 10gR2 documentation for the deeper technical details of this article:

B14214-01 Oracle Database New Features Guide

B14231-01 Oracle Database Administrator’s Guide

B14258-01 PL/SQL Packages and Types Reference

B14266-01 Oracle Database Security Guide
