Oracle manages undo space using the undo tablespace instead of rollback segments
Oracle manages the size and number of undo segments
Relieves the DBA of creating and monitoring rollback segments
New type of tablespace required – UNDO TABLESPACE
New initialization parameters
UNDO_MANAGEMENT – Decides the type of undo management.
AUTO specifies automatic undo management.
MANUAL specifies the pre-9i behavior, where the DBA manages rollback segments.
MANUAL is the default.
UNDO_TABLESPACE – Name of the undo tablespace. This is a dynamic parameter,
so you can switch the active undo tablespace. You may have multiple undo
tablespaces in the database, but only one can be active at any given time. If you do
not specify this parameter, Oracle uses the first available undo tablespace.
UNDO_RETENTION – Specifies (in seconds) how long committed undo
information is retained in the database. The default is 900 seconds. Keep this value
high enough to avoid “snapshot too old” errors. This is also a dynamic parameter,
which can be changed using ALTER SYSTEM.
UNDO_SUPPRESS_ERRORS – Suppresses errors caused by manual undo
management operations while running in automatic mode. If your application issues
the SET TRANSACTION USE ROLLBACK SEGMENT statement, setting this
parameter to TRUE suppresses the resulting error under automatic undo management.
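A minimal sketch of putting these parameters to work (tablespace and file names are illustrative):

```sql
-- init.ora / SPFILE settings (UNDO_MANAGEMENT is static; requires a restart)
-- UNDO_MANAGEMENT = AUTO
-- UNDO_TABLESPACE = UNDOTBS1

-- Create an additional undo tablespace
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/orcl/undotbs2_01.dbf' SIZE 500M;

-- Switch the active undo tablespace and raise retention, both dynamically
ALTER SYSTEM SET undo_tablespace = undotbs2;
ALTER SYSTEM SET undo_retention = 3600;
```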
ASSM can be specified only with locally managed tablespaces (LMTs). The CREATE
TABLESPACE statement has a new clause, SEGMENT SPACE MANAGEMENT.
Oracle uses bitmaps to manage the free space. A bitmap, in this case, is a map that
describes the status of each data block within a segment with respect to the amount of
space in the block available for inserting rows. As more or less space becomes available
in a data block, its new state is reflected in the bitmap. Bitmaps allow Oracle to manage
free space more automatically.
Here is an example:
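A minimal sketch (tablespace and datafile names are illustrative):

```sql
-- ASSM requires a locally managed tablespace
CREATE TABLESPACE mytbs
  DATAFILE '/u01/oradata/orcl/mytbs01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
```

With SEGMENT SPACE MANAGEMENT AUTO, the PCTUSED, FREELISTS, and FREELIST GROUPS settings of segments in the tablespace are ignored.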
One huge benefit of ASSM is that it reduces the “buffer busy waits” you see on
segments.
Use the RENAME COLUMN clause of ALTER TABLE to rename a column. The
new column name must not be the same as any other column name in the
table.
Function-based indexes and check constraints that depend on the renamed column remain
valid.
Dependent views, triggers, domain indexes, functions, procedures, and packages are
marked INVALID. Oracle attempts to revalidate them when they are next accessed, but
you may need to alter these objects with the new column name if revalidation fails.
You cannot combine this clause with any of the other column_clauses in the same
statement.
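For example (table and column names are illustrative):

```sql
-- Rename a column in place; dependent views and triggers become INVALID
ALTER TABLE employees RENAME COLUMN comm TO commission_pct;
```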
You may enable compression on a table at the time of table creation or by altering the
table. Remember that the existing data in the table is neither compressed nor
uncompressed when you do the ALTER.
CREATE TABLE MYTABLE (
COL1 VARCHAR2 (20),
COL2 DATE)
TABLESPACE MYTABLESPACE
NOLOGGING
COMPRESS
PCTFREE 0;
The data compression is transparent to the user. You run queries against the table the
same way you did before. Oracle compresses data blocks only when the data is loaded
via direct path, for example direct-path INSERT (INSERT /*+ APPEND */), SQL*Loader
direct path, CREATE TABLE ... AS SELECT, or ALTER TABLE ... MOVE.
Compression is suitable for large tables where updates and deletes are close to none. If
there are updates or deletes, you may end up using more space: to update, Oracle has to
uncompress the row and insert it again, and a deleted row frees up some space that may
not be sufficient for the next inserted row, because conventional inserts are not
compressed and direct-load inserts always load above the HWM.
You can compress either the whole table or selected partitions. It may be a good idea to
compress the older data in a partitioned table; to do this, you have to perform an
ALTER TABLE ... MOVE PARTITION ... COMPRESS.
Another place to use compression is when you create materialized views, because most
MVs are read-only. If the MV already exists, you may do an
ALTER MATERIALIZED VIEW ... COMPRESS.
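A sketch of both cases (object names are illustrative):

```sql
-- Rewrite an existing partition's data in compressed form
ALTER TABLE sales_history MOVE PARTITION sales_2000 COMPRESS;

-- Mark a materialized view for compression; takes effect on the next
-- complete (direct-path) refresh
ALTER MATERIALIZED VIEW sales_mv COMPRESS;
```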
Restrictions:
You cannot specify data segment compression for an index-
organized table, for any overflow segment or partition of an overflow
segment, or for any mapping table segment of an index-organized
table.
You cannot specify data segment compression for hash partitions or
for either hash or list sub-partitions.
You cannot specify data segment compression for an external table.
Generally, keys in an index have two pieces, a grouping piece and a unique piece. If the
key is not defined to have a unique piece, Oracle provides one in the form of a rowid
appended to the grouping piece. Key compression is a method of breaking off the
grouping piece and storing it so it can be shared by multiple unique pieces.
Key compression is achieved by breaking the index entry into two pieces – a prefix entry
(or the grouping piece) and the suffix entry (the unique piece). Key compression is done
within an index block but not across multiple index blocks. Suffix entries form the
compressed version of index rows. Each suffix entry references a prefix entry, which is
stored in the same index block as the suffix entry.
Although key compression reduces the storage requirements of an index, it can increase
the CPU time required to reconstruct the key column values during an index scan. It also
incurs some additional storage overhead, because every prefix entry has an overhead of 4
bytes associated with it.
You can specify an integer along with the COMPRESS clause, which specifies the
number of prefix columns to compress. For unique indexes, the valid range of prefix
length values is from 1 to the number of key columns minus 1. The default is the number
of key columns minus 1. For non-unique indexes, the valid range is from 1 to the number
of key columns. The default is the number of key columns.
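For example, on a concatenated index (names are illustrative):

```sql
-- Share the leading (grouping) column REGION among the entries
-- in each leaf block; only the suffix (YEAR) is stored per row
CREATE INDEX sales_ix ON sales_history (region, year) COMPRESS 1;
```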
Since flashback query is done using the “AS OF” clause of the SELECT statement, the
developers/users do not need any administrative privileges or DBA intervention. While
you query the old data, the table with current data is available for other users.
The flashback clause of the SELECT statement follows the table name and takes the
form AS OF SCN or AS OF TIMESTAMP.
Let’s demonstrate the flashback query using an example. Notice that the COMMIT is
what completes a transaction.
By writing proper subqueries and UPDATE statements you can recover the rows.
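A minimal sketch (table name and time window are illustrative):

```sql
-- Query the table as it stood five minutes ago
SELECT * FROM employees
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
 WHERE employee_id = 100;

-- Reinsert rows deleted (and committed) since then
INSERT INTO employees
  SELECT * FROM employees
    AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
   WHERE employee_id NOT IN (SELECT employee_id FROM employees);
```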
GRANT ANY PRIVILEGE still gives the grantee the ability to grant any system privilege.
When you ALTER the DEFAULT TEMPORARY TABLESPACE for the database,
Oracle reassigns the temporary tablespace of the users with the default assignment to the
new tablespace. Here is an example:
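A sketch of the switch (tablespace name is illustrative):

```sql
-- Users assigned the old default temporary tablespace are
-- automatically reassigned to TEMP2
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
```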
A default temporary tablespace cannot be taken offline until a new default temporary
tablespace is online
If you specify the EXTENT MANAGEMENT LOCAL clause for the SYSTEM
tablespace when creating a database, the database must have a default temporary
tablespace, because a locally managed SYSTEM tablespace cannot store temporary
segments.
If you specify EXTENT MANAGEMENT LOCAL but you do not specify the
DATAFILE clause, you can omit the default_temp_tablespace clause. Oracle will
create a default temporary tablespace called TEMP with one datafile of
size 10M with autoextend disabled.
If you specify both EXTENT MANAGEMENT LOCAL and the DATAFILE clause,
then you must also specify the default_temp_tablespace clause and explicitly
specify a datafile for that tablespace.
Resumable space allocation can be used for the following classes of errors: out-of-space
conditions, maximum-extents-reached conditions, and space quota exceeded errors.
View                  Description
DBA_RESUMABLE,        These views contain rows for all currently executing or
USER_RESUMABLE        suspended resumable statements. They can be used by a DBA,
                      an AFTER SUSPEND trigger, or another session to monitor the
                      progress of, or obtain specific information about, resumable
                      statements.
V$SESSION_WAIT        When a statement is suspended, the session invoking the
                      statement is put into a wait state. A row is inserted into
                      this view for the session with the EVENT column containing
                      "statement suspended, wait error to be cleared".
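A sketch of enabling resumable allocation for a session (timeout and name are illustrative):

```sql
-- Suspend instead of failing on space errors, for up to one hour
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'nightly load';

-- ... run the large INSERT or index build here ...

ALTER SESSION DISABLE RESUMABLE;
```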
If you omit the partition name, then Oracle assigns partition names of the form SYS_Pn.
The DEFAULT keyword, introduced in Oracle9i release 2, creates a partition into which
Oracle will insert any row that does not map to another partition. Therefore, you can
specify DEFAULT for only one partition, and you cannot specify any other values for that
partition. Further, the default partition must be the last partition you define (similar to the
use of MAXVALUE for range partitions).
The following are the versions and partitioning methods available, just to refresh your
memory:
You create an external table using the ORGANIZATION EXTERNAL clause of the
CREATE TABLE statement.
To create an external table, you must create a DIRECTORY in Oracle and have the READ
object privilege on the directory in which the external data resides. Also, no constraints
are permitted on external tables.
External tables can be used to load data into a database where you have to do some data
manipulation. This avoids a couple of steps in the conventional method, where you would
load the data into a temporary table and then load the destination tables using queries.
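A minimal sketch (directory path, file name, and columns are illustrative):

```sql
CREATE DIRECTORY ext_dir AS '/u01/app/loads';
GRANT READ ON DIRECTORY ext_dir TO scott;

CREATE TABLE emp_ext (
  empno NUMBER(4),
  ename VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);

-- Load and manipulate in a single statement
INSERT INTO emp (empno, ename)
  SELECT empno, UPPER(ename) FROM emp_ext;
```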
Dynamic sampling is used to:
Estimate single-table predicate selectivities when collected statistics cannot be used
or are likely to lead to significant errors in estimation.
Estimate table cardinality for tables without statistics or for tables whose statistics
are too out of date to trust.
The OPTIMIZER_DYNAMIC_SAMPLING parameter controls the feature. A value of 0
means dynamic sampling will not be done. A value of 1 (the default) means dynamic
sampling will be performed if all of the following conditions are true:
There is more than one table in the query.
Some table has not been analyzed and has no indexes.
The optimizer determines that a relatively expensive table scan would be required
for this unanalyzed table.
Increasing the value of the parameter results in more aggressive application of dynamic
sampling, in terms of both the type of tables sampled (analyzed or unanalyzed) and the
amount of I/O spent on sampling.
The following existing parameters became modifiable with ALTER SYSTEM in Oracle9i:
SHARED_POOL_SIZE
LARGE_POOL_SIZE
LOG_CHECKPOINTS_TO_ALERT
SERVICE_NAMES
LOCAL_LISTENER
OPEN_CURSORS
The following are the new parameters introduced in Oracle9i that are dynamic.
NAME DESCRIPTION
archive_lag_target Maximum number of seconds of redo the standby could lose
db_16k_cache_size Size of cache for 16K buffers
db_2k_cache_size Size of cache for 2K buffers
db_32k_cache_size Size of cache for 32K buffers
db_4k_cache_size Size of cache for 4K buffers
db_8k_cache_size Size of cache for 8K buffers
db_cache_advice Buffer cache sizing advisory
db_cache_size Size of DEFAULT buffer pool for standard block size buffers
db_create_file_dest default database location
db_create_online_log_dest_1 online log/controlfile destination #1
db_create_online_log_dest_2 online log/controlfile destination #2
db_create_online_log_dest_3 online log/controlfile destination #3
db_create_online_log_dest_4 online log/controlfile destination #4
db_create_online_log_dest_5 online log/controlfile destination #5
db_keep_cache_size Size of KEEP buffer pool for standard block size buffers
db_recycle_cache_size Size of RECYCLE buffer pool for standard block size buffers
dg_broker_config_file1 data guard broker configuration file #1
dg_broker_config_file2 data guard broker configuration file #2
dg_broker_start start Data Guard broker framework (DMON process)
dispatchers specifications of dispatchers
drs_start start DG Broker monitor (DMON process)
fal_client FAL client
fal_server FAL server list
fast_start_mttr_target MTTR target of forward crash recovery in seconds
file_mapping enable file mapping
filesystemio_options IO operations on filesystem files
log_archive_dest_10 archival destination #10 text string
log_archive_dest_6 archival destination #6 text string
log_archive_dest_7 archival destination #7 text string
log_archive_dest_8 archival destination #8 text string
log_archive_dest_9 archival destination #9 text string
log_archive_dest_state_10 archival destination #10 state text string
log_archive_dest_state_6 archival destination #6 state text string
log_archive_dest_state_7 archival destination #7 state text string
log_archive_dest_state_8 archival destination #8 state text string
log_archive_dest_state_9 archival destination #9 state text string
nls_length_semantics create columns using byte or char semantics by default
nls_nchar_conv_excp NLS raise an exception instead of allowing implicit conversion
olap_page_pool_size size of the olap page pool in bytes
optimizer_dynamic_sampling optimizer dynamic sampling
pga_aggregate_target Target size for the aggregate PGA memory consumed by the instance
plsql_compiler_flags PL/SQL compiler flags
plsql_native_c_compiler plsql native C compiler
plsql_native_library_dir plsql native library dir
plsql_native_library_subdir_count plsql native library number of subdirectories
plsql_native_linker plsql native linker
plsql_native_make_file_name plsql native compilation make file
plsql_native_make_utility plsql native compilation make utility
remote_listener remote listener
shared_servers number of shared servers to start up
standby_file_management if auto then files are created/dropped automatically on standby
statistics_level statistics level
trace_enabled enable KST tracing
undo_retention undo retention in seconds
undo_suppress_errors Suppress RBU errors in SMU mode
undo_tablespace use/switch undo tablespace
workarea_size_policy policy used to size SQL working areas (MANUAL/AUTO)
With Oracle9i, a new method of tuning the PGA memory areas was introduced.
Automatic PGA Memory Management takes the place of setting sort_area_size,
sort_area_retained_size, hash_area_size, and the other related memory management
parameters that all Oracle DBAs are familiar with. It is controlled by two parameters:
pga_aggregate_target,
workarea_size_policy
Note that workarea_size_policy can be altered per database session, allowing manual
memory management on a per-session basis if needed; for example, a session loading a
large import file may need a rather large sort_area_size. A logon trigger could be used
to set workarea_size_policy for the account doing the import.
Also note that automatic PGA management can only be used for dedicated server
sessions.
For some good reading on Automatic PGA management, please see:
The documentation contains some good guidelines for initial settings, and how to monitor
and tune them as needed.
If your 9i database is currently using manual PGA management, there are views available
to help you make a reasonable estimate for the setting.
If your database also has statspack statistics, then there is also historical information
available to help you determine the setting.
An initial setting can be determined by simply monitoring the amount of PGA memory
being used by the system as seen in v$pgastat, and by querying the
v$pga_target_advice view.
v$pgastat:
select *
from v$pgastat
order by lower(name)
/
16 rows selected.
The statistic "maximum PGA allocated" will display the maximum amount of PGA
memory allocated during the life of the instance.
The statistic "maximum PGA used for auto workareas" and "maximum PGA used for
manual workareas" will display the maximum amount of PGA memory used for each
type of workarea during the life of the instance.
v$pga_target_advice:
select *
from v$pga_target_advice
order by pga_target_for_estimate
/
 PGA TARGET  PGA TARGET      ESTIMATED        ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
    FOR EST      FACTOR ADV  BYTES PROCESSED         BYTES RW    CACHE HIT %     ALLOC COUNT
----------- ----------- --- ---------------- ---------------- -------------- --------------
 12,582,912         .50 ON        17,250,304                0         100.00              3
 18,874,368         .75 ON        17,250,304                0         100.00              3
 25,165,824        1.00 ON        17,250,304                0         100.00              0
 30,198,784        1.20 ON        17,250,304                0         100.00              0
 35,231,744        1.40 ON        17,250,304                0         100.00              0
 40,264,704        1.60 ON        17,250,304                0         100.00              0
 45,297,664        1.80 ON        17,250,304                0         100.00              0
 50,331,648        2.00 ON        17,250,304                0         100.00              0
 75,497,472        3.00 ON        17,250,304                0         100.00              0
100,663,296        4.00 ON        17,250,304                0         100.00              0
150,994,944        6.00 ON        17,250,304                0         100.00              0
201,326,592        8.00 ON        17,250,304                0         100.00              0

12 rows selected.
Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better
manage PGA memory, but Oracle will exceed this setting if necessary.
There are other views that are also useful for PGA memory management.
v$process:
select
max(pga_used_mem) max_pga_used_mem
, max(pga_alloc_mem) max_pga_alloc_mem
, max(pga_max_mem) max_pga_max_mem
from v$process
/
This displays the sum of all current PGA usage per process:
select
sum(pga_used_mem) sum_pga_used_mem
, sum(pga_alloc_mem) sum_pga_alloc_mem
, sum(pga_max_mem) sum_pga_max_mem
from v$process
/
SPFILE can be generated from the traditional initSID.ora file using the CREATE
SPFILE FROM PFILE statement.
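For example (the pfile path is illustrative):

```sql
-- Build the binary server parameter file from the text init.ora
CREATE SPFILE FROM PFILE = '/u01/app/oracle/admin/orcl/pfile/initORCL.ora';
```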
To change a parameter in the SPFILE while the database is down (ALTER SYSTEM
needs a started instance), you need to do the following:
1. STARTUP NOMOUNT
2. ALTER SYSTEM SET … SCOPE=SPFILE;
3. SHUTDOWN
4. STARTUP
Read, “Managing initialization parameters using a server parameter file” from Oracle
documentation.
DB_CREATE_FILE_DEST – Specifies the location of the data files. This parameter can
be altered dynamically.
Advantages:
No need to specify the location, size or name of the data files when
creating tablespaces.
Automatically removes files from the OS when tablespace or redo
group is dropped.
Third party applications need not worry about OS specific file name
conventions.
I’m personally not a fan of OMF, but I use the INCLUDING CONTENTS AND
DATAFILES clause of DROP TABLESPACE to remove the OS files when a tablespace
is dropped. I like this feature.
In pre-9i releases, you define the BLOCK_SIZE when creating the database and it
cannot be changed. This is still true in 9i. In addition to the standard block size of the
database, however, you can now create tablespaces with a different block size. The block
size of the tablespace is specified using the BLOCKSIZE clause of CREATE TABLESPACE.
For you to use this feature, you need to set the right buffer cache parameter. The
DB_CACHE_SIZE specifies the buffer cache size for the objects in tablespaces created
with the standard block size. DB_nK_CACHE_SIZE parameter sets the appropriate
buffer cache for the non-standard block sized tablespace. ‘n’ could be 2, 4, 8, 16 or 32 but
it should not be equal to your standard block size. The default values for
DB_nK_CACHE_SIZE parameters are 0.
Looking at the above parameters: since the database was upgraded from 8i, it is still
using the old-style buffer cache sizing with DB_BLOCK_BUFFERS. Here we cannot set
any of the DB_nK_CACHE_SIZE parameters, because DB_CACHE_SIZE was not used
to start the database.
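A sketch of using a nonstandard block size (sizes and names are illustrative; the standard block size here is assumed to be 8K):

```sql
-- First carve out a buffer cache for 16K blocks
ALTER SYSTEM SET db_16k_cache_size = 32M;

-- Then the 16K tablespace can be created
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/orcl/ts16k_01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```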
Prior to Oracle9i, you may have to write multiple INSERT statements for each table or
use a PL/SQL cursor to insert into different tables.
[ ALL | FIRST ]
WHEN condition THEN insert_into_clause [values_clause]
[insert_into_clause [values_clause]]...
[WHEN condition THEN insert_into_clause [values_clause]
[insert_into_clause [values_clause]]...
]...
[ELSE insert_into_clause [values_clause]
[insert_into_clause [values_clause]]...
]
The INSERT ALL clause inserts rows into the target tables unconditionally. Each row
read is processed against each INSERT clause.
The following example takes each row from SALES_HISTORY table and inserts into the
SALES_MONTHLY table a total amount for each month (flat table to a normalized
table):
DESC SALES_HISTORY
Name Null? Type
----------------------------------------- -------- -------------
YEAR NUMBER(4)
REGION CHAR (2)
JAN NUMBER
FEB NUMBER
MAR NUMBER
APR NUMBER
MAY NUMBER
JUN NUMBER
JUL NUMBER
AUG NUMBER
SEP NUMBER
OCT NUMBER
NOV NUMBER
DEC NUMBER
DESC SALES_MONTHLY
Name Null? Type
----------------------------------------- -------- -------------
MONTH_YEAR DATE
AMOUNT NUMBER
INSERT ALL
INTO SALES_MONTHLY (MONTH_YEAR, AMOUNT)
VALUES (TO_DATE('01/'||YEAR,'MM/YYYY'), JAN)
INTO SALES_MONTHLY VALUES (TO_DATE('02/'||YEAR,'MM/YYYY'), FEB)
INTO SALES_MONTHLY VALUES (TO_DATE('03/'||YEAR,'MM/YYYY'), MAR)
INTO SALES_MONTHLY VALUES (TO_DATE('04/'||YEAR,'MM/YYYY'), APR)
INTO SALES_MONTHLY VALUES (TO_DATE('05/'||YEAR,'MM/YYYY'), MAY)
INTO SALES_MONTHLY VALUES (TO_DATE('06/'||YEAR,'MM/YYYY'), JUN)
INTO SALES_MONTHLY VALUES (TO_DATE('07/'||YEAR,'MM/YYYY'), JUL)
INTO SALES_MONTHLY VALUES (TO_DATE('08/'||YEAR,'MM/YYYY'), AUG)
INTO SALES_MONTHLY VALUES (TO_DATE('09/'||YEAR,'MM/YYYY'), SEP)
INTO SALES_MONTHLY VALUES (TO_DATE('10/'||YEAR,'MM/YYYY'), OCT)
INTO SALES_MONTHLY VALUES (TO_DATE('11/'||YEAR,'MM/YYYY'), NOV)
INTO SALES_MONTHLY VALUES (TO_DATE('12/'||YEAR,'MM/YYYY'), DEC)
SELECT YEAR,
SUM(JAN) JAN, SUM(FEB) FEB, SUM(MAR) MAR, SUM(APR) APR,
SUM(MAY) MAY, SUM(JUN) JUN, SUM(JUL) JUL, SUM(AUG) AUG,
SUM(SEP) SEP, SUM(OCT) OCT, SUM(NOV) NOV, SUM(DEC) DEC
FROM SALES_HISTORY
GROUP BY YEAR
/
The ALL keyword in the conditional insert clause makes Oracle evaluate every WHEN
condition, irrespective of the results of the other WHEN conditions. The FIRST keyword
stops evaluating WHEN conditions as soon as one evaluates to true. The conditions are
evaluated in the order they appear.
INSERT FIRST
WHEN REGION = 'TX' THEN
INTO SALES_TEXAS (YEAR, TOTAL_AMOUNT)
VALUES (YEAR, TOTAMT)
WHEN REGION = 'CA' THEN
INTO SALES_CALIF (YEAR, TOTAL_AMOUNT)
VALUES (YEAR, TOTAMT)
WHEN REGION = 'NM' THEN
INTO SALES_NEWMEX (YEAR, TOTAL_AMOUNT)
VALUES (YEAR, TOTAMT)
WHEN REGION = 'AZ' THEN
INTO SALES_ARIZ (YEAR, TOTAL_AMOUNT)
VALUES (YEAR, TOTAMT)
ELSE
INTO SALES_OTHER (YEAR, TOTAL_AMOUNT)
VALUES (YEAR, TOTAMT)
SELECT YEAR, REGION,
(JAN+FEB+MAR+APR+MAY+JUN+JUL+AUG+SEP+OCT+NOV+DEC) TOTAMT
FROM SALES_HISTORY
/
The index and table must be analyzed for Oracle to take advantage of index skip scan.
SQL>
SQL> create table test as
2 select * from dba_objects;
Table created.
SQL> create index test_i1 on test (owner, object_name);
Index created.
SQL> set autotrace on
SQL> select object_type, object_name
2 from test
3 where object_name = 'DWA_JOB_LOG';
OBJECT_TYPE OBJECT_NAME
------------------ -------------------------
TABLE DWA_JOB_LOG
1 row selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'
SQL>
SQL> analyze table test compute statistics;
Table analyzed.
SQL> select object_type, object_name
2 from test
3* where object_name = 'DWA_JOB_LOG'
SQL> /
OBJECT_TYPE OBJECT_NAME
------------------ -------------------------
TABLE DWA_JOB_LOG
1 row selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=46 Card=2 Bytes=62)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TEST' (Cost=46 Card=2 By tes=62)
2 1 INDEX (SKIP SCAN) OF 'TEST_I1' (NON-UNIQUE) (Cost=45 Card=2)
SQL>
******** DBMS_METADATA
DBMS_METADATA is a powerful package provided in Oracle9i to extract the object
definitions from the database. The following are the programs available in the package:
Subprogram               Description
OPEN                     Specifies the type of object to be retrieved, the
                         version of its metadata, and the object model.
SET_FILTER               Specifies restrictions on the objects to be retrieved,
                         for example, the object name or schema.
SET_COUNT                Specifies the maximum number of objects to be
                         retrieved in a single FETCH_xxx call.
GET_QUERY                Returns the text of the queries that are used by
                         FETCH_xxx.
SET_PARSE_ITEM           Enables output parsing by specifying an object
                         attribute to be parsed and returned.
ADD_TRANSFORM            Specifies a transform that FETCH_xxx applies to the
                         XML representation of the retrieved objects.
SET_TRANSFORM_PARAM      Specifies parameters to the XSLT stylesheet
                         identified by transform_handle.
FETCH_xxx                Returns metadata for objects meeting the criteria
                         established by OPEN, SET_FILTER, SET_COUNT,
                         ADD_TRANSFORM, and so on.
CLOSE                    Invalidates the handle returned by OPEN and cleans
                         up the associated state.
GET_XML, GET_DDL         Returns the metadata for the specified object as
                         XML or DDL.
GET_DEPENDENT_XML,       Returns the metadata for one or more dependent
GET_DEPENDENT_DDL        objects, specified as XML or DDL.
GET_GRANTED_XML,         Returns the metadata for one or more granted
GET_GRANTED_DDL          objects, specified as XML or DDL.
Here is an example:
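For instance, extracting a table's DDL with the GET_DDL convenience function (schema and table names are illustrative):

```sql
-- Widen SQL*Plus LONG output so the DDL is not truncated
SET LONG 20000

SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM DUAL;
```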
The two tables on this page are excerpted from the Oracle documentation – “PL/SQL
Supplied Packages”
******** DBMS_XPLAN
Remember the SQL you use to get the explain plan from the PLAN_TABLE after you
perform an EXPLAIN PLAN on a statement?
Well, you may never use that SQL again; Oracle9i gives you a package to look at
explain plans – DBMS_XPLAN.
Here is an example:
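A minimal sketch (table and predicate are illustrative):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

-- Formats the most recent plan in PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```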
The following are excerpts from Oracle Documentation – SQL Reference Guide
An outer join extends the result of a simple join. An outer join returns all rows that
satisfy the join condition and also returns some or all of those rows from one table for
which no rows from the other satisfy the join condition.
Oracle Corporation recommends that you use the FROM clause OUTER JOIN syntax rather than
the Oracle join operator. Outer join queries that use the Oracle join operator (+) are subject to the
following rules and restrictions, which do not apply to the FROM clause join syntax:
You cannot specify the (+) operator in a query block that also
contains FROM clause join syntax.
The (+) operator can appear only in the WHERE clause or, in the
context of left-correlation (that is, when specifying the TABLE clause)
in the FROM clause, and can be applied only to a column of a table or
view.
If A and B are joined by multiple join conditions, then you must use
the (+) operator in all of these conditions. If you do not, then Oracle
will return only the rows resulting from a simple join, but without a
warning or error to advise you that you do not have the results of an
outer join.
The (+) operator does not produce an outer join if you specify one
table in the outer query and the other table in an inner query.
You cannot use the (+) operator to outer-join a table to itself,
although self joins are valid. For example, the following statement is
not valid:
The following example uses a left outer join to return the names of all departments in the
sample schema hr, even if no employees have been assigned to the departments:
Users familiar with the traditional Oracle outer joins syntax will recognize the same
query in this form:
The left outer join returns all departments, including those without any employees. The
same statement with a right outer join returns all employees, including those not yet
assigned to a department:
Because the column names in this example are the same in both tables in the join, you
can also use the common column feature (the USING clause) of the join syntax, which
coalesces the two matching columns department_id. The output is the same as for the
preceding example:
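The syntaxes discussed above can be sketched side by side, using the hr sample schema as in the documentation:

```sql
-- ANSI syntax: all departments, even those with no employees
SELECT d.department_id, e.last_name
  FROM departments d LEFT OUTER JOIN employees e
    ON d.department_id = e.department_id;

-- Equivalent traditional Oracle (+) syntax
SELECT d.department_id, e.last_name
  FROM departments d, employees e
 WHERE d.department_id = e.department_id (+);

-- USING clause variant, coalescing the common column
SELECT department_id, e.last_name
  FROM departments d LEFT OUTER JOIN employees e
 USING (department_id);
```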
When using the SAMPLE clause, the query must select from only one table; join queries
are not supported.
MERGE is a new SQL statement introduced in Oracle9i to update or insert rows selected
from one table to another based on a condition. This avoids writing multiple DML
statements to satisfy a conditional update or insert. The syntax is:
Use the MERGE statement to select rows from one table for update or insertion into another
table. The decision whether to update or insert into the target table is based on a condition
in the ON clause.
This statement is a convenient way to combine at least two operations. It lets you avoid
multiple INSERT and UPDATE DML statements.
MERGE is a deterministic statement. That is, you cannot update the same row of the target
table multiple times in the same MERGE statement.
Prerequisites
You must have INSERT and UPDATE object privileges on the target table and SELECT
privilege on the source table.
Syntax
merge::=
merge_update_clause::=
merge_insert_clause::=
Semantics
INTO Clause
Use the INTO clause to specify the target table you are updating or inserting into.
USING Clause
Use the USING clause to specify the source of the data to be updated or inserted. The
source can be a table, view, or the result of a subquery.
ON Clause
Use the ON clause to specify the condition upon which the MERGE operation either updates
or inserts. For each row in the target table for which the search condition is true, Oracle
updates the row with the corresponding data from the source table. If the condition is
not true for any row, then Oracle inserts into the target table based on the corresponding
source table row.
Use these clauses to instruct Oracle how to respond to the results of the join condition in
the ON clause. You can specify these two clauses in either order.
merge_update_clause
The merge_update_clause specifies the new column values of the target table. Oracle
performs this update if the condition of the ON clause is true. If the update clause is
executed, then all update triggers defined on the target table are activated.
merge_insert_clause
The merge_insert_clause specifies values to insert into the column of the target table
if the condition of the ON clause is false. If the insert clause is executed, then all insert
triggers defined on the target table are activated.
Examples
The following example creates a bonuses table in the sample schema oe with a default
bonus of 100. It then inserts into the bonuses table all employees who made sales (based
on the sales_rep_id column of the oe.orders table). Finally, the Human Resources
manager decides that all employees should receive a bonus. Those who have not made
sales get a bonus of 1% of their salary. Those who already made sales get an increase in
their bonus equal to 1% of their salary. The MERGE statement implements these changes in
one step:
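A sketch of such a MERGE, following the scenario just described (the 1% figures come from the description; exact column values are illustrative):

```sql
MERGE INTO bonuses d
USING (SELECT employee_id, salary
         FROM hr.employees
        WHERE department_id = 80) s
   ON (d.employee_id = s.employee_id)
 WHEN MATCHED THEN
      UPDATE SET d.bonus = d.bonus + s.salary * 0.01
 WHEN NOT MATCHED THEN
      INSERT (d.employee_id, d.bonus)
      VALUES (s.employee_id, s.salary * 0.01);
```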
Before the MERGE, the bonuses table contains:

EMPLOYEE_ID BONUS
----------- ----------
153 100
154 100
155 100
156 100
158 100
159 100
160 100
161 100
163 100
After the MERGE:

EMPLOYEE_ID BONUS
----------- ----------
153 180
154 175
155 170
156 200
158 190
159 180
160 175
161 170
163 195
157 950
145 1400
170 960
179 620
152 900
169 1000
DO YOU KNOW
2001/03 You can move a table from one tablespace to another using
the MOVE clause of the ALTER TABLE statement. All indexes of the moved
table need to be rebuilt.
2001/11 In Oracle9i, you can define primary key, unique key, and
foreign key constraints on views. These constraints are declarative,
hence the only valid state is DISABLE NOVALIDATE.